The Monthly Briefing

A new (non)normality



This week marks one month since the death of George Floyd, an African American man killed by police during an arrest in Minneapolis, Minnesota. The events that have followed, both inside and outside the United States, are well known and are already part of the history of the fight against discrimination and social injustice.

This crisis confronts us with an uncomfortable truth: inequality is still present in our society and has consequences in many aspects of our lives. This reality can be seen in every area, including the application and use of technology.

Recently, IBM, Amazon and Microsoft have all announced a pause in the rollout of their facial recognition technologies because of the risk of racial bias. One of the main reasons cited for this pause is that these companies are waiting for a national law to be passed to govern this technology. However, some of their past lobbying suggests that these companies do not actually want strict rules on facial recognition use. IBM has announced that it will stop developing or researching this technology for any use that, in its own words, might favour "mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles". Amazon, for its part, has placed a one-year moratorium on police use of Rekognition, its facial recognition service.

However, potential risks should not be attributed to specific technologies, but to their specific uses and applications. Citizens do not place their trust in AI or any other technology as such, but in the firms, organisations and services that use it. It is therefore essential to have clear, technology-agnostic governance and risk-mitigation requirements in place. Equating AI with higher discrimination or safety risks undermines trust in the technology and slows its adoption. It should be stressed that the same technology that can be used for a bad purpose can equally be applied in the opposite direction: to verify that no one is discriminated against and to guarantee fundamental rights.

The debate on the regulation of Artificial Intelligence is currently in full swing in Europe. The EU regulatory framework is comprehensive and strongly oriented towards the protection of citizens' rights, making the EU a global benchmark on this issue. Authorities should therefore focus on providing guidance on how to comply with existing regulations, so as to avoid unfair discrimination and achieve suitable levels of explainability or interpretability, and on creating a harmonised framework of regulators' and supervisors' expectations for each AI application depending on its criticality.


Coronavirus is still with us, but now it's traceable

Dmytro Varavin, via Getty Images.

With the start of the "new normality", the main European countries are implementing contact-tracing apps to control contagion and prevent possible outbreaks from getting out of control again. Some countries, such as Germany and Denmark, are opting for a decentralised model, while others, such as France, are collecting data on a central server.

Germany launched its application on June 16, at a development cost of 20 million euros, and reached 10 million downloads in just three days. The so-called Corona-Warn-App is based on Apple and Google technology, which has also been adopted by other countries such as Italy with notable success, and guarantees the privacy and anonymisation of data. It works over Bluetooth, which makes it possible to detect close contacts.
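For readers curious about how the decentralised model keeps personal data on the phone, here is a minimal, hypothetical Python sketch of the idea: devices broadcast short-lived random identifiers, remember the identifiers they overhear nearby, and only compare them locally against the identifiers published by users who report a positive test. All class and function names are ours for illustration; the real Apple/Google protocol derives identifiers cryptographically and adds many more safeguards.

```python
import secrets


def new_rolling_id() -> bytes:
    """Generate a fresh random identifier, rotated every few minutes."""
    return secrets.token_bytes(16)


class Phone:
    def __init__(self):
        self.my_ids = []        # identifiers this phone has broadcast
        self.heard_ids = set()  # identifiers overheard from nearby phones

    def broadcast(self) -> bytes:
        rolling_id = new_rolling_id()
        self.my_ids.append(rolling_id)
        return rolling_id

    def overhear(self, rolling_id: bytes):
        # The real protocol only records sufficiently close,
        # sufficiently long Bluetooth contacts.
        self.heard_ids.add(rolling_id)

    def check_exposure(self, published_infected_ids) -> bool:
        # Matching happens on the device; no contact graph leaves the phone.
        return any(i in self.heard_ids for i in published_infected_ids)


# Two phones meet; later, one owner tests positive and publishes their IDs.
alice, bob = Phone(), Phone()
bob.overhear(alice.broadcast())
print(bob.check_exposure(alice.my_ids))  # True: Bob is warned locally
```

The contrast with the centralised model discussed next is that, there, the overheard identifiers are uploaded to a server, which performs the matching and can therefore reconstruct who met whom.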

At the opposite end, France advocates the centralised model, which has raised some concerns about the level of data privacy. The French application also uses Bluetooth, but, according to Margrethe Vestager, EU Commission vice-president, "it may not be able to connect with others [apps] across the European Union because it stores data centrally". The UK has abandoned the app it had been developing after detecting technical issues with Bluetooth. On June 18, the government admitted that the application was faulty and that it would switch to a model developed by technology giants Apple and Google.

In Spain, the latest news indicates that the app will not be ready until after the summer, once its effectiveness has been proven. The Spanish application will be open source and based on the decentralised model.


Further reading

+ Why is "assured autonomy" a dangerous concept? (New York Times)
Ben Shneiderman, a University of Maryland computer scientist who has for decades warned against blindly automating tasks with computers, thinks robots should collaborate with humans, rather than replace them. A similar opinion is held by Missy Cummings, director of Duke University’s Humans and Autonomy Laboratory, who explains that “the degree of collaboration should be driven by the amount of uncertainty in the system and the criticality of outcomes”.

+ Experience design with ML, soooo well explained. (Apple)
This video from Apple perfectly shows how to design good Machine Learning experiences. The importance of explaining things well!

+ United States: the great catalyst of talent in Artificial Intelligence. (Macro Polo)
Macro Polo, the in-house think tank of the Paulson Institute, has launched The Global AI Talent Tracker, which shows which countries, companies and institutions are leading the application of Artificial Intelligence in the world. Surprise: Europe's got talent, but it's going elsewhere. This reminds me of this article about the best universities in computer science. (BBVA)


+ "How clean is your cloud?" (Greenpeace)
A new report from Greenpeace, called Oil in the Cloud, shows how the oil and gas industry uses cloud technology from Amazon's AWS, Microsoft's Azure and Google's GCP. Tech giants are using ML to optimise their energy consumption and even to stop depending entirely on energy from fossil fuels. But, at the same time, this technology and its cloud services are also used by large oil and gas companies in the three phases of oil and gas operations: upstream (finding and extracting oil and gas), midstream (transporting and storing it) and downstream (refining, marketing and selling it). Amazon's AWS and Microsoft's Azure (33% and 18% market share respectively) come out of this report looking pretty bad. It seems that only Google's GCP (8% market share) is taking the right steps, having committed to no longer taking on new oil and gas contracts. This viral Vox video provides more information on the topic.

+ Focus on producing good-quality data in the first place, instead of finding and fixing problems later. (Towards Data Science)
In this article, Stephanie Shen explains how to build a data pipeline that creates and sustains good data quality from the start. A small illustrative sketch follows at the end of this reading list.

+ References and examples in Data Science (Data Science sin humo)
Pelayo Arbués, data scientist at idealista lab, shares his notes on data science projects: what the discipline brings, its difficulties, solutions and tools. Here is the Twitter thread.
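As a flavour of what building quality in from the start can look like, here is a minimal Python sketch that validates records at ingestion time rather than hunting for bad data downstream. The schema, field names and rules are hypothetical examples of ours, not taken from Shen's article.

```python
# A minimal sketch of "quality at the source": reject malformed records as
# they enter the pipeline, with a clear reason, instead of cleaning up later.
from dataclasses import dataclass


@dataclass
class Transaction:
    account_id: str
    amount_eur: float
    category: str


VALID_CATEGORIES = {"groceries", "transport", "leisure", "other"}


def validate(record: dict) -> Transaction:
    """Turn a raw record into a clean Transaction, or fail fast."""
    if not record.get("account_id"):
        raise ValueError("missing account_id")
    amount = float(record["amount_eur"])  # raises on missing or non-numeric
    category = record.get("category", "other")
    if category not in VALID_CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    return Transaction(record["account_id"], amount, category)


def ingest(raw_records):
    """Split input into clean records and logged rejections."""
    clean, rejected = [], []
    for raw in raw_records:
        try:
            clean.append(validate(raw))
        except (KeyError, ValueError) as err:
            rejected.append(f"{raw!r}: {err}")
    return clean, rejected


clean, rejected = ingest([
    {"account_id": "A1", "amount_eur": "12.50", "category": "groceries"},
    {"account_id": "", "amount_eur": "3.00", "category": "transport"},
])
print(len(clean), len(rejected))  # -> 1 1
```

The point is that every rejection is caught, and explained, at the moment the data is produced, which is far cheaper than reconciling a polluted warehouse months later.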

Quote of the month

"In a racist society, it is not enough to be non-racist. We must be anti-racist".

Angela Davis, American political activist, philosopher, and academic.


We very much like Emily Hadley's take on this sentence, which brings the concept into our professional field: "in a racist society, it is not enough to be non-racist [data scientists]. We must be antiracist [data scientists]". In her article 5 Steps to Take as an Antiracist Data Scientist (Medium), Hadley tells us what we can do to make a positive impact in our field: "We must confront the ways in which data and algorithms have been used to perpetuate racism, and eliminate racist decisions and algorithms in our own work."

BBVA DATA GALAXY 🌌


Data-wrapped products

Photo of a chicken wrap, via Pancho Villa recipes.

Data-wrapped products are not eaten, but according to an MIT Sloan Management Review article, they can delight customers and increase profitability. The text highlights data products developed by BBVA, such as the categorizer and the features built into BBVA's personal finance management app. These products are characterised by embedding data analytics capabilities into the value proposition, offering much higher-impact experiences.

Data wrapping is a distinctive data monetisation approach with three main characteristics: (1) product owners, not IT, lead the product roadmap; (2) economic returns come from a lift in sales, not from an internal business process improvement; and (3) it is risky, as wrapped products could confuse, irritate, offend, or drive away the very customers they serve. You can read more about it in this article.


#makeitVISIBLEBBVA

We’re inviting you to take part in our photo contest! Post what DIVERSITY means to you on Instagram using #MakeItVisibleBBVA and you could win one of three amazing Polaroid Snap Touches – deadline July 5th. (Legal terms)

Happy Pride!

Happy holidays!
We'll be back in autumn. And we hope you'll still be around :)

< Read previous issue

For any question or suggestion, you can also write to hello@bbvadata.com

You can enjoy much more content related to data science, innovation, new financial analytics solutions and how we work on our website: bbvadata.com

Let's talk about it. Join the conversation on LinkedIn.

© 2020 All rights reserved. BBVA Data & Analytics. Avenida de Manoteras, 44, 28050. Madrid.