The Use of Machine Learning to Enhance Trust
Nowadays, the increasing digital footprint of our customers and our growing analytic capabilities provide us with a much deeper understanding of the complex dynamics of big corporations, customers, and citizens. However, these technological opportunities also introduce risks that may drive mistrust in the long run. We must realize that technological innovations are not intrinsically good or bad; it is how humans control and use these new “superpowers” that poses ethical questions.
Risks related to dysfunctional Machine Learning solutions
— by Juan Murillo
Keeping a skeptical eye on 2017’s most notorious failures of ML applications warns us about the critical issues that require care and rigor. Behind those cases we often find either:
- data quality problems
- biases in imbalanced datasets
- methodological mistakes and bad practices on the human side.
Minimizing these three effects with quality assurance processes and algorithmic accountability is a key factor in ensuring that data-driven applications contribute to enhancing trust in service providers, not to its erosion. We must also be conscious that the implications of ML failures (false positives and false negatives) differ across fields: the consequences and levels of responsibility are entirely different when we are recommending a song, offering a financial product, or diagnosing a disease.
Reliability is the biggest barrier to extending ML-based solutions: once people detect that an automated system has failed, they are unlikely to use it a second time.
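The point above about the differing implications of false positives and false negatives can be made concrete with a toy calculation. All figures below are illustrative assumptions, not real data; the function and cost values are hypothetical:

```python
# Hypothetical sketch: the same number of errors carries very different
# costs depending on the application domain and on the error type.

def expected_cost(fp, fn, cost_fp, cost_fn):
    """Total cost of a batch of errors, given per-error costs."""
    return fp * cost_fp + fn * cost_fn

# Ten false positives and ten false negatives in two domains
# (illustrative cost units):
song_cost = expected_cost(fp=10, fn=10, cost_fp=1, cost_fn=1)          # a skipped song
triage_cost = expected_cost(fp=10, fn=10, cost_fp=100, cost_fn=50_000) # a missed disease

print(song_cost)    # 20
print(triage_cost)  # 501000
```

The error counts are identical; only the domain-specific costs, especially the asymmetry between false positives and false negatives, change the picture.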
Opinions embedded in code: biases and fairness
— by Pablo Fleurquin, Roberto Maestre, Elena Alfaro and Fabien Girardin
One of the most vibrant domains inside the ML/AI community is built upon the idea that, without any explicit wrongdoing, ML may wind up being unfair in its decision-making. Two of the main reasons behind such a pervasive problem are sample size disparity and human biases encoded in data. The former is easy to grasp: minority groups are by definition under-represented in the data sample, which leads to higher error rates for these groups. The latter is part of the data and in most cases indistinguishable from it. Biases come in many flavors: demographic, geographic, behavioral, and temporal. Such an important issue did not go unnoticed at NIPS 2017, with a thorough tutorial on Fairness in Machine Learning by Moritz Hardt and Solon Barocas (Berkeley and Cornell University) and an inspiring talk, The Trouble with Bias, by Kate Crawford, Principal Researcher at Microsoft Research and co-founder of NYU’s AI Now Institute. More introductory resources on the subject can be found in the fast-growing community around the FAT/ML conference, in particular the 2017 invited talk by Google Senior Researcher Margaret Mitchell on The Seen and Unseen Factors Influencing Knowledge in AI Systems. Early in 2017, Science magazine published Semantics derived automatically from language corpora contain human-like biases (A. Caliskan et al.) on pre-existing biases and stereotypes in semantically derived word associations.
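Sample size disparity can be seen in a few lines. The following is a minimal sketch with entirely synthetic data and a deliberately degenerate model; the group sizes and labels are hypothetical:

```python
# Sketch of "sample size disparity": a model that optimizes overall
# accuracy can look good while failing the minority group completely.

# Synthetic records: group A has 950 samples, group B only 50, and the
# true label happens to correlate with group membership.
data = [("A", 0)] * 950 + [("B", 1)] * 50

# A degenerate "model" that always predicts the majority label 0.
predictions = [0] * len(data)

def error_rate(rows, preds, group=None):
    """Error rate overall, or restricted to one group."""
    pairs = [(g, y, p) for (g, y), p in zip(rows, preds) if group in (None, g)]
    return sum(1 for g, y, p in pairs if y != p) / len(pairs)

print(f"overall error: {error_rate(data, predictions):.2%}")       # 5.00%
print(f"group A error: {error_rate(data, predictions, 'A'):.2%}")  # 0.00%
print(f"group B error: {error_rate(data, predictions, 'B'):.2%}")  # 100.00%
```

The aggregate metric (95% accuracy) hides a 100% error rate on the under-represented group, which is exactly why per-group evaluation matters.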
Apart from detecting and signaling unfair and biased decisions, the community is starting to address the issue using different approaches. Some interesting ones have been laid out by the Max Planck MPI-SWS group (see Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning) and can be explored in the ideas shared in the 2017 FAT/ML invited talk by Microsoft Senior Researcher Rich Caruana, Friends Don’t Let Friends Deploy Black Box Models: Preventing Bias via Transparent Machine Learning. Hence we must not only detect but also address unfair and biased learning, turning the tide to generate trust and raise the community’s awareness of the matter.
Machine Learning interpretability: transparency as a core value
— by Pablo Fleurquin, Manuel Ventero, Alberto Rubio and Jordi Aranda
Transparency, also known as Machine Learning interpretability, is a key part of the toolset to tackle mistrust in our algorithmic decision-making processes. It can be used to promote fair learning and, moreover, to pervade the organizational culture with ethical responsibility. As the great 20th-century physicist Richard Feynman put it: “If you cannot explain something in simple terms, you don’t understand it.” This maxim, widely accepted in the hard sciences, is not as widespread in Data Science. It implies a bidirectional association between explainability and understandability, which ultimately opposes transparency to black-box-ness. It should be noted, though, that black-box algorithms are not exclusively those of a non-linear nature; high-dimensional and heavily tuned Generalized Linear Models can also be vastly opaque. Fortunately, in recent years effort has been put into developing tools that shed light on the algorithmic decision process. Beginning with model-agnostic frameworks such as LIME, or exploring how input features are associated with predictions in Deep Learning, methods are popping up to clear the way and take apart the machine to explain its pieces. Importantly, as thoroughly shown by Patrick Hall et al. in Ideas on Interpreting Machine Learning, transparency starts in the exploration phase, and several visualization and statistical methods can provide global and local interpretability without the need for an interpretability framework. For those looking for an in-depth view of the subject, the book Transparent Data Mining for Big and Small Data is the way to go.
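To illustrate the model-agnostic idea, here is a sketch of permutation importance, a simpler cousin of frameworks like LIME: the model is treated purely as a black box, and we measure how much its error grows when one input feature is shuffled. The `black_box` function and the data are hypothetical stand-ins, not any real deployed model:

```python
import random

# Model-agnostic interpretability sketch: permutation importance.
# We only query the model; we never look inside it.

random.seed(0)

# Hypothetical black-box model: depends heavily on x1, barely on x2.
def black_box(x1, x2):
    return 10 * x1 + 0.1 * x2

# A small evaluation set with known targets.
X = [(random.random(), random.random()) for _ in range(200)]
y = [black_box(x1, x2) for x1, x2 in X]

def mse(model, rows, targets):
    return sum((model(*r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(model, rows, targets, feature_idx):
    """Increase in error after shuffling one feature column."""
    shuffled_col = [r[feature_idx] for r in rows]
    random.shuffle(shuffled_col)
    perturbed = [tuple(s if i == feature_idx else v for i, v in enumerate(r))
                 for r, s in zip(rows, shuffled_col)]
    return mse(model, perturbed, targets) - mse(model, rows, targets)

imp_x1 = permutation_importance(black_box, X, y, 0)
imp_x2 = permutation_importance(black_box, X, y, 1)
print(imp_x1 > imp_x2)  # shuffling x1 hurts far more than shuffling x2
```

Nothing here depends on the model being linear or even differentiable, which is what makes this family of diagnostics usable on otherwise opaque systems.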
Could Machine Learning be a problem for privacy?
— by Juan Murillo and Pablo Fleurquin
As our digital footprint increases, the debate around privacy protections is frequently confronted with national security concerns. A balance between citizens’ fundamental rights and the role of states and corporations is being defined through new legal frameworks, with Europe and China as the main examples. In addition, it has been shown that Deep Learning can be used to reconstruct human faces obfuscated by supposedly privacy-preserving methods such as pixelation and Gaussian blurs. Another paper, on membership inference, showed a way to determine whether a data record was part of a model’s training set. As the authors put it, “knowing that a certain patient’s clinical record was used to train a model associated with a disease (e.g., to determine the appropriate medicine dosage or to discover the genetic basis of the disease) can reveal that the patient has this disease.”
Risks inherent to the democratization of Machine Learning as a Service in the hands of non-experts
— by Juan Murillo and Pablo Fleurquin
In the last few years most of the largest tech companies have started offering what is known as Machine Learning as a Service: Google Prediction API, Amazon Machine Learning, and Microsoft Azure Machine Learning, among others. Very recently, we witnessed the launch of Google’s Cloud AutoML, which, in the words of Google Cloud AI Chief Scientist Fei-Fei Li, is a product that enables everyone to build their own customized ML model without much ML expertise. A big step towards democratizing AI.
Together with this “democratization” of AI, the Machine Learning community has started to become aware of the potential pitfalls of ML. Challenges remain to be addressed, from privacy to fairness and transparency. It is thus of paramount importance that the democratization of AI be tempered with expertise and with ethical and technical responsibility on the practitioner side. Democratization is welcome, but that does not mean that anything goes. As Cathy O’Neil puts it in her 2017 TED Talk The era of blind faith in Big Data must end: “a lot can go wrong when we put blind faith in Big Data”. The problem is that tackling all these challenges requires domain specificity. It would be pretentious, and certainly impossible, to build an automatic ML engine that protects us against all the pitfalls mentioned.
Another lesson that the paper Membership Inference Attacks Against Machine Learning Models (see “Could Machine Learning be a problem for privacy?”) teaches us is that careful attention must be paid to algorithm development, because overfitting, often a symptom of insufficient care in building the ML solution, is the most important reason why Machine Learning models leak information about their training set.
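The intuition behind that leak can be sketched in a few lines. The model below is a deliberately overfit caricature with hypothetical record names, not the attack from the paper, but it shows the core mechanism: memorization makes confidence on training records distinguishably higher, and a simple threshold then separates members from non-members using only black-box queries:

```python
# Toy sketch of why overfitting leaks membership information.

train_records = {"rec_a", "rec_b", "rec_c"}
test_records = {"rec_x", "rec_y"}

def overfit_confidence(record):
    # A badly overfit model: near-certain on memorized training records,
    # noticeably less confident on anything it has never seen.
    return 0.99 if record in train_records else 0.60

def membership_attack(record, threshold=0.9):
    # The attacker only queries the model; no access to its internals.
    return overfit_confidence(record) > threshold

print([membership_attack(r) for r in sorted(train_records)])  # [True, True, True]
print([membership_attack(r) for r in sorted(test_records)])   # [False, False]
```

A well-regularized model would return similar confidence on seen and unseen records, giving the attacker's threshold nothing to grip on, which is why overfitting control is also a privacy control.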
After all, competence is one of trust’s main pillars; therefore, one should ask oneself: as a bank customer, would I trust my mortgage and my financial health to an algorithmic decision-making process built with little expertise?