What We Saw (and Liked) in 2017 — Part 3

BBVA Data & Analytics Team


A Wider Field of Machine Learning: From Network Analysis to New Interactions with Machines

Human and machine interactions

by Iskra Velitchkova

For a long time, science has been living in a comfortable bubble where complexity was the norm, but now society demands more transparency and more comprehensible processes, especially when it comes to the fast-growing advances in algorithms, which are forcing legislation to adapt. When we talk about black boxes, interpretability or transparency, what we are really saying is that we need a common language between humans and machines, and a way to understand these interactions.

The following are references that emerged last year as inspiration and starting points for new debates around the way we are, and will be, interacting with machines. George Whitesides, professor of Chemical Biology at Harvard, looks at the significance of tools beyond their purely material definition when he invites us to consider, for instance, the significance of the birth control pill.

“This, of course, is a birth control pill, which, in a very simple way, fundamentally changed the structure of society by changing the role of women in it by providing to them the opportunity to make reproductive choices”.

With this example, Whitesides makes us think about science in a more holistic way. This is essential these days, when scientific breakthroughs affect practically every aspect of our society in such a direct way. We ask ourselves: what is an algorithm? Is it an instruction? Is it a set of rules, or something else? Ben Cerveny, renowned designer and UX expert, underlines the hours we spend using these new applications and opens the discussion to whether or not these algorithms are fulfilling our needs. In his view, we might need new ways of communicating, a new common language, and, perhaps, new narratives.

Cathy O’Neil, the author of the provocative “Weapons of Math Destruction”, gave a TED Talk last year about ethics and honesty. She proposes interpreting and questioning the opacity of certain algorithmic processes, in order to avoid destroying something as basic as trust in these technologies.

In closing these lines, I would recommend taking a look at the NIPS Machine Learning for Creativity and Design Workshop 2017, a gallery that teaches us that artistic interpretations and abstractions are very useful tools for understanding scientific exploration in new fields.

Relational data, discovering our customers’ context

by Jordi Nin, Elena Tomás and Pablo Fleurquin

We live in an increasingly intertwined and blended world. In the digital era, there are almost no isolated pieces of information. Today’s data is dynamic and connected in a rich web of complex relationships ranging from IoT and smart devices to social or enterprise networks, to name just a few. From an analytic or scientific perspective, Complex Network Theory (a.k.a. Graph Theory) has been flourishing (again) since the beginning of the 21st century. The field is vast and its advances are difficult to follow; to touch base with the basics of the theory, you can follow the report Complex networks: Structure and dynamics.

Focusing on 2017, and in particular on the economic and financial subdomain, this year we witnessed a major shift in the macroeconomic modeling perspective, with Nobel laureate Joseph Stiglitz, in his paper Where Modern Macroeconomics Went Wrong, admitting the shortcomings of the field and signaling the path toward considering the underlying networked structure of financial and economic systems: “understanding the structures that are most conducive to stability, and the central trade-offs (e.g. between the ability to withstand small and large shocks) represents one of the areas of important advances since the crisis. These were questions not even posed within the DSGE framework—they could not be posed because they do not arise in the absence of a well-specified financial sector, and would not arise within a model with a representative financial institution.”

In this regard, the article Pathways towards instability in financial networks addresses macroeconomic modeling from a networked perspective, and the Bank of England white paper An Interdisciplinary model for macroeconomics explores complementary new ways to do so.
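
As a toy illustration of the kind of relational data these references deal with, here is a minimal sketch (not taken from any of the cited works) that builds a small, entirely hypothetical network of customer-to-customer transfers using the NetworkX library and computes a couple of standard structural measures; the names, edges and amounts are made up purely for the example.

```python
# Minimal sketch: exploring hypothetical relational data as a graph with NetworkX.
import networkx as nx

# Hypothetical customer-to-customer transfers (source, target, amount).
transfers = [
    ("ana", "bob", 120.0),
    ("bob", "carla", 75.0),
    ("carla", "ana", 30.0),
    ("dave", "bob", 200.0),
    ("dave", "eva", 50.0),
]

G = nx.DiGraph()
for src, dst, amount in transfers:
    G.add_edge(src, dst, weight=amount)

# Simple structural summaries: who is most connected / most "in between"?
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G, weight="weight")

for node in G.nodes():
    print(f"{node}: degree={degree[node]}, betweenness={betweenness[node]:.3f}")
```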

Genetic Algorithms

by Israel Herraiz

Training a Deep Neural Network is an optimization problem: the optimal network is the one that minimizes the value of the loss function. Traditionally, the optimization methods applied to this problem have been based on gradient descent. Backpropagation, that is, a method to compute the gradient of the loss function with respect to the weights of the network, has been a huge success in the training of deep neural networks. Thanks to this, we have seen huge leaps forward in supervised learning (computer vision, voice recognition), as well as when deep neural networks are used in reinforcement learning.
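
To make this concrete, here is a minimal sketch of such a training loop in PyTorch, on a hypothetical toy regression problem (not any model from the works mentioned here): backpropagation computes the gradients of the loss with respect to the weights, and plain stochastic gradient descent updates them.

```python
# Minimal sketch: training a small network with backpropagation + gradient descent.
import torch
import torch.nn as nn

# Toy regression data (hypothetical): learn y = 3x - 1 plus noise.
x = torch.linspace(-1, 1, 128).unsqueeze(1)
y = 3 * x - 1 + 0.1 * torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(500):
    optimizer.zero_grad()        # reset gradients from the previous step
    loss = loss_fn(model(x), y)  # forward pass: evaluate the loss
    loss.backward()              # backpropagation: gradients of the loss w.r.t. the weights
    optimizer.step()             # gradient descent: update the weights

print(f"final loss: {loss.item():.4f}")
```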

Finding an optimal value in a manifold using gradient descent has some theoretical problems: for instance, we can get stuck in a local minimum. In practice, we have seen results that are good enough to make us believe that local minima are not such a big problem. By using gradient-based optimization methods, machines can get as proficient as humans at playing some classic Atari games, not to mention larger successes such as AlphaGo (see Reinforcement Learning beats itself in the first post in this series).

But are these results truly optimal? Can we do any better? We can’t tell unless we find empirical examples that perform better. Uber AI Labs has shown that the answer to the quest for better optimization may lie in the way nature finds optimal paths through natural selection. By using Evolution Strategies and Genetic Algorithms, these researchers have found models that perform better at playing some classic Atari games than previous results using Reinforcement Learning with Deep Neural Networks (deep Reinforcement Learning, RL). In some cases, they have even found that random search can perform better than deep RL models. Were these games intrinsically hard? Or have we just been using the wrong optimization methods? Will gradient-free optimization become another tool in our deep learning belt?
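
For contrast with the gradient-based loop above, the following is a very simplified, NumPy-only sketch of an Evolution Strategies update in the spirit of that line of work: parameters are perturbed with Gaussian noise, and a fitness-weighted average of the perturbations takes the place of the gradient. The quadratic fitness function is a made-up stand-in for an environment reward, purely for illustration.

```python
# Minimal sketch of an Evolution Strategies (gradient-free) update, NumPy only.
import numpy as np

def fitness(params):
    # Hypothetical stand-in for an environment reward: higher is better.
    target = np.array([0.5, -0.3, 0.8])
    return -np.sum((params - target) ** 2)

rng = np.random.default_rng(0)
params = np.zeros(3)          # "policy" parameters to optimize
sigma, lr, pop_size = 0.1, 0.02, 50

for generation in range(200):
    noise = rng.standard_normal((pop_size, params.size))
    rewards = np.array([fitness(params + sigma * eps) for eps in noise])
    # Normalize rewards and move the parameters along the fitness-weighted noise.
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    params += lr / (pop_size * sigma) * noise.T @ rewards

print("parameters found:", np.round(params, 3))
print("final fitness:", round(fitness(params), 4))
```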

Probabilistic programming

by María Hernández and César de Pablo

Last year not only saw advances in Deep Learning; we also witnessed the birth of new probabilistic programming frameworks such as Edward and Uber’s Pyro, as well as advances in some of the already popular ones (Stan, PyMC3).

These new frameworks piggyback on DL frameworks like TensorFlow or PyTorch to implement recent advances in using Stochastic Gradient Descent-based algorithms for posterior inference, paving the way to black-box inference as well as interesting avenues for the hybridization of deep and Bayesian learning.
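
To give a flavor of what this looks like in practice, here is a minimal sketch of a Bayesian linear regression written in PyMC3 and fitted with ADVI, one of the Stochastic Gradient Descent-based, black-box inference methods mentioned above; the data are simulated and the model is purely illustrative.

```python
# Minimal sketch: Bayesian linear regression in PyMC3, fitted with ADVI.
import numpy as np
import pymc3 as pm

# Simulated data (hypothetical): y = 2x + 1 plus noise.
rng = np.random.RandomState(42)
x = rng.uniform(-1, 1, size=200)
y = 2 * x + 1 + 0.3 * rng.randn(200)

with pm.Model() as model:
    # Priors over the regression parameters and the noise scale.
    slope = pm.Normal("slope", mu=0.0, sd=10.0)
    intercept = pm.Normal("intercept", mu=0.0, sd=10.0)
    sigma = pm.HalfNormal("sigma", sd=1.0)

    # Likelihood of the observed data.
    pm.Normal("obs", mu=slope * x + intercept, sd=sigma, observed=y)

    # Stochastic-gradient-based variational inference (black-box ADVI).
    approx = pm.fit(n=20000, method="advi")
    trace = approx.sample(1000)

# Posterior means and uncertainty for the parameters.
print("slope:", trace["slope"].mean(), "+/-", trace["slope"].std())
print("intercept:", trace["intercept"].mean(), "+/-", trace["intercept"].std())
```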

As financial applications would benefit not only from predicting a value but also from being able to estimate the confidence in those predictions, we expect to dive deeper into this during 2018.

Bayesian deep learning

by Axel Brando

Nowadays, although Deep Learning algorithms have revolutionized many fields like computer vision, these algorithms are usually unable to know how confident they are about their predictions. This has a critical implication: if we want to use Deep Learning techniques for risky decisions, where we would prefer not to predict at all if we are not confident enough, these models are not an adequate solution. To tackle this issue, one approach that has gained traction over the past year, and is every day more present at Machine Learning conferences, is to mix the Bayesian point of view (which gives us tools to reason about uncertainty) with the predictive power of Deep Learning models. This branch of knowledge has come to be called Bayesian Deep Learning.

In order to understand the recent evolution of this branch, a good starting point is Yarin Gal’s PhD thesis on Uncertainty in Deep Learning and the tutorials of the Edward framework. In particular, some of the most impressive articles in this field were published last year: for example, Alex Kendall’s Bayesian Deep Learning proposals for Safe AI, the application to a real disease-detection problem proposed by Christian Leibig et al. in a Nature article, or models closer to concepts such as Gaussian processes, like the 2017 NIPS tutorial by Neil D. Lawrence. Everything seems to indicate that this will not end here.
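
To close with a small taste of the idea, the sketch below implements Monte Carlo dropout in the spirit of Gal’s thesis: dropout is kept active at prediction time, and several stochastic forward passes yield a mean prediction together with a rough uncertainty estimate. The network and the inputs are hypothetical toys, not taken from any of the cited works.

```python
# Minimal sketch: Monte Carlo dropout for uncertainty estimates, in PyTorch.
import torch
import torch.nn as nn

# A small network with dropout; the architecture is a hypothetical toy.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

# (Training of `model` on some regression task would happen here.)

def mc_dropout_predict(model, x, n_samples=100):
    """Run several stochastic forward passes with dropout kept active."""
    model.train()  # keep dropout layers stochastic at prediction time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

x_new = torch.tensor([[0.5], [2.0]])
mean, std = mc_dropout_predict(model, x_new)
print("predictions:", mean.squeeze().tolist())
print("uncertainty (std):", std.squeeze().tolist())
```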