
UNIVERSAL LEARNING: Do implementation details matter?

Loek van Rossem, 2nd year, University College London

BACKGROUND:


Recent leaps forward in artificial intelligence have led many to reconsider the role of AI as an integral part of our future society. Yet, as neural networks are slowly being adopted by many companies and organisations, the truth is we barely know how they work. Although the rules by which neural networks learn to solve tasks are easy enough to understand, the computational algorithm they end up learning has proven almost incomprehensible in advanced AI systems. This should come as no surprise, since their internal complexity only reflects that of the problems they are trying to solve.


These difficulties are not unlike those currently faced in computational neuroscience. As measurement techniques improve, you would expect our understanding of how the brain computes to become clearer. Instead, the increasingly large and high-quality datasets reveal only more layers of complexity, leaving us unsure how to even begin analysing them.


So how can we deal with such seemingly insurmountable complexity? The approach I am taking starts from the idea that these two issues may not just be related, but may in fact be the same. This is motivated by the remarkable observation that the structures learned across many different types of neural networks often look very similar. They even share similarities with representations found in the brain: for instance, neural networks trained on visual data tend to end up with neurons resembling simple cells in their early layers. Although the implementation details vary greatly across neural networks, and between networks and the brain, perhaps the key mechanisms by which these systems learn to solve problems are the same. Coming from a mathematics/physics background, I can’t help but wonder whether these “universal” mechanisms can be modelled mathematically.


METHODOLOGY:

At first it may not be clear how to write down a mathematical model describing the universal behaviour shared by different neural networks. After all, the equations for the learning dynamics are different for each model. Nevertheless, it turns out that part of these equations looks very similar across models. The part that differs we can simply replace with a constant, representing the effect of the implementation details. This is of course not technically correct, or even approximately correct, but we can do it anyway simply to study the “universal” part of the equations. Doing so leaves us with a simple set of equations for the representational learning dynamics, which we can study mathematically.
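
To make this concrete, here is a minimal sketch of this flavour of model (my own toy illustration; the squared loss, the shared linear readout, and the effective constant c are assumptions, not the project's actual equations). The hidden representations of two datapoints evolve by gradient descent, with everything implementation-specific collapsed into the single constant c:

    import numpy as np

    # Toy sketch: evolve the hidden representations h_1, h_2 of two
    # datapoints directly by gradient descent on a shared linear readout,
    # with everything implementation-specific collapsed into one constant c.
    rng = np.random.default_rng(0)

    c = 3.0        # effective constant standing in for implementation details
    lr = 0.01      # step size for the gradient flow
    steps = 2000

    y = np.array([[1.0], [-1.0]])      # targets for the two datapoints
    h = rng.normal(0.0, 0.01, (2, 4))  # representations h_1, h_2 (4 hidden units)
    w = rng.normal(0.0, 0.01, (4, 1))  # shared linear readout

    for t in range(steps):
        err = h @ w - y          # prediction error
        grad_h = err @ w.T       # "universal" part: loss gradient w.r.t. h
        grad_w = h.T @ err       # the readout learns alongside the representations
        h -= lr * c * grad_h     # implementation details enter only through c
        w -= lr * grad_w

    print("representation difference:", np.linalg.norm(h[0] - h[1]))
    print("prediction error:", np.linalg.norm(h @ w - y))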


RESULTS:


When we compare this model to different neural networks, we find that it describes the neural representational dynamics quite well. In the figure below you can see neural networks learning to distinguish two datapoints (GREEN LINE: difference in representations, ORANGE LINE: difference in predictions, BLUE LINE: a measure of how aligned the predictions are to the correct outputs). Given the right value for the constant used to abstract away the implementation details, the model (dotted lines) fits the data almost perfectly. This suggests that some of the behaviour of these networks is indeed universal. A sketch of how such quantities might be computed follows the figure.

[Figure: representational learning dynamics for networks trained to distinguish two datapoints, with the model's predictions overlaid as dotted lines.]
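
For illustration, the quantities in the legend might be computed along the lines of the following sketch (hypothetical definitions of my own; the exact measures behind the figure may differ), from the hidden activations h, predictions f, and targets y for the two datapoints:

    import numpy as np

    # Hypothetical versions of the three quantities in the figure's legend,
    # from hidden activations h (2 x d), predictions f (2 x k), targets y (2 x k).
    def legend_metrics(h, f, y):
        rep_diff = np.linalg.norm(h[0] - h[1])    # green: difference in representations
        pred_diff = np.linalg.norm(f[0] - f[1])   # orange: difference in predictions
        # blue: cosine alignment between the predictions and the correct outputs
        alignment = float(np.sum(f * y) / (np.linalg.norm(f) * np.linalg.norm(y)))
        return rep_diff, pred_diff, alignment

    # Toy usage: recording these values over training traces out curves
    # like those in the figure.
    h = np.array([[0.2, -0.1], [0.3, 0.4]])
    f = np.array([[0.5], [-0.2]])
    y = np.array([[1.0], [-1.0]])
    print(legend_metrics(h, f, y))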

So, what kind of structure is learned in this model? We can solve the equations to find a mathematical expression for it. If we look, for instance, at the problem of digit classification, that is, training a neural network to read handwritten digits, the model says that the representations form clusters corresponding to the different digits. Indeed, comparing the geometry of a handwritten digit dataset (LEFT PICTURE) to that of the representations learned by neural networks (RIGHT PICTURE), we see a clustering effect; a rough way to quantify this is sketched after the figure.

[Figure: geometry of a handwritten digit dataset (left) and of the representations learned by neural networks (right).]
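
One rough way to quantify such a clustering effect (again a sketch of my own, not the project's analysis) is to compare average pairwise distances within and between digit classes. Run on a trained network's hidden activations instead of the raw pixels, a larger between-to-within gap would indicate the predicted clustering:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.metrics import pairwise_distances

    # Compare average pairwise distances within and between digit classes.
    X, y = load_digits(return_X_y=True)   # 8x8 handwritten digit images
    D = pairwise_distances(X)             # all pairwise Euclidean distances

    same = y[:, None] == y[None, :]       # pairs with matching digit labels
    off_diag = ~np.eye(len(y), dtype=bool)

    print("mean within-class distance: ", D[same & off_diag].mean())
    print("mean between-class distance:", D[~same].mean())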

FUTURE WORK:


The advantage of this approach is that it does not depend on the implementation details of the learning system. So far I have only looked at structures learned by neural networks, but the analysis does not assume we are talking about a neural network, just a system that learns gradually from data. Perhaps some of this universal behaviour also applies to the brain. Since the implementation details of the brain are quite complex, an analysis that ignores them could be significantly easier and provide more clarity.


FUNDED BY:


The Gatsby Charitable Foundation


CONTACT:

Loek van Rossem

  • LinkedIn
