Yeah, but what has ML ever done for neuroscience?

This question has been going round the neurotwitters over the past day or so.

Let’s limit ourselves to ideas that came from machine learning and have influenced how we think about neural implementation in the brain. Physics doesn’t count!

  • Reinforcement learning is always my go-to, though we have to remember that the initial connection ran from neuroscience! In Sutton and Barto 1990, they explicitly note that “The TD model was originally developed as a neuron like unit for use in adaptive networks”. There is also the obvious connection to the Rescorla-Wagner model of Pavlovian conditioning. But the work showing that dopamine signals reward prediction error is too strong to ignore.
  • ICA is another great example. Tony Bell was specifically thinking about how neurons represent the world when he developed the Infomax-based ICA algorithm (according to a story from Terry Sejnowski). This is now the canonical example of V1 receptive field construction.
    • Conversely, I personally would not count sparse coding. Although developed as another way of thinking about V1 receptive fields, it was not – to my knowledge – an outgrowth of an idea from ML.
  • Something about Deep Learning for hierarchical sensory representations, though I am not yet clear on what the principle is that we have learned. Progressive decorrelation through hierarchical representations has long been the canonical view of sensory and systems neuroscience – just see the preceding bullet! But can we say something has flowed back from ML/DL? From Yamins and DiCarlo (and others), can we say that optimizing performance at the output layer is sufficient to get decorrelation similar to the nervous system’s?
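The TD story in the first bullet is easy to make concrete. Here is a toy TD(0) simulation of my own – the trial length, reward size, and learning rate are all made up for illustration, and this is not code from any of the cited work. A cue starts a short trial, a reward arrives at the end, and the prediction error delta(t) = r(t) + gamma*V(t+1) − V(t) is large at the reward early in training and shrinks as the value function learns to predict it – the signature reported for dopamine neurons.

```python
import numpy as np

# Toy TD(0) sketch of the dopamine/prediction-error story (illustrative
# numbers, mine): a cue at t=0 starts a trial, a reward arrives at t=4.
T = 5                 # timesteps per trial
gamma = 1.0           # no discounting within the short trial
alpha = 0.1           # learning rate
V = np.zeros(T + 1)   # value per timestep; V[T] = 0 is terminal
r = np.zeros(T)
r[-1] = 1.0           # reward arrives at the last timestep

def run_trial(V):
    """One trial of TD(0) learning; returns the prediction errors delta(t)."""
    deltas = np.zeros(T)
    for t in range(T):
        deltas[t] = r[t] + gamma * V[t + 1] - V[t]
        V[t] += alpha * deltas[t]
    return deltas

first = run_trial(V)       # naive animal: the error sits at the reward
for _ in range(500):
    last = run_trial(V)    # trained animal: the reward is predicted

print(first[-1])  # ~1.0: the reward is a complete surprise at first
print(last[-1])   # ~0.0: after training the reward is fully predicted
print(V[0])       # ~1.0: the cue now carries the reward's value
```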

And yet… what else? Bayes goes back to Helmholtz, in a way, and at least precedes “machine learning” as a field. Are there examples of the brain implementing… an HMM? t-SNE? SVMs? Discriminant analysis (okay, maybe this is another example)?

My money is on ideas from Deep Learning filtering back into neuroscience – dropout and LSTMs and so on – but I am not convinced they have made a major impact yet.


Jumping off of bridges

No man is an island, entire of itself. Although we like to think of our decisions occurring in a vacuum, in reality we’re bombarded with information about how other people are deciding all the time. It would be shocking if our decisions weren’t influenced by the behavior of other people – and, indeed, a wide range of studies in sociology indicate that they are.

In nature, too, the behavior of animals depends on what they see other animals doing. Think of a fish swimming in a school, surrounded by schools of other fish. To its right it sees the flash of a fin: is it a predatory fish? Or a friendly one? Misidentify a predator as a friend and you’ve made a big mistake; misidentify a friendly fish as a predator and you’ve just wasted a bunch of energy – and maybe lost a friend.

You can improve your identification of a predator – or of anything, really – just by listening to the crowd. If your friends are looking out for the same things that you are, you should make your decision based on what the majority of them think (quorum sensing). Not only will you make more true positive decisions, you’ll also make fewer false positive decisions: the group becomes more perceptive as a whole.
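That claim can be checked with a little binomial arithmetic. The detection rates below are mine, purely for illustration, not numbers from any study:

```python
from math import comb

# Back-of-the-envelope quorum effect (assumed rates, not the paper's data):
# each individual spots a real predator with probability 0.7 (true positive)
# and false-alarms on a harmless fish with probability 0.2 (false positive).

def majority_prob(p, n):
    """P(a strict majority of n independent observers signals 'predator')
    when each one does so with probability p."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

solo_tp, solo_fp = 0.7, 0.2             # one fish deciding alone
group_tp = majority_prob(solo_tp, 11)   # ~0.92: more true positives
group_fp = majority_prob(solo_fp, 11)   # ~0.01: fewer false positives
```

Because the individual is right more often than chance on predators and wrong less often than chance on non-predators, the majority amplifies both margins at once – exactly the “more true positives, fewer false positives” pattern.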

Humans do this, too. Just sit a bunch of people together in a room and force them to identify whether a short movie clip has a predator in it or not. When they are told what percent of other people think they’ve seen a predator, they do much better than if they decide on their own information alone. Even having just one other person to help you out has a dramatic impact. People don’t go with a simple majority opinion; rather, they base their decision on how reliable the group has been. When the group has been more reliable in the past – when it has had more true positives – more of the group needs to agree in order for someone to be swayed in their decision (see figure).

What is most interesting to me is how trivial this would be to implement in a spiking neural network model. Divisive normalization (or gain control) is a common feature of neural networks: neural activity isn’t simply the sum of a neuron’s inputs, but is divided by a factor related to the total stimulation. In other words, if there were a very strong input stimulus (say, a lot of social input), the neural response would track the fraction of, or variation in, that input. Basically, quorum sensing. And using group reliability to set your quorum threshold? It just reeks of reinforcement learning.
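A minimal sketch of that idea – using a rate unit rather than a spiking model, with entirely made-up parameters of my own: dividing a unit’s drive by the total social stimulation makes it respond to the fraction of neighbors signaling, and thresholding that fraction is a quorum rule.

```python
# Toy rate model (my own sketch, not the paper's model): a 'flee' unit gets
# excitatory drive from neighbors signaling 'predator', and its response is
# divided by the total social stimulation (divisive normalization), so it
# ends up encoding the *fraction* of neighbors signaling.

def normalized_response(n_yes, n_no, sigma=0.1):
    """Drive from 'predator' signals divided by total social input.

    sigma is the usual semi-saturation constant that keeps the
    response finite when input is weak."""
    return n_yes / (sigma + n_yes + n_no)

def quorum_decision(n_yes, n_no, threshold=0.5):
    """Decide 'flee' when the normalized response crosses the quorum."""
    return normalized_response(n_yes, n_no) > threshold

# 1 of 8 neighbors signals: normalized drive ~0.12, below quorum -> stay
# 6 of 8 neighbors signal: normalized drive ~0.74, above quorum -> flee
```

The paper’s reliability effect would then amount to learning the threshold from past group performance – which is where the reinforcement learning flavor comes in.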

As I’ve been reading papers over the last year, attempting to become a ‘neuroecologist’, I’ve been trying to keep in mind how social decisions might be built into the brain.  This paper is a great example of how ideas in ecology might provide straightforward input that can advance neuroscience.

References

Wolf, M., Kurvers, R., Ward, A., Krause, S., & Krause, J. (2013). Accurate decisions in an uncertain world: collective cognition increases true positives while decreasing false positives. Proceedings of the Royal Society B: Biological Sciences, 280(1756), 20122777. DOI: 10.1098/rspb.2012.2777
