Nicole Rust and the brain that uses machine-learning

I love Indian sweets – they’re sugary and buttery and all over delicious. My only problem is that I can never remember what the ones I like are called; I can usually picture them in my head, but without a name it can be a bit difficult to order. When I go to an Indian market that sells sweets, I hunt through the display case, looking for the one that I want. How do I decide which one to buy?

Visual information is notoriously difficult to work with. It seems like it should be easy, because we are, after all, visual creatures. But there’s a lot of information in every image. The visual system in our brain works through it in consecutive steps, essentially going from neurons representing individual pixels to neurons representing people, objects, things. It does this through successive combinations and decorrelations.

Neurons in the retina primarily respond to dots of light: dark spots, light spots, light spots surrounded by dark spots, that sort of thing. This is then passed on to visual cortex, where the spot detectors are aligned to create so-called ‘simple cells’, or edge detectors. These neurons look for lines of darkness next to lines of brightness, or vice versa. In other words, they align individual dark- and light-detectors to find where in a picture things suddenly change. These neural responses are more decorrelated from each other because their activity becomes more differentiated. Images in the natural world tend to be highly correlated; not only are things roughly the same from one moment to the next, but two points near each other tend to have the same color and luminance (just look at the sky!). A neuron that responds only where things change throws away that redundancy.
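To make that concrete, here is a minimal sketch (my own toy example, nothing from the talk) of a ‘simple cell’ built by stacking aligned dark/light detectors into a convolution kernel:

```python
import numpy as np
from scipy.signal import convolve2d

# Toy image: a dark region on the left, a bright region on the right
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A 'simple cell' as three vertically aligned dark/light detector pairs:
# negative weight over the dark side, positive weight over the light side
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0],
                        [-1.0, 1.0]])

response = convolve2d(image, edge_kernel, mode='valid')
print(np.abs(response))  # nonzero only at the dark-to-light boundary
```

Away from the edge, neighboring pixels agree and the positive and negative weights cancel, which is exactly the decorrelation story: the cell stays silent wherever the image is redundant.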

This progresses in such a manner until we get to a split in the visual stream, with one pathway roughly representing ‘where’ and one pathway roughly representing ‘what’. Information tumbles down the ‘what’ stream and ends up in area IT [inferotemporal cortex], typically thought of as being used for object recognition.

Nicole Rust gave an excellent talk at UCSD yesterday about what happens when things get to this stage of processing: how IT responds to searched-for images, and what the next stage does with that information. Neurons in IT represent objects through a population code – there is no real ‘grandmother cell’, but rather a collection of neurons that combine to represent that grandmother. And when a monkey is searching for an object, the neurons in IT will be able to discriminate between what the animal is searching for and other, uninteresting objects. But this isn’t a linear discrimination. The information is tangled up among the neurons in IT. It’s there, but it might be a little more difficult to get out.
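As a toy illustration of what ‘present but tangled’ means (my own sketch, not the paper’s analysis), imagine two IT-like neurons whose joint activity carries target identity in an XOR-like way; a straight-line readout can’t recover it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Two simulated 'IT' neurons: target identity is carried by the *sign
# relationship* between their responses (an XOR-like, tangled code)
x_it = rng.normal(size=(400, 2))
is_target = (x_it[:, 0] * x_it[:, 1] > 0).astype(int)

acc = cross_val_score(LogisticRegression(), x_it, is_target, cv=5).mean()
print(f"linear readout from 'IT': {acc:.2f}")  # hovers near chance (~0.5)
```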

[Figure: neural separability]

The next stage of the decision-making process is perirhinal cortex, a part of the brain that takes processed information (such as from IT) and uses it for memory-driven tasks. The information about which object to choose is here too, but now it’s easy to get to: you can take the activity of a bunch of neurons in this region and draw a straight line that separates the patterns for interesting objects from the patterns for uninteresting ones. Yet when the animal makes a mistake, and chooses something that it wasn’t looking for? It’s all jumbled up in perirhinal cortex.
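Continuing the toy sketch from above (run it after the previous snippet): if a hypothetical ‘perirhinal’ stage adds a quadratic combination of the two IT-like responses, the same straight-line readout suddenly works:

```python
# A hypothetical 'perirhinal' neuron computes a quadratic combination of
# the IT pair; appending it untangles the code for a linear readout
x_prh = np.column_stack([x_it, x_it[:, 0] * x_it[:, 1]])

acc = cross_val_score(LogisticRegression(), x_prh, is_target, cv=5).mean()
print(f"linear readout from 'perirhinal': {acc:.2f}")  # close to 1.0
```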

How does it “dejumble”? Rust & co. suggest that a simple linear-nonlinear model can reproduce most of the important features of this demixing, including the proportion of cells that appear to respond to specific objects. What their model comes down to is that the transition between IT and perirhinal cortex is essentially doing quadratic discriminant analysis! It’s interesting to ponder why this seems to happen across regions and not within layers of the same region.
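On the same toy data, quadratic discriminant analysis applied directly to the tangled ‘IT’ responses pulls the target signal out on its own; this is just an illustration of the kind of computation being invoked, not the authors’ actual model:

```python
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# A quadratic decision boundary recovers what the linear one missed
acc = cross_val_score(QuadraticDiscriminantAnalysis(), x_it, is_target, cv=5).mean()
print(f"QDA on the tangled 'IT' code: {acc:.2f}")  # well above chance
```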

If the brain is slowly demixing signals using the same techniques as machine learning, is there something to be learned about decision-making in general? Perhaps we need to take into account the algorithms that people are using to understand the data before we can even begin to understand how they make a decision.

Reference

Pagan M, Urban LS, Wohl MP, & Rust NC (2013). Signals in inferotemporal and perirhinal cortex suggest an untangling of visual target information. Nature Neuroscience, 16(8), 1132-1139. PMID: 23792943
