The silent majority (of neurons)

Kelly Clancy has yet another fantastic article explaining a key idea in theoretical neuroscience (here is another):

Today we know that a large population of cortical neurons are “silent.” They spike surprisingly rarely, and some do not spike at all. Since researchers can only take very limited recordings from inside human brains (for example, from patients in preparation for brain surgery), they have estimated activity rates based on the brain’s glucose consumption. The human brain, which accounts for less than 2 percent of the body’s mass, uses 20 percent of its calorie budget, or three bananas worth of energy a day. That’s remarkably low, given that spikes require a lot of energy. Considering the energetic cost of a single spike and the number of neurons in the brain, the average neuron must spike less than once per second. Yet the cells typically recorded in human patients fire tens to hundreds of times per second, indicating a small minority of neurons eats up the bulk of energy allocated to the brain.

There are two extremes of neural coding: Perceptions might be represented through the activity of ensembles of neurons, or they might be encoded by single neurons. The first strategy, called the dense code, would result in a huge storage capacity: Given N neurons in the brain, it could encode 2^N items—an astronomical figure far greater than the number of atoms in the universe, and more than one could experience in many lifetimes. But it would also require high activity rates and a prohibitive energy budget, because many neurons would need to be active at the same time. The second strategy—called the grandmother code because it implies the existence of a cell that only spikes for your grandmother—is much simpler. Every object in experience would be represented by a neuron in the same way each key on a keyboard represents a single letter. This scheme is spike-efficient because, since the vast majority of known objects are not involved in a given thought or experience, most neurons would be dormant most of the time. But the brain would only be able to represent as many concepts as it had neurons.

Theoretical neuroscientists struck on a beautiful compromise between these ideas in the late ’90s. In this strategy, dubbed the sparse code, perceptions are encoded by the activity of several neurons at once, as with the dense code. But the sparse code puts a limit on how many neurons can be involved in coding a particular stimulus, similar to the grandmother code. It combines a large storage capacity with low activity levels and a conservative energy budget.
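
To put rough numbers on that trade-off, here's a quick back-of-the-envelope in Python. The neuron count and sparsity level are made up purely for illustration, not estimates for a real brain:

```python
# Capacity of the three schemes from the quote, in bits.
# N and k are arbitrary illustrative choices.
from math import comb, log2

N = 10_000   # hypothetical number of neurons
k = 100      # sparse code: at most k neurons active per stimulus

dense_bits = N                           # 2**N possible patterns -> N bits
grandmother_bits = log2(N)               # one neuron per concept -> N patterns
sparse_bits = log2(sum(comb(N, j) for j in range(k + 1)))  # patterns with <= k active

print(f"dense:       {dense_bits} bits, roughly N/2 neurons active at a time")
print(f"grandmother: {grandmother_bits:.1f} bits, 1 neuron active at a time")
print(f"sparse:      {sparse_bits:.1f} bits, at most {k} neurons active at a time")
```

The sparse code lands in between: hundreds of bits of capacity for the price of a few percent of neurons active at once.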


She goes on to discuss the sparse coding work of Bruno Olshausen, specifically this famous paper. This should always be read in the context of Bell & Sejnowski, which shows the same thing with ICA. Why do the independent components and the sparse coding result come out the same? Bruno Olshausen has a manuscript explaining why this is the case, but the general reason is that both are just Hebbian learning!
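
If you want to see the correspondence for yourself, here is a rough sketch (not Olshausen's or Bell & Sejnowski's code; scikit-learn stand-ins, with an arbitrary patch size and component count) that fits both models to the same natural-image patches. Both sets of filters should come out looking like localized, oriented, Gabor-ish edge detectors:

```python
# Sparse coding vs. ICA on the same natural-image patches (illustrative only).
import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning, FastICA

# Grayscale natural image -> random 12x12 patches, mean-centered
img = load_sample_image("china.jpg").mean(axis=2)
patches = extract_patches_2d(img, (12, 12), max_patches=20_000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)

# Sparse coding: dictionary learned under an L1 sparsity penalty on the codes
sc = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0).fit(X)

# ICA: maximally independent components of the same patches
ica = FastICA(n_components=64, random_state=0, max_iter=500).fit(X)

# Reshape sc.components_ and ica.components_ back to 12x12 and plot them:
# both end up as localized, oriented filters.
print(sc.components_.shape, ica.components_.shape)
```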

She ends by asking: why are some neurons nearly silent and some so active? Perhaps these are two separate coding strategies? But they need not be: for the code to be sparse overall, it may require that a few specific neurons are highly active.
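
A toy simulation of that last point (all numbers invented): give each neuron a heavy-tailed "preference" for being recruited, scale things so only a couple percent of the population is active for any one stimulus, and a handful of neurons still ends up doing a large share of the spiking.

```python
# Made-up numbers: a code can be sparse on every stimulus while a few
# neurons fire constantly, if per-neuron activation rates are heavy-tailed.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_stimuli = 1_000, 5_000

p = rng.lognormal(mean=0.0, sigma=1.5, size=n_neurons)  # heavy-tailed rates
p = np.clip(0.02 * p / p.mean(), 0, 1)                  # ~2% active on average
spikes = rng.random((n_stimuli, n_neurons)) < p

top50 = np.argsort(p)[-50:]  # the 5% most active neurons
print("fraction of neurons active per stimulus:", spikes.mean())
print("share of all spikes from the top 5% of neurons:",
      spikes[:, top50].sum() / spikes.sum())
```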

Round 1: FIGHT

I don’t know about you, but when I was in high school, I was treated to a close-up of more than a few fights (none including me, of course). If you’d asked me whether those fights were totally random, I probably would have said no: the two guys – and it was almost always guys – had something between them that festered for a while before they eventually went at it.

Just like humans, macaques live in extremely social environments. And just like people, macaques get into fights that can be one-on-one or a straight-up gang war. Since we can’t just sit the macaques down and ask them what made them throw down, we can turn to statistics to figure out: what makes a monkey fight?

Jessica Flack studies the patterns and dynamics of social systems.  In a paper from her lab, Daniels et al. examined the statistics of monkey fights.  There are a few ways to analyze this, so the group examined three different possible strategies that the monkeys could be using.  First, they could just go at it willy-nilly; this was encoded in the form of a maximum entropy model – a model that basically assumes there are no correlations unless absolutely required.  This model assumed the only thing important to a fight was how often that particular individual fought.  On the other hand, a monkey could get in a fight because it hated the guts of some other guy, or because it had an ally it needed to defend; this was also examined with a maximum entropy model, albeit one that included the direct interactions between two individuals.  Finally, it’s possible that there are other more complex interactions – your buddy really wants you to go fight for that third guy, even though you don’t really know him.  This was tested with a ‘sparse coding’ model, the specifics of which aren’t actually important here.
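
For the curious, here's a toy version of the first two models. The data are placeholders, and the pairwise maximum entropy model is fit with the standard pseudolikelihood shortcut, which is not necessarily how Daniels et al. fit theirs:

```python
# Toy independent vs. pairwise maximum entropy models of fight participation.
# Rows of X are fights, columns are individuals, entries are 1 if that
# individual took part. Real fight records would go here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_fights, n_monkeys = 500, 10
X = (rng.random((n_fights, n_monkeys)) < 0.2).astype(int)  # placeholder data

# Model 1: independence -- only each individual's overall rate of fighting.
rates = X.mean(axis=0)
indep_ll = np.sum(X * np.log(rates) + (1 - X) * np.log(1 - rates))

# Model 2: pairwise (Ising-style) maximum entropy, fit by pseudolikelihood:
# predict each individual's participation from everyone else's.
pair_ll = 0.0
for i in range(n_monkeys):
    others = np.delete(X, i, axis=1)
    clf = LogisticRegression().fit(others, X[:, i])
    prob = clf.predict_proba(others)[:, 1]
    pair_ll += np.sum(X[:, i] * np.log(prob) + (1 - X[:, i]) * np.log(1 - prob))

# On real fight records the pairwise model is the one that wins; on this
# random placeholder data the comparison is meaningless (and note the second
# number is a pseudolikelihood, not a true likelihood).
print(f"independent log-likelihood: {indep_ll:.1f}")
print(f"pairwise pseudo-log-likelihood: {pair_ll:.1f}")
```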

What they find is that, just like people, it’s the direct connections that matter. On virtually every metric, the model that includes the interactions between individuals is better than the one that just assumes random acts of violence. But not only that: the direct interactions between individuals are mostly what’s important – when you include more than that, the only thing you can predict better is how many individuals are in a fight in general, not how big a fight is given that a specific individual is in it. In other words, you recruit your allies; they don’t do the recruiting for you.

One of the advantages of using these models is that they can be used to estimate how complex the social dynamics are. If one of these macaques wanted to remember the details of every fight with perfect fidelity, it would take 23,500 bits – roughly equivalent to a note written using only 3,000 total letters (kind of; letters in English text are mostly predictable, so each carries far fewer than 8 bits and it would really take many more letters than this). But if you only need to keep track of these pairwise correlations, you can compress it to 1,000 bits, or only 125 letters, and still do almost as well. Which means that maybe social interactions aren’t as complicated as you might have thought – there is a lot of structure to them.
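
(The letter conversion is just dividing by 8 bits per typed character; the ~1.3 bits-per-letter figure below is a rough Shannon-style estimate for English, so treat the last number as order-of-magnitude only.)

```python
# The back-of-the-envelope conversion above, made explicit.
full_bits, compressed_bits = 23_500, 1_000
print(full_bits / 8)        # ~2,900 letters: the "3,000 total letters" above
print(compressed_bits / 8)  # 125 letters
print(full_bits / 1.3)      # ~18,000 letters if each letter carries ~1.3 bits
```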

Of course, this raises the point that even the ‘good’ predictions are only right 15% of the time. Should we call that a good prediction? For the complexity of what we’re trying to predict, maybe, but clearly there is a lot more going on than the models let on. Social interactions don’t happen just because of general feelings between individuals; they are likely triggered by specific – or spontaneous – events. But if a simple model can explain 15% of the social behavior of a large group of individuals? And give an estimate of how complex those interactions actually are? Well, I’d say that’s pretty interesting.

References

Daniels BC, Krakauer DC, & Flack JC (2012). Sparse code of conflict in a primate society. PNAS. DOI: 10.1073/pnas.1203021109
