Sophie Deneve and the efficient neural code

Neuroscientists have a schizophrenic view of how neurons work. On the one hand, we say, neurons are ultra-efficient and are as precise as possible in their encoding of the world. On the other hand, neurons are pretty noisy, with the variability of their spiking increasing with the spike rate (Poisson spiking). In other words, the information is in the averaged firing rate – so long as you can look at enough spikes. One might say that this is a very foolish way to construct a good code to convey information, and yet if you look at the data that’s where we are*.

Sophie Deneve visited Princeton a month or so ago and gave a very insightful talk on how to reconcile these two viewpoints. Can a neural network be both precise and random?


The first thing to think about is that it is really, really weird that the spiking is irregular. Why not have a simple, consistent rate code? After all, when spikes enter the dendritic tree, noise will naturally be filtered out, causing spiking at the cell body to become regular. We could just keep this regularity; after all, the decoding error of any downstream neuron would be much lower than for the irregular, noisy code. This should make us suspicious: maybe we see Poisson noise because there is something more going on.
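The decoding-error claim can be made concrete. Suppose a downstream neuron estimates the firing rate by counting spikes in a fixed window (the numbers below are made up for illustration): a regular spike train gives essentially the same count every trial, while a Poisson train has count variance equal to its mean, so the rate estimate jitters from trial to trial.

```python
import numpy as np

rng = np.random.default_rng(0)

rate, T, trials = 20.0, 1.0, 10_000   # Hz, window length (s), repeats

# Regular spiking: the count in the window is the same every trial.
regular = np.full(trials, rate * T)

# Poisson spiking: the count's variance equals its mean (rate * T).
poisson = rng.poisson(rate * T, trials)

for name, counts in [("regular", regular), ("poisson", poisson)]:
    rate_estimate = counts / T
    print(f"{name}: rate-estimate std = {rate_estimate.std():.2f} Hz")
```

For a 20 Hz neuron counted over one second, the Poisson estimate has a standard deviation of about √20 ≈ 4.5 Hz, while the regular code has none – which is exactly why the irregularity looks so foolish at first glance.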

We can first consider any individual neuron as a noisy accumulator of information about its input. The fast excitation and slow inhibition of an efficient code make every neuron’s voltage look like a random walk across an internal landscape, as it painstakingly finds the times when excitation exceeds inhibition in order to fire off its spike.

So think about a network of neurons receiving some signal. Each neuron of the network is getting this input, causing its membrane voltage to quake a bit up and a bit down, slowly increasing with time and (excitatory) input. Eventually, it fires. But if the whole network is doing the coding, we don’t want anything else to fire. After all, the network has fired; it has done its job, signal transmitted. So not only does the spike send output to the next set of neurons, it also sends inhibition back into the network, suppressing all the other neurons from firing! And if that neuron hadn’t fired, another one would quickly have taken its place.


This simple network has exactly the properties that we want. If you look at any given neuron, it is firing in a random fashion. And yet, if you look across neurons, their firing is extremely precise!

* Okay, the code is rarely actually Poisson. But a lot of the time it is close enough.


Denève, S., & Machens, C. (2016). Efficient codes and balanced networks. Nature Neuroscience, 19(3), 375–382. DOI: 10.1038/nn.4243

Are silly superstitions useful because they are silly?

(Attention warning: massive speculation ahead.)

Auguries often seem made up, useless. Is that why they are useful?

Dove figured that the birds must be serving as some kind of ecological indicator. Perhaps they gravitated toward good soil, or smaller trees, or some other useful characteristic of a swidden site. After all, the Kantu’ had been using bird augury for generations, and they hadn’t starved yet. The birds, Dove assumed, had to be telling the Kantu’ something about the land. But neither he, nor any other anthropologist, had any notion of what that something was.

He followed Kantu’ augurers. He watched omen birds. He measured the size of each household’s harvest. And he became more and more confused. Kantu’ augury is so intricate, so dependent on slight alterations and is-the-bird-to-my-left-or-my-right contingencies that Dove soon found there was no discernible correlation at all between Piculets and Trogons and the success of a Kantu’ crop. The augurers he was shadowing, Dove told me, ‘looked more and more like people who were rolling dice’.

Stumped, he switched dissertation topics. But the augury nagged him. He kept thinking about it for ‘a decade or two’. And then one day he realised that he had been looking at the question the wrong way all the time. Dove had been asking whether Kantu’ augury imparted useful ecological information, as opposed to being random. But what if augury was useful precisely because it was random?

It’s actually pretty hard for people to be random – famously, if you ask someone to write down a string of random numbers, they’ll be very far from actual randomness. But there are some decisions that are actually better if they are made randomly. For instance: in competitive environments, if you are predictable then you are more easily beaten. If you need to determine how good something is, a randomized controlled trial helps eliminate bias. And so on.
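The competitive point is easy to make concrete with a toy game of my own construction (not from the essay). The exploiter below simply counters whichever move its opponent has played most often so far; a player biased toward one move gets beaten badly, while a uniformly random player breaks even.

```python
import random

random.seed(1)

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def counter(move):
    """Return the move that beats `move`."""
    return next(m for m in MOVES if BEATS[m] == move)

def play(strategy, rounds=3000):
    """Average score of `strategy` against a history-counting exploiter.
    +1 for a win, -1 for a loss, 0 for a tie, per round."""
    counts = {m: 0 for m in MOVES}
    score = 0
    for _ in range(rounds):
        ours = strategy()
        theirs = counter(max(counts, key=counts.get))  # exploit history
        counts[ours] += 1
        if BEATS[ours] == theirs:
            score += 1
        elif BEATS[theirs] == ours:
            score -= 1
    return score / rounds

def biased():   # plays rock 60% of the time
    return random.choices(MOVES, weights=[0.6, 0.2, 0.2])[0]

def uniform():  # genuinely random
    return random.choice(MOVES)

print("biased player: ", play(biased))
print("uniform player:", play(uniform))
```

The biased player loses roughly 40% of its rounds on net, because the exploiter settles on the counter to its favorite move; against the uniform player there is simply nothing to exploit.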

The problem is that when people (and animals) make decisions, they are really bad at figuring out which of the things they did were helpful and which were useless. There is a video, somewhere, of a mouse trying to learn how to push a bar around in order to get a reward. But the mouse doesn’t know what got it the reward, so it develops these strange, superstitious movements before each push of the lever. Some bit of that was useful, but its mouse-brain doesn’t know which.

And it’s really possible that this is what the augurers are doing: using a seemingly random collection of movements that seem useful but, really, aren’t. Dove (above) argued that there is ultimately a utility to these: they generate randomness.

But a recent paper suggests another possibility. In the task it describes, a person learns the value of some arbitrary symbols by choosing between different options. When a choice is rewarded, the person feels better about it; when it isn’t, the person feels worse. Yet a positively-rewarded choice that is freely chosen – instead of being chosen for the person – is reinforced more. People learn more quickly when they make the choice themselves than when other people – or other entities – make it for them.
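The actual model in Cockburn et al. is richer (it involves striatal gating of prediction errors); the cartoon below only captures the asymmetry described above, using an assumed multiplicative boost on positive prediction errors for freely chosen options, with made-up parameters throughout.

```python
import random

random.seed(0)

def learned_value(free_choice, trials=4000, alpha=0.05, boost=2.0,
                  p_reward=0.7):
    """Delta-rule value learning for one option that pays off with
    probability p_reward. If the option is freely chosen, positive
    prediction errors are amplified by `boost` (an assumed cartoon of
    the choice-driven signal, not the paper's actual model)."""
    V, samples = 0.0, []
    for t in range(trials):
        reward = 1.0 if random.random() < p_reward else 0.0
        delta = reward - V                    # prediction error
        gain = boost if (free_choice and delta > 0) else 1.0
        V += alpha * gain * delta
        if t >= trials // 2:                  # average after burn-in
            samples.append(V)
    return sum(samples) / len(samples)

print("forced-choice value:", learned_value(free_choice=False))
print("free-choice value:  ", learned_value(free_choice=True))
```

The forced-choice value settles near the true reward rate of 0.7, while the free-choice value settles noticeably higher – the same option ends up looking better simply because the learner chose it.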

What if this was the reason auguries and superstitions are useful? Not because they generate randomness but because they prevent learning? When the environment is random and uncorrelated, learning quickly will overfit noise. If you are choosing how much of your crop to plant each season, you want to prepare for disaster. Even after a few great seasons, there is still the chance of drought. What if these ‘silly’ superstitions are useful exactly because they are silly? Because there are some aspects of the environment that you don’t want to learn about.
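That intuition about overfitting noise can be put in delta-rule terms (a toy of my own construction, with made-up numbers): when the environment is stationary but noisy, a fast learner chases every recent outcome, while a slow learner’s estimate stays near the long-run mean – and a superstition that blocks learning altogether acts like a learning rate of zero.

```python
import random

random.seed(2)

def track(alpha, years=500, p_drought=0.2):
    """Track expected yield with a delta rule. Yield is 1 in a good
    year, 0 in a drought; the true long-run mean is 0.8. Returns the
    RMS error of the running estimate against that mean."""
    est, sq_err = 0.8, 0.0       # start at the true mean
    for _ in range(years):
        y = 0.0 if random.random() < p_drought else 1.0
        sq_err += (est - 0.8) ** 2
        est += alpha * (y - est)  # fast alpha chases recent seasons
    return (sq_err / years) ** 0.5

print("fast learner RMS error:", track(alpha=0.5))
print("slow learner RMS error:", track(alpha=0.02))
```

After a few good seasons the fast learner’s estimate swings far above the true mean – exactly the farmer who stops preparing for drought – while the slow learner barely budges.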


Cockburn, J., Collins, A., & Frank, M. (2014). A reinforcement learning mechanism responsible for the valuation of free choice. Neuron. DOI: 10.1016/j.neuron.2014.06.035