Sophie Deneve and the efficient neural code

Neuroscientists have a schizophrenic view of how neurons encode information. On the one hand, we say, neurons are ultra-efficient and are as precise as possible in their encoding of the world. On the other hand, neurons are pretty noisy, with the variability in their spiking increasing with the spike rate (Poisson spiking). In other words, there is information in the averaged firing rate – so long as you can look at enough spikes. One might say that this is a very foolish way to construct a good code to convey information, and yet if you look at the data that’s where we are*.
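To make that Poisson picture concrete, here is a quick numpy sketch (the rate and window are made-up numbers): for a Poisson process the variance of the spike count equals its mean, so any single trial is noisy, but averaging over many trials pins down the underlying rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson spike counts: the variance of the count grows with the mean,
# so a single trial is noisy, but averaging many trials recovers the rate.
rate_hz = 20.0    # hypothetical firing rate
window_s = 0.5    # counting window
n_trials = 10000

counts = rng.poisson(rate_hz * window_s, size=n_trials)

fano = counts.var() / counts.mean()  # Fano factor; ~1 for a Poisson process
print(round(counts.mean(), 1), round(fano, 2))
```

A Fano factor near 1 is the usual operational signature of "Poisson-like" variability in spike-count data.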

Sophie Deneve visited Princeton a month or so ago and gave a very insightful talk on how to reconcile these two viewpoints. Can a neural network be both precise and random?


The first thing to think about is that it is really, really weird that the spiking is irregular. Why not have a simple, consistent rate code? After all, as spikes enter the dendritic tree, noise will naturally be filtered out, causing spiking at the cell body to become regular. We could just keep this regularity; after all, the decoding error of any downstream neuron would be much lower than for an irregular, noisy code. This should make us suspicious: maybe we see Poisson-like noise because there is something more going on.

We can first consider any individual neuron as a noisy accumulator of information about its input. The fast excitation and slower inhibition of an efficient code make every neuron’s voltage look like a random walk across an internal landscape, as it painstakingly finds the times when excitation exceeds inhibition in order to fire off its spike.
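A toy simulation shows what this balance does to spike timing (these are my own invented parameters, not the model from the talk): when excitation and inhibition nearly cancel, the voltage carries only a small net drift plus large fluctuations, so threshold crossings come at irregular intervals, with a coefficient of variation approaching 1 rather than the near-0 of clockwork spiking.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sketch (invented parameters): excitation and inhibition nearly cancel,
# so the voltage is a biased random walk -- small net drift, big fluctuations.
dt, t_max = 0.001, 200.0
excitation, inhibition = 102.0, 100.0  # nearly balanced mean drives
noise_sd = 1.0                         # fluctuation strength
threshold = 1.0

v = 0.0
spike_times = []
for step in range(int(t_max / dt)):
    v += (excitation - inhibition) * dt + noise_sd * np.sqrt(dt) * rng.normal()
    if v >= threshold:
        spike_times.append(step * dt)
        v = 0.0                        # reset after the spike

isis = np.diff(spike_times)            # inter-spike intervals
cv = isis.std() / isis.mean()          # ~0 for clockwork, ~1 for Poisson-like
print(len(spike_times), round(cv, 2))
```

Crank the drift up (say, excitation = 150) and the CV collapses toward 0: the walk becomes a ramp and the spikes become metronomic. Irregularity is a signature of the balanced regime.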

So think about a network of neurons receiving some signal. Each neuron of the network is getting this input, causing its membrane voltage to quake a bit up and a bit down, slowly increasing with time and (excitatory) input. Eventually, one neuron fires. But if the whole network is doing the coding, we don’t want anything else to fire. After all, the network has fired; it has done its job, signal transmitted. So not only does the spike send output to the next set of neurons, it also sends inhibition back into the network, suppressing all the other neurons from firing! And if that neuron hadn’t fired, another one would have quickly taken its place.


This simple network has exactly the properties that we want. If you look at any given neuron, it fires at irregular, seemingly random times. And yet, if you look across neurons, the network’s output is extremely precise!
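Here is a minimal toy version of that spike-then-inhibit scheme (a sketch with invented parameters, not the actual Denève–Machens derivation): every neuron integrates the same signal, whichever neuron crosses threshold first spikes, and its spike instantly inhibits the rest, so only one neuron fires per coding event. Which neuron wins each race is decided by noise, so individual spike counts are erratic, while the population total is pinned down by the input.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch (invented parameters, not the paper's derivation): all neurons
# integrate the same signal; the neuron that crosses threshold spikes, and
# that spike instantly inhibits the whole network.
n_neurons, dt, t_max = 10, 0.001, 20.0
leak, threshold, inhibition = 10.0, 1.0, 1.0
signal = 15.0                               # shared input drive

v = rng.uniform(0.0, threshold, n_neurons)  # random initial voltages
n_steps = int(t_max / dt)
spikes = np.zeros((n_steps, n_neurons), dtype=bool)

for t in range(n_steps):
    noise = rng.normal(0.0, 0.5, n_neurons) * np.sqrt(dt)
    v += dt * (signal - leak * v) + noise
    winner = np.argmax(v)
    if v[winner] >= threshold:
        spikes[t, winner] = True
        v -= inhibition                     # the spike inhibits everyone...
        v[winner] = 0.0                     # ...and resets the neuron that fired

per_neuron = spikes.sum(axis=0)  # erratic from neuron to neuron
total = spikes.sum()             # population count is set by the signal
print(per_neuron, total)
```

Run it a few times with different seeds: `per_neuron` reshuffles, because the noise picks a different winner on each race to threshold, but `total` barely moves. That is the single-neuron-random, population-precise picture in miniature.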

* Okay, the code is rarely actually Poisson. But a lot of the time it is close enough.


Denève, S., & Machens, C. K. (2016). Efficient codes and balanced networks. Nature Neuroscience, 19(3), 375–382. doi:10.1038/nn.4243