Sophie Deneve and the efficient neural code

Neuroscientists have a schizophrenic view of how neurons encode information. On the one hand, we say, neurons are ultra-efficient and are as precise as possible in their encoding of the world. On the other hand, neurons are pretty noisy, with the variability in their spiking increasing with the spike rate (Poisson spiking). In other words, the information is in the averaged firing rate – so long as you can look at enough spikes. One might say that this is a very foolish way to construct a good code to convey information, and yet if you look at the data that’s where we are*.

Sophie Deneve visited Princeton a month or so ago and gave a very insightful talk on how to reconcile these two viewpoints. Can a neural network be both precise and random?


The first thing to think about is that it is really, really weird that the spiking is irregular. Why not have a simple, consistent rate code? After all, when spikes enter the dendritic tree, noise will naturally be filtered out causing spiking at the cell body to become regular. We could just keep this regularity; after all, the decoding error of any downstream neuron will be much lower than for the irregular, noisy code. This should make us suspicious: maybe we see Poisson noise because there is something more going on.

We can first consider any individual neuron as a noisy accumulator of information about its input. The fast excitation and slow inhibition of an efficient code make every neuron’s voltage look like a random walk across an internal landscape, as it painstakingly finds the times when excitation is more than inhibition in order to fire off its spike.

So think about a network of neurons receiving some signal. Each neuron of the network is getting this input, causing its membrane voltage to quake a bit up and a bit down, slowly increasing with time and (excitatory) input. Eventually, it fires. But if the whole network is coding, we don’t want anything else to fire. After all, the network has fired, it has done its job, signal transmitted. So not only does the spike send output to the next set of neurons but it also sends inhibition back into the network, suppressing all the other neurons from firing! And if that neuron didn’t fire, another one would have quickly taken its place.


This simple network has exactly the properties that we want. If you look at any given neuron, it is firing in a random fashion. And yet, if you look across neurons their firing is extremely precise!
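To make this concrete, here is a minimal toy sketch of a network in this spirit (my own simplification, not the actual model from the paper; the number of neurons, weights, and time constants are all invented for illustration). Each neuron’s voltage tracks the error between the signal and the population’s running estimate; whichever neuron crosses threshold spikes, and that spike updates the shared estimate, which in turn pulls every other neuron away from threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                    # number of neurons (arbitrary)
T = 2000                  # simulation steps
dt = 1e-3                 # step size (s)
lam = 10.0                # decay rate of the readout (1/s)
w = rng.uniform(0.5, 1.5, N)   # each neuron's decoding weight (made up)
threshold = w ** 2 / 2         # spiking threshold tied to the weight

signal = 2.0 + np.sin(np.linspace(0, 4 * np.pi, T))  # input to encode
x_hat = 0.0               # the population's running estimate of the signal
spikes = np.zeros((T, N), dtype=bool)
readout = np.zeros(T)

for t in range(T):
    # each voltage tracks the coding error: signal minus current estimate
    V = w * (signal[t] - x_hat) + 0.01 * rng.standard_normal(N)
    winner = int(np.argmax(V - threshold))
    if V[winner] > threshold[winner]:
        # one spike updates the shared estimate; because every other
        # neuron's voltage depends on that estimate, the spike acts as
        # fast inhibition that keeps the rest of the network from firing
        spikes[t, winner] = True
        x_hat += w[winner]
    x_hat -= dt * lam * x_hat      # the estimate decays between spikes
    readout[t] = x_hat
```

Any single neuron’s spike train here looks irregular, and which neuron fires on a given cycle is nearly arbitrary, but the population readout hugs the signal – exactly the redundancy described above.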

* Okay, the code is rarely actually Poisson. But a lot of the time it is close enough.


Denève, S., & Machens, C. (2016). Efficient codes and balanced networks. Nature Neuroscience, 19(3), 375–382. DOI: 10.1038/nn.4243

I have seen things you wouldn’t believe (in my mind)

When two different people perceive blue, is it the same to both of them? When two people imagine it, is it the same? Can everyone even imagine it?

If I tell you to imagine a beach, you can picture the golden sand and turquoise waves. If I ask for a red triangle, your mind gets to drawing. And mom’s face? Of course.

You experience this differently, sure. Some of you see a photorealistic beach, others a shadowy cartoon. Some of you can make it up, others only “see” a beach they’ve visited. Some of you have to work harder to paint the canvas. Some of you can’t hang onto the canvas for long. But nearly all of you have a canvas.

I don’t. I have never visualized anything in my entire life. I can’t “see” my father’s face or a bouncing blue ball, my childhood bedroom or the run I went on ten minutes ago. I thought “counting sheep” was a metaphor. I’m 30 years old and I never knew a human could do any of this. And it is blowing my goddamned mind…

I opened my Facebook chat list and hunted green dots like Pac-Man. Any friend who happened to be online received what must’ve sounded like a hideous pick-up line at 2 o’clock in the morning:
—If I ask you to imagine a beach, how would you describe what happens in your mind?
—Uhh, I imagine a beach. What?
—Like, the idea of a beach. Right?
—Well, there are waves, sand. Umbrellas. It’s a relaxing picture. You okay?
—But it’s not actually a picture? There’s no visual component?
—Yes there is, in my mind. What the hell are you talking about?
—Is it in color?
—How often do your thoughts have a visual element?
—A thousand times a day?
—Oh my God.

And so on. Read the whole thing, and this as well. How common is something like this? Judging by internet comments – very common, especially for other sensory modalities. I can visualize fine, though my imaginary ‘sense of place’ is probably stronger, but I cannot ‘imagine’ a taste or smell to save my life. I once went into a fancy cocktail bar and asked the owner how he came up with the cocktails. He just thought about how two ingredients could taste together, he said, and then he combined them like that. Whoa, whoa, whoa, I said, you can imagine tastes? And combine them in your mind?

Who knows what others imagine in their mind? Does imagining a picture mean the same thing to different people? Is it vivid or faded, cartoonish or realistic?

When we do experiments with animals – how much are we relying on this supposed universality which, even among humans, is anything but?

Friday Fun: Science Combat


Someone has combined pixel art, Mortal Kombat, and famous scientists. This is actually going to be a real video game released for Superinteressante magazine at some point. Until then, gaze in awe at the mighty combatants.

Brain Prize 2016

The Brain Prize, a thing I don’t think I knew existed, just gave $1,000,000 to three neuroscientists for their work on LTP. As with most prizes, the best part is the motivation to go back and read classic papers!

The best winner was Richard Morris because he kind of revolutionized the memory field with this figure:

Morris Water Maze

Yes, he created the Morris Water Maze, used to study learning and memory in a seemingly-infinite number of papers.

water maze 2

When was the last time you went back and actually read the original Morris Water Maze paper? I know I had never read it before today – but I should have.

No less important was the work of Timothy Bliss (and Terje Lomo, who did not win) illustrating the induction of LTP. Most of us have probably heard “neurons that fire together, wire together” and this is the first real illustration of the phenomenon (in 1973):

LTP induction

Bliss and Lomo were able to induce long-lasting changes in the strength of connections between two neurons by a “tetanic stimulation protocol”. The above figure is seared into my brain from my first year of graduate school, where Jeff Isaacson dragged us through paper after paper that used variations on this protocol to investigate the properties of LTP.

The final winner was Graham Collingridge who demonstrated that hippocampal LTP was induced via NMDA receptors. I don’t think this was the paper that demonstrated it, but I always found his 1986 paper on slow NMDA receptors quite beautiful:


Here, he has blocked NMDA receptors with APV and sees no spiking after repeated stimulation. However, when this blocker is washed out, you see spiking only after receiving several inputs because of the slow timescale of the receptors.

While historically powerful, the focus on NMDA receptors can be misleading. LTP can be induced in many different ways depending on the specific neural type and brain region! For my money, I have always been a fan of the more generalized form, STDP. Every neuroscientist should read and understand the Markram et al (1997) paper that demonstrates it and the Bi and Poo (1998) paper that has this gorgeous figure:

bi and poo
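The figure above is the famous STDP window, which is easy to sketch as the standard pair of exponentials: pre-before-post spike pairs strengthen a synapse, post-before-pre pairs weaken it. The amplitudes and time constants below are illustrative placeholders, not the values fitted by Bi and Poo:

```python
import numpy as np

# Exponential STDP window: pre-before-post (positive dt) potentiates,
# post-before-pre (negative dt) depresses. The constants here are
# illustrative, not fitted to the Bi & Poo data.
A_plus, A_minus = 0.8, 0.5        # maximal fractional weight change
tau_plus, tau_minus = 17.0, 34.0  # decay time constants (ms)

def stdp(dt_ms):
    """Weight change as a function of spike timing (t_post - t_pre, in ms)."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0,
                    A_plus * np.exp(-dt_ms / tau_plus),
                    -A_minus * np.exp(dt_ms / tau_minus))

print(stdp(np.array([-50.0, -10.0, 10.0, 50.0])))
```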

Read about the past, and remember where your science came from.

How to create neural circuit diagrams (updated)


My diagrams are always a mess, but maybe I could start following this advice a little more carefully?

Diagrams of even simple circuits are often unnecessarily complex, making understanding brain connectivity maps difficult…Encoding several variables without sacrificing information, while still maintaining clarity, is a challenge. To do this, exclude extraneous variables—vary a graphical element only if it encodes something relevant, and do not encode any variables twice…

For neural circuits such as the brainstem auditory circuits, physical arrangement is a fundamental part of function. Another topology that is commonly necessary in neural circuit diagrams is the laminar organization of the cerebral cortex. When some parts of a circuit diagram are anatomically correct, readers may assume all aspects of the figure are similarly correct. For example, if cells are in their appropriate layers, one may assume that the path that one axon travels to reach another cell is also accurate. Be careful not to portray misleading information—draw edges clearly within or between layers, and always clearly communicate any uncertainty in the circuit.

Update: Andrew Giessel pointed me to this collection of blog posts from Nature Methods on how to visualize biological data more generally. Recommended!

#Cosyne2016, by the numbers


Cosyne is the systems and computational neuroscience conference held every year in Salt Lake City and Snowbird. It is a pretty good representation of the direction the community is heading…though given the falling acceptance rate you have to wonder how true that will stay, especially for those on the ‘fringe’. But 2016 is in the air so it is time to update the Cosyne statistics.

I’m always curious about who is most active in any given year, and this year it is Xiao-Jing Wang, who I dub this year’s Hierarch of Cosyne. I always think of his work on decision-making and the speed-accuracy tradeoff. He has used some very nice modeling of small circuits to show how these tasks could be implemented in nervous systems. Glancing over his posters, though, his work this year looks a bit more varied.

Still, it is nice to see such a large clump of people at the top: the distribution of posters is much flatter this year than previously, which suggests the top spots are being shared a bit more widely.

Here are the previous ‘leaders’:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang


If you look at the total number across all years, well, Liam Paninski is still massacring everyone else. At this rate, even if Pope Paninski doesn’t submit any abstracts over the next few years and someone else submits six per year… well, it will be a good half a decade before he could possibly be dethroned.

The network diagram of co-authors is interesting, as usual. Here is the network diagram for 2016 (click for PDF):


And the mess that is all-time Cosyne:



I was curious about this network. How connected is it? What is its dimensionality? If you look at the eigenvalues of the adjacency matrix, you get:


I put the first two eigenvectors at the bottom of this post, but suffice it to say the first eigenvector is basically Pouget vs. Latham! And the second is Pillow vs Paninski! So of course, I had to plot a few people in Pouget-Pillowspace:


(What does this tell us? God knows, but I find it kind of funny. Pillowspace.)
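For anyone who wants to try this on their own co-authorship data, the eigenvector projection is a few lines of NumPy. The matrix below is a made-up toy network, not the real Cosyne data:

```python
import numpy as np

# Toy symmetric co-authorship matrix for five hypothetical authors;
# entry [i, j] counts the abstracts that authors i and j share.
authors = ["A", "B", "C", "D", "E"]
adj = np.array([
    [0, 3, 0, 1, 0],
    [3, 0, 1, 0, 0],
    [0, 1, 0, 2, 2],
    [1, 0, 2, 0, 1],
    [0, 0, 2, 1, 0],
], dtype=float)

# eigh is the right call for a symmetric matrix; its eigenvalues come
# back in ascending order, so flip to put the leading eigenvectors first
vals, vecs = np.linalg.eigh(adj)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# each author's coordinates along the top two eigenvectors
coords = vecs[:, :2]
for name, (x, y) in zip(authors, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```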

Finally, I took a bunch of abstracts and fed them through a Markov model to generate some prototypical Cosyne sentences. Here are abstracts that you can submit for next year:

  • Based on gap in the solution with tighter synchrony manifested both a dark noise [and] much more abstract grammatical rules.
  • Tuning curves should not be crucial for an approximate Bayesian inference which would shift in sensory information about alternatives
  • However that information about 1 dimensional latent state would voluntarily switch to odor input pathways.
  • We used in the inter vibrissa evoked responses to obtain time frequency use of power law in sensory perception such manageable pieces have been argued to simultaneously [shift] acoustic patterns to food reward to significantly shifted responses
  • We obtained a computational capacity that is purely visual that the visual information may allow ganglion cells [to] use inhibitory coupling as NMDA receptors, pg iii, Dynamical State University
  • Here we find that the drifting gratings represent the performance of the movement.
  • For example, competing perceptions thereby preserve the interactions between network modalities.
  • This modeling framework of goal changes uses [the] gamma distribution.
  • Computation and spontaneous activity at the other stimulus saliency is innocuous and their target location in cortex encodes the initiation.
  • It is known as the presentation of the forepaw target reaction times is described with low dimensional systems theory Laura Busse Andrea Benucci Matteo Carandini Smith-Kettlewell Eye Research.
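A first-order, word-level Markov chain of the kind that produced these takes only a few lines. The corpus below is a stand-in I made up; the real version would be trained on the concatenated abstract text:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=12, seed=None):
    """Walk the chain from a start word, sampling each successor at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break       # hit a word with no recorded successor
        out.append(rng.choice(followers))
    return " ".join(out)

# stand-in corpus, invented for illustration
corpus = ("we recorded neural activity in visual cortex while "
          "we presented drifting gratings and we decoded the "
          "stimulus from neural activity in a population model")
chain = build_chain(corpus)
print(generate(chain, "we", seed=1))
```

Because each word is chosen only from the words that ever followed it, the output is locally grammatical but globally nonsense, which is exactly the charm of the abstracts above.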

Note: sorry about the small font size. This is normally a pet peeve of mine. I need to get access to Illustrator to fix it and will do so later…

The first two eigenvectors:

ev1 ev2

Punctuation in novels

Faulkner versus McCarthy

I found some beautiful posters the other day that showed the punctuation in different novels. I was immediately curious whether I could do something similar, and wrote a little script (code here) to extract the punctuation and print out a compressed representation of my favorite novels.
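The extraction step is simple enough to sketch here (this is a toy version, not the actual linked script):

```python
import string

def punctuation_stream(text):
    """Strip a text down to just its punctuation marks, in order."""
    return "".join(ch for ch in text if ch in string.punctuation)

def punctuation_counts(text):
    """Count how often each punctuation mark appears."""
    stream = punctuation_stream(text)
    return {mark: stream.count(mark) for mark in sorted(set(stream))}

sample = ('"Is that all?" she asked. He shrugged; it was, '
          'more or less, everything -- and nothing.')
print(punctuation_stream(sample))   # the compressed representation
print(punctuation_counts(sample))
```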

Then, like any proper scientist, I looked at the data and did some simple stats. Go see on Medium!

Here are a few of the (almost) full sets of punctuation from a couple of novels. For Pride And Prejudice, note the zoom-in versus the zoom-out:




Here are the files for Pride and Prejudice (alternate), A Doll’s House, and Romeo and Juliet.

Please let me know in the comments how totally I was wrong in my Medium analysis, and if there is anything you would like to see.

Update: Here a couple I thought were interesting. First, part of the Tractatus Logico Philosophicus:


And then Ulysses, the difference between the beginning of the book (first) and the end of the book (second):




The ballerina illusion

No matter how hard I try, I cannot get the spinning dancer illusion to flip. Looks like someone has found a much more powerful version of the illusion:


What I especially like about this version of the illusion is that I can totally get why it is happening. There aren’t great depth cues, so there is a powerful prior that if you can see a face then that face is looking in your direction – hence the direction flipping.

(via kottke)


These are the Computational [and Systems] Neuroscience Blogs (updated)

I was recently asked which blogs deal with Computational Neuroscience. There aren’t a lot of them – most neuroscience blogs are very psych/cog focused because, honestly, that’s what the majority of the public cares about. Here are all of the ones that I know of (I am including Systems Neuro because it can be hard to disambiguate these things):

Interesting (Computational) Neuroscience Papers

Pillow Lab Blog


Anne Churchland

Bradley Voytek


Quasiworking memory

Paxon Frady’s blog

Its Neuronal

Romaine Brett’s Blog

There is one other that I am blanking on and cannot find in my feedly right now. I will update later, and would welcome any suggestions!


RIP Marvin Minsky, 1927-2016

Marvin Minsky in Detroit

I awoke to sad news this morning – Marvin Minsky passed away at the age of 88. Minsky’s was the first serious work on artificial intelligence that I ever read and one of the reasons I am in neuroscience today.

Minsky is perhaps most infamous for his book Perceptrons, which showed that the neural networks of the time had problems with computations such as XOR (here is the solution, which every neuroscientist should know!).
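For anyone who has not seen the solution: a single-layer perceptron cannot compute XOR because XOR is not linearly separable, but one hidden layer fixes it. Here is a minimal sketch with hand-picked weights (my own choice of values, one of many that work):

```python
import numpy as np

def step(x):
    """Heaviside threshold unit, the classic perceptron nonlinearity."""
    return (x > 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all four binary inputs
xor = np.array([0, 1, 1, 0])                    # the target function

# Hidden unit 1 computes OR (fires when the input sum exceeds 0.5),
# hidden unit 2 computes AND (fires when it exceeds 1.5).
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])

# The output unit computes OR AND NOT AND, which is exactly XOR.
w_out = np.array([1.0, -2.0])
b_out = -0.5

hidden = step(X @ W_hidden + b_hidden)
output = step(hidden @ w_out + b_out)
print(output)   # -> [0. 1. 1. 0.]
```

No single threshold unit can draw one line separating {(0,1), (1,0)} from {(0,0), (1,1)}; the hidden layer bends the space so the output unit can.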

Minsky is also known for the Dartmouth Summer Research Conference, whose proposal is really worth reading in full.

Fortunately, Minsky put many of his writings online which I have been rereading this morning. You could read his thoughts on communicating with Alien Intelligence:

All problem-solvers, intelligent or not, are subject to the same ultimate constraints–limitations on space, time, and materials. In order for animals to evolve powerful ways to deal with such constraints, they must have ways to represent the situations they face, and they must have processes for manipulating those representations.

ECONOMICS: Every intelligence must develop symbol-systems for representing objects, causes and goals, and for formulating and remembering the procedures it develops for achieving those goals.

SPARSENESS: Every evolving intelligence will eventually encounter certain very special ideas–e.g., about arithmetic, causal reasoning, and economics–because these particular ideas are very much simpler than other ideas with similar uses.

He also mentions this, which sounds fascinating. I was not aware of this but cannot find the actual paper. If anyone can send me the citation, please leave a comment!

A TECHNICAL EXPERIMENT. I once set out to explore the behaviors of all possible processes–that is, of all possible computers and their programs. There is an easy way to do that: one just writes down, one by one, all finite sets of rules in the form which Alan Turing described in 1936. Today, these are called “Turing machines.” Naturally, I didn’t get very far, because the variety of such processes grows exponentially with the number of rules in each set. What I found, with the help of my student Daniel Bobrow, was that the first few thousand such machines showed just a few distinct kinds of behaviors. Some of them just stopped. Many just erased their input data. Most quickly got trapped in circles, repeating the same steps over again. And every one of the remaining few that did anything interesting at all did the same thing. Each of them performed the same sort of “counting” operation: to increase by one the length of a string of symbols–and to keep repeating that. In honor of their ability to do what resembles a fragment of simple arithmetic, let’s call them “A-Machines.” Such a search will expose some sort of “universe of structures” that grows and grows. For our combinations of Turing machine rules, that universe seems to look something like this:

minsky turing machines

In Why Most People Think Computers Can’t, he gets off a couple of cracks at people who think computers can’t do anything humans can:

Most people assume that computers can’t be conscious, or self-aware; at best they can only simulate the appearance of this. Of course, this assumes that we, as humans, are self-aware. But are we? I think not. I know that sounds ridiculous, so let me explain.

If by awareness we mean knowing what is in our minds, then, as every clinical psychologist knows, people are only very slightly self-aware, and most of what they think about themselves is guess-work. We seem to build up networks of theories about what is in our minds, and we mistake these apparent visions for what’s really going on. To put it bluntly, most of what our “consciousness” reveals to us is just “made up”. Now, I don’t mean that we’re not aware of sounds and sights, or even of some parts of thoughts. I’m only saying that we’re not aware of much of what goes on inside our minds.

Finally, he has some things to say on Symbolic vs Connectionist AI:

Thus, the present-day systems of both types show serious limitations. The top-down systems are handicapped by inflexible mechanisms for retrieving knowledge and reasoning about it, while the bottom-up systems are crippled by inflexible architectures and organizational schemes. Neither type of system has been developed so as to be able to exploit multiple, diverse varieties of knowledge.

Which approach is best to pursue? That is simply a wrong question. Each has virtues and deficiencies, and we need integrated systems that can exploit the advantages of both. In favor of the top-down side, research in Artificial Intelligence has told us a little—but only a little—about how to solve problems by using methods that resemble reasoning. If we understood more about this, perhaps we could more easily work down toward finding out how brain cells do such things. In favor of the bottom-up approach, the brain sciences have told us something—but again, only a little—about the workings of brain cells and their connections.

Apparently, he viewed the symbolic/connectionist split like so:

minsky connectionist vs symbolic