Monday Open Question: How many types of neurons are there in the brain?

How many types of neurons are in the brain? Not just the raw number, but the number of classes that represent some fundamental unit of computation? I tweeted an article about this a couple of days ago and (justly) got pilloried for saying it counted classes in the whole brain when it really counted them in just two cortical regions. So what is the answer for the whole brain?

Obviously the answer depends on the brain that you are talking about. In the nematode C. elegans, we know that every hermaphrodite has 302 neurons and every male has 381. I believe these male-specific neurons get pruned in the developmental process if the animal does not become a male. These neurons tend to come in symmetric pairs or quartets, one showing up on each side of the body, so the number of neural ‘classes’ is on the order of 118 – though there is evidence that some neurons can be slightly different between the left and right sides (ASEL and ASER, for example). Fruit flies (Drosophila) also show sex-specific neurons, with the genes Fruitless and Doublesex controlling whether certain neurons are masculinized or feminized. So not only are there going to be different classes of neurons in males and females, we know that there are single (or, again, symmetric) neurons that control single behaviors. On the other hand, in the fruit fly retina there are definitely distinct classes of neurons that are tiled across the eye. This should frame our thinking about the number of neural classes – there are classes with large numbers of neurons where convolution is useful (repeating the same computation across some space, such as visual or auditory or even musculature space), but perhaps neural function becomes more specific and class-less once specific outputs are needed.

The fruit fly brain may seem a bit silly – why bother comparing it to us cortical mammals? But adult Drosophila have roughly the same number of neurons as larval zebrafish, a vertebrate with a cerebrum and a popular model organism in neuroscience. So do we think that the zebrafish has just as many pre-planned neurons as Drosophila? Or is its neural structure somewhat looser and more statistically patterned? I don’t have an answer here, but I think it is worth thinking about the similarities and differences between these organisms, which have similar numbers of neurons but quite different environmental and developmental needs.

Let’s turn to mammals. The area with the most well-defined number of cell classes is probably the retina. I’m not sure of the most up-to-date estimates, but the classic description has two classes (rods and cones) in the input layer of the retina, which can be further split depending on the number of colors an animal can see – for instance, humans have S, M, and L cones roughly corresponding to blue, green, and red light. This review roughly estimates that further into the eye there are two types of horizontal cells (first layer), ~12 types of bipolar cells (second layer), and ~30 types of amacrine cells (third layer). From other sources we think there are something on the order of ~30 types of retinal ganglion cells, the output from the eye into the brain. Interestingly, this is roughly the same number of defined classes that we think the fruit fly has! But again, there may be species specificity here; something on the order of 95%+ of the output layer of the monkey retina is a single cell class. Adding these up (a few photoreceptor classes, 2 horizontal, ~12 bipolar, ~30 amacrine, ~30 ganglion), the eye alone has at least ~80 classes of neurons and quite probably more.

The olfactory bulb is probably more complex. In mice, at least, the number of olfactory glomeruli is probably on the order of one or two thousand. Though I would expect that once past this layer the classification will look more like retina or cortex – on the order of tens of subtypes.


Now let’s think about the cortex. The paper that inspired this post tried to estimate the number of cell classes by using single-cell RNA-sequencing in mice to identify the transcripts present in different cells, and then clustering the cells into distinct sub-classes. It should be clear up front that the number of clusters you identify (1) may not be categorical but could be continuous between types of neurons, and (2) may be different if you cluster with a different method or with different types of data – functional responses, for instance. The authors in this paper make clear that they certainly find cells that look ‘intermediate’ between their clusters, so whatever categories we get may not be very firm. For instance, in the following figure the size of the circles represents the number of cells they identify in a particular cluster and the thickness of the line between two circles is how many cells are intermediate between those two clusters.

[Figure from the paper: circles are clusters, sized by the number of cells in each; line thickness shows the number of intermediate cells between clusters.]
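To make concrete why ‘intermediate’ cells blur the count, here is a minimal sketch of this style of analysis – not the paper’s actual pipeline – using a made-up cells × genes matrix and scikit-learn’s Gaussian mixture, where a cell whose posterior probability is split across two clusters looks ‘intermediate’:

```python
# Toy sketch of transcriptomic clustering (not the paper's actual pipeline).
# Assumes a cells-by-genes count matrix; a Gaussian mixture gives soft
# assignments, so "intermediate" cells show up as probability split
# between two clusters rather than as a hard label.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
expression = rng.poisson(2.0, size=(1000, 500)).astype(float)  # fake cells x genes
log_expr = np.log1p(expression)                                # variance stabilization

reduced = PCA(n_components=20).fit_transform(log_expr)         # denoise before clustering
gmm = GaussianMixture(n_components=10, random_state=0).fit(reduced)

posteriors = gmm.predict_proba(reduced)                        # soft cluster memberships
top_two = np.sort(posteriors, axis=1)[:, -2:]                  # two best clusters per cell
intermediate = (top_two[:, 0] > 0.25).sum()                    # cells split between clusters
print(f"{intermediate} of {len(posteriors)} cells look 'intermediate'")
```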

Without getting into too many details: in two distinct anatomical regions they find roughly 50 inhibitory neuron types whose transcripts are shared between the regions, suggesting that the types of inhibition may be a common, repeated pattern across the brain. However, the ~50 excitatory neuron types were essentially unique to each of the two regions.

Chuck Stevens has an interesting paper where he attempts to find lower and upper bounds on the number of possible cell classes in cortex. Let’s say that we accept the tiling principle, that the same types of cells are repeated again and again in a motif:

This argument can be extended to the neocortex. Underneath 1 mm^2 of most regions of the primate cortical surface are about 10^5 neurons — the striate cortex is an exception with twice the number — each of which covers say 0.05 mm^2 with its dendritic arbor (assumed to be 0.25 mm in diameter). Twenty neurons with dendritic arbors of this size would be required to cover a square millimetre of cortex, so the upper limit on number of cell types, if each must tile the cortex, is 10^5/20 = 5000, or an average of 1000 per layer. Now assume that the cortex has 10 times more neurons of each type than required to cover the cortex, a redundancy factor of 10 as guessed above for hippocampus: we thus would have about 100 neuron types per layer. If we believe there are a dozen ganglion cell types, two dozen amacrine cell types, and four dozen different kinds of inhibitory neurons in the CA1 region of hippocampus, 100 cell types per layer of neocortex seems like a reasonable number – not good news for the micromodelers.

Let’s update this estimate; we now think that there may be ~25 excitatory cell types per region. I don’t actually know off-hand the percentage of mouse cortex that these two regions encompass (a motor region, ALM, and a visual region, VISp), but let’s say they are roughly 10% of the cortical area each (this could be grossly wrong so feel free to correct me). We then might believe that cortex has on the order of 25 × 10 ≈ 250 excitatory cell classes and ~50 inhibitory cell classes. Does this feel right? 300 classes for all of cortex?
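For concreteness, here is the whole back-of-envelope calculation in one place. Every number is a guess from the text above, including my possibly-grossly-wrong assumption that each region is ~10% of cortex:

```python
# Back-of-envelope: Stevens' tiling bound, plus the RNA-seq update above.
neurons_per_mm2 = 1e5                      # neurons under 1 mm^2 of primate cortex
arbors_to_tile = 1 / 0.05                  # ~20 arbors (0.05 mm^2 each) cover 1 mm^2
tiling_bound = neurons_per_mm2 / arbors_to_tile   # 5000 types if each type must tile
per_layer = tiling_bound / 5 / 10          # ~5 layers, redundancy factor 10 -> ~100

regions = 1 / 0.10                         # hypothetical: each region ~10% of cortex
excitatory = 25 * regions                  # ~25 region-specific excitatory types each
inhibitory = 50                            # roughly shared across regions
print(tiling_bound, per_layer, excitatory + inhibitory)   # 5000.0 100.0 300.0
```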

But the cortex contains only a minority of the neurons in the brain – the majority are in a single structure, the cerebellum. I don’t know of an estimate of the number of neural classes there, but a structure known for its beautifully tiled neurons seems likely to have a fair bit of structure in its cell classes. What would we estimate here? Something similar to a primary sensory area, with ~50-100 cell classes? Something more, something less?

And what about other subcortical regions of the brain, like the hypothalamus, that are more directly responsible for specific behaviors? Should we expect many thousands of distinct subtypes, one for each behavior, or something more patterned?

Tell me where I’m wrong.


Communication by virus

‘Some half-baked conceptual thoughts about neuroscience’ alert

In the book Snow Crash, Neal Stephenson explores a future world that is being infected by a kind of language virus. Words and ideas have power beyond their basic physical form: they have the ability to cause people to do things. They can infect you, like a song that you just can’t get out of your head. They can make you transmit them to other people. And the book supposes a language so primal and powerful it can completely and totally take you over.

Obviously that is just fiction. But communication in the biological world is complicated! It is not only about transmitting information but also about convincing the receiver of something. Humans communicate by language and by gesture. Animals sing and hiss and hoot. Bacteria communicate by sending signaling molecules to each other. Often these signals are not just to let someone know something but also to persuade them to do something else. Buy my book, a person says; stay away from me, I’m dangerous, the rattlesnake says; come over here and help me scoop up some nutrients, a bacterium signals.

And each of these organisms is made up of smaller things also communicating with each other. Animals have brains made up of neurons and glia and other meat, and these cells talk to each other. Neurons send chemicals across synapses to signal that they have gotten some information, processed it, and just so you know here is what it computed. The signals they send aren’t always simple. They can excite another neuron or inhibit it, a kind of integrating set of pluses and minuses for the other neuron to work on. But they can also be peptides and hormones that, in the right set of other neurons, will set new machinery to work, machinery that fundamentally changes how the neuron computes. In all of these scenarios, the neuron that receives the signal has some sort of receiving protein – a receptor – that is specially designed to detect those signaling molecules.

This being biology, it turns out that the story is even more complicated than we thought. Neurons are cells and, just like every other cell, they have internal machinery that uses mRNAs as instructions for building the protein machinery needed to operate. If it needs more of one thing, the neuron will transcribe more of the mRNA and translate it into new protein. Roughly, the more mRNA you have, the more of that protein – tiny little machines that live inside the cell – you will produce.

This transcription and translation is behind much of how neurons learn. The saying goes that the neurons that fire together wire together, so that when they respond to things at the same time (such as being in one location at the same time you feel sad) they will tend to strengthen the link between them to create memories. And the physical manifestation of this is synthesizing more of (say) a specific receptor protein, so that the same signal will now activate more receptors and result in a stronger link.
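As a cartoon of that idea – a toy model, not a description of the real biochemistry – coincident activity simply increments a synaptic weight that stands in for the number of receptors:

```python
# Cartoon of Hebbian strengthening: coincident pre- and postsynaptic
# activity increases the weight, standing in for the biological step of
# synthesizing and inserting more receptor protein at the synapse.
import numpy as np

rng = np.random.default_rng(1)
pre = rng.random(1000) < 0.1             # presynaptic spike train (~10% of bins)
post = pre | (rng.random(1000) < 0.05)   # post tends to fire when pre does

w, lr = 0.5, 0.01
for p, q in zip(pre, post):
    if p and q:                          # "fire together..."
        w += lr * (1 - w)                # "...wire together" (bounded at 1)
print(f"weight after correlated activity: {w:.3f}")
```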

And that was pretty much the story so far. But it turns out that there is a new wrinkle: neurons can directly ship mRNAs into each other in a virus-like fashion, avoiding the need for receptors altogether. There is a gene called Arc which is involved in many different pieces of the plasticity puzzle. Looking at the sequence of the gene, it turns out that a portion of it codes for a virus-like structure that can encapsulate RNAs and carry them across the membranes of other cells. The RNA is then released into the receiving cell. And this mechanism works: Arc-mediated signaling actually causes strengthening of synapses.

Who would have believed this? That the building blocks for little machines are being sent directly into another cell? If classic synaptic transmission is kind of like two cells talking, this is like just stuffing someone else’s face with food or drugs. This isn’t in the standard repertoire of how we think about communication; this is more like an intentional mind-virus.

There is this story in science about how the egg was traditionally perceived to be a passive receiver during fertilization. In reality, eggs are able to actively choose which sperm they accept – they have a choice!

The standard way to think about neurons is somewhat passive. Yes, they can excite or inhibit the neurons they communicate with but, at the end of the day, they are passively relaying whatever information they contain. This is true not only in biological neurons but also in artificial neural networks. The neuron at the other end of the system is free to do whatever it wants with that information. Perhaps a reconceptualization is in order. Are neurons more active at persuasion than we had thought before? Not just a selfish gene but selfish information from selfish neurons? Each neuron, less interested in maintaining its own information than in maintaining – directly or homeostatically – properties of the whole network? Neurons do not simply passively transmit information: they attempt to actively guide it.

Yeah, but what has ML ever done for neuroscience?

This question has been going round the neurotwitters over the past day or so.

Let’s limit ourselves to ideas that came from machine learning that have had an influence on neural implementation in the brain. Physics doesn’t count!

  • Reinforcement learning is always my go-to, though we have to remember the initial connection from neuroscience! In Sutton and Barto 1990, they explicitly note that “The TD model was originally developed as a neuron-like unit for use in adaptive networks”. There is also the obvious connection to the Rescorla-Wagner model of Pavlovian conditioning. But the work showing dopamine as prediction error is too strong to ignore (a minimal TD sketch follows this list).
  • ICA is another great example. Tony Bell was specifically thinking about how neurons represent the world when he developed the Infomax-based ICA algorithm (according to a story from Terry Sejnowski). This is obviously the canonical example of V1 receptive field construction.
    • Conversely, I personally would not count sparse coding. Although developed as another way of thinking about V1 receptive fields, it was not – to my knowledge – an outgrowth of an idea from ML.
  • Something about Deep Learning for hierarchical sensory representations, though I am not yet clear on what the principle is that we have learned. Progressive decorrelation through hierarchical representations has long been the canonical view of sensory and systems neuroscience. Just see the preceding paragraph! But can we say something has flowed back from ML/DL? From Yamins and DiCarlo (and others), can we say that optimizing performance at the output layer is sufficient to get decorrelation similar to the nervous system?
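Here is that TD sketch: a minimal TD(0) simulation (my own toy, not Sutton and Barto’s code) of the classic result that the prediction error moves from the reward to the cue:

```python
# Minimal TD(0) sketch of the reward-prediction-error story: a cue at t=0
# predicts a reward at t=5. After learning, within-trial TD errors vanish
# and the error appears at cue onset -- the classic dopamine result.
import numpy as np

T, reward_t, alpha = 10, 5, 0.2
V = np.zeros(T + 1)                        # value of each within-trial time step
for trial in range(200):
    for t in range(T):
        r = 1.0 if t == reward_t else 0.0
        delta = r + V[t + 1] - V[t]        # TD error (gamma = 1 for brevity)
        V[t] += alpha * delta

print("error at cue onset:", V[0])         # jump from pre-cue baseline (0) to ~1
print("error at reward:", 1.0 + V[reward_t + 1] - V[reward_t])  # ~0 once predicted
```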

And yet… what else? Bayes goes back to Helmholtz, in a way, and at least precedes “machine learning” as a field. Are there examples of the brain implementing… an HMM? t-SNE? SVMs? Discriminant analysis (okay, maybe this is another example)?

My money is on ideas from Deep Learning filtering back into neuroscience – dropout and LSTMs and so on – but I am not convinced they have made a major impact yet.

Studying the brain at the mesoscale

It is not entirely clear that we are going about studying the brain in the right way. Zachary Mainen, Michael Häusser and Alexandre Pouget have an alternative to our current reliance on (relatively) small groups of researchers pursuing their own idiosyncratic questions:

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually over time. But they would essentially be built from the ground up, with those involved encouraged to follow their own shared interests rather than responding to the strictures of funding sources or external directives…

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics. In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

Here are the concrete steps they suggest in order to form a successful ‘mesoscale’ project:

  1. Focus on a single brain function.
  2. Combine experimentalists and theorists.
  3. Standardize tools and methods.
  4. Share data.
  5. Assign credit in new ways.

Obviously, I am comfortable living on the internet a little more than the average person. But with the tools that are starting to proliferate for collaborations – Slack, github, and Skype being the most frequently used right now – there is really very little reason for collaborations to be limited to neighboring labs.

The real difficulties are two-fold. First, you must actually meet your collaborators at some point! Generating new ideas for a collaboration rarely happens without the kind of spontaneous discussions that arise when physically meeting people. When communities are physically spread out or do not meet in a single location, this can happen less than you would want. If nothing else, this proposal seems like a call for attending more conferences!

Second is the ad-hoc way data is collected. Calls for standardized datasets have been around about as long as there has been science to collaborate on, and it does not seem like the problem is being solved any time soon. And even when datasets have been standardized, the questions they were collected to answer may be too specific to be of much utility to even closely-related researchers. This is why I left the realm of pure theory and became an experimentalist as well. Theorists are rarely able to convince experimentalists to take the time out of their experiments to test some wild new theory.

But these mesoscale projects really are the future. They are a way for scientists to be more than the sum of their parts, and to be part of an exciting community that is larger than one or two labs! Perhaps a solid step in this direction would be to utilize the tools that are available to initiate conversations within the community. Twitter does this a little, but where are the foraging Slack chats? Or amygdala, PFC, or evidence-accumulation communities?

Sophie Deneve and the efficient neural code

Neuroscientists have a schizophrenic view of how neurons code. On the one hand, we say, neurons are ultra-efficient and as precise as possible in their encoding of the world. On the other hand, neurons are pretty noisy, with the variability in their spiking increasing with the spike rate (Poisson spiking). In other words, there is information in the averaged firing rate – so long as you can look at enough spikes. One might say that this is a very foolish way to construct a good code to convey information, and yet, if you look at the data, that’s where we are*.

Sophie Deneve visited Princeton a month or so ago and gave a very insightful talk on how to reconcile these two viewpoints. Can a neural network be both precise and random?


The first thing to think about is that it is really, really weird that the spiking is irregular. Why not have a simple, consistent rate code? After all, as spikes enter the dendritic tree, noise will naturally be filtered out, causing spiking at the cell body to become regular. We could just keep this regularity; after all, the decoding error of any downstream neuron will be much lower than with an irregular, noisy code. This should make us suspicious: maybe we see Poisson noise because there is something more going on.

We can first consider any individual neuron as a noisy accumulator of information about its input. The fast excitation and slow inhibition of an efficient code make every neuron’s voltage look like a random walk across an internal landscape, as it painstakingly finds the times when excitation exceeds inhibition in order to fire off its spike.

So think about a network of neurons receiving some signal. Each neuron of the network is getting this input, causing its membrane voltage to quake a bit up and a bit down, slowly increasing with time and (excitatory) input. Eventually, it fires. But if the whole network is coding, we don’t want anything else to fire. After all, the network has fired, it has done its job, signal transmitted. So not only does the spike send output to the next set of neurons but it also sends inhibition back into the network, suppressing all the other neurons from firing! And if that neuron didn’t fire, another one would have quickly taken its place.

 

This simple network has exactly the properties that we want. If you look at any given neuron, it is firing in a random fashion. And yet, if you look across neurons their firing is extremely precise!
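Here is a toy version of that idea – my own sketch, loosely inspired by the Denève/Machens efficient balanced network, not their actual model. Each spike bumps a shared readout toward the signal, and spiking only when it reduces the readout error plays the role of the inhibition that silences the rest of the network:

```python
# Toy "efficient balanced network": N redundant neurons share the job of
# representing a signal. A neuron spikes only when doing so reduces the
# population readout error, which implicitly silences its neighbors -- so
# the population tracks the signal precisely while each cell looks irregular.
import numpy as np

rng = np.random.default_rng(2)
N, T, dt, tau = 20, 5000, 0.001, 0.05
w = np.full(N, 0.1)                        # each spike bumps the readout by 0.1
x_hat, spikes = 0.0, np.zeros((T, N))

for t in range(T):
    x = 1.0 + 0.5 * np.sin(2 * np.pi * t * dt)   # signal to encode
    x_hat *= 1 - dt / tau                         # readout decays between spikes
    err = x - x_hat
    # greedy spiking rule; tiny noise breaks ties between identical cells
    costs = (err - w) ** 2 - err**2 + rng.normal(0, 1e-6, N)
    i = np.argmin(costs)
    if costs[i] < 0:                              # spike only if it reduces error...
        x_hat += w[i]                             # ...which suppresses everyone else
        spikes[t, i] = 1

print("per-neuron rates (Hz):", spikes.sum(0) / (T * dt))  # irregular, similar rates
```

Which neuron fires at any moment is essentially random, but the readout x_hat hugs the signal: precision across neurons, Poisson-looking trains within them.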

* Okay, the code is rarely actually Poisson. But a lot of the time it is close enough.

References

Denève, S., & Machens, C. (2016). Efficient codes and balanced networks. Nature Neuroscience, 19(3), 375-382. DOI: 10.1038/nn.4243

These are the Computational [and Systems] Neuroscience Blogs (updated)

I was recently asked which blogs deal with Computational Neuroscience. There aren’t a lot of them – most neuroscience blogs are very psych/cog focused because, honestly, that’s what the majority of the public cares about. Here are all of the ones that I know of (I am including Systems Neuro because it can be hard to disambiguate these things):

Interesting (Computational) Neuroscience Papers

Pillow Lab Blog

Memming

Anne Churchland

Bradley Voytek

xcorr

Quasiworking memory

Paxon Frady’s blog

Its Neuronal

Romain Brette’s Blog

There is one other that I am blanking on and cannot find in my feedly right now. I will update later, and would welcome any suggestions!


Neuroscience podcasts

I have a long drive to work each day so I listen to a lot of podcasts. I have been enjoying the new Unsupervised Thinking podcast from some computational neuroscience graduate students at Columbia. So far they have discussed: Blue Brain, Brain-Computer Interfaces, and Neuromorphic Computing. Where else would you find that?

I also found out that I got a shout-out on the Data Skeptic podcast (episode: Neuroscience from a Data Scientist’s perspective).

Update: I should also mention that I quite like the Neurotalk podcast. The grad students (?) interview neuroscientists who come to give talks at Stanford. Serious stuff. Raw Data was also recommended to me as up-my-alley but I have not yet had a chance to listen to it. YMMV.

Ben Carson is not a neuroscientist


Photo by Gage Skidmore

As every neuroscientist can tell you, most people don’t understand the difference between neuroscientists, neurosurgeons, and neurologists.

Neurologist: A medical doctor who diagnoses and treats diseases of the nervous system

Neurosurgeon: A medical doctor who slices your brain up in order to heal it

Neuroscientist: A scientist who studies how the nervous systems of all animals work. Most work at a level so abstract it seems pointless (but it isn’t!)

What does this mean? A neurologist listens to your symptoms and will try to figure out what has gone wrong in your brain; a neuroscientist tries to understand how the brain and nervous system work down to the finest detail, no matter how useless-seeming that detail might be; a neurosurgeon specializes in performing very technically challenging surgical procedures to cure disorders of the nervous system.

Ben Carson, a neurosurgeon and presidential candidate, published a tweet showcasing what he knows about neuroscience:

…the brain can process two million bits of information per second. It remembers everything you’ve ever seen, everything you’ve ever heard…

This is, if not wrong, then just plain old made up.

Let’s break this down:

“the brain can process two million bits of information per second”

Now two million bits per second certainly sounds like a lot! 2 million bits is what you might know as 2 megabits, or roughly 250 kilobytes. For comparison, here is a picture of a Corgi in a Mario costume that is a little more than 200KB:

[Image: corgi in a Mario costume]

Is that too much information for you? Does it blow your mind??

It is actually really hard to calculate how much information a nervous system is ‘processing’. In fact, I can only find one paper that even attempts to answer a small part of that question: how much does the eye tell the brain? By recording from neurons in the retina, these scientists were able to estimate that one retina transmits ~800KB/sec. This may be a bit of an overstatement [see (1) below for discussion], but obviously the visual system is transmitting a lot of information.
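To see how the numbers stack up, a quick units check (using the blog’s figures; the retina number is the estimate from the paper above):

```python
# Units check: Carson's figure versus the one published retina estimate.
carson_bits_per_s = 2e6                  # "two million bits of information per second"
carson_bytes = carson_bits_per_s / 8     # = 250 KB/s, about one corgi photo per second
retina_bytes_per_s = 800e3               # ~800 KB/s from one retina (estimate above)
retina_bits = retina_bytes_per_s * 8     # = 6.4 million bits/s -- one eye alone
print(carson_bytes, retina_bits / carson_bits_per_s)   # retina ~3.2x Carson's number
```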

But your eye is not the only thing that transmits information to the brain! You have ears, you can touch, you can sense how hungry you are or how sick you feel. All the while you are making decisions and thinking about the past and the future. Your brain is computing a lot.

In other words, while it may seem at first like ‘the brain can process two million bits of information per second’ is an overstatement, it is actually an understatement. And probably by a lot. But more importantly: we don’t know, we don’t have any clue or guess, and I have no idea where Ben Carson pulled this number from. It is plain old made up.

“It remembers everything you’ve ever seen, everything you’ve ever heard”

This makes everyone sound a bit like Santa Claus! In reality, what we know points in the opposite direction. Although it is popular to describe the brain as a computer, it is not. From the moment of perception, the brain begins by filtering filtering filtering. Your eye receives a barrage of light – and much of this is filtered away. This image gets sent to the brain – and much of this is filtered away. Your mind does its best to infer what is occurring in the world – and in the process, much is filtered away or simply assumed. It is tragically easy to force someone to perceive something that is not there. In other words, right now you cannot remember everything that you saw two seconds ago! It is simply not available to your conscious mind.

But we can be a little generous – what about memories? Could we at least recall everything we consciously perceived? Everyone knows that is not true: who can remember being a baby? And even as adults we do not remember everything. Memories are not just photos to be peered at; they are a dense web of associations. These associations can be activated together, but always in the context of whatever else is going on in the brain (and this is not even getting into totally false memories).

This is a particular problem with eyewitness testimony. It is pretty well-known at this point that eyewitness memory is unreliable and prone to manipulation. Simply asking a witness to describe someone seems to modify the memory – leaving the original gone forever.

The Inside Out view of memory as a discrete collection of little movies is wrong – though even in this movie they know that we can forget things forever! – and is based on an incorrect view of how the brain works. Memories are not crystalline balls ready to be sent up to consciousness at any moment, but a web of connections that can easily be rewired. Based on what we know about learning and memory, Ben Carson’s quote is wrong, and almost certainly made up.

From everything we know, Ben Carson is a phenomenal neurosurgeon. But Ben Carson is not a neuroscientist.

Addendum:

(1) How much information can the nervous system process? This is a really interesting question! It may seem straightforward, but this question actually has a lot of different interpretations. Let’s just take the example of our two eyes, each looking out onto the world. The eyes do not see totally different parts of the world, but an overlapping scene; just close each eye in succession and you will see much of the world the same.

In a sense, this means they are processing the same information about the world. Both can see the laptop (or phone/etc) in front of you and so much of what they see is redundant. If the left eye sees, say, 1MB of visual information, and the other does as well, does that mean the two eyes are processing 2MB of information? Or are they simply processing 1.25MB in parallel (the other 0.75 MB being the same thing in each eye – redundant overall)?

Behold, The Blue Brain

The Blue Brain project releases their first major paper today and boy, it’s a doozy. Including supplements, it’s over 100 pages long, with 40 figures and 6 tables. In order to properly understand everything in the paper, you have to go back and read a bunch of other papers they have released over the years that detail their methods. This is not a scientific paper: it’s a goddamn philosophical treatise on The Nature of Neural Reconstruction.

The Blue Brain Project – or should I say Henry Markram? it is hard to say where the two diverge – aims to simulate absolutely everything in a complete mammalian brain. Right now, though, it sits at a middle ground: other simulations have replicated more neurons (Izhikevich had a model with 10^11 neurons of 21 subtypes). At the other extreme, MCell has completely reconstructed everything about a single neuron – down to the diffusion of single molecules – in a way that Blue Brain does not.

The focus of Blue Brain right now is a certain level of simulation that derives from a particular mindset in neuroscience. You see, people in neuroscience work at all levels: from the individual molecules to flickering ion channels to single neurons up to networks and then whole brain regions. Markram came out of Bert Sakmann’s lab (where he discovered STDP) and has his eye on the ‘classical’ tradition that stretches back to Hodgkin and Huxley. He is measuring distributions of ion channels and spiking patterns and extending the basic Hodgkin-Huxley model into tinier compartments and ever more fractal branching patterns. In a sense, this is swimming against the current of contemporary neuroscience. While plenty of people are still doing single-cell physiology, new tools that allow imaging of many neurons simultaneously in behaving animals have reshaped the direction of the field – and what we can understand about neural networks.

Some very deep questions arise here: is this enough? What will this tell us and what can it not tell us? What do we mean when we say we want to simulate the brain? How much is enough? We don’t really know – though the answer to the first question is assuredly no – and we certainly don’t know enough to even begin to answer the second set of questions.

[Figure: morphological types (m-types)]

The function of the new paper is to collate in one place all of the data that they have been collecting – and it is a doozy. They report having recorded and labeled >14,000 (!!!!!) neurons from somatosensory cortex of P14 rats with full reconstruction of more than 1,000 of these neurons. That’s, uh, a lot. And they use a somewhat-convoluted terminology to describe all of these, throwing around terms like ‘m-type’ and ‘e-type’ and ‘me-type’ in order to classify the neurons. It’s something, I guess.

[Figure: electrical types (e-types)]

Since the neurons were taken from different animals at different times, they do a lot of inference to determine connectivity, ion channel conductance, etc. And that’s a big worry: how many parameters are being fit here? How many channels are being missed? You get funny sentences in the paper like:

[We compared] in silico (ed – modeled) PSPs with the corresponding in vitro (ed – measured in a slice prep) PSPs. The in silico PSPs were systematically lower (ed – our model was systematically different from the data). The results suggested that reported conductances are about 3-fold too low for excitatory connections, and 2-fold too low for inhibitory connections.

And this worries me a bit; are they not trusting their own measurements when it suits them? Perhaps someone who reads the paper more closely can clarify these points.

They then proceed to run these simulated neurons and perform ‘in silico experiments’. They first describe lowering the extracellular calcium level and finding that the network transitions from a regularly spiking state to a variable (asynchronous) state. And then they go and do this experiment on biological neurons and get the same thing! That is a nice win for the model; they made a prediction and validated it.

On the other hand you get statements like the following:

We then used the virtual slice to explore the behavior of the microcircuitry for a wide range of tonic depolarization and Ca2+ levels. We found a spectrum of network states ranging from one extreme, where neuronal activity was largely synchronous, to another, where it was largely asynchronous. The spectrum was observed in virtual slices, constructed from all 35 individual instantiations of the microcircuit and all seven instantiations of the average microcircuit.

In other words, it sounds like they might be able to find everything in their model.

But on the other hand…! They fix their virtual networks and ask: do we see specific changes in our network that experiments have seen in the past? And yes, generally they do. Are we allowed to wonder how many of these experiments and predictions did not pan out? It would have been great to see a full-blown failure, to understand where the model still needs to be improved.

I don’t want to understate the sheer amount of work that was done here, or undersell the wonderful collection of data that they now have available. The models that they make will be (already are?) available for anyone to download and this is going to be an invaluable resource. This is a major paper, and rightly so.

On the other hand – what did I learn from this paper? I’m not sure. The network wasn’t really doing anything, it just kind of…spiked. It wasn’t processing structured information like an animal’s brain would, it was just kind of sitting there, occasionally having an epileptic fit (note that at one point they do simulate thalamic input into the model, which I found to be quite interesting).

This project has metamorphosed into a bit of a social conundrum for the field. Clearly, people are fascinated – I had three different people send me this paper prior to its publication, and a lot of others were quite excited and wanted access to it right away. And the broader Blue Brain Project has had a somewhat unhappy political history. A lot of people – like me! – are strong believers in computation and modeling, and would really like to see it succeed. Yet the chunk of neuroscience they have bitten off, and the way they have gone about it, leads many to worry. The field had been holding its breath a bit to see what Blue Brain was going to release – and I think it will need to hold its breath a bit longer.

Reference

Markram, H., et al. (2015). Reconstruction and Simulation of Neocortical Microcircuitry. Cell. (link)

The silent majority (of neurons)

Kelly Clancy has yet another fantastic article explaining a key idea in theoretical neuroscience (here is another):

Today we know that a large population of cortical neurons are “silent.” They spike surprisingly rarely, and some do not spike at all. Since researchers can only take very limited recordings from inside human brains (for example, from patients in preparation for brain surgery), they have estimated activity rates based on the brain’s glucose consumption. The human brain, which accounts for less than 2 percent of the body’s mass, uses 20 percent of its calorie budget, or three bananas worth of energy a day. That’s remarkably low, given that spikes require a lot of energy. Considering the energetic cost of a single spike and the number of neurons in the brain, the average neuron must spike less than once per second. Yet the cells typically recorded in human patients fire tens to hundreds of times per second, indicating a small minority of neurons eats up the bulk of energy allocated to the brain.

There are two extremes of neural coding: Perceptions might be represented through the activity of ensembles of neurons, or they might be encoded by single neurons. The first strategy, called the dense code, would result in a huge storage capacity: Given N neurons in the brain, it could encode 2^N items—an astronomical figure far greater than the number of atoms in the universe, and more than one could experience in many lifetimes. But it would also require high activity rates and a prohibitive energy budget, because many neurons would need to be active at the same time. The second strategy—called the grandmother code because it implies the existence of a cell that only spikes for your grandmother—is much simpler. Every object in experience would be represented by a neuron in the same way each key on a keyboard represents a single letter. This scheme is spike-efficient because, since the vast majority of known objects are not involved in a given thought or experience, most neurons would be dormant most of the time. But the brain would only be able to represent as many concepts as it had neurons.

Theoretical neuroscientists struck on a beautiful compromise between these ideas in the late ’90s. In this strategy, dubbed the sparse code, perceptions are encoded by the activity of several neurons at once, as with the dense code. But the sparse code puts a limit on how many neurons can be involved in coding a particular stimulus, similar to the grandmother code. It combines a large storage capacity with low activity levels and a conservative energy budget.
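The arithmetic behind these three regimes is easy to check; a quick sketch, with N and k made up purely for illustration:

```python
# Capacity of the three coding schemes from the quote, for a small N.
from math import comb

N, k = 100, 5                      # N neurons; sparse code allows k active at once
dense = 2**N                       # every on/off pattern is a codeword
grandmother = N                    # one neuron per concept
sparse = comb(N, k)                # patterns with exactly k active neurons
print(f"dense: {dense:.2e}, grandmother: {grandmother}, sparse: {sparse:.2e}")
```

Even with only 5 of 100 neurons active, the sparse code represents ~75 million items while keeping 95% of neurons silent at any moment, which is the energetic point of the compromise.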


She goes on to discuss the sparse coding work of Bruno Olshausen, specifically this famous paper. This should always be read in the context of Bell & Sejnowski, which shows the same thing with ICA. Why are the independent components and the sparse coding result the same? Bruno Olshausen has a manuscript explaining why this is the case, but the general reason is that both are just Hebbian learning!
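To give a flavor of what “just Hebbian learning” means here, a toy sketch of Oja’s rule, a normalized Hebbian update. This particular rule pulls out the leading principal component of its input, not the ICA or sparse coding solution, and Olshausen’s actual argument is more subtle; the point is only that all of these algorithms reduce to correlation-driven weight updates of this shape:

```python
# Oja's rule: a Hebbian update (y * x) with a normalization term that keeps
# the weight vector bounded. On these toy data it converges to the leading
# principal component -- a correlation-driven update of the same family as
# the ICA / sparse coding learning rules.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(10000, 2)) @ np.array([[2.0, 0.5], [0.5, 1.0]])

w = rng.normal(size=2)
for x in data:
    y = w @ x                          # the unit's output
    w += 0.001 * y * (x - y * w)       # Hebbian term y*x, minus normalization
print("learned direction:", w / np.linalg.norm(w))
```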

She ends by asking: why are some neurons sparse and some so active? Perhaps these are two separate coding strategies? But they need not be: for codes to be sparse in general, it could require a few specific neurons to be highly active.