Is the idea that neurons perform ‘computations’ in any way meaningful?

I wrote this up two months ago and then forgot to post it. Since then, two different arguments about ‘computation’ have flared up on Twitter. For instance:

I figured that meant I should finally post it to help clarify some things. I will have more comments on the general question tomorrow.

Note that I am pasting twitter conversations into wordpress and hoping that it will convert them appropriately. If you read this via an RSS reader, it might be better to see the original page.

The word ‘computation’, when used to refer to neurons, has started to bother me. It often seems to be thrown out as a meaningless buzzword, as if using the word computation makes scientific research seem more technical and more interesting. Computation is interesting and important but most of the time it is used to mean ‘neurons do stuff’.

In The Future of the Brain (review here), Gary Marcus gives a nice encapsulation of what I am talking about:

“In my view progress has been stymied in part by an oft-repeated canard — that the brain is not “a computer” — and in part by too slavish a devotion to the one really good idea that neuroscience has had thus far, which is the idea that cascades of simple low level “feature detectors” tuned to perceptual properties, like difference in luminance and orientation, percolate upward in a hierarchy, toward progressively more abstract elements such as lines, letters, and faces.”

Which pretty much sums up how I feel: either brains aren’t computers, or they are computing stuff but let’s not really think about what we mean by computation too deeply, shall we?

So I asked about all this on twitter, then went to my Thanksgiving meal, forgot about it, and came back to a flood of discussion that I haven’t been able to digest until now:

(I will apologize to the participants for butchering this and reordering some things slightly for clarity. I hope I did not misrepresent anyone’s opinion.)

The question

Let’s first remember that the very term ‘computation’ is almost uselessly broad.

Neurons do compute stuff, but we don’t actually think about them like we do computers

Just because it ‘computes’, does that tell us anything worthwhile?

The idea helps distinguish them from properties of other cells

Perhaps we just mean a way of thinking about the problem

There are, after all, good examples in the literature of computation

We need to remember that there are plenty of journals that cover this: Neural Computation, Biological Cybernetics, and PLoS Computational Biology.

I have always had a soft spot for this paper (how do we explain what computations a neuron is performing in the standard framework used in neuroscience?).

What do we mean when we say it?

Let’s be rigorous here: what should we mean?

A billiard ball can compute. A series of billiard balls can compute even better. But does “intent” matter?

Computation=information transformation

Alright, let’s be pragmatic here.


Michael Hendricks hands me my next clickbait post on a silver platter.

Coming to a twitter/RSS feed near you in January 2015…


The bigger problem with throwing the word ‘computation’ around like margaritas at happy hour is it adds weight to


Cordelia Fine and Feminism in neuroscience

When I first started my PhD in neuroscience, a philosophically-inclined friend of mine started expounding on Feminist critiques of science. To most people, this would seem irrelevant to the science I was investigating: theory and modeling on a computer, before moving to hermaphroditic C. elegans. No females or males were being studied here! But the ideas are both insightful and important:

Emily Martin examines the metaphors used in science to support her claim that science reinforces socially constructed ideas about gender rather than objective views of nature. In her study of the fertilization process, for example, she asserts that classic metaphors of the strong dominant sperm racing to an idle egg are products of gendered stereotyping rather than an objective truth about human fertilization. The notion that women are passive and men are active is a socially constructed attribute of gender which, according to Martin, scientists have projected onto the events of fertilization, obscuring the fact that eggs play an active role.

Martin describes working with a team of sperm researchers at Johns Hopkins to illustrate how language in reproductive science adheres to social constructs of gender despite scientific evidence to the contrary: “even after having revealed…the egg to be a chemically active sperm catcher, even after discussing the egg’s role in tethering the sperm, the research team continued for another three years to describe the sperm’s role as actively penetrating the egg.”

Concepts are linked in our minds, consciously or not; the metaphors that we use matter (think of a Hopfield network). It would behoove all scientists to think deeply about Feminist critiques and their broader implications. The above example is canonical for a reason: a system of two interacting agents (sperm, egg) with one decision-maker (sperm) behaves very differently from one with two decision-makers (sperm and egg). But preconceived gender notions prevented us from noticing this simple fact!

Cordelia Fine is the most prominent scientist articulating these views in neuroscience today. This month, she has had two good interviews. If you take one big point away, it is that males and females may have different population means (though this interacts with social circumstances), but there is substantial population overlap. But humans like to see things in binary opposition so we either simply don’t recognize the amount of overlap that exists or blow up small differences.

One is with Uta Frith:

Cordelia: Happily, the perspectives are definitely not that polarized. One thing that’s worth stressing though is that criticisms of this area of research don’t stem from a belief that it’s intrinsically problematic to look at the effects of biological sex on the brain. But implicit assumptions about female/male differences in brain and behavior do influence research design and interpretation. They do this in ways that can give rise to misleading conclusions that additionally reinforce harmful gender stereotypes….

Cordelia: Yes, and long before the buzz about neuroplasticity, feminist neurobiologists were writing about this ‘entanglement’: the fact that the social phenomenon of gender (which systematically affects an individual’s psychological, physical, social and material experiences) is literally incorporated, shaping the brain and endocrine system. One of the recommendations of our article is for researchers to attempt to incorporate the principle of entanglement into their research models, including more and/or different categories of independent variables that include ways of capturing the role of the environment.

And with the Neurocritic at the PLoS Neuroscience community:

With regards to sample size, different implicit models of sex/gender and the brain will give rise to different intuitions or assumptions about what is an adequate sample size. According to implicit essentialist assumptions, there are distinctively different ‘male’ and ‘female’ brains. But non-human animal research has shown that biological sex interacts in complex ways with many different factors (hormones, stress, maternal care, and so on) to influence brain development. Because of the complexity and idiosyncrasy of these sex influences, this doesn’t give rise to distinctive female and male brains, but instead, heterogeneous mosaics of ‘female’ and ‘male’ (statistically defined) characteristics…

As for publication bias for positive findings, this has long been argued to be particularly acute when it comes to sex differences. It’s ubiquitous for the sex of participants to be collected and available, and the sexes may be routinely compared with only positive findings reported. As Anelis Kaiser and her colleagues have pointed out, this emphasis on differences over similarities is also institutionalized in databases that only allow searches for sex/gender differences.

How do you do your science?

I spend too much time thinking about the best way to do science. How should I structure my experiments if I want to maximize the likelihood that what I discover is both true and useful to other people? And how many different experiments do I need to do? Especially as a theoroexperimentalist.

The background philosophy of science chatter has picked up a bit over the last week, and I was spurred by something said on Noahpinion:

I don’t see why we should insist that any theory be testable. After all, most of the things people are doing in math departments aren’t testable, and no one complains about those, do they? I don’t see why it should matter if people are doing math in a math department, a physics department, or an econ department.

Also (via Vince Buffalo)

As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired from ideas coming from ‘reality’, it is beset with very grave dangers. It becomes more and more purely aestheticizing, more and more purely l’art pour l’art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much ‘abstract’ inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque the danger signal is up. It would be easy to give examples, to trace specific evolutions into the baroque and the very high baroque, but this would be too technical. In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas. I am convinced that this is a necessary condition to conserve the freshness and the vitality of the subject, and that this will remain so in the future. ::: John von Neumann

Right before I left the Salk Institute, I was chatting with an older scientist and he said something along the following lines (paraphrasing):

The best science is done when you can identify a clear mechanism that you can test; [anonymized scientist] was known for having a lot of confirmatory evidence that pointed at some result, but nothing conclusive, no mechanism. Pretty much all of it ended up being wrong.

Basically, he was of the opinion that you can provide evidence of a direct mechanism, or you can provide evidence for a general idea that is consistent and points to that mechanism like so:

Experimental design: where should you collect your evidence? Each situation makes one of these arrows easier to collect than others.

But if you want to maximize your likelihood of making a lasting impact on knowledge, where do you want to place your bets? Can theories come before mechanism?

I don’t know.


Canonical circuits in neuroscience

Gary Marcus, Adam Marblestone, and Thomas Dean have a nice perspective piece in Science this week on the atoms of neural computation (gated):

One hypothesis is that cortical neurons form a single, massively repeated “canonical” circuit, characterized as a kind of a “nonlinear spatiotemporal filter with adaptive properties”. In this classic view, it was “assumed that these…properties are identical for all neocortical areas.” Nearly four decades later, there is still no consensus about whether such a canonical circuit exists, either in terms of its anatomical basis or its function. Likewise, there is little evidence that such uniform architectures can capture the diversity of cortical function in simple mammals, let alone characteristically human processes such as language and abstract thinking…

What would it mean for the cortex to be diverse rather than uniform? One possibility is that neuroscience’s quarry should be not a single canonical circuit, but a broad array of reusable computational primitives—elementary units of processing akin to sets of basic instructions in a microprocessor—perhaps wired together in parallel, as in the reconfigurable integrated circuit type known as the field-programmable gate array.

Candidate computational primitives might include circuits for shifting the focus of attention, for encoding and manipulating sequences, and for normalizing the ratio between the activity of an individual neuron and a set of neurons. These might also include circuits for switching or gating information flow between different parts of cortex, and for working memory storage, decision-making, storage and transformation of information via population coding and the manipulation and encoding of variables, alongside machinery for hierarchical pattern recognition…

And so on. People have long proposed that ‘all cortex is the same’ or some such rubbish, all of it made of cortical columns that are interchangeable from one bit of tissue to another. I’m not sure how many people really believe that, but you see the statement a lot (and it is a big reason people studying visual cortex claim they’re just interested in ‘cortex’).
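One of the candidate primitives named in the quote above, normalization, is simple enough to write down. Here is a minimal toy sketch of divisive normalization; the function and its parameters are my own illustration, in the spirit of (but not taken from) the canonical-computation literature:

```python
import numpy as np

def divisive_normalization(drive, pool, sigma=1.0):
    """Divide a neuron's input drive by the summed activity of a
    normalization pool (plus a constant sigma), so the response encodes
    relative rather than absolute activity."""
    return np.asarray(drive, float) / (sigma + np.sum(pool))

# The same drive yields a smaller response when the pool is busier:
quiet = divisive_normalization(10.0, pool=[1.0, 1.0])
busy = divisive_normalization(10.0, pool=[20.0, 20.0])
print(quiet > busy)   # True: contextual gain control
```

The point of the motif is contextual gain control: a neuron's output is scaled by what its neighbors are doing.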

Of course, on a smaller scale there has been a long interest in more primitive ‘microcircuits’. A few examples:

The Reichardt (motion) detector:

Reichardt detector

There is strong experimental evidence for this circuit in Drosophila (the fly), and it is a repeated motif across visual space.

(see source for review)
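To make the motif concrete, here is a toy simulation of the correlate-and-delay scheme: a bright bar sweeps across two photoreceptors, and the sign of the detector's summed output reports the direction. The Gaussian stimulus and discrete delay are my own simplifying assumptions, not the fly's actual filters:

```python
import numpy as np

def reichardt(left, right, delay):
    """Each receptor's signal is delayed and multiplied with its
    neighbor's undelayed signal; subtracting the two mirror-image
    half-detectors gives a signed motion output."""
    return np.roll(left, delay) * right - np.roll(right, delay) * left

# A bright bar sweeping rightward hits the left receptor first
t = np.arange(200)
bump = np.exp(-0.5 * ((t - 100.0) / 5.0) ** 2)   # Gaussian blip of light
left, right = bump, np.roll(bump, 10)            # right sees it 10 steps later

rightward = reichardt(left, right, delay=10).sum()
leftward = reichardt(right, left, delay=10).sum()
print(rightward > 0, leftward < 0)   # sign of the output reports direction
```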

Inhibitory circuit motifs:


(see source)

Circuit motif for flexible categorization:

flexible categorization motif

(see source)

Since we know all the circuitry of the worm C. elegans, you can look at which motifs are overrepresented:

c elegans 4 neuron circuit motifs

(see source)

Qian, J., Hintze, A., & Adami, C. (2011). Colored Motifs Reveal Computational Building Blocks in the C. elegans Brain PLoS ONE, 6 (3) DOI: 10.1371/journal.pone.0017013

Borst, A. (2007). Correlation versus gradient type motion detectors: the pros and cons Philosophical Transactions of the Royal Society B: Biological Sciences, 362 (1479), 369-374 DOI: 10.1098/rstb.2006.1964

Pfeffer, C. (2014). Inhibitory Neurons: Vip Cells Hit the Brake on Inhibition Current Biology, 24 (1) DOI: 10.1016/j.cub.2013.11.001

Mysore, S., & Knudsen, E. (2012). Reciprocal Inhibition of Inhibition: A Circuit Motif for Flexible Categorization in Stimulus Selection Neuron, 73 (1), 193-205 DOI: 10.1016/j.neuron.2011.10.037

Marcus, G., Marblestone, A., & Dean, T. (2014). The atoms of neural computation Science, 346 (6209), 551-552 DOI: 10.1126/science.1261661

The beauty of brain science


Photo by Jason Snyder

There have recently been a few articles on a “theory of the brain”. Gary Marcus started us off with an editorial in the NYT concerning the Blue Brain Project:

Biologists — neuroscientists included — can’t hope for that kind of theory. Biology isn’t elegant the way physics appears to be. The living world is bursting with variety and unpredictable complexity, because biology is the product of historical accidents, with species solving problems based on happenstance that leads them down one evolutionary road rather than another.

Vaughan Bell had a good commentary:

This reflects a common belief in cognitive science that there is a ‘missing law’ to be discovered that will tell us how mind and brain are linked – but it is quite possible there just isn’t one to be discovered.

And around the same time, Neuroskeptic reviewed a paper on a similar topic, asking how we reconcile single-neuron views of information transfer with network (oscillatory) views.

I take issue with the idea that neuroscientists can’t hope for beautiful theories of the brain. Just look at that picture of the hippocampus above! Does this look like a disheveled, random assortment of neurons to you? The brain is just bursting with structure – but the tools and investigations into that structure are too young to know everything about it. So far.

I wrote a piece on Medium (because it formats purty pictures well) on the beauty of the brain, and what a ‘theory of the brain’ would mean:

At first glance, the brain is a mess. More like a tangled ball of yarn than a finely woven tapestry, every combination of neuron-to-neuron is in there, somewhere. Yet look a little closer and this complex structure devolves into very clear regularity. I could take you on a tour of the waves of Purkinje cells, straight-backed like military men, reaching their arms out to passing fibers shooting up from a distant province. I could show you the shapes of the hippocampus where memories are created, messages washing down step by step. I could show you the round columns of barrel cortex, clear to your eye, that precisely mirror the pattern of whiskers that eventually stimulate them. There is so much visible structure in here that we’re still attempting to unlock.

The points I was trying to make are:

  1. Brain science is super young! There’s still tons to know
  2. We actually do have some pretty good candidates for theories of the brain, though the list is far from complete
  3. One key to creating any theory is to understand the boundary conditions, ie the physical constants and constraints on the system. This is as true in Physics as it is in Biology, and we’re very far from understanding them (note also: this is a big problem with Blue Brain – it’s just an epileptic cortical column with no inputs!)
  4. A ‘theory of the brain’ will ultimately be meaningless, or pointless. It won’t tell us what we want to know; rather, we will need multiple overlapping theories for them to have any use.

Monday open thread: Are these the equations of the brain?

Biologists — neuroscientists included — can’t hope for that kind of theory. Biology isn’t elegant the way physics appears to be. The living world is bursting with variety and unpredictable complexity, because biology is the product of historical accidents, with species solving problems based on happenstance that leads them down one evolutionary road rather than another. No overarching theory of neuroscience could predict, for example, that the cerebellum (which is involved in timing and motor control) would have vastly more neurons than the prefrontal cortex (the part of the brain most associated with our advanced intelligence).

Gary Marcus wants to know: what does a theory of the brain look like? The final sentence is a bit troubling – the best predictor of brain size across animals is body mass; given that this suggests motor control is really important, why would it be surprising that the cerebellum would have more neurons than PFC? So I thought I’d try to compile the current “theories of the brain”, or something like it. I know this is very incomplete so please fill in what I’m missing.

1. Sensory neurons maximize information


There are a lot of ways that the neurons that sense the external world could be responding to it. An influential theory from Horace Barlow is that neurons are trying to represent the world as well as it is physically possible to do. In mathematical terms, they want to maximize the information that they transmit about whatever they are sensing. In successive stages of the nervous system, this happens through decorrelation: neurons at each stage of processing become less correlated with the other neurons at that stage, shedding redundancy.

What this also suggests is that the nervous system needs to know the statistics of the natural world, ie the boundary conditions. In fact, primary sensory neurons do tend to act as if they are maximizing their information about the world, and the theory has been pretty successful at describing the sensory nervous system.
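As a toy illustration of the decorrelation idea, here is a linear whitening transform applied to correlated inputs. The Gaussian inputs and linear "neurons" are simplifying assumptions of mine, not a model of real receptors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated "sensory" inputs: two receptors viewing overlapping patches
# of a scene carry redundant signals.
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)

# Decorrelation ("whitening"): rotate into the eigenbasis of the measured
# covariance and rescale, so each output channel carries non-redundant
# information -- one reading of Barlow's efficient-coding idea.
evals, evecs = np.linalg.eigh(np.cov(x.T))
W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
y = x @ W.T

print(np.round(np.cov(y.T), 2))   # ~identity matrix: outputs decorrelated
```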

2. Value learning happens through reinforcement (Temporal difference learning)


We all know the story of Pavlov’s dog: a man rings a bell every time he gives a dog some food and pretty soon ringing the bell, even in the absence of food, will get the dog salivating. In 1972, Rescorla and Wagner decided to write this down in mathematical form. In a slightly different form, the equation says that you learn how valuable an action or an object is by updating your guess a little higher when it was better than expected, or a little lower when it was worse than expected. This model of behavior has a very clear implementation in the brain – have some set of neurons that are only active when there is an unpredictably high or low reward. And in a structure called the basal ganglia, this is exactly what you see! There are collections of dopamine neurons that send a reinforcing signal that is proportional to how much better or worse something was than expected. And these dopamine neurons reinforce the value by changing the activity of the neurons they talk to. Temporal-difference learning is another pretty successful theory.
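The update itself is one line. A minimal sketch of the Rescorla-Wagner delta rule (the constants here are arbitrary illustrations):

```python
# Rescorla-Wagner-style delta rule: nudge the value estimate by a fraction
# of the prediction error (reward minus expectation). The prediction error
# plays the role of the dopamine signal described above.
def update(value, reward, learning_rate=0.1):
    prediction_error = reward - value
    return value + learning_rate * prediction_error

value = 0.0
for trial in range(100):          # the bell is always followed by food
    value = update(value, reward=1.0)
print(round(value, 3))            # estimate has converged near 1.0
```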

3. Predictive coding and the Bayesian Brain

The world is a tough place, full of constant stimulation but a whole lot of useless noise. If the brain had to constantly signal every little thing, you would be immensely tired. After all, the brain already uses about 20% of your calories, and that is while being immensely energy efficient. In order to save on internal electricity, it often responds to changes in the world rather than the exact details.

This is one way of implementing another popular theory: the Bayesian Brain. The Bayesian Brain hypothesis proposes that the nervous system is implementing Bayes Theorem in order to optimally learn about statistical signals of the environment (this is very related to theory #1). It can even explain many optical illusions!
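For the Gaussian case, the Bayesian combination of a prior with a noisy measurement has a simple closed form, which is a useful toy to keep in mind (the numbers below are arbitrary illustrations):

```python
# Gaussian Bayes: the posterior mean is a reliability-weighted average of
# the prior mean and the measurement; the posterior variance is smaller
# than either alone.
def combine(prior_mu, prior_var, obs_mu, obs_var):
    w = obs_var / (prior_var + obs_var)          # weight given to the prior
    post_mu = w * prior_mu + (1.0 - w) * obs_mu
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mu, post_var

# A noisy measurement (variance 4) gets pulled toward a tighter prior
mu, var = combine(prior_mu=0.0, prior_var=1.0, obs_mu=2.0, obs_var=4.0)
print(round(mu, 3), round(var, 3))   # 0.4 0.8 -- biased toward the prior
```

This prior-pulled bias is exactly the mechanism behind many of the optical illusions mentioned above.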

4. Association happens through Hebbian plasticity


The paragraph in Hebb’s book proposing his rule has been called “the most cited and least read” paragraph in neuroscience, which is probably true (any contenders? Laughlin, maybe?). Hebb’s rule is summed up as, “those that fire together, wire together”. The biological implementation is through spike-timing dependent plasticity (STDP).
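A bare-bones rate-based Hebbian sketch, with toy statistics of my own invention: an output neuron that follows one of its two inputs ends up strongly wired to that input and not the other.

```python
import numpy as np

# Hebbian rule: the weight between two neurons grows in proportion to
# their coincident activity ("fire together, wire together").
def hebb_step(W, pre, post, lr=0.01):
    return W + lr * np.outer(post, pre)

rng = np.random.default_rng(1)
W = np.zeros((1, 2))
for _ in range(1000):
    shared = rng.normal()
    pre = np.array([shared, rng.normal()])  # input 0 drives the output
    post = np.array([pre[0]])               # output follows input 0
    W = hebb_step(W, pre, post)

print(W.round(1))   # weight from input 0 grows; input 1 stays near zero
```

(Note this plain rule grows without bound; real proposals add saturation or normalization, e.g. Oja's rule.)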

5. Decisions are made through accumulation of evidence


When forced to make a quick decision, how do you decide to decide? What’s the best way to combine the evidence that you have and determine that you have enough? The optimal decision-making rule is called evidence accumulation, simply enough, and can be well-described by a drift-diffusion model. In simpler terms, evidence slowly accumulates until it hits some sort of ‘threshold’: enough evidence is in, it’s time to make a decision! This type of rule does a really good job of describing human decisions in a wide range of contexts. Even better, there appears to be just such a signal in various areas of the brain, the most well-studied being area LIP.
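A minimal drift-diffusion simulation; all the constants here are illustrative, not fitted to any data:

```python
import random

# Drift-diffusion sketch: noisy evidence accumulates until it crosses a
# decision threshold. The sign of the crossed bound gives the choice; the
# number of steps taken gives the reaction time.
def decide(drift=0.1, noise=1.0, threshold=10.0, max_steps=100_000):
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold and steps < max_steps:
        evidence += drift + random.gauss(0.0, noise)
        steps += 1
    choice = "A" if evidence > 0 else "B"
    return choice, steps

random.seed(0)
choices = [decide()[0] for _ in range(100)]
print(choices.count("A"))   # positive drift -> mostly choice "A"
```

Weaker evidence (smaller drift) means slower, more error-prone decisions, which is exactly the speed-accuracy pattern the model captures in humans.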

6. Hodgkin-Huxley neurons

What would a list of the theories of the brain be without the Hodgkin-Huxley model?  The Nobel Prize-winning work from the 50s that – when extended – almost perfectly describes the ionic mechanisms underlying spiking? That predicted the role of ion channels before we knew about ion channels?
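For completeness, here is a bare-bones Hodgkin-Huxley integration with the textbook squid-axon parameters, using crude forward-Euler stepping (fine at this step size); a sustained current step elicits repetitive spiking:

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (squid giant axon convention)
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3       # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4             # mV

def rates(V):
    """Voltage-dependent opening/closing rates for the m, h, n gates."""
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

dt, T, I = 0.01, 50.0, 10.0                  # ms, ms, uA/cm^2 step current
V, m, h, n = -65.0, 0.05, 0.6, 0.32          # near-resting initial state
spikes, above = 0, False
for _ in range(int(T / dt)):
    am, bm, ah, bh, an, bn = rates(V)
    m += dt * (am * (1.0 - m) - bm * m)
    h += dt * (ah * (1.0 - h) - bh * h)
    n += dt * (an * (1.0 - n) - bn * n)
    INa = gNa * m**3 * h * (V - ENa)         # sodium current
    IK = gK * n**4 * (V - EK)                # potassium current
    IL = gL * (V - EL)                       # leak current
    V += dt * (I - INa - IK - IL) / C
    if V > 0.0 and not above:                # count upward 0 mV crossings
        spikes += 1
    above = V > 0.0
print(spikes)   # the current step drives repeated spiking
```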

What else?

Like I said, this is an incredibly incomplete list. What else should we add? Neural sparsity? Divisive normalization?

I’ll admit I was a bit inspired by the equations that explain the world, so I thought I’d try to compile the Equations of the Brain:

equations of the brain2


I thought about using the backpropagation version of TD-learning but decided against it…


Watch ALL the neurons in a brain: Ahrens and Freeman continue their reign of terror

Okay, not quite all of them. But it looks like Misha Ahrens and Jeremy Freeman are going to continue their reign of terror, imaging the whole zebrafish brain as if it’s no big deal. Yeah they’ve got almost every neuron of a vertebrate, so what?

Besides figuring out that not shooting light at the eyes might be a good idea (I think it may have been a little more complicated than that…), they released software for analysis of these kinds of big data sets. Beyond Ahrens and Freeman, I know of at least two other labs using the same type of microscope to image the whole fly brain and can count five labs doing the same in worms. That is probably a huge undercount, and just the tip of the coming tidal wave of massively large neural data sets. This is so important that DARPA is throwing huge amounts of money at it (or at least wants to).

Their software, called thunder, is free and open-source, with a really slick website and a great tutorial for analyzing data and making sweet figures. This kind of openness is really Science Done Right.

Seriously, look at these bad boys:

Mice running make mouse neurons go fast.

Neural activity floats around in their own not-so-metaphorical dimensions.

The whole brain of the zebrafish is tuned for direction: neurons are tuned for motion, with different colors representing different motions.



Freeman, J., Vladimirov, N., Kawashima, T., Mu, Y., Sofroniew, N., Bennett, D., Rosen, J., Yang, C., Looger, L., & Ahrens, M. (2014). Mapping brain activity at scale with cluster computing Nature Methods DOI: 10.1038/nmeth.3041

Vladimirov, N., Mu, Y., Kawashima, T., Bennett, D., Yang, C., Looger, L., Keller, P., Freeman, J., & Ahrens, M. (2014). Light-sheet functional imaging in fictively behaving zebrafish Nature Methods DOI: 10.1038/nmeth.3040

As a butterfly flaps its wings in Tokyo, a neuron in your head veers slightly heavenward…

When you look at the edge of a table, there is a neuron in your head that goes from silence to pop pop pop. As you extend your arm, a nerve commanding the muscle does the same thing. Your retina has neurons whose firing rate goes up or down depending on whether it detects a light spot or a dark spot. The traditional view of the nervous system descends from experiments that have supported this view of neural activity. And perhaps it is true at the outer edges of the nervous system, near the sensory inputs and the motor outputs. But things get murkier once you get inside.

Historically, people began thinking about the brain in terms of how single neurons represent the physical world. The framework they settled on had neurons responding to a specific set of things out in the world, with the activity of those neurons increasing when they saw those specific things and decreasing when they saw their opposite. As time flowed by, this neural picture became jumbled up with questions about whether overall activity level or specific timing of an individual spike was what was important.

When it comes to multiple neurons, a similar view has generally prevailed: activity levels go up or down. Perhaps each neuron has some (noisy) preference for something in the world; now just think of the population as the conjunction of each neuron’s activity. Then the combination of all of the neurons is less noisy than any individual. But still: it’s all about activity going up or down. Our current generation of tools for manipulating neural activity unconsciously echoes this idea of how the nervous system functions. Optogenetics cranks the activity of cells – though often specific subpopulations of cells – up or down in aggregate.

An alternate view, which has been pushed primarily by Krishna Shenoy and Mark Churchland, takes a dynamic perspective of neural activity, and I think comes from taking a premotor view of the nervous system. Generally, nervous activity is designed to control our physical behavior: moving, shouting, breathing, looking, remaining silent. But that is a lot to have to control, and selection of the correct set of behaviors has to take a huge number of factors into account and has a lot to prepare for. What have I seen? How much do I like that? What am I afraid of? How hungry am I? This means that premotor cortical activity is probably representing many things simultaneously in order to choose among them.

The problem can be approached by looking at the population of activity and asking how many different things it could represent, without necessarily knowing what those are. Perhaps the population is considering six different things at the same time (a noted mark of genius)! Now that’s a slightly different perspective: it’s not about the up or down of overall activity, but how that activity flows through possibilities on the level of the whole population.

These streams of possible action must converge into a river somewhere. There are many possible options for how this could happen. They could be lying in wait, just below threshold, building up until they overcome the dam holding their behavior at bay. They could also be gated off, allowed to turn on when some other part of the system decides to allow movement.

But when we stop and consider the dynamics required in movement, in behavior, another possibility emerges. Perhaps there is just a dynamical system churning away, evolving to produce some reaching or jumping. Then these streams of preparatory activity could be pushing the state of the dynamical system in one direction or another to guide its later evolution. Its movement, its decision.

Churchland and Shenoy have worked on providing evidence for this happening in motor cortex as well as prefrontal cortex: neurons there may be tuned to move their activity in some large space, where only the joint activity of all the neurons is meaningful. In this context, we cannot think usefully about the individual neuron but instead must consider the whole population simultaneously. It is not the cog that matters, but the machine.
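A caricature of this dynamical-systems view, far simpler than anything in the papers below: a fixed linear rule unfolds the population state over time, so "preparation" amounts to choosing the initial condition, and the same machine then produces different movements:

```python
import numpy as np

# A fixed linear dynamical rule: x(t+1) = A x(t), where A is a slowly
# decaying rotation. The 2-D "population state" stands in for the joint
# activity of many neurons.
theta = 0.2
A = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def unfold(x0, steps=50):
    """Run the dynamics forward from an initial (preparatory) state."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(A @ xs[-1])
    return np.array(xs)

# Two different preparatory states...
reach_a = unfold([1.0, 0.0])
reach_b = unfold([0.0, 1.0])
# ...produce two different trajectories under the very same machine.
print(reach_a.shape, np.allclose(reach_a, reach_b))
```

Nothing about any single coordinate of the state is meaningful on its own; only the joint trajectory is, which is the "machine, not cog" point.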


Kaufman MT, Churchland MM, Ryu SI, & Shenoy KV (2014). Cortical activity in the null space: permitting preparation without movement. Nature neuroscience, 17 (3), 440-8 PMID: 24487233

Mante V, Sussillo D, Shenoy KV, & Newsome WT (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503 (7474), 78-84 PMID: 24201281

Churchland, M., Cunningham, J., Kaufman, M., Foster, J., Nuyujukian, P., Ryu, S., & Shenoy, K. (2012). Neural population dynamics during reaching Nature DOI: 10.1038/nature11129

Shenoy KV, Sahani M, & Churchland MM (2013). Cortical control of arm movements: a dynamical systems perspective. Annual review of neuroscience, 36, 337-59 PMID: 23725001

Ames KC, Ryu SI, & Shenoy KV (2014). Neural dynamics of reaching following incorrect or absent motor preparation. Neuron, 81 (2), 438-51 PMID: 24462104

Churchland, M., Cunningham, J., Kaufman, M., Ryu, S., & Shenoy, K. (2010). Cortical Preparatory Activity: Representation of Movement or First Cog in a Dynamical Machine? Neuron, 68 (3), 387-400 DOI: 10.1016/j.neuron.2010.09.015

“The members of the HBP are saddened by the open letter posted on” (updated x2)

Truly, the Human Brain Project has become sad 😦 Here is their response to the neurofuture petition that I talked about on Monday (in PDF only, for some reason):

The members of the HBP are saddened by the open letter posted on on 7 July 2014, as we feel that it divides rather than unifies our efforts to understand the brain. However, we recognize that the signatories have important concerns about the project…

What are the concerns of the open letter? The open letter expresses the concern that these goals are so unrealistic that they will damage all of neuroscience, and states that not enough is known to take on such a challenge. We share this uncertainty. However we contend that no one really knows how much neuroscience data is currently available because it has never been organized, and that no-one even knows how much data is needed to begin such an endeavour. Reconstructing and simulating the human brain is a vision, a target; the benefits will come from the technology needed to get there. That technology, developed by the HBP, will benefit all of neuroscience as well as related fields. Many other areas of science have demonstrated that simulation can be a tool to create new knowledge, not just to confirm existing results.

Take that for what you will; it’s a fairly corporate/academic response. Meanwhile, the neurofuture petition has a comments section that is well worth reading. Here are some good ones:

The first is scientific: the leadership of the Human Brain Project has no experience in creating mathematical formalisms for representation of dynamical systems on multiple temporal and spatial scales. Without such formalisms, it is very unlikely that the complexity of neural models will be manageable, and the existing ad-hoc methods for modeling will remain firmly in place. The project statement on "Mathematical and Theoretical Foundations of Brain Research" claims that the theoretical research will magically come from "outside" the Human Brain Project but this appears to be mostly magical thinking. The current organization structure makes it obvious that independent thought from outside cannot possibly penetrate the upper echelons of power of the Human Brain Project. Which brings me to my second point.

The second issue is organizational: the Blue Brain Project has earned a reputation of secrecy and extremely hierarchical authoritarian approach to scientific management, which suggests that rather than the stated goal of unbiased and objective collection of data and tools, the project is likely to result in promoting the agendas and pet project of a small group of people at the top of the hierarchy. There simply is no evidence for an open-minded and exploratory culture in the existing Blue Brain Project, and there is no chance for such culture to emerge without a complete remake of the organization structure, from pyramid to a flat decentralized structure. Without a way to promote diversity in thinking, the Human Brain Project will mostly be about control and power, rather than any meaningful scientific goals.

July 7, 2014, 2:07 p.m.
Ivan Raikov. Okinawa Institute of Science and Technology. Japan

Research on the human brain of this magnitude should be inclusive of all approaches and technologies that have been providing advancements on understanding the brain. The current suggested approach of a bottom up simulation is akin to trying to understand the laws of gases by simulating the collisions of an Avogadro number (6.022×10²³) of particles. This is unnecessary and also fruitless: there was thermodynamics before statistical mechanics and even the latter does not derive the laws of gases from simulations; it uses first principles, validated against the phenomenological theory of thermodynamics. For the brain, the approach should also be two-pronged: a top-down (from function and behavior to structure)-- "the thermodynamics" part, and a bottom up (neuronal level interactions)-- the "statistical mechanics" part. Simulations aid both directions, but they are only useful within the context of experimental evidence.

July 7, 2014, 9:40 p.m.
Zoltan Toroczkai. University of Notre Dame. United States

Personally, I considered applying to one of the partner-projects, but found the goal to be unclear and the decision process to be absolutely not independent, I therefore considered this would be a pure waste of time. If Europe wants to move on in this area of research, then they do not only need to focus on learning more about the human brain, but also using it to do something useful with. Generally, a lot of this work is done at a very low level, while higher level understanding may be sufficient to do other things with, e.g. neural networks have been around for ages, while we actually don't properly understand how they work in the brain. There are however other models of the how the human brain works that are at a higher level and seem to work pretty well. The project should cover more work that is around using the outcomes of or deals with creating a human brain without making a one-by-one copy of the brain itself, while still offering the same functionality.

July 8, 2014, 9:23 a.m.
Wim Melis. University of Greenwich. United Kingdom

It is surprising that in a project whose goals are to simulate the human brain, a developmental part is totally missing. Thinking that the long childhood observed in the human species has nothing to do with the cognitive success of this species is neglecting one of the main characteristic of the studied species and of its "educated" brain. This lack of developmental studies, both in humans and animals, reveals a major scientific flaw. It misses the opportunity to understand the organizing principles of the human brain and its specificities compared to other animals, and to develop new learning algorithms based on the understanding of the mechanisms used by the fantastic learner who is the human child. 
Furthermore given the clinical and societal issues pushed forward to justify HBP, it is a strategic mistake not to include developmental studies as numerous neurologic and psychiatric diseases have their origin during development (e.g. drug addiction, autism, schizophrenia, epilepsia), and consequences of preterm births (6 to 10% of births, 15 million babies each year in the world) and of other brain insults, neural and cognitive developmental deficits (global and specific), impact of low SES on cognitive development are concerning an important percentage of our fellow citizens (e.g. 20% of the young adults are described as non-efficient readers in national French evaluations!) preventing them to obtain a correct and stable work.  Without research on human and animal brain development, it is doubtful that solutions for these problems will be proposed whereas the economic impact in the EU (and elsewhere) is huge.
Finally giving up on data acquisition is a huge mistake when the recent development of non-invasive brain imaging techniques just unlocks the access to the child brain revealing unexpected results (e.g frontal activation in infants, no specific activation to faces in the fusiform gyrus until late childhood), pointing to our ignorance of even the simplest principles which might explain how an assembly of cells can give rise to thoughts.

July 9, 2014, 7:57 a.m.
Ghislaine Dehaene-Lambertz. INSERM. France

Update: via Prerana Shresthra, there are a couple of other good explanations of the problems some have with the Human Brain Project on Quora:

One of the big dreams of my life are to eventually simulate brains, so a priori I love the idea. Here I will just list some of my objections. I do not speak for anyone else and do not claim full knowledge about the HBP. But I have been following the publicly visible parts for a while. I believe that it is premature because

1) We lack the knowledge needed to build meaningful bottom up models and I will just give a few examples:
a) We know something about a small number of synapses but not how they interact
b) We know something about a small number of cell types, but not about the full multimodal statistics (genes, connectivity, physiology)
c) We know something about a small number of cell-cell connections, but a tiny fraction of all the existing ones
d) We know a few things about how a neuron’s dynamics relates to its inputs, but only for a tiny number of cells and conditions.
e) We know a few aspects of a few neurons that change over time, but again for a tiny number of cells and conditions.
The degree of the lack of knowledge is mindboggling. There are far more neurons in the brain than people on the planet. Any planned bottom-up simulations of the human brain are akin to simulating the entire human society on the planet based on say a random 100,000 word documents sampled from the internet. For simulations, the output is only as good as the knowledge about the system that you put in. Hence, large scale simulations are bound to lead to poor results. In my judgment, the data will not become available in sufficient amounts before the termination of the HBP.

…Understanding the brain is different than going to the moon. We knew where the moon was. We do not know how a simulation of the brain should look like. Any simulation techniques developed at the moment may end up being useless for the kinds of models of the brain that we will eventually need.

(and there’s more!) Go visit Quora to see the rest.

Update the second: Neuroskeptic has a good interview with Zach Mainen, the man responsible for organizing the Neurofuture petition.

Monday open thread: Rebellion against the Human Brain Project (updated x3, now with more gossip)

FENS, the major European neuroscience meeting, is currently under way. That makes today a good time to announce a European rebellion against the Human Brain Project (HBP). The HBP is something like a European equivalent of the BRAIN Initiative that has people in such a fuss over here in the US, except that it has been under way for a year and has a narrower focus.

It has long been my impression that the HBP has been something of a “give Henry Markram money” project, and the twitter feed kind of reinforces that view. Markram, for those not aware, runs the lab that is working on the Blue Brain Project, an attempt to simulate the human brain – or at least, one cortical column’s worth of it (pieces of the brain not being simulated, to my knowledge: any sensory input, glia, blood flow, the extracellular matrix, and more). The core of the HBP is in a similar vein: informatics, computation, that sort of thing. I’m about as sympathetic as you could get to diverting funding to computation and theory, but I’ve been pretty flabbergasted by some of the overselling that they’ve done to get it.

Look at the response the proposal got a couple years ago in a Nature commentary:

As the response at the meeting made clear, however, there is deep unease about Markram’s vision. Many neuroscientists think it is ill-conceived, not least because Markram’s idiosyncratic approach to brain simulation strikes them as grotesquely cumbersome and over-detailed. They see the HBP as overhyped, thanks to breathless media reports about what it will accomplish. And they’re not at all sure that they can trust Markram to run a project that is truly open to other ideas.

However, the HBP has been controversial and divisive within the European neuroscience community from the beginning. Many laboratories refused to join the project when it was first submitted because of its focus on an overly narrow approach, leading to a significant risk that it would fail to meet its goals. Further attrition of members during the ramp-up phase added to this narrowing. In June, a Framework Proposal Agreement (FPA) for the second round of funding for the HBP was submitted. This, unfortunately, reflected an even further narrowing of goals and funding allocation, including the removal of an entire neuroscience subproject and the consequent deletion of 18 additional laboratories, as well as further withdrawals and the resignation of one member of the internal scientific advisory board… In this context, we wish to express the view that the HBP is not on course and that the European Commission must take a very careful look at both the science and the management of the HBP before it is renewed. We strongly question whether the goals and implementation of the HBP are adequate to form the nucleus of the collaborative effort in Europe that will further our understanding of the brain.

The letter is fronted by Zach Mainen and seems to be signed by the entirety of the European neuroscience community not named Henry Markram (I kid, I kid).

Anyone know which subproject was deleted, and which laboratories were affected?

Anyone have a defense of the HBP?

Given the BRAIN Initiative, is there actually a ‘moon shot’ that could be achieved in neuroscience?

Frankly, if I wanted to try simulating the complete nervous system of an animal, I’d start with something smaller and more manageable.
Update: some other coverage. Hilarious quote:

Richard Frackowiak, director of clinical neuroscience at the University Hospital of Lausanne, and co-leader of a strand of the Human Brain Project focusing on “future medicine”, said that many of the complaints were “irrational sniping” from scientists who were ill-informed, or wanted the funds to pursue their own research agendas.

Update 2: I asked one of the signatories about what had spurred the letter, especially given that some of them had been big supporters of the project in the past. The signatory said, “Some were optimistic that any big neuroscience project was better than none; recent changes that make that view less tenable”…I think they were referring to:

Central to the latest controversy are recent changes made by Henry Markram, head of the Human Brain Project at the Swiss Federal Institute for Technology in Lausanne. The changes sidelined cognitive scientists who study high-level brain functions, such as thought and behaviour. Without them, the brain simulation will be built from the bottom up, drawing on more fundamental science, such as studies of individual neurons.

Update 3: Science Magazine has some good quotes about this whole kerfuffle:

“The notion that we know enough about the brain to know what we should simulate is crazy, quite frankly,” Dayan says…

The nixed subproject, called Cognitive Architectures and headed by French neuroscientist Stanislas Dehaene, represented all the neuroscience in Europe that isn’t working on a molecular or synaptic level, says Zachary Mainen of the Champalimaud Centre for the Unknown in Lisbon, one of the authors of the letter. HBP “is not a democracy, it’s Henry’s game, and you can either be convinced by his arguments or else you can leave,” Mainen says.

BAM! I love strong quotes. Indeed, talking to a few people who know more about this than I do, it sounds a lot like this is the after-effect of a power play by Markram. It seems that by jettisoning the cognitive science portion of the HBP, he was left in control of a more computationally focused project (probably the one he wanted most). Someone suggested that he may lose some of the funds, but he would retain full control of most of what remains.
Markram has a strong personality and people seem worried that he is wresting control of the project to focus on his particular research interests. Obviously, this isn’t sitting well with a lot of the European community…