Monday open thread: Are these the equations of the brain?

Biologists — neuroscientists included — can’t hope for that kind of theory. Biology isn’t elegant the way physics appears to be. The living world is bursting with variety and unpredictable complexity, because biology is the product of historical accidents, with species solving problems based on happenstance that leads them down one evolutionary road rather than another. No overarching theory of neuroscience could predict, for example, that the cerebellum (which is involved in timing and motor control) would have vastly more neurons than the prefrontal cortex (the part of the brain most associated with our advanced intelligence).

Gary Marcus wants to know: what does a theory of the brain look like? The final sentence is a bit troubling – the best predictor of brain size across animals is body mass; given that this suggests motor control is really important, why would it be surprising that the cerebellum has more neurons than PFC? So I thought I’d try to compile the current “theories of the brain”, or something like it. I know this is very incomplete, so please fill in what I’m missing.

1. Sensory neurons maximize information


There are a lot of ways that the neurons that sense the external world could be responding to it. An influential theory from Horace Barlow is that these neurons try to represent the world as efficiently as is physically possible. In mathematical terms, they maximize the information they transmit about whatever they are sensing. In successive stages of the nervous system, this happens through decorrelation: neurons at each stage of processing become less correlated with the others at that stage, so that each carries information the others do not.

What this also suggests is that the nervous system needs to know the statistics of the natural world, ie the boundary conditions. And in fact, primary sensory neurons do tend to act as if they are maximizing their information about the world; the theory has been pretty successful at describing the sensory nervous system.
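As a toy illustration of decorrelation (not any particular model from the literature; the signals and noise levels are made up), here is a sketch in which two simulated first-stage neurons carry redundant signals and a second stage transmits only the prediction error:

```python
import random

random.seed(0)

# Two first-stage "neurons" sense overlapping parts of the same stimulus,
# so their responses are highly correlated (redundant).
n = 10_000
stimulus = [random.gauss(0, 1) for _ in range(n)]
x = [s + random.gauss(0, 0.3) for s in stimulus]
y = [s + random.gauss(0, 0.3) for s in stimulus]

def covariance(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

def correlation(a, b):
    return covariance(a, b) / (covariance(a, a) * covariance(b, b)) ** 0.5

# A second-stage neuron can strip out the redundancy by transmitting only
# what y adds beyond what x already predicts (a prediction error).
beta = covariance(x, y) / covariance(x, x)
residual = [yi - beta * xi for xi, yi in zip(x, y)]

print(correlation(x, y))         # high (~0.9): redundant first-stage code
print(correlation(x, residual))  # ~0: the second stage is decorrelated
```

Each second-stage unit then carries information the first stage did not already transmit, which is the flavor of Barlow's redundancy-reduction idea.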

2. Value learning happens through reinforcement (Temporal difference learning)


We all know the story of Pavlov’s dog: a man rings a bell every time he gives a dog some food, and pretty soon ringing the bell, even in the absence of food, gets the dog salivating. In 1972, Rescorla and Wagner wrote this down in mathematical form. In a slightly different form, the equation says that you learn how valuable an action or an object is by updating your guess a little higher when the outcome was better than expected, and a little lower when it was worse than expected. This model of behavior has a very clear possible implementation in the brain – some set of neurons that are only active when a reward is unpredictably high or low. And in a structure called the basal ganglia, this is exactly what you see! There are collections of dopamine neurons that send a reinforcing signal proportional to how much better or worse something was than expected, and these dopamine neurons reinforce the value by changing the activity of the neurons they are talking to. Temporal-difference learning is another pretty successful theory.
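The Rescorla–Wagner-style update is simple enough to sketch in a few lines; the learning rate and reward values here are arbitrary choices for illustration:

```python
# Rescorla-Wagner-style value update: nudge the value estimate toward
# the outcome by a fraction (the learning rate) of the prediction error.
def update(value, reward, learning_rate=0.1):
    prediction_error = reward - value   # the "dopamine-like" signal
    return value + learning_rate * prediction_error

value = 0.0                             # the bell starts out worthless
for trial in range(100):
    value = update(value, reward=1.0)   # bell is always followed by food

print(round(value, 3))  # approaches 1.0: the bell now predicts the reward
```

The prediction error shrinks as the estimate improves, so learning naturally slows as the bell becomes a reliable predictor – the same qualitative behavior the dopamine neurons show.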

3. Predictive coding and the Bayesian Brain

The world is a tough place, full of constant stimulation but also a whole lot of useless noise. If the brain had to signal every little thing, you would be immensely tired. After all, the brain already uses about 20% of your calories, and that is despite being immensely energy efficient. In order to save on internal electricity, it often responds to changes in the world rather than to the exact details.

This is one way of implementing another popular theory: the Bayesian Brain. The Bayesian Brain hypothesis proposes that the nervous system implements Bayes’ theorem in order to optimally learn the statistical structure of the environment (this is closely related to theory #1). It can even explain many optical illusions!
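For Gaussian beliefs, Bayes’ theorem has a tidy closed form: the posterior mean is a precision-weighted average of the prior and the evidence, so the less noisy source of information gets more say. A minimal sketch (the numbers are chosen arbitrarily for illustration):

```python
# Bayesian combination of a Gaussian prior with a Gaussian observation.
# Precision = 1/variance; the posterior weights each source by precision.
def posterior(prior_mean, prior_var, obs_mean, obs_var):
    w = (1 / prior_var) / (1 / prior_var + 1 / obs_var)
    mean = w * prior_mean + (1 - w) * obs_mean
    var = 1 / (1 / prior_var + 1 / obs_var)
    return mean, var

# Prior belief says the feature is at 0; noisy sensory evidence says 2.
# With equal reliabilities, the belief lands exactly halfway between.
mean, var = posterior(prior_mean=0.0, prior_var=1.0, obs_mean=2.0, obs_var=1.0)
print(mean, var)  # 1.0 0.5 — between prior and evidence, and more certain
```

This precision-weighting is also the usual Bayesian story for illusions: when the evidence is ambiguous, the prior dominates and drags perception toward what is statistically typical.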

4. Association happens through Hebbian plasticity


The paragraph in the book proposing Hebb’s rule has been called “the most cited and least read” paragraph in neuroscience, which is probably true (any contenders? Laughlin, maybe?). Hebb’s rule is summed up as, “cells that fire together, wire together”. A proposed biological implementation is spike-timing-dependent plasticity (STDP).
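A minimal sketch of the basic Hebbian rule (the rate-based version, not STDP; the learning rate and spike pattern are made up for illustration):

```python
# Hebbian learning sketch: a weight grows only when the presynaptic and
# postsynaptic neurons are active together ("fire together, wire together").
def hebb(weight, pre, post, learning_rate=0.05):
    return weight + learning_rate * pre * post

w_paired = w_unpaired = 0.0
# (pre, post) activity on six hypothetical trials
spikes = [(1, 1), (1, 1), (0, 1), (1, 1), (1, 0), (1, 1)]
for pre, post in spikes:
    w_paired = hebb(w_paired, pre, post)          # correlated activity
    w_unpaired = hebb(w_unpaired, pre, 1 - post)  # anti-correlated activity

print(w_paired, w_unpaired)  # the paired synapse ends up much stronger
```

STDP refines this by making the sign of the change depend on spike timing: pre-before-post strengthens the synapse, post-before-pre weakens it.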

5. Decisions are made through accumulation of evidence


When forced to make a quick decision, how do you decide when to decide? What’s the best way to combine the evidence you have and determine that you have enough? The optimal decision-making rule is called evidence accumulation, simply enough, and can be well described by a drift-diffusion model. In simpler terms, evidence slowly accumulates until it hits some sort of ‘threshold’: once enough evidence is in, it’s time to make a decision! This type of rule does a really good job of describing human decisions in a wide range of contexts. Even better, there appears to be just such a signal in various areas of the brain, the most well studied being area LIP.
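A drift-diffusion decision can be simulated in a few lines; the drift, noise, and threshold values below are arbitrary illustrative choices, not fits to any dataset:

```python
import random

random.seed(1)

# Drift-diffusion sketch: noisy evidence accumulates until it crosses a
# threshold, at which point the decision is made. The drift favors "A",
# so "A" is the correct answer on every simulated trial.
def decide(drift=0.1, noise=1.0, threshold=10.0, dt=1.0):
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift * dt + random.gauss(0, noise) * dt ** 0.5
        t += 1
    return ("A" if evidence > 0 else "B"), t

trials = [decide() for _ in range(500)]
accuracy = sum(1 for choice, _ in trials if choice == "A") / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(accuracy, mean_rt)  # mostly "A", with variable "reaction times"
```

The same mechanism produces the classic speed-accuracy tradeoff: raise the threshold and decisions get slower but more accurate, lower it and they get faster but sloppier.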

6. Hodgkin-Huxley neurons

What would a list of the theories of the brain be without the Hodgkin-Huxley model?  The Nobel Prize-winning work from the 50s that – when extended – almost perfectly describes the ionic mechanisms underlying spiking? That predicted the role of ion channels before we knew about ion channels?
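For the curious, here is a bare-bones forward-Euler integration of the standard Hodgkin-Huxley squid-axon equations (textbook parameter values); a sketch for intuition, not a production-quality integrator:

```python
import math

# Standard HH gating-variable rate functions (voltage in mV, time in ms)
def alpha_n(v): return 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * math.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * math.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * math.exp(-(v + 65) / 20)
def beta_h(v):  return 1 / (1 + math.exp(-(v + 35) / 10))

C, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3  # capacitance, peak conductances
E_na, E_k, E_l = 50.0, -77.0, -54.387      # reversal potentials (mV)

def simulate(i_ext, t_max=50.0, dt=0.01):
    """Integrate the membrane equation under a constant injected current."""
    v = -65.0
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))  # gates start at steady state
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = g_na * m**3 * h * (v - E_na)    # sodium current
        i_k = g_k * n**4 * (v - E_k)           # potassium current
        i_l = g_l * (v - E_l)                  # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        trace.append(v)
    return trace

quiet = simulate(i_ext=0.0)    # no input: stays near rest (~-65 mV)
driven = simulate(i_ext=10.0)  # current step: repetitive spiking
print(max(quiet), max(driven))  # only the driven neuron spikes past 0 mV
```

Four coupled differential equations, and out come resting potentials, thresholds, spikes, and refractory periods – it is hard to think of a better return on mathematical investment anywhere in biology.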

What else?

Like I said, this is an incredibly incomplete list. What else should we add? Neural sparsity? Divisive normalization?

I’ll admit I was a bit inspired by the equations that explain the world, so I thought I’d try to compile the Equations of the Brain:

[Image: the equations of the brain]


I thought about using the backpropagation version of TD-learning but decided against it…


12 thoughts on “Monday open thread: Are these the equations of the brain?”

  1. I’ll have to read that Marcus piece. I agree that trying to find “laws” the way physics seems to have them may not be the best way forward for the brains & behavior set. The issue of unpredictable complexity doesn’t seem like a wall preventing us from forming a theory, just one that makes us take a different route around it. The use of equations, and especially complex-systems methods, seems like a good way forward to address this. As for his question, I think it’s a bad one. That’s like asking a physicist if F = ma could predict the formation of the Milky Way. Sure, if you had access to all the information.

    • Yeah, although we always have to worry about over-reliance on equations (I’m looking at you, economists!). But I think the biggest thing hanging over our heads is not understanding the boundary conditions, ie, what are the constraints? Natural scene statistics and…?

  2. Compact yet comprehensive.
    I would maybe add mean field theory as the most useful way of justifying the functional equations (1-4) from the mechanistic ones (5-6).

  3. Are there any theories out there that suggest that our brains evolved to maximize happiness or basically the “feel good” factor in the brain?

  4. Pingback: Revue des sciences août 2014 | Jean Zin

  5. I think this is kinda flawed.

    I mean, I do disagree with the opening quote. It’s true that biology is complex, for sometimes-inscrutable evolutionary reasons, but physics is too. Non-equilibrium thermodynamics, spin glasses and the like can get reasonably close to the messy data of neuroscience, while still being describable pretty succinctly with mathematics.

    The “equations that changed the world” are themselves very simplified: “relativity” doesn’t even include the more complete energy-momentum relation; the second law of thermodynamics doesn’t have any of the probabilistic subtleties; and the logistic map is not exactly an equation of chaos theory itself, it’s just a really useful object of study for that field.

    Usually Maxwell’s equations form the basis for these mathematical-beauty ideas; e.g. Kay described a metacircular evaluator as “the Maxwell’s equations of programming”. So if you want to replicate that for another field, it’s worth looking at those equations in particular in detail, I think.

    Importantly, they have a nontrivial conceptual background. Saying “the divergence of the electric field equals the charge density divided by the vacuum permittivity” won’t help anyone understand physical phenomena without a solid idea of electric fields, charge density, and so on. So if you want a neural equivalent, you should closely consider what concepts are in the background.

    Secondly, Maxwell’s equations are pretty complete. They’re fine at describing the electromagnetic fields of everything from RC circuits to magnetars to superconductors. Laws of immediate relevance to non-physicists, such as Kirchhoff’s, are derivable directly from them alone, though of course the derivation can take some work.

    As an example of something that I think would be equivalent to this power in a neuroscience context, we could demand that our equations allow a complete approximate description of the dynamics of central pattern generators. Looking around a bit, this has been partly done, e.g. with HH-type models coupled to various synaptic equations, and of course there’s lots of work like this on more general circuits.

    I think that, maybe, HH type equations plus some synaptic models could be something Maxwell-like for cellular dynamics. But how complete is that, really? What if the dynamics in a dendritic tree are important? You probably need to modify the equations to cover action potentials throughout space rather than just at an idealized point soma in order to model actually existing circuits (note: cable equation was added since I started writing this). And so on and so on. What about neuroendocrine action? How much does it take to accurately model a group of cells, really?

    There’s a question of how much you demand the equations cover. Maxwell’s equations don’t cover some quantum effects, and you can observe this in real life by measuring the magnetic moment of an electron or something like that. So maybe we can decide to skip some of this, but if we pick as a target “non-mutant, undamaged neurons” or something, we still have to cover quite a lot.

    Even there you have a lot of specialized stuff. Over the summer I got to work on auditory-system models, and there you have some pretty unique system equations, like the synapse model in (doi 10.1121/1.3238250). I don’t think that model (or generalizations thereof) is used in other neural modeling, though I could be wrong.

    Of course this is only one part of neuroscience, like electric fields are only part of what electrodynamics-concerned physicists study. Maybe we’d want to be able to derive the “higher-level” equations (e.g. learning) from the mechanistic spike/synapse rules. This would require additions to our conceptual repertoire, in the same way Ohm’s law does. Maxwell’s equations, after all, say nothing about “resistors”, but you can define a resistor in a Maxwell-compatible way, or at a “higher level” once I-V relations and the like are defined, and then reason about resistors independently. We’d probably like to do the same with Hebb’s law and such.

    That’s kind of a problem in itself. You can do plenty of modeling from the alternate perspective of information theory: spike trains as Poisson-ish processes and all that. To some extent, I think you can go with that without bothering with the HH-type dynamics “underlying” it, a bit like doing thermodynamics without bothering with the Newtonian particles it’s made out of. In terms of equations, that means having a whole different level of rules for what can be, to some extent, a separate field of study from the ion dynamics.

    Other subfields of neuroscience might then have their own, partially independent, sets of equations. Evolution, for instance. Population genetics already involves plenty of deep equations, like Hardy-Weinberg. Maybe it would be possible to write out some mathematical explanation of the proportions of the brain devoted to different tasks based on the animal’s ethology, who knows.

    I hope this isn’t too rambly. I like mathematical biology but I think a theory as solid as Maxwell’s requires more than picking out a few useful equations, I guess? You need unified concepts and some bedrock solidity. If you search just on the basis of “beauty” you’re leaving out a lot of the reason people actually like summary equations.

  6. Pingback: Revue des sciences septembre 2014 | Jean Zin

  7. Pingback: Introduction to THE BOOK | Frithmind
