Information theory of behavior

Biology can tell us what, but theory tells us why. There is a new issue of Current Opinion in Neurobiology that focuses on theory and computation in neuroscience. There’s tons of great stuff there, from learning and memory to the meaning of a spike to the structure of circuitry. I have an article in this issue and even made the cover illustration! It’s that tiny picture to the left; for some reason I can’t find a larger version but oh well…

Our article is “Information theory of adaptation in neurons, behavior, and mood”. Here’s how it starts:

Recently Stephen Hawking cautioned against efforts to contact aliens [1], such as by beaming songs into space, saying: “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.” Although one might wonder why we should ascribe the characteristics of human behavior to aliens, it is plausible that the rules of behavior are not arbitrary but might be general enough to not depend on the underlying biological substrate. Specifically, recent theories posit that the rules of behavior should follow the same fundamental principle of acquiring information about the state of the environment in order to make the best decisions based on partial data.

Bam! Aliens. Anyway, it is an opinion piece where we try to push the idea that behavior can be seen as an information-maximization strategy. Many people have quite successfully pushed the idea that sensory neurons try to maximize their information about the environment so that they can represent it as well as possible. We suggest that it may make sense to extend that idea up the biological hierarchy. After all, people generally hate uncertainty (a low-information environment) because it is hard to predict what is going to happen next.
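To make the “behavior as information maximization” idea concrete, here is a toy sketch (mine, not from the article): an agent holds a prior over two world states and must pick one of two sampling actions. Each action yields observations with a different reliability, and the agent chooses the action whose observations carry the most mutual information about the state — i.e., the one expected to reduce its uncertainty the most. All the numbers and names here are illustrative assumptions.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_info_gain(prior, likelihood):
    """Mutual information I(state; observation) in bits.

    prior: P(state), length S
    likelihood: likelihood[s][o] = P(observation o | state s)
    Equals prior entropy minus expected posterior entropy.
    """
    S, O = len(prior), len(likelihood[0])
    # Marginal probability of each observation.
    p_obs = [sum(prior[s] * likelihood[s][o] for s in range(S)) for o in range(O)]
    h_post = 0.0
    for o in range(O):
        if p_obs[o] == 0:
            continue
        posterior = [prior[s] * likelihood[s][o] / p_obs[o] for s in range(S)]
        h_post += p_obs[o] * entropy(posterior)
    return entropy(prior) - h_post

prior = [0.5, 0.5]                     # two equally likely world states
lik_A = [[0.9, 0.1], [0.1, 0.9]]       # action A: a 90%-reliable observation
lik_B = [[0.55, 0.45], [0.45, 0.55]]   # action B: a nearly uninformative one

gain_A = expected_info_gain(prior, lik_A)
gain_B = expected_info_gain(prior, lik_B)
best = "A" if gain_A > gain_B else "B"  # the info-maximizing agent samples with A
```

The same calculation scales up to richer state spaces; the point is only that “curiosity” and active sampling fall out of maximizing expected information gain.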

Here is an un-paywalled copy of the article for those who don’t have access.

References

Sharpee, T., Calhoun, A., & Chalasani, S. (2014). Information theory of adaptation in neurons, behavior, and mood. Current Opinion in Neurobiology, 25, 47–53. DOI: 10.1016/j.conb.2013.11.007

How should you judge a theoretical model?

When faced with a model of the world (in physics, neuroscience, economics, ecology), how should you judge it? Cyrus Samii suggests five ways. Here is number 2:

2. If any result can be engineered then results themselves have no special ontological status.

This is another way of asking whether a model has empirical content, which we typically take to mean falsifiability. Karl Popper suggested:

The empirical content of a statement increases with its degree of falsifiability: the more a statement forbids, the more it says about the world of experience.

And he suggested that the “two criteria [that] determine the empirical content of a theory are their level of universality (Allgemeinheit) and their degree of precision (Bestimmtheit).”

I also really like the question at the start of number 4:

How complicated can the problems be that we allow our agents to solve in a model? Is a dynamic program ever admissible as a reasonable assumption on the objective function of an agent?

Charles Krebs (or Judy Myers) says:

Recommendation – no paper on models should be published or talked about unless it makes specific, testable predictions of how the model can be tested.

I actually disagree with this rather strenuously. There are several reasons to make models, and making predictions is only one of them. Another is to confirm hypotheses.

Let’s say that you think honeybees are dying because of the excessive use of mint toothpaste, and you collect data to prove it. The problem is that data is simply a collection of facts (or “facts”) with no organizing structure. A model can give those facts structure: you put what you know together with some of the data and see whether what you know is sufficient to replicate the observations of the world. Of course, you have to interpret these kinds of models carefully; they are not predictive models in the sense of telling you something new about the world. Rather, they tell you whether you have a consistent and complete story. But it’s still just a story.

The Connectionists

Labrigger (via Carson Chow) pointed to a sprawling debate on the Connectionist mailing list concerning Big Data and theory in neuroscience. See the list here (“Brain-like computing fanfare and big data fanfare”). There seem to be three main threads of debate:

(1) Is Big Data what we need to understand the brain?

(2) What is the correct level of detail?

(3) Can we fruitfully do neuroscience in the absence of models? Do we have clearly-posed problems?

Here are some key comments you can read.

There are a few points that need to be made. First, one of the ongoing criticisms throughout the thread concerns the utility of Deep Learning models. It is repeatedly asserted that one beauty of the brain is that it doesn’t need gobs of data to perform many important behaviors. This is not true in the slightest: the data have been collected through many generations of evolution. In fact, Deep Learning ‘assembles’ its network through successive training of layers in a manner vaguely reminiscent of the development of the nervous system.

In terms of the correct level of detail, James Bower is ardent in promoting the idea that we need to go down to the nitty-gritty. In the cerebellum, for instance, you need to understand the composition of ion channels on the dendrites to understand the function of the cells; otherwise, you miss the compartmentalized computations being performed there. Someone else points out that, from another view, this is not even reductionist enough: why aren’t they considering transcription? James Bower responds:

One of the straw men raised when talking about realistic models is always: “at what level do you stop, quantum mechanics?”. The answer is really quite simple, you need to model biological systems at the level that evolution (selection) operates and not lower. In some sense, what all biological investigation is about, is how evolution has patterned matter. Therefore, if evolution doesn’t manipulate at a particular level, it is not necessary to understand how the machine works.

…although genes are clearly a level that selection operates on…

But I think the underlying questions here really are:

(1) What level of detail do we need to understand in order to predict behavior of [neurons/networks/organisms]?

(2) Do we understand enough of the nervous system – or general organismal biology – to make theoretical predictions that we can test experimentally?

I think Geoff Hinton’s comment is a good answer:

A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won’t know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe its building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably its both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and nearly all the money should go to that approach.

Does theory in neuroscience have any empirical content?

I once took an economics class where the professor, a theorist, spent most of his time harping on whether a given theory had “empirical content.” That is, he wanted to know whether a theory was falsifiable. After all, if we want to call something scientific, it must be falsifiable.

Let’s remember, too, that this isn’t purely something theorists should be concerned about. Any time an experimentalist suggests a working model for how something works, they are proposing a theory, whether or not that theory contains moving mathematical parts.

This came up when we were discussing a theory paper last night. The paper suggested that populations of neurons can use excitatory feedback to reduce noise without worrying about runaway excitation. Interesting, but when I asked how we would falsify it, people were a bit stumped. This lack of clarity about a model’s exact assumptions and where it breaks down is common in theoretical papers in neuroscience. Does adding an additional (biologically plausible) set of recurrent connections break the model? Does it require a Gaussian distribution of connections? And, as with many models: does the behavior come from a specific set of parameters, and is there anything the model could not fit?
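For concreteness, here is a minimal toy of the kind of claim at issue — my own sketch, not the paper’s actual model. In a linear rate network, uniform excitatory coupling normalized by population size pulls every neuron toward the population mean, so independent noise is averaged away without runaway activity (the update’s eigenvalues are 1 for the preserved mean and 1 − g for every other mode). The falsifiability questions above then become concrete knobs: coupling strength, noise structure, connectivity shape.

```python
import random
import statistics

def simulate(n=50, g=0.3, steps=20, signal=1.0, noise_sd=0.5, seed=0):
    """Toy linear rate network (illustrative sketch, not the paper's model).

    Each neuron encodes the same signal corrupted by independent noise.
    Recurrent excitation pulls each rate toward the population mean:
        r_i <- (1 - g) * r_i + g * mean(r)
    This is uniform excitatory coupling W = (g/n) * ones: the mean mode
    has eigenvalue 1 (signal preserved, no runaway) and all other modes
    have eigenvalue 1 - g < 1 (noise shrinks each step).
    """
    rng = random.Random(seed)
    r0 = [signal + rng.gauss(0, noise_sd) for _ in range(n)]
    r = list(r0)
    for _ in range(steps):
        m = sum(r) / n
        r = [(1 - g) * ri + g * m for ri in r]
    return r0, r

r0, r = simulate()
spread_before = statistics.pstdev(r0)  # noise across the initial population
spread_after = statistics.pstdev(r)    # noise after recurrent averaging
```

The population mean is preserved exactly while the per-neuron spread collapses by a factor of (1 − g)^steps. And the questions in the text map directly onto this toy: make the noise correlated across neurons, or make W non-uniform, and check whether the noise-reduction claim survives.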

The problem with theories lacking empirical content is that they cannot be tested. And if they cannot be tested, why listen to them?

As part of a great series of posts, Dynamic Ecology tries to answer whether ecological theory is useful for solving practical problems and whether the ecological literature is ‘idea free’ (i.e., whether experiments are driven by a desire to test theoretical predictions). If neuroscience had to face that test, I think it would do decently on the first question but horrifically on the second. How often do you read papers that directly respond to theoretical predictions?

Of course, a theory doesn’t have to be correct to be useful. It can carry its utility by propagating interesting ideas and concepts that are later absorbed into other theories. But a lot of theory in neuroscience seems to be in the vein of ‘this is possible’. How do we convert that into a community that says ‘this is probable’?