When did we start using information theory in neuroscience?

This question came up in journal club a little while ago.

The hypothesis that neurons in the brain are attempting to maximize their information about the world is a powerful one. Although usually attributed to Horace Barlow, the idea arose almost immediately after Shannon formalized his theory of information.

Remember, Shannon introduced information theory in 1948. Yet only four years later, MacKay and McCulloch (of the McCulloch-Pitts neuron!) published an article analyzing neural coding from the perspective of information theory. By treating a neuron as a communication channel, they wanted to understand what the best ‘code’ for a neuron to use would be – a question which was already controversial in the field (it seems as if the dead will never die…). Specifically, they wanted to compare whether the informative signal was the occurrence of a spike itself or the time since the previous spike. They found, based on information theory, that the interval since the previous spike can carry the most information.
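To get a feel for why the interval code wins, here is a quick back-of-the-envelope sketch in Python. This is not MacKay and McCulloch's actual calculation, and the numbers (a 1 ms refractory period, 0.05 ms timing precision, a 200 spikes/s mean rate, a 9 ms maximum interval) are purely illustrative assumptions on my part.

```python
import numpy as np

t_refractory = 1e-3    # assumed: a spike "occupies" about 1 ms
t_jitter = 0.05e-3     # assumed: spike times are resolvable to about 0.05 ms
rate = 200.0           # assumed mean firing rate, spikes per second

# Occurrence code: cut time into slots one refractory period long; each slot
# either contains a spike or not.  At 200 spikes/s a slot fires with p = 0.2.
p = rate * t_refractory
h_slot = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # entropy of one slot
bits_per_spike_occurrence = h_slot / p

# Interval code: each spike reports the time since the previous one, resolvable
# to t_jitter.  With intervals spread uniformly between the refractory period
# and 9 ms (so the mean interval is still 1/rate = 5 ms), each spike can take
# one of (9 ms - 1 ms) / 0.05 ms = 160 distinguishable values.
n_levels = (9e-3 - t_refractory) / t_jitter
bits_per_spike_interval = np.log2(n_levels)

print(f"occurrence code: {bits_per_spike_occurrence:.1f} bits/spike")  # ~3.6
print(f"interval code:   {bits_per_spike_interval:.1f} bits/spike")    # ~7.3
```

Under these assumptions, and at the same mean firing rate, the interval code carries roughly twice as many bits per spike, which is the flavor of their conclusion.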

And for those who want to wade into the analog vs. digital coding debate, they have this to say:

…nor is it our purpose in the following investigation to reopen the “analogical versus digital” question, which we believe to represent an unphysiological antithesis. The statistical nature of nervous activity must preclude anything approaching a realization in practice of the potential information capacity of either mechanism, and in our view the facts available are inadequate to justify detailed theorization at the present time.

Around the same time, von Neumann – of course it would be von Neumann! – delivered a series of lectures analyzing coding from the perspective of idealized neurons of the McCulloch-Pitts variety. Given that these lectures came around the time MacKay and McCulloch’s paper was published, I am guessing that he knew of their work – but maybe not!

In 1954, Attneave examined how visual perception deals with information and with the redundancy in the sensory signal. His is by far the most readable paper of the bunch. Here is the opening:

In this paper I shall indicate some of the ways in which the concepts and techniques of information theory may clarify our understanding of visual perception. When we begin to consider perception as an information-handling process, it quickly becomes clear that much of the information received by any higher organism is redundant. Sensory events are highly interdependent in both space and time: if we know at a given moment the states of a limited number of receptors (i.e., whether they are firing or not firing), we can make better-than-chance inferences with respect to the prior and subsequent states of these receptors, and also with respect to the present, prior, and subsequent states of other receptors.

He also has this charming figure:

Attneave's cat

What Attneave’s Cat demonstrates is that most of the information in the visual image of the cat – the soft curves, the pink of the ears, the flexing of the claws – is totally irrelevant to recognizing the cat. All you need is a few points with straight lines connecting them, and this redundancy is surely what the nervous system is exploiting.
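If you want to see the redundancy Attneave describes in numbers, here is a small sketch that estimates how much information one pixel shares with its neighbor. The image is synthetic (smoothed noise, then binarized), so this only makes the quoted claim concrete; it is not anything Attneave actually computed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Build a toy "image": white noise smoothed so that nearby points become
# statistically dependent, then thresholded into firing / not-firing.
rng = np.random.default_rng(0)
img = gaussian_filter(rng.normal(size=(256, 256)), sigma=3)
binary = (img > np.median(img)).astype(int)

x = binary[:, :-1].ravel()   # each pixel
y = binary[:, 1:].ravel()    # its right-hand neighbor

def entropy(counts):
    """Entropy in bits of a distribution given as raw counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# 2x2 table of (pixel state, neighbor state) and the information they share
joint, _, _ = np.histogram2d(x, y, bins=2)
mutual_info = (entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0))
               - entropy(joint.ravel()))

print(f"I(pixel; neighbor) = {mutual_info:.2f} bits out of a possible 1 bit")
```

On the smoothed image the neighbor shares most of the pixel's single bit of entropy; on unsmoothed white noise the mutual information would be essentially zero. That shared information is exactly the redundancy the nervous system can throw away.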

Finally, in 1955 Shannon, Minsky, McCarthy, and Rochester proposed a summer research school thingamajig (what became the Dartmouth workshop) with this as one of the research goals:

1. Application of information theory concepts to computing machines and brain models. A basic problem in information theory is that of transmitting information reliably over a noisy channel. An analogous problem in computing machines is that of reliable computing using unreliable elements. This problem has been studied by von Neumann for Sheffer stroke elements and by Shannon and Moore for relays; but there are still many open questions. The problem for several elements, the development of concepts similar to channel capacity, the sharper analysis of upper and lower bounds on the required redundancy, etc. are among the important issues. Another question deals with the theory of information networks where information flows in many closed loops (as contrasted with the simple one-way channel usually considered in communication theory). Questions of delay become very important in the closed loop case, and a whole new approach seems necessary. This would probably involve concepts such as partial entropies when a part of the past history of a message ensemble is known.
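The “reliable computing using unreliable elements” problem is easy to play with. Below is a toy simulation of the redundancy idea: run many noisy copies of a gate and take a majority vote. This is only meant to show the flavor of the problem, not von Neumann's actual multiplexing construction (where the voting elements are themselves unreliable, which is what makes the real problem hard), and the 5% gate error rate is an arbitrary assumption.

```python
import random

P_FLIP = 0.05  # assumed probability that a single gate gives the wrong output

def noisy_nand(a, b):
    """A NAND (Sheffer stroke) element that errs with probability P_FLIP."""
    out = int(not (a and b))
    return out ^ (random.random() < P_FLIP)   # flip the output with prob P_FLIP

def redundant_nand(a, b, n_copies=15):
    """Run n_copies of the noisy gate and take a (noiseless) majority vote."""
    votes = sum(noisy_nand(a, b) for _ in range(n_copies))
    return int(votes > n_copies // 2)

def error_rate(gate, trials=20_000):
    """Empirical probability that `gate` disagrees with an ideal NAND."""
    errors = 0
    for _ in range(trials):
        a, b = random.randint(0, 1), random.randint(0, 1)
        errors += gate(a, b) != int(not (a and b))
    return errors / trials

print("single noisy gate :", error_rate(noisy_nand))      # about 0.05
print("15-way majority   :", error_rate(redundant_nand))  # orders of magnitude smaller
```

The open questions the proposal lists (how much redundancy is really required, an analogue of channel capacity for computation) are about how far this kind of trick can be pushed, especially once the voters themselves are noisy.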

Shannon of course tried to have his cake and eat it too by warning of the dangers of misused information theory. If you are interested in more on the topic, Dimitrov, Lazar and Victor have a great review.

So there you go – it is arguably MacKay, McCulloch, von Neumann, and Attneave who are the progenitors of Information Theory in Neuroscience.

References

Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review, 61(3), 183-193. DOI: 10.1037/h0054663

Dimitrov, A., Lazar, A., & Victor, J. (2011). Information theory in neuroscience. Journal of Computational Neuroscience, 30(1), 1-5. DOI: 10.1007/s10827-011-0314-3

MacKay, D., & McCulloch, W. (1952). The limiting information capacity of a neuronal link. The Bulletin of Mathematical Biophysics, 14(2), 127-135. DOI: 10.1007/BF02477711

von Neumann, J. (1956). Probabilistic logics and the synthesis of reliable organisms from unreliable components. In Automata Studies.
