It was the anniversary of John von Neumann’s death last Sunday. If I had an intellectual hero, it would be von Neumann; he was basically an expert in *everything*. Like many of those involved with the Manhattan Project, he died fairly young (53) of cancer. From On This Day In Math:

John von Neumann (28 Dec 1903 - 8 Feb 1957, at age 53) Hungarian-American mathematician who made important contributions in quantum physics, logic, meteorology, and computer science. He invented game theory, the branch of mathematics that analyses strategy and is now widely employed for military and economic purposes. During WW II, he studied the implosion method for bringing nuclear fuel to explosion and he participated in the development of the hydrogen bomb. He also set quantum theory upon a rigorous mathematical basis. In computer theory, von Neumann did much of the pioneering work in logical design, in the problem of obtaining reliable answers from a machine with unreliable components, the function of “memory,” and machine imitation of “randomness.”

While he was in the hospital being treated for cancer, he worked on notes that became *The Computer and the Brain*. On Twitter, @mxnmnkmnd pointed me to work that von Neumann had done on computability in neural networks. Claude Shannon wrote a review of this work:

One important part of von Neumann’s work on automata relates to the problem of designing reliable machines using unreliable components… Given a set of building blocks with some positive probability of malfunctioning, can one by suitable design construct arbitrarily large and complex automata for which the overall probability of incorrect output is kept under control? Is it possible to obtain a probability of error as small as desired, or at least a probability of error not exceeding some fixed value (independent of the particular automaton)?

We have, in human and animal brains, examples of very large and relatively reliable systems constructed from individual components, the neurons, which would appear to be anything but reliable, not only in individual operation but in fine details of interconnection… Merely performing the same calculation many times and then taking a majority vote will not suffice. The majority vote would itself be taken by unreliable components and thus would have to be taken many times and majority votes taken of the majority votes. And so on. We are face to face with a “Who will watch the watchman” type of situation.

So how do we do it? von Neumann offers two solutions. The first is what I would call the “mathematician’s” approach:

This solution involves the construction from three unreliable sub-networks, together with certain comparing devices, of a large and more reliable sub-network to perform the same function. By carrying this out systematically throughout some network for realizing an automaton with reliable elements, one obtains a network for the same behavior with unreliable elements… In the first place, the final reliability cannot be made arbitrarily good but only held at a certain level. If the individual components are quite poor, the solution, then, can hardly be considered satisfactory. Secondly, and even more serious from the point of view of application, the redundancy requirements for this solution are fantastically high in typical cases. The number of components required increases exponentially…
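Both of Shannon’s objections show up in a toy simulation. Here is a minimal Monte Carlo sketch (my own illustration with invented error rates, not von Neumann’s actual construction): every component flips its output with probability 5%, the majority organs themselves flip theirs with probability 1%, and the network is triplicated recursively. The error rate falls with depth but bottoms out near the voter’s own error rate, while the component count grows as 3^depth.

```python
import random

def noisy(bit, p):
    """An unreliable component: flip the bit with probability p."""
    return bit ^ (random.random() < p)

def majority(a, b, c, p_voter):
    """Majority vote taken by an unreliable voting organ."""
    return noisy(int(a + b + c >= 2), p_voter)

def run_level(depth, p_comp, p_voter, bit=1):
    """Triplicate recursively: each level majority-votes three copies."""
    if depth == 0:
        return noisy(bit, p_comp)
    return majority(run_level(depth - 1, p_comp, p_voter, bit),
                    run_level(depth - 1, p_comp, p_voter, bit),
                    run_level(depth - 1, p_comp, p_voter, bit),
                    p_voter)

random.seed(0)
trials = 20000
rates = {}
for depth in range(4):
    errors = sum(run_level(depth, 0.05, 0.01, 1) != 1 for _ in range(trials))
    rates[depth] = errors / trials
    print(f"depth {depth}: {3**depth:3d} components, error rate {rates[depth]:.4f}")
```

The error rate drops from ~5% toward a floor of roughly 1% (the majority organ’s own error rate) and no amount of further triplication pushes it below that, while the cost has already grown 27-fold by depth 3.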

The second approach is the more statistical one, and is probably an important link between the McCulloch-Pitts school of computers as logic devices and the information-theoretic approach that is more relevant to today:

The second approach involves what von Neumann called the multiplex trick. This means representing a binary output in the machine not by one line but by a bundle of N lines, the binary variable being determined by whether nearly all or very few of the lines carry the binary value 1. An automaton design based on reliable components is, in this scheme, replaced by one where each line becomes a bundle of lines, and each component is replaced by a subnetwork which operates in the corresponding fashion between bundles of input and output lines…
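The multiplex trick is also easy to caricature in code. This sketch is my own simplification with made-up parameters (von Neumann’s actual restoring organ is specified more carefully): a bit is encoded as a bundle of N lines, the bundle passes through repeated stages of noisy gates, and after each stage every line is replaced by the majority of three randomly sampled lines from the bundle. The bundle’s consensus survives many stages even though every individual line is unreliable.

```python
import random

def noisy_stage(bundle, p):
    """Each line independently flips with probability p (unreliable gates)."""
    return [b ^ (random.random() < p) for b in bundle]

def restore(bundle):
    """Restoring organ (simplified): each output line is the majority
    of three randomly chosen input lines from the bundle."""
    return [int(sum(random.choices(bundle, k=3)) >= 2) for _ in bundle]

random.seed(1)
N = 1000
bundle = [1] * N                     # the bundle encodes binary "1"
for stage in range(50):
    bundle = restore(noisy_stage(bundle, 0.01))
frac = sum(bundle) / N
print(f"fraction of lines carrying 1 after 50 stages: {frac:.3f}")
```

The restoration step pulls the bundle back toward consensus faster than the noise degrades it, so the fraction of lines carrying the correct value stays near 1 indefinitely instead of decaying stage by stage.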

He also makes some estimates of the redundancy requirements for certain gains in reliability. For example, starting with an unreliable “majority” organ whose probability of error is 1/200, by a redundancy of 60,000 to 1 a sub-network representing a majority organ for bundles can be constructed whose probability of error is 10^-20. Using reasonable figures this would lead to an automaton of the complexity and speed of the brain operating for a hundred years with expectation about one error. In other words, something akin to this scheme is at least possible as the basis of the brain’s reliability.
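As a back-of-envelope sanity check on those numbers (my own calculation, not Shannon’s): if the 60,000 lines in a bundle failed *independently* with probability 1/200, the standard Hoeffding bound on the probability that a majority of them are wrong would be astronomically smaller than 10^-20. That the full scheme only reaches 10^-20 presumably reflects what the simple bound ignores: the restoring organs are themselves unreliable, and errors across lines in a real multiplexed network are not independent.

```python
import math

N, p = 60_000, 1 / 200
# Hoeffding bound: P(majority of N independent lines wrong)
#   <= exp(-2 * N * (1/2 - p)^2); report its base-10 logarithm.
log10_bound = -2 * N * (0.5 - p) ** 2 / math.log(10)
print(f"independent-error bound ~ 10^{log10_bound:.0f}")
```

The gap between this idealized bound and von Neumann’s 10^-20 is a useful reminder of how much reliability is spent on making the error-correcting machinery itself out of unreliable parts.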

So not only is the second approach more feasible, it’s just plain better.

This is still extremely relevant. I went to a very nice talk two weeks ago on fruit fly larvae. These are worms, much like the nematode *C. elegans*, that move around and do a lot of the same things that *C. elegans* do. Yet they have orders of magnitude more neurons! Why do they need so many? It does not seem like they do *that* much more behavior (I don’t know, maybe they do). It could be pattern separation – perhaps they can break the world into tinier pieces – but a better candidate may be error correction. I would hazard a guess that the *C. elegans* nervous system is more sensitive to noise. The fact that its neural responses seem *slower* – there are no spikes, and neurons respond over seconds rather than milliseconds – would indicate that it solves the problem through temporal integration. Whatever works.

The original paper is very readable and full of quite interesting ideas; go read it.

Two other quotes I like from the review:

If we think of the brain as some kind of computing machine it is perfectly possible that the external language we use in communicating with each other may be quite different from the internal language used for computation (which includes, of course, all the logical and information-processing phenomena as well as arithmetic computation). In fact von Neumann gives various persuasive arguments that we are still totally unaware of the nature of the primary language for mental calculation. He states “Thus logics and mathematics in the central nervous system, when viewed as languages, must be structurally essentially different from those languages to which our common experience refers.”

and

“It also ought to be noted that the language here involved may well correspond to a short code in the sense described earlier, rather than to a complete code: when we talk mathematics, we may be discussing a secondary language, built on the primary language truly used by the central nervous system. Thus the outward forms of our mathematics are not absolutely relevant from the point of view of evaluating what the mathematical or logical language truly used by the central nervous system is. However, the above remarks about reliability and logical and arithmetic depth prove that whatever the system is, it cannot fail to differ considerably from what we consciously and explicitly consider as mathematics.”
