The Connectionists

Labrigger (via Carson Chow) pointed to a sprawling debate on the Connectionists mailing list concerning Big Data and theory in neuroscience. See the thread here (“Brain-like computing fanfare and big data fanfare”). There seem to be three main threads of debate:

(1) Is Big Data what we need to understand the brain?

(2) What is the correct level of detail?

(3) Can we fruitfully do neuroscience in the absence of models? Do we have clearly-posed problems?

Here are some key comments you can read.

There are a few points that need to be made. First, one of the ongoing criticisms through the thread concerns the utility of Deep Learning models. It is repeatedly asserted that one beauty of the brain is that it doesn’t necessarily need gobs of data to perform many important behaviors. This is actually not true in the slightest: the data have been collected through many generations of evolution. In fact, Deep Learning ‘assembles’ its network through successive training of layers in a manner vaguely reminiscent of the development of the nervous system.
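That “successive training of layers” refers to greedy layer-wise pretraining: each layer is trained on its own (here, as a small autoencoder) on the codes produced by the layers below it, and only then is the next layer stacked on top. A minimal sketch, assuming plain NumPy and illustrative function names (none of this comes from the thread itself):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder_layer(data, n_hidden, lr=0.1, epochs=200, seed=0):
    """Train one autoencoder layer by gradient descent; return its encoder weights."""
    rng = np.random.default_rng(seed)
    n_in = data.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
    V = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
    for _ in range(epochs):
        h = sigmoid(data @ W)                  # encode
        recon = h @ V                          # decode (linear output)
        err = recon - data                     # reconstruction error
        # backprop through decoder, then encoder
        dV = h.T @ err / len(data)
        dh = err @ V.T * h * (1 - h)
        dW = data.T @ dh / len(data)
        W -= lr * dW
        V -= lr * dV
    return W

def greedy_pretrain(data, layer_sizes):
    """Stack layers one at a time, each trained on the previous layer's codes."""
    weights, x = [], data
    for n_hidden in layer_sizes:
        W = train_autoencoder_layer(x, n_hidden)
        weights.append(W)
        x = sigmoid(x @ W)                     # this layer's codes feed the next
    return weights

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8))                   # toy data, 64 samples of 8 features
stack = greedy_pretrain(X, [6, 4])
print([w.shape for w in stack])                # → [(8, 6), (6, 4)]
```

The point of the analogy is only structural: the network is built up in stages, each stage learning on what the previous stage provides, rather than being fit end-to-end from scratch.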

In terms of the correct level of detail, James Bower is ardent in promoting the idea that we need to go down to the nitty-gritty. In the cerebellum, for instance, you need to understand the composition of ion channels on the dendrites to understand the function of the cells. Otherwise, you miss the compartmentalized computations being performed there. Someone else points out that, from another view, this is not even reduced enough: why aren’t they considering transcription? James Bower responds with:

One of the straw men raised when talking about realistic models is always: “at what level do you stop, quantum mechanics?”. The answer is really quite simple, you need to model biological systems at the level that evolution (selection) operates and not lower. In some sense, what all biological investigation is about, is how evolution has patterned matter. Therefore, if evolution doesn’t manipulate at a particular level, it is not necessary to understand how the machine works.

…although genes are clearly a level that selection operates on…

But I think the underlying questions here really are:

(1) What level of detail do we need to understand in order to predict behavior of [neurons/networks/organisms]?

(2) Do we understand enough of the nervous system – or general organismal biology – to make theoretical predictions that we can test experimentally?

I think Geoff Hinton’s comment is a good answer:

A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won’t know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it’s building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it’s both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and nearly all the money should go to that approach.