The Connectionists

Labrigger (via Carson Chow) pointed to a sprawling debate on the Connectionists mailing list concerning Big Data and theory in neuroscience. See the thread here (“Brain-like computing fanfare and big data fanfare”). There seem to be three main threads of debate:

(1) Is Big Data what we need to understand the brain?

(2) What is the correct level of detail?

(3) Can we fruitfully do neuroscience in the absence of models? Do we have clearly-posed problems?

Here are some key comments you can read.

There are a few points that need to be made. First, one of the ongoing criticisms through the thread concerns the utility of Deep Learning models. It is repeatedly asserted that one beauty of the brain is that it doesn’t necessarily need gobs of data to be able to perform many important behaviors. This is not true in the slightest: the data has been collected through many generations of evolution. In fact, Deep Learning ‘assembles’ its network through successive training of layers in a manner vaguely reminiscent of the development of the nervous system.

In terms of the correct level of detail, James Bower is ardent in promoting the idea that we need to go down to the nitty-gritty. In the cerebellum, for instance, you need to understand the composition of ion channels on the dendrites to understand the function of the cells. Otherwise, you miss the compartmentalized computations being performed there. Someone else points out that, from another view, this is not even reduced enough: why aren’t they considering transcription? James Bower responds with:

One of the straw men raised when talking about realistic models is always: “at what level do you stop, quantum mechanics?”. The answer is really quite simple, you need to model biological systems at the level that evolution (selection) operates and not lower. In some sense, what all biological investigation is about, is how evolution has patterned matter. Therefore, if evolution doesn’t manipulate at a particular level, it is not necessary to understand how the machine works.

…although genes are clearly a level that selection operates on…

But I think the underlying questions here really are:

(1) What level of detail do we need to understand in order to predict behavior of [neurons/networks/organisms]?

(2) Do we understand enough of the nervous system – or general organismal biology – to make theoretical predictions that we can test experimentally?

I think Geoff Hinton’s comment is a good answer:

A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won’t know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it’s building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it’s both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and nearly all the money should go to that approach.

The straw that broke the camel’s back

One of the most interesting things in neuroscience is that we find, again and again, that different nervous systems come up with the same solutions to related problems.  Take the ability to make a decision – something about as basic and fundamental as you can get, yet something that needs to be applied to all sorts of situations.  In monkeys deciding whether to look toward or away from an object, if you track neurons in one area of the brain (LIP), you see that their activity fluctuates up and down until it reaches some threshold and the decision is made.
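To make that accumulation-to-threshold picture concrete, here is a minimal sketch in Python of a two-choice accumulator, in the spirit of drift-diffusion models; the drift, noise, and threshold values are made up for illustration and are not fitted to any LIP data.

```python
import random

def accumulate_to_bound(drift=0.1, noise=1.0, threshold=10.0, max_steps=10_000, seed=None):
    """One trial of a noisy accumulation-to-bound decision.

    Evidence drifts toward one choice (+drift favors A, -drift favors B)
    while being buffeted by noise; the decision is made the moment the
    running total crosses +threshold or -threshold. All parameter values
    here are illustrative, not fitted to neural data.
    """
    rng = random.Random(seed)
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + rng.gauss(0.0, noise)
        if evidence >= threshold:
            return "A", step      # commit to choice A
        if evidence <= -threshold:
            return "B", step      # commit to choice B
    return None, max_steps        # no commitment within the time limit

if __name__ == "__main__":
    trials = [accumulate_to_bound(drift=0.1, seed=i) for i in range(1000)]
    frac_a = sum(choice == "A" for choice, _ in trials) / len(trials)
    mean_rt = sum(steps for _, steps in trials) / len(trials)
    print(f"chose A on {frac_a:.0%} of trials; mean steps to threshold: {mean_rt:.0f}")
```

One general property of models like this: raising the threshold makes choices slower but more reliable, the familiar speed–accuracy trade-off.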

This principle extends beyond simple decisions to what may seem to be more complex ones, such as the decision to fight or (at a later time) flee.  Although it may not be the first example to leap to mind, cricket fighting can give us plenty of insight into how this decision might be represented in the brain.  Cricket fights have been a popular pastime in China for over a millennium (though imho beetle fighting is much more entertaining).  Crickets make great subjects for scientific study: they’re small, don’t take a lot of resources, don’t complain too much, and have highly stereotyped behaviors which make quantitative analysis simple.  When two male crickets meet, they will often fence with their antennae (pictured above), and as fights become more intense they move on to engaging with their mandibles and eventually to some pretty intense wrestling-like grappling.  The winner will then sing the loser off to prove his might.

Every cricket fight starts with antennal fencing.  If their antennae are removed, the poor little guys will not fight.  They still recognize each other – they can still court – but there is no fighting.  Of course, not every cricket will want to fight every other cricket.  They have some sense of hierarchy, so a highly dominant cricket will be more likely to run off a highly submissive one.  And if a cricket is placed in a tiny little home, within as little as two minutes it will be more likely to get aggressive in order to defend that home.  There’s something very anthropomorphic and sweet about that, I think.

Aggressiveness is represented in the brain through the neuromodulator octopamine, and this can have surprising side effects.  See, octopamine is the insect equivalent of noradrenaline, and it is released by physical movement.  Fans of Chinese cricket fighting will already know this; it has long been suggested that if your cricket isn’t being aggressive enough for your taste, you should just chuck the guy in the air.  And what do you know?  He’ll be more likely to put up a fight now.  Better yet, make him fly for a while in a wind tunnel.  So the way aggression is represented in the brain can have surprising behavioral consequences.

The flip side of fight is flight, and a cricket needs to know when in a fight to switch to flight.  One can begin to determine how a cricket knows when to flee by mangling the cricket.  Sorry guys, that’s science for you.  You can blacken their eyes so they cannot see, lame their mandibles so they cannot bite, and clip their claws so they cannot tear, then set the maimed crickets to fight and see how they do.

Blinded crickets who fight crickets with maimed mandibles will win 98% of the time.  That’s quite a lot!  These blinded animals will not feel much of a physical blow from their opponents, and will not be receiving any visual social input either.  Remove either of these conditions – make a non-blinded cricket fight a lamed one, or a blinded cricket fight a healthy one – and the healthy one will probably win.  So how do these crickets know when to flee?  By the steady accumulation of visual and physical input.  Once enough of this input is received – possibly represented in the form of some hormone or peptide – it’s time for the cricket to flee!
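As a toy illustration of why the mutilation experiments come out the way they do, here is a small Python sketch, in the same spirit as the accumulator above, in which a cricket sums up visual displays and mandible blows until a flee threshold is crossed; the event probabilities, weights, and threshold are all invented for illustration and are not taken from the paper.

```python
import random

def steps_until_fleeing(p_visual, p_blow, threshold=15.0, max_steps=300, seed=None):
    """Accumulate agonistic input from the opponent until a flee threshold is crossed.

    p_visual: per-step chance of receiving a visual display from the opponent.
    p_blow:   per-step chance of taking a mandible blow.
    Returns the step at which the cricket gives up, or None if it never does.
    All numbers are invented for illustration only.
    """
    rng = random.Random(seed)
    total = 0.0
    for step in range(1, max_steps + 1):
        if rng.random() < p_visual:
            total += 1.0              # saw an aggressive display
        if rng.random() < p_blow:
            total += 2.0              # took a blow (weighted more heavily)
        if total >= threshold:
            return step               # threshold crossed: time to flee
    return None                       # never enough input to give up

# A blinded cricket facing a maimed-mandible opponent gets almost nothing on
# either channel, so it rarely reaches its flee threshold (and so it "wins");
# restore either channel and it gives up much sooner.
print(steps_until_fleeing(p_visual=0.0, p_blow=0.02, seed=1))   # often None
print(steps_until_fleeing(p_visual=0.3, p_blow=0.3, seed=1))    # a small number
```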

It wouldn’t be surprising if something like this happened in humans, too.  We already have a proverb for it, after all: the straw that broke the camel’s back.  Crickets will continue to fight after a serious injury, only to retreat seconds later for no apparent reason.  So too are humans known to accept plenty of punishment and grit it out, only to have something small cause them to cry and give up once their threshold is reached.  This is one of the fundamental lessons of decision neuroscience so far: discrete decisions are made when information has accumulated up to some threshold.  It’s just not always easy to tell what our thresholds are.

References

Stevenson PA, & Rillich J (2012). The decision to fight or flee – insights into underlying mechanism in crickets. Frontiers in Neuroscience, 6. PMID: 22936896
