Behavioral quantification: running is part of learning

One of the most accessible ways to study a nervous system is to understand how it generates behavior – its outputs. You can watch an animal and instantly get a sense of what it is doing and maybe even why it is doing it. Then you reach into the animal’s brain and try to find some connection that explains the what and the why.

Take the popular ‘eyeblink conditioning’ task that is used to study learning. You can puff a harmless bit of air at an animal and it will blink in response (wouldn’t you?). Like Pavlov’s dog, you can then pair the puff with another signal – a tone, a light, something like that – and the animal will slowly learn to associate the two. Eventually you just show the animal the other signal, flashing the light at them, and they will blink as if they were expecting an air puff. Simple enough, but obviously not every animal is the same. There is a lot of variability in the behavior, which could be due to any number of unexplored factors, from individual differences in experience to personality. If this is the behavior we are using to investigate the underlying neuroscience, then that variability places a fundamental limit on what we can know about the nervous system.

How can we neuroscientists overcome this? One very powerful technique has been to improve our behavioral quantification. I saw a fantastic example of this from Megan Carey when she visited Princeton earlier this year to talk about her work on the cerebellum and learning. She had tons of interesting stuff, but there was one figure she presented that simply blew me away.

First, a bit of history is in order (apologies if I get some of this a bit wrong, my notes here are hazy). When experimenters first tried to get eyeblink conditioning to work with mice, they had trouble. Even though it seems like such a simple reflex, the mice were performing very poorly on the task. Eventually, someone (?) found that allowing the mice to walk on a treadmill while experiencing the cues resulted in a huge increase in performance. Was this because they were unhappy being fixed in one place? Or was it that they were unable to associate a puff of air to the eye with an environment they had no ability to act on?

But there is still a lot of variability. Where does it come from? What you can now do is measure as much about the behavior as possible. Not just how much the animal blinks its eye, but how much it moves, how fast it moves, and how much it does all sorts of other stuff. And it turns out that if you measure the speed at which the animal walks, there is a clear linear relationship with how quickly it learns.
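To make that kind of analysis concrete, here is a minimal sketch of the statistics involved – synthetic data and made-up numbers, not anything from the Carey lab – correlating each animal’s average walking speed with how many sessions it takes to learn:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in data: one value per mouse.
# Built so that faster walkers reach criterion in fewer sessions.
n_mice = 25
walking_speed = rng.uniform(5, 25, n_mice)  # average speed, cm/s
sessions_to_learn = 20 - 0.5 * walking_speed + rng.normal(0, 1.5, n_mice)

# The key question: does a "nuisance" variable predict learning rate?
r, p = stats.pearsonr(walking_speed, sessions_to_learn)
slope, intercept, *_ = stats.linregress(walking_speed, sessions_to_learn)
print(f"r = {r:.2f}, p = {p:.1g}; each extra cm/s ~ {slope:.2f} sessions")
```

The point is not the specific test but the workflow: measure everything you can about the behavior, then ask which of those “irrelevant” variables actually predicts the thing you care about.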

Look at this figure – on the left, you can see how often each individual animal is responding to the air puff with an eyeblink (y-axis) as it is trained through time (x-axis). And on the right is how long it takes to reach some performance benchmark (y-axis) given the average speed the animal walks (x-axis).

So how do you test this? How do you make sure it is causation, not a meaningless correlation? Put the mice on a motorized treadmill and control the speed at which they walk. And BAM, most of the variability is gone! Compare the mess of lines in the behavior above with the clearly delineated behavior below.

There’s a lesson here: when we study a ‘behavior’, there are a lot of other things that an animal is doing at the same time. We think they are irrelevant – we hope they are irrelevant – but often they are part of one bigger whole. If you want to study a behavior that an animal is performing, how else can you do it but by understanding as much about what the animal is doing as possible? How else but by seeing how the motor output of the animal is linked together into one complex form? Time and again, quantifying as many aspects of behavior as possible has revealed that seemingly noisy behavior is in fact finely tuned, driven by some underlying variable that can be measured – once you figure out what it is.


Behavioral quantification: mapping the neural substrates of behavior

A new running theme on the blog: cool uses of behavioral quantification.

One of the most exciting directions in behavioral science is the advance in behavioral quantification. Science often advances by being able to perform ever more precise measurements on ever-increasing amounts of data. Thanks to the increasing power of computers and advances in machine learning, we can now automatically extract massive amounts of behavioral data at a level of detail that was previously unobtainable.

A great example of this is a recently published paper out of Janelia Farm (Robie et al 2017). Using an absolutely shocking 400,000 flies, the authors systematically activated small subsets of neurons and then observed what behaviors the animals performed. First, can you imagine a human scoring every moment of four hundred thousand animals as they behaved over fifteen minutes? That is 12.1 billion frames of data to sort through and classify – roughly 34 frames per second from each fly for the full fifteen minutes.

Kristin Branson – the corresponding author on the paper – has been developing two pieces of software that allow for fast, efficient quantification of behavior. The first, Ctrax, tracks individual animals as they move around a small arena and assigns each a position, an orientation, and various postural features (for instance, since they are fruit flies we can extract the angle of each wing). The second, JAABA, then uses combinations of these features, such as velocity, interfly distance, and so on, in order to identify behaviors. Users annotate videos with examples of when an animal is performing a particular behavior, and the program then proposes moments in other videos that it believes show the same behavior. An iterative back-and-forth between user and machine gradually narrows down what counts as a particular behavior and what doesn’t, eventually allowing fully automated classification of behavior in new videos.
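This is not the actual Ctrax/JAABA code – everything below (function names, feature choices, the particular classifier) is my own illustrative sketch of the idea: compute per-frame features from the tracking output, train a classifier on the sparse frames a human has annotated, and let it label the rest.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def per_frame_features(x, y, theta, wing_angle, fps=30.0):
    """Derive per-frame features from tracking output (Ctrax-style).

    x, y, theta: per-frame centroid position and body orientation (radians).
    wing_angle: per-frame wing angle from posture fitting.
    """
    vx, vy = np.gradient(x) * fps, np.gradient(y) * fps
    speed = np.hypot(vx, vy)
    # Velocity projected onto the body axis: negative = walking backward.
    forward_vel = vx * np.cos(theta) + vy * np.sin(theta)
    return np.column_stack([speed, forward_vel, wing_angle])

def train_behavior_classifier(features, labels):
    """labels: 1 = behavior, 0 = not-behavior, -1 = unannotated frame."""
    annotated = labels >= 0
    clf = GradientBoostingClassifier()
    clf.fit(features[annotated], labels[annotated])
    return clf

# In the real workflow this loops: the classifier proposes labels on new
# video, the user corrects a handful of frames, and training repeats until
# the predictions match the user's intuition for the behavior.
```

The crucial design choice is that the human only ever labels a tiny fraction of frames; the classifier generalizes those labels across billions of frames that no human could ever watch.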

Then once you have this pipeline you can just stick a bunch of animals into a plate under a camera, activate the neural population of interest, let them do whatever they feel like doing, and get gobs and gobs of data. This allows you to understand with neural precision which neurons are responsible for any arbitrary behavior you desire. This lets you build maps – maps that help you understand where information is flowing through the brain. And since you know which genetic driver lines produce which behaviors, you can then go and find even more specific subsets of neurons that let you identify precise neural pathways.
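The screen-level analysis then boils down to something like the following sketch – my own made-up names and statistics, not the paper’s actual pipeline: for each driver line, compare how often its flies perform a behavior against control flies, and flag the lines where activation significantly changes the behavior’s frequency.

```python
import numpy as np
from scipy import stats

def behavior_hits(line_fractions, control_fractions, alpha=1e-3):
    """Flag driver lines whose activation changes a behavior's frequency.

    line_fractions: dict mapping driver-line name -> array of per-fly
    fractions of frames classified as the behavior (e.g. backward walking).
    control_fractions: the same measurement in control flies.
    """
    hits = {}
    for line, fractions in line_fractions.items():
        # Rank-based test: behavior fractions are bounded and skewed,
        # so a t-test's normality assumption would be dubious.
        _, p = stats.mannwhitneyu(fractions, control_fractions,
                                  alternative="two-sided")
        if p < alpha:
            hits[line] = (float(np.mean(fractions)), p)
    return hits

# Each hit line's (separately imaged) expression pattern then contributes
# to an anatomical map: brain regions shared by many hit lines are the
# best candidates for housing the behavior's circuitry.
```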

Here are two examples. Flies sometimes walk backwards (moonwalking!). If you look at the image below, you can see (on the bottom) all the different neurons labeled in a fly brain that had an effect on this backward locomotion, and in the upper right the more specific areas where the neurons are most likely located. In fruit fly brains, the bulbous protrusions where these colors are concentrated are the optic lobes – the visual-processing regions of the animal – with a couple of flecks in the central brain.

This turns out to be incredibly accurate. Some of this moonwalking circuit was recently dissected, and a set of neurons running from the visual system into the central brain was linked to causing this behavior. The neurons (in green below) are in exactly the place you’d expect from the map above! They connect to a set of neurons known as the ‘moonwalker descending neurons’, which send signals to the ventral nerve cord – the fly’s equivalent of a spinal cord – that cause the animal to walk backwards.

Of course, sometimes it can be more complicated. When a male fly is courting a female fly, he will extend one wing and vibrate it to produce a song. Here are the neurons related to that behavior (there are a lot):

There are two key points from this quantification. First, the sheer amount and quality of data it is now possible to collect and score gives us immense statistical precision about when, and in which contexts, behaviors occur. Second, our capacity to find new things is increasing because we can be increasingly agnostic about what we are looking for (so it is easier to find surprises in the data!).

References

Robie AA et al. (2017). Mapping the Neural Substrates of Behavior. Cell.

See also: Gomez-Marin A et al. (2014). Big behavioral data: psychology, ethology and the foundations of neuroscience. Nature Neuroscience.