An ethology reading list

At a meeting in New York last week [edit: many months ago by the time I got around to posting this], we were discussing the recent push in neuroscience for more naturalistic behaviors. One of the problems, someone pointed out, is that they are difficult to analyze. But surely there must be whole fields devoted to understanding natural behaviors? Why do we, as neuroscientists, not interact with them?

When I started this blog I named it neuroecology for exactly that reason: there was this whole field of ecology that had thought about natural behaviors very deeply for a long, long time, and going over those papers on a blog seemed like a great way to understand them. What I didn't understand at the time was that I was using the wrong word; it wasn't ecology I was looking for, it was ethology. Ecology is more generally about broad interactions between animals and environments. Ethology is the specific study of animal behavior.

So: ethologists. The studiers of natural animal behavior. What can neuroscientists learn from these mythical beings? I tried to collect as many syllabi as I could find (1, 2, 3, 4, 5, 6, with thanks to Bence Ölveczky for sending me theirs in personal communication) to find papers that neuroscientists will find relevant for understanding how to analyze natural behaviors (with a few others I think are relevant thrown into the mix).

Consider this post a "living document" that I will update over time. Mostly it is a big list of papers that I have separated into sub-topics that badly need to be cleaned up. If I'm missing something, let me know!


Behold, the behaving brain!

In my opinion, THE most important shift in neuroscience over the past few years has been the focus on how behavior changes neural function across the whole brain. Even the sensory systems – supposedly passive passers-on of perfectly produced pictures of the world – will be shifted in unique ways by behavior. An animal walking will have different responses to visual stimuli than an animal that is just sitting around. Almost certainly, other behaviors will have their own effects on neural activity.

A pair of papers this week have made that point rather elegantly. First, Carsen Stringer and Marius Pachitariu from the Carandini/Harris labs have gobs of data from recording ~10,000 neurons simultaneously. Marius has an excellent twitter thread explaining the work. I just want to take one particular point from the paper: you can explain a surprising amount of the variance in primary visual cortex – and all across the brain – simply by looking at the movement of the animal's face.

In the figures below, they have taken movies of an animal’s face, extracted the motion energy (roughly, how much movement there is at that location in the video), and then used PCA to find the common ways that you can describe that movement. Using this kind of common motion, they then tried to predict the activity of individual neurons – while ignoring the traditional sensory or task information that you would normally be looking at.
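As a rough sketch of that pipeline – with synthetic data and made-up dimensions, not the paper's actual numbers or its (more sophisticated) regression method – the analysis looks something like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pixels, n_neurons = 500, 1000, 200

# "Motion energy": absolute frame-to-frame difference of the face movie.
face_movie = rng.random((n_frames + 1, n_pixels))
motion_energy = np.abs(np.diff(face_movie, axis=0))   # (n_frames, n_pixels)

# PCA via SVD on the mean-centered motion energy; keep the top components.
centered = motion_energy - motion_energy.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
n_pcs = 50
face_pcs = U[:, :n_pcs] * S[:n_pcs]                   # (n_frames, n_pcs)

# Predict each neuron from face motion alone with ridge regression,
# ignoring the visual stimulus entirely.
neural = rng.random((n_frames, n_neurons))
neural_c = neural - neural.mean(axis=0)
lam = 1.0
w = np.linalg.solve(face_pcs.T @ face_pcs + lam * np.eye(n_pcs),
                    face_pcs.T @ neural_c)
resid = neural_c - face_pcs @ w

# Fraction of variance explained, per neuron.
var_exp = 1 - resid.var(axis=0) / neural_c.var(axis=0)
print(var_exp.mean())
```

With real data the surprise is how large those per-neuron numbers get from face movement alone; here the data are random, so they hover near zero.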

The other paper is from Simon Musall and Matt Kaufman in Anne Churchland's lab. They also have a nice twitter description of their work. Here, they used a technique that can image the whole brain simultaneously (though I am not sure to what depth), at the cost of resolution (individual neurons are not identifiable but are averaged together). The animals are doing a task where they need to tell the difference between two tones, or two flashes of light. You can look for the brain areas involved in choice, or the areas involved in responding to vision or audio, and they are there (choice, kind of?). But if you look at where movement is being represented, it is everywhere.

The things that you would normally look for – the amount of brain activity you can explain by an animal's decisions or its sensory responses – explain very little unique information.

This latter point is really important. If you had looked at the data and ignored the movement, you would certainly have found neurons that were correlated with decision-making. But once you take movement into account, that correlation drops away – the decisions are too correlated with general movement variables. People need to start thinking about how much of their neural data is responding to the task the animal is doing and how much is due to movement variables that are aligned to the task. Simple averaging will not wash away this movement.
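A toy example of what "unique" variance means here – entirely made-up numbers, just to illustrate the statistical point. When a decision variable is strongly correlated with movement, it can look predictive on its own while adding almost nothing beyond movement:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical setup: movement drives the neuron; the "decision" variable
# is merely correlated with movement, not an independent driver.
movement = rng.standard_normal(n)
decision = movement + 0.5 * rng.standard_normal(n)
neuron = 2.0 * movement + 0.1 * rng.standard_normal(n)

def r2(regressors, y):
    """R^2 of an ordinary least-squares fit of y on the given regressors."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

full = r2([movement, decision], neuron)
no_move = r2([decision], neuron)   # decision alone still "explains" a lot
no_dec = r2([movement], neuron)

unique_decision = full - no_dec    # ~0: decision adds nothing beyond movement
unique_movement = full - no_move   # large: movement carries the signal

print(f"decision alone R^2: {no_move:.2f}")
print(f"unique decision:    {unique_decision:.3f}")
print(f"unique movement:    {unique_movement:.3f}")
```

Looking only at the first number, you would conclude this neuron encodes the decision; the unique contributions tell the real story.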

There is a lot more to both of these papers and both will be more than worth your time to dig into.

I’m not sure if you would have noticed this effect in either case if they weren’t recording from massive numbers of neurons simultaneously. This is a brave new world of neuroscience. How do we deal with this massively complex behavioral data at the same time that we deal with massive neural populations?

In my mind, the gold standard for how to analyze this data comes from Eva Naumann and James Fitzgerald in a paper out of the Engert lab. They are analyzing data from the whole brain of the zebrafish as it fictively swims around and responds to a moving background. Rather than throwing up their hands at the complexity of this data and the number of moving pieces, what they did was very precisely quantify one particular aspect of the behavior. Then they followed the circuit step by step and tried to understand how the quantified behavior was transformed in the circuit. How did the visual stimuli guide the fish's orientation in the water? What were the different ways the retina represented that visual information? How was this transformed by the relays into the brain? How was this information further transformed at the next step? How did the motor centers generate the different types of behavior that were quantified?

The brain evolved to produce behavior. In my opinion there is no way to understand the brain – any of it – if you don’t understand the behavior that the animal is producing.

Behavioral quantification: running is part of learning

One of the most accessible ways to study a nervous system is to understand how it generates behavior – its outputs. You can watch an animal and instantly get a sense of what it is doing and maybe even why it is doing it. Then you reach into the animal’s brain and try to find some connection that explains the what and the why.

Take the popular 'eyeblink conditioning' task that is used to study learning. You can puff a harmless bit of air at an animal and it will blink in response (wouldn't you?). Like Pavlov's dog, you can then pair the puff with another signal – a tone, a light, something like that – and the animal will slowly learn to associate the two. Eventually you just show the animal the other signal, flashing the light at them, and they will blink as if they were expecting an air puff. Simple enough, but obviously not every animal is the same. There is a lot of variability in the behavior, which could be due to any of a number of unexplored factors, from individual differences in experience to personality. If this is what we are using to investigate the underlying neuroscience, then it places a fundamental limit on what we can know about the nervous system.
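To make the pairing logic concrete, here is a toy Rescorla-Wagner simulation – my illustration, not a model from any of the papers discussed – in which two hypothetical animals with different learning rates acquire the same association at very different speeds:

```python
import numpy as np

def condition(alpha, n_trials=100, us_magnitude=1.0):
    """Rescorla-Wagner: associative strength V after each CS-US pairing.

    alpha is the learning rate; V is updated by the prediction error
    between the US that arrived and the US that was expected.
    """
    V = 0.0
    history = []
    for _ in range(n_trials):
        V += alpha * (us_magnitude - V)   # prediction-error update
        history.append(V)
    return np.array(history)

fast = condition(alpha=0.10)
slow = condition(alpha=0.02)   # "individual differences": lower learning rate

# Both animals converge on the association, but at very different speeds.
print(f"trial 20:  fast={fast[19]:.2f}, slow={slow[19]:.2f}")
print(f"trial 100: fast={fast[-1]:.2f}, slow={slow[-1]:.2f}")
```

Even in this stripped-down model, a single hidden parameter produces learning curves that look wildly different animal to animal – which is exactly the kind of variability the next section is about.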

How can we neuroscientists overcome this? One very powerful technique has been to improve our behavioral quantification. I saw a fantastic example of this from Megan Carey when she visited Princeton earlier this year to talk about her work on cerebellum and learning. She had tons of interesting stuff but there was one figure she presented that simply blew me away.

First, a bit of history is in order (apologies if I get some of this a bit wrong, my notes here are hazy). When experimenters first tried to get eyeblink conditioning to work with mice, they had trouble. Even though it seems like such a simple reflex, the mice were performing very poorly on the task. Eventually, someone (?) found that allowing the mice to walk on a treadmill while experiencing the cues resulted in a huge increase in performance. Was this because they were unhappy being fixed in one place? Was it that they were unable to associate a puff of air to their eye with an environment they could not manipulate?

But there is still a lot of variability. Where does it come from? What you can now do is measure as much about the behavior as possible. Not just how much the animal blinks its eye, but how much it moves and how fast it moves, and how much it does all sorts of other stuff. And it turns out that if you measure the speed that the animal is walking there is a clear linear correlation with how long it takes the animal to learn.
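A minimal sketch of that kind of analysis, with fabricated data standing in for the real measurements (the speeds, session counts, and effect size here are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_mice = 30

# Hypothetical data: each mouse's average walking speed (cm/s) and the
# number of training sessions it took to reach a learning criterion.
speed = rng.uniform(5, 25, n_mice)
sessions = 20 - 0.6 * speed + rng.normal(0, 1.5, n_mice)

# Fit the linear relationship and report the correlation.
slope, intercept = np.polyfit(speed, sessions, 1)
r = np.corrcoef(speed, sessions)[0, 1]
print(f"slope: {slope:.2f} sessions per cm/s, r = {r:.2f}")
```

The point is not the regression itself, which is trivial, but that you only get to run it if you bothered to measure walking speed in the first place.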

Look at this figure – on the left, you can see how often each individual animal is responding to the air puff with an eyeblink (y-axis) as it is trained through time (x-axis). And on the right is how long it takes to reach some performance benchmark (y-axis) given the average speed the animal walks (x-axis).

So how do you test this? Make sure it is causation, not a meaningless correlation? Put them on a motorized treadmill and control the speed at which they walk. And BAM, most of the variability is gone! Look at the mess of lines in the behavior above and the clearly-delineated behavior below.

There’s a lesson here: when we study a ‘behavior’, there are a lot of other things that an animal is doing at the same time. We think they are irrelevant – we hope they are irrelevant – but often they are part of one bigger whole. If you want to study a behavior that an animal is performing, how else can you do it but by understanding as much about what the animal is doing as possible? How else but seeing how the motor output of the animal is linked together to become one complex form? Time and again, quantifying as many aspects of behavior as possible has revealed that it is in fact finely tuned but driven by some underlying variable that can be measured – once you figure out what it is.

Behavioral quantification: mapping the neural substrates of behavior

A new running theme on the blog: cool uses of behavioral quantification.

One of the most exciting directions in behavioral science is the set of advances in behavioral quantification. Science often advances by being able to perform ever more precise measurements on ever-increasing amounts of data. Thanks to the increasing power of computers and advances in machine learning, we are now able to automatically extract massive amounts of behavioral data at a level of detail that was previously unobtainable.

A great example of this is a recently published paper out of Janelia Farm. Using an absolutely shocking 400,000 flies, the authors systematically activated small subsets of neurons and then observed what behaviors they performed. First, can you imagine a human scoring every moment of four hundred thousand animals as they behaved over fifteen minutes? That is 12.1 billion frames of data to sort through and classify.

Kristin Branson – the corresponding author on the paper – has been developing two pieces of software that allow for efficient and fast estimation of behavior. The first, Ctrax, tracks individual animals as they move around a small arena and assigns a position, an orientation, and various postural features (for instance, since they are fruit flies we can extract the angle of each wing). The second, JAABA, then uses combinations of these features, such as velocity, interfly distance, and so on, in order to identify behaviors. Users annotate videos with examples of when an animal is performing a particular behavior, and then the program will flag moments in other videos that it believes are the same behavior. An iterative back-and-forth between user and machine gradually narrows down what counts as a particular behavior and what doesn't, eventually allowing fully-automated classification of behavior in new videos.
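A heavily simplified sketch of this annotate-train-correct loop – using plain logistic regression on two made-up features, rather than JAABA's actual feature set or classifier:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-frame features (velocity plus one irrelevant feature)
# for frames where a fly is, or is not, walking backwards.
n = 2000
backing = rng.random(n) < 0.3
velocity = np.where(backing, rng.normal(-2, 0.5, n), rng.normal(3, 1.0, n))
features = np.column_stack([velocity, rng.standard_normal(n)])

def train(X, y, n_iter=500, lr=0.1):
    """Logistic regression by gradient descent (illustrative only)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return (1 / (1 + np.exp(-np.clip(Xb @ w, -30, 30)))) > 0.5

# Round 1: the user labels a small subset of frames.
labeled = rng.choice(n, 50, replace=False)
w = train(features[labeled], backing[labeled].astype(float))

# The classifier proposes labels everywhere; in the real workflow the user
# reviews the least confident frames, corrects them, and retrains. Here we
# simply fold in a second batch of "corrected" labels and retrain.
labeled = np.concatenate([labeled, rng.choice(n, 50, replace=False)])
w = train(features[labeled], backing[labeled].astype(float))

accuracy = (predict(w, features) == backing).mean()
print(f"frame-level accuracy: {accuracy:.2f}")
```

The win is the ratio: a hundred labeled frames yield automatic classification of every frame of every new video, which is what makes 400,000-fly experiments scoreable at all.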

Then once you have this pipeline you can just stick a bunch of animals into a plate under a camera, activate said neural populations, let them do whatever they feel like doing, and get gobs and gobs of data. This lets you understand, with neural precision, which neurons are responsible for any arbitrary behavior you desire. It lets you build maps – maps that help you understand where information is flowing through the brain. And since you know which of these lines are producing which behaviors, you can then go and find even more specific subsets of neurons that let you identify precise neural pathways.

Here are two examples. Flies sometimes walk backwards (moonwalking!). If you look at the image below, you can see (on the bottom) all the different neurons labeled in a fly brain that had an effect on this backward locomotion, and in the upper-right the more specific areas where the neurons are most likely located. In fruit fly brains, the bulbous protrusions where these colors are found are the optic lobes – the visual areas of the animal – with a couple flecks in the central brain.

This turns out to be incredibly accurate. Some of this moonwalking circuit was recently dissected, and a set of neurons from the eye into the brain was linked to causing this behavior. The neurons (in green below) are in exactly the place you'd expect from the map above! They link to a set of neurons known as the 'moonwalker descending neurons', which send signals to the nerve (spinal) cord that cause the animal to walk backwards.

Of course, sometimes it can be more complicated. When a male fly is courting a female fly, he will extend one wing and vibrate it to produce a song. Here are the neurons related to that behavior (there are a lot):

There are two key points from this quantification. First, the sheer amount and quality of data that it is now possible to collect and score gives us immense statistical precision about when, and in which contexts, behaviors occur. Second, the capacity to find new things is increasing because we can be increasingly agnostic about what we are looking for (so it is easier to find surprises in the data!).


Mapping the Neural Substrates of Behavior. Robie et al 2017.

See also: Big behavioral data: psychology, ethology and the foundations of neuroscience. Gomez-Marin et al 2014.