Monday Open Question: How many types of neurons are there in the brain?

How many types of neurons are in the brain? Not just number, but classes that represent some fundamental unit of computation? I tweeted an article about this a couple days ago and (justly) got pilloried for saying it counted classes in the brain rather than in two cortical regions. So what is the answer for the whole brain?

Obviously the answer depends on the brain that you are talking about. In the nematode C. elegans, we know that every hermaphrodite has 302 neurons and every male has 381. I believe these male-specific neurons get pruned during development if the animal does not become male. These neurons tend to come in symmetric pairs or quartets, one showing up on each side of the body, so the number of neural ‘classes’ is on the order of 118 – though there is evidence that some neurons can differ slightly between the left and right side (ASEL and ASER, for example). Fruit flies (Drosophila) also have sex-specific neurons, with the genes Fruitless and Doublesex controlling whether certain neurons are masculinized or feminized. So not only are there going to be different classes of neurons in males and females, we know that there are single (or, again, symmetric) neurons that control single behaviors. On the other hand, in the fruit fly retina there are definitely distinct classes of neurons that are tiled across the eye. This should frame our thinking about the number of neural classes – there are classes with large numbers of neurons where convolution is useful (repeating the same computation across some space, such as visual or auditory or even musculature space), but perhaps neural function becomes more specific and class-less once specific outputs are needed.

The fruit fly brain may seem a bit silly; why bother comparing it to us cortical mammals? But adult Drosophila have roughly the same number of neurons as larval zebrafish, a vertebrate with a cerebrum and a popular model organism in neuroscience. So do we think that the zebrafish has just as many pre-planned neurons as Drosophila? Or is its neural structure somewhat looser, more patterned? I don’t have an answer here but I think it is worth thinking about the similarities and differences in these organisms, which have similar numbers of neurons but quite different environmental and developmental needs.

Let’s turn to mammals. The area with the most well-defined number of cell classes is probably the retina. I’m not sure of the up-to-date estimates, but the classic description has two classes (rods/cones) in the input layer of the retina, which can be further split depending on the number of colors an animal can see – for instance, humans have S, M, and L cones roughly corresponding to blue, green, and red light. This review roughly estimates that further into the eye there are two types of horizontal cells (first layer), ~12 types of bipolar cells (second layer), and ~30 types of amacrine cells (third layer). From other sources we think there is something on the order of ~30 types of retinal ganglion cells, the output from the eye into the brain. Interestingly, this is roughly the same number of defined classes that we think the fruit fly has! But again, there may be species specificity here; something on the order of 95%+ of the output layer of the monkey retina is a single cell class. So the eye alone has at least 80 classes of neurons, and quite probably more.
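Tallying those rough counts makes the figure easy to sanity-check. The base tally lands just under 80, and finer splits (cone subtypes, for instance) carry it past; every number here is a ballpark from the text, not a measurement:

```python
# Rough retinal cell-class counts quoted above; all ballpark estimates.
photoreceptors = 2   # rods + cones (splitting cones into S/M/L adds more)
horizontal = 2
bipolar = 12
amacrine = 30
ganglion = 30

retina_classes = photoreceptors + horizontal + bipolar + amacrine + ganglion
# retina_classes == 76; finer subdivisions push it to 80 and beyond
```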

The olfactory bulb is probably more complex. In mice, at least, the number of olfactory glomeruli is probably on the order of one or two thousand. Though I would expect that once past this layer the classification will look more like retina or cortex – on the order of tens of subtypes.


Now let’s think about the cortex. The paper that inspired this post tried to estimate the number of cell classes by using single-cell RNA-sequencing in mice to identify the transcripts present in different cells and then clustering them into distinct sub-classes. It should be clear up front that the number of clusters you identify (1) may not be categorical but could be continuous between types of neurons and (2) may differ if you clustered with a different method or with different types of data – functional responses, for instance. The authors of this paper make clear that they certainly find cells that look ‘intermediate’ between their clusters, so whatever categories we get may not be very firm. For instance, in the following figure the size of the circles represents the number of cells they identify in a particular cluster and the thickness of the line between two circles is how many cells are intermediate between the two clusters.


Without getting into too many details: they find that in two distinct anatomical regions there are roughly 50 inhibitory neuron types whose transcript profiles are common between the regions, suggesting that the types of inhibition may be a common, repeated pattern across the brain. However, the ~50 excitatory neuron types were essentially unique to each of the two regions.

Chuck Stevens has an interesting paper where he attempts to find lower and upper bounds on the number of possible cell classes in cortex. Let’s say that we accept the tiling principle, that the same types of cells are repeated again and again in a motif:

This argument can be extended to the neocortex. Underneath 1 mm² of most regions of the primate cortical surface are about 10⁵ neurons — the striate cortex is an exception with twice the number — each of which covers say 0.05 mm² with its dendritic arbor (assumed to be 0.25 mm in diameter). Twenty neurons with dendritic arbors of this size would be required to cover a square millimetre of cortex, so the upper limit on number of cell types, if each must tile the cortex, is 10⁵/20 = 5000, or an average of 1000 per layer. Now assume that the cortex has 10 times more neurons of each type than required to cover the cortex, a redundancy factor of 10 as guessed above for hippocampus: we thus would have about 100 neuron types per layer. If we believe there are a dozen ganglion cell types, two dozen amacrine cell types, and four dozen different kinds of inhibitory neurons in the CA1 region of hippocampus, 100 cell types per layer of neocortex seems like a reasonable number – not good news for the micromodelers.

Let’s update this estimate; we think that there may be 25 excitatory cell types per region. I don’t actually know off-hand the percentage of mouse cortex that these two regions encompass (a motor region, ALM, and a visual region, VISp) but let’s say they are roughly 10% of the cortical area each (this could be grossly wrong so feel free to correct me). We then might believe that cortex has on the order of 25 × 10 ≈ 250 excitatory cell classes and ~50 inhibitory cell classes. Does this feel right? 300 classes for all of cortex?
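To make the arithmetic explicit, here is the Stevens-style back-of-envelope redone in code. Every number is a guess carried over from above, including my 10%-per-region assumption:

```python
import math

# Tiling bound from the Stevens quote (all guesses, not measurements).
neurons_per_mm2 = 1e5                                    # under 1 mm^2 of cortex
arbor_diameter_mm = 0.25
arbor_area_mm2 = math.pi * (arbor_diameter_mm / 2) ** 2  # ~0.05 mm^2
arbors_to_tile_mm2 = 1 / arbor_area_mm2                  # ~20 neurons per mm^2

upper_bound = neurons_per_mm2 / arbors_to_tile_mm2       # ~5000 types if each tiles
per_layer = upper_bound / 5                              # ~1000 per layer
redundancy = 10                                          # Stevens' hippocampal guess
types_per_layer = per_layer / redundancy                 # ~100 types per layer

# The updated estimate from the transcriptomic data:
excitatory_per_region = 25
assumed_regions = 10      # my guess that each region covers ~10% of cortex
shared_inhibitory = 50
total_cortex = excitatory_per_region * assumed_regions + shared_inhibitory  # 300
```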

But the cortex contains only a minority of the cells in the brain – the majority are in a single structure, the cerebellum. I don’t know of an estimate of the number of neural classes there, but a structure known for its beautifully tiled neurons seems likely to have a fair bit of structure in its cell classes. What would we estimate here? Something similar to a primary sensory area, with ~50-100 cell classes? Something more, something less?

And what about other subcortical regions in the brain like hypothalamus that are more directly responsible for specific behavior? Should we expect many thousands of distinct subtypes for each of the behaviors or something more patterned?

Tell me where I’m wrong.


Two views of science

The pessimist:

These quotes give you a sense of these two books, both of which build on what Alan Richardson calls “one of the great lessons of the cognitive revolution”: “just how much of mental life remains closed to introspection.” As a brief summation, the unified thesis of Nørretranders’s and Wilson’s works looks something like this: We are not really in control. Not only are we not in control, but we are not even aware of the things of which we are not in control. Our ability to judge anything with any accuracy is a lie, as is our ability to perceive these lies as lies. Consciousness masquerades as awareness and agency, but the sense of self it conjures is an illusion. We are stranded in the great opaque secret of our biology, and what we call subjectivity is a powerless epiphenomenon, sort of like a helpless rider on the back of a galloping horse—the view is great, but pulling on the reins does nothing.

If this description of reality feels familiar to you, it’s because such a neuroscientifically inspired pessimism is a quiet but powerful strain of modern thinking.

The optimists:

The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together (Carl Sagan)

Poets say science takes away from the beauty of the stars – mere globs of gas atoms. I too can see the stars on a desert night, and feel them. But do I see less or more? The vastness of the heavens stretches my imagination – stuck on this carousel my little eye can catch one – million – year – old light. A vast pattern – of which I am a part… What is the pattern, or the meaning, or the why? It does not do harm to the mystery to know a little about it. For far more marvelous is the truth than any artists of the past imagined it. Why do the poets of the present not speak of it? What men are poets who can speak of Jupiter if he were a man, but if he is an immense spinning sphere of methane and ammonia must be silent? (Richard Feynman)

These are not necessarily mutually exclusive.

But I also found this Feynman poem, which I had never heard before:

…I stand at the seashore, alone, and start to think.

There are the rushing waves, mountains of molecules
Each stupidly minding its own business
Trillions apart, yet forming white surf in unison

Ages on ages, before any eyes could see
Year after year, thunderously pounding the shore as now
For whom, for what?
On a dead planet, with no life to entertain

Never at rest, tortured by energy
Wasted prodigiously by the sun, poured into space
A mite makes the sea roar

Deep in the sea, all molecules repeat the patterns
Of one another till complex new ones are formed
They make others like themselves
And a new dance starts

Growing in size and complexity
Living things, masses of atoms, DNA, protein
Dancing a pattern ever more intricate

Out of the cradle onto the dry land
Here it is standing
Atoms with consciousness, matter with curiosity
Stands at the sea, wonders at wondering

I, a universe of atoms
An atom in the universe

(This is obviously a response to one of my favorite poems, When I Have Fears That I May Cease To Be)

#Cosyne18, by the numbers

Where does the time go? Another year, another look at my favorite conference: Cosyne. Cosyne is a Computational and Systems Neuroscience conference, this year held in Denver. I find it useful each year as a way to assess where the field is and where it may be heading.

First: who is the most active? This year it is Ken Harris, whom I dub this year’s Hierarch of Cosyne. The most active in previous years were:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang
  • 2017: J. Pillow

If you look at the most active across all of Cosyne’s history, well, nothing ever changes.

Visualizing the network diagram of co-authorships reveals some of the structure in the computational neuroscience community (click for high-resolution PDF). And zooming in:

Plotting the network of the whole history of Cosyne is a mess – there are too many dense connections. Here are three other ways of looking at it. First, only plotting the superusers (people who have 20+ abstracts across Cosyne’s history, click for PDF):

Or alternately, the regulars (10+ abstracts across Cosyne’s history, click for PDF):

And, finally, the regulars + everyone they have collaborated with (click for PDF):

I’d say the long-term structure looks something like the New York Gang (green), the European Crew (purple), the High-Dimensional Deities (blue), the Ecstasy of Entropy (magenta), and some others that I can’t come up with good names for (comments welcome).

Memming asked whether the central cluster was getting more dispersed or less cliquey with time. This is kind of a hard question to answer. If you just look at how large the central connected component is over time, the answer to ‘more dispersed’ is a resounding no: the community is more cohesive and more connected than ever before.

On the other hand, we can look within that central cluster. How tightly connected is it? If you look at mean path length – how long it takes to get from one author to another, like degrees of Kevin Bacon or an Erdos number (a Paninski number?) – then the largest cluster is becoming more dispersed. Dan Marinazzo suggested looking at network efficiency as a metric that is more robust to size. Network efficiency is roughly the inverse of path length, where 1 would mean you can get from any author to another in a single step and 0 means it takes forever.
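Both metrics are easy to compute with networkx. A sketch on a made-up toy co-authorship graph (not the actual Cosyne data):

```python
import networkx as nx

# Toy co-authorship graph: nodes are authors, edges join people
# who have shared an abstract. Entirely made up for illustration.
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"),  # a chain of collaborations
    ("A", "C"),                                       # one shortcut
])

# Mean path length (the "Paninski number" idea), on the largest component:
largest = G.subgraph(max(nx.connected_components(G), key=len))
mean_path = nx.average_shortest_path_length(largest)  # 1.7 for this toy graph

# Global efficiency: mean inverse shortest-path distance over all pairs;
# 1.0 for a clique, approaching 0 as the graph disperses.
eff = nx.global_efficiency(G)
```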

I now also have two years of segmented abstracts (both accepted and rejected). What are the most popular topics at Cosyne? I used doc2vec, a method that can take a document and embed it in a high-dimensional space that represents the semantic topics that are being used, and then visualized it with t-SNE. The Cosyne Island that you see above is the density of abstracts at each given point. I’ve given the different islands names that represent the abstracts in each of them.
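A sketch of that embed-then-map pipeline. The real analysis used doc2vec; as a lightweight stand-in this uses TF-IDF plus truncated SVD (LSA) for the document embedding, then t-SNE for the 2-D “island” map. The abstracts below are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE

# Made-up mini-abstracts standing in for the real Cosyne corpus.
abstracts = [
    "neural decoding of movements and behavior",
    "uncertainty and reward in decision making",
    "retinal ganglion cell types and orientation tuning",
    "high dimensional population dynamics in motor cortex",
    "spontaneous activity in developing circuits",
    "attention and uncertainty in visual cortex",
    "motion processing in the fly visual system",
    "reward prediction errors in dopamine neurons",
]

tfidf = TfidfVectorizer().fit_transform(abstracts)
vecs = TruncatedSVD(n_components=5, random_state=0).fit_transform(tfidf)
emb = TSNE(n_components=2, perplexity=3, init="random",
           random_state=0).fit_transform(vecs)
# emb is (n_documents, 2); a density plot over these points gives the islands
```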

If you look at the words that you see more often in 2018’s accepted abstracts, they are “movements”, “uncertainty”, “motion”; looks like behavior!

The rejected abstracts are “orientation”, “techniques”, “highdimensional”, “retinal”, “spontaneous” 😦
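An accepted-versus-rejected comparison like this boils down to a smoothed log-odds ratio over word counts. A minimal sketch, with toy word lists rather than the real abstracts:

```python
from collections import Counter
import math

def log_odds(accepted_words, rejected_words, smoothing=1.0):
    """Smoothed log-odds of each word appearing in accepted vs rejected text."""
    a, r = Counter(accepted_words), Counter(rejected_words)
    vocab = set(a) | set(r)
    na, nr = sum(a.values()), sum(r.values())
    scores = {}
    for w in vocab:
        # Additive smoothing so unseen words don't give infinite scores.
        pa = (a[w] + smoothing) / (na + smoothing * len(vocab))
        pr = (r[w] + smoothing) / (nr + smoothing * len(vocab))
        scores[w] = math.log(pa / pr)
    return scores

# Toy word lists (not the real abstracts):
accepted = "movements uncertainty motion movements behavior uncertainty".split()
rejected = "orientation techniques retinal orientation spontaneous".split()
scores = log_odds(accepted, rejected)
# positive score: over-represented in accepted; negative: in rejected
```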

We can also look at words that are more likely to be accepted in 2018 than 2017 (which are the big gainers):

And the big losers this year versus last year:

Here is a list of the twitter accounts that will be at Cosyne.

Previous years: [2014, 2015, 2016, 2017]

Communication by virus

‘Some half-baked conceptual thoughts about neuroscience’ alert

In the book Snow Crash, Neal Stephenson explores a future world that is being infected by a kind of language virus. Words and ideas have power beyond their basic physical form: they have the ability to cause people to do things. They can infect you, like a song that you just can’t get out of your head. They can make you transmit them to other people. And the book supposes a language so primal and powerful it can completely and totally take you over.

Obviously that is just fiction. But communication in the biological world is complicated! It is not only about transmitting information but also about convincing the receiver of something. Humans communicate by language and by gesture. Animals sing and hiss and hoot. Bacteria communicate by sending signaling molecules to each other. Often these signals are not just to let someone know something but also to persuade them to do something else. Buy my book, a person says; stay away from me, I’m dangerous, the rattlesnake says; come over here and help me scoop up some nutrients, a bacterium signals.

And each of these organisms is made up of smaller things also communicating with each other. Animals have brains made up of neurons and glia and other meat, and these cells talk to each other. Neurons send chemicals across synapses to signal that they have gotten some information, processed it, and, just so you know, here is what they computed. The signals they send aren’t always simple. They can excite another neuron or inhibit it, a kind of integrating set of pluses and minuses for the other neuron to work on. But they can also be peptides and hormones that, in the right set of other neurons, will set new machinery to work, machinery that fundamentally changes how the neuron computes. In all of these scenarios, the neuron that receives the signal has some sort of receiving protein – a receptor – that is specially designed to detect those signaling molecules.

This being biology, it turns out that the story is even more complicated than we thought. Neurons are cells, and like every other cell they have internal machinery that uses mRNAs as instructions for building the protein machinery needed to operate. If a neuron needs more of one thing, it will transcribe more of the mRNA and translate it into new protein. Roughly, the more mRNA you have, the more of that protein – tiny little machines that live inside the cell – you will produce.

This transcription and translation is behind much of how neurons learn. The saying goes that neurons that fire together wire together: when they respond to things at the same time (such as being in one location at the same time you feel sad), they will tend to strengthen the link between them to create memories. The physical manifestation of this is translating more of (say) a specific receptor protein, so that now the same signal will activate more receptors and result in a stronger link.
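The “fire together, wire together” idea can be caricatured in a few lines. This is a cartoon of coincidence-driven strengthening, not a model of the receptor-translation machinery:

```python
# Cartoon Hebbian rule: a weight grows only when pre- and postsynaptic
# activity coincide.
def hebbian_update(w, pre, post, lr=0.1):
    """Return the weight after one Hebbian step: dw = lr * pre * post."""
    return w + lr * pre * post

w = 0.5
events = [(1, 1), (1, 1), (0, 1), (1, 0)]  # only the first two are coincident
for pre, post in events:
    w = hebbian_update(w, pre, post)
# w has grown only from the two coincident events: 0.5 -> 0.7
```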

And that was pretty much the story so far. But it turns out that there is a new wrinkle: neurons can directly ship mRNAs into each other in a virus-like fashion, avoiding the need for receptors altogether. There is a gene called Arc which is involved in many different pieces of the plasticity puzzle. Looking at the sequence of the gene, it turns out that a portion of the code creates a virus-like structure that can encapsulate RNAs and burrow through other cells’ membranes. This RNA is then released into the other cell. And this mechanism works: Arc-mediated signaling actually causes strengthening of synapses.

Who would have believed this? That the building blocks for little machines are being sent directly into another cell? If classic synaptic transmission is kind of like two cells talking, this is like just stuffing someone else’s face with food or drugs. This isn’t in the standard repertoire of how we think about communication; this is more like an intentional mind-virus.

There is this story in science about how the egg was traditionally perceived to be a passive receiver during fertilization. In reality, eggs are able to actively choose which sperm they accept – they have a choice!

The standard way to think about neurons is somewhat passive. Yes, they can excite or inhibit the neurons they communicate with but, at the end of the day, they are passively relaying whatever information they contain. This is true not only in biological neurons but also in artificial neural networks. The neuron at the other end of the system is free to do whatever it wants with that information. Perhaps a reconceptualization is in order. Are neurons more active at persuasion than we had thought before? Not just a selfish gene but selfish information from selfish neurons? Each neuron, less interested in maintaining its own information than in maintaining – directly or homeostatically – properties of the whole network? Neurons do not simply passively transmit information: they attempt to actively guide it.

2017 in review (a quantified life)

I have always found it useful to take advantage of the New Year and reflect on what I have done over the past year. The day itself is a useful bookmark in life, inevitably trapped between leaving town for Christmas and coming back to town after the New Year begins. Because of the enforced downtime, what I happen to read has a strong influence on me – last year, I hopped on the Marie Kondo craze and really did manage to do a better job of keeping clean (kind of) but more importantly organizing my clothes by rolling and folding them until they fit so perfectly in my drawers. So that was useful, I guess.

The last year has been okay. Not great, not terrible. Kind of middle-of-the-road as my life goes. There have been some big wins (organizing a fantastic workshop at Cosyne on neurobehavioral analysis and being awarded a Simons Foundation fellowship that lets me join a fantastic group of scientists) and some frustrations (mostly scientific work that goes slowly slowly slowly).

One thing that sticks out for me over this past year – over these past two years, actually – is how little time I have spent on this blog. Or rather, how little of what I have done has been published on this blog. It’s not for a lack of time! I have actually done a fair bit of writing but am constantly stuck after a paragraph or two, my motivation waning until it disappears completely. This is largely due to how I responded to some structural features in my life, mostly a long commute and a lot of things that I want to accomplish.

Last year I had the “clever” idea of creating a strict regimen of hourly and daily goals both for work and for my life. Do this analysis from 3pm – 4pm. Debug that code from 4pm – 5pm. Play the piano from 8pm – 9pm. Things like that. Maybe this works for other people? But I end up overambitious, constantly adding things that I need to do today, so much so that I rapidly switch from project to project, each slot mangled into nonsense by the little new things that will always spring up on any given day. Micromanaging yourself is the worst kind of managing, especially when you don’t realize you are doing it.

This is where what I read over winter break made me think. One of the three articles that influenced me was about the nature of work:

For unlike someone devoted to the life of contemplation, a total worker takes herself to be primordially an agent standing before the world, which is construed as an endless set of tasks extending into the indeterminate future. Following this taskification of the world, she sees time as a scarce resource to be used prudently, is always concerned with what is to be done, and is often anxious both about whether this is the right thing to do now and about there always being more to do. Crucially, the attitude of the total worker is not grasped best in cases of overwork, but rather in the everyday way in which he is single-mindedly focused on tasks to be completed, with productivity, effectiveness and efficiency to be enhanced. How? Through the modes of effective planning, skilful prioritising and timely delegation. The total worker, in brief, is a figure of ceaseless, tensed, busied activity: a figure, whose main affliction is a deep existential restlessness fixated on producing the useful.

Yup, that pretty much sums up how I was trying to organize my life. In the hope of accomplishing more I ended up doing less. This year I am trying a less-is-more approach; have fewer, more achievable goals each day/month/time unit; have more unstructured time; read more widely; and so on. Instead of saying I need to learn piano and I need to make art and I need to play with arduinos and I need to memorize more poetry and finding more and more things that I need to do, just list some things I’m interested in doing. Look at that list every so often to remind myself and then allow myself to flow into the things I am most interested in rather than forcing it.

I was lucky enough in graduate school to join a lunch with Eve Marder. There are two types of scientists, she said. Starters and finishers. Some people start a lot of projects; some people finish a few. This has always stuck with me. This past year I have been trying to maximize how many things I can work on – and it turns out that is a lot of different things. I want to spend this year doing a couple things at a time and finishing them. Doing them well.

I have this memory of Wittgenstein declaring in the Tractatus that “the purpose of the Philosopher is to clarify.” I must have confabulated that quote because I could never find it again. Still, it’s my favorite thing that Wittgenstein ever said. For a scientist, the aphorism should be that “the purpose of the Scientist is to simplify.”

There was an article in the New York Times recently from an 88-year-old man looking back on the 18 years he has lived in the millennium:

I’m trying to break other habits in far more conventional ways. As in many long marriages, my wife and I enjoy spending time with the same friends, watch the same television programs, favor the same restaurants, schedule vacations to many of the same places, avoid activities that venture too far from the familiar.

We decided to become more adventurous, shedding some of those habits. European friends of ours always seem to find the time for an afternoon coffee or glass of wine, something we never did. Now, spontaneously, one of us will suggest going to a coffee shop or cafe just to talk, and we do. It’s hardly a lifestyle revolution, but it does encourage us to examine everything we do automatically, and brings some freshness to a marriage that started when Dwight Eisenhower was elected president.

The best memories can come from unexpected experiences. The best thoughts can come from exposure to unexpected ideas. Attempting to radically organize my life has left me without those little moments where my mind wanders from topic to topic. Efficiency. I have cut back on my reading for pleasure, most of which now comes on audiobook during my commute and somehow seems to prevent deep thinking. But the reason I am interested in science in the first place is because of the questions about who we are and how we behave that come out of thinking about the things I read! The solution, again, is to remove some of the structure I am imposing on my life, simplify and force myself to let go of the need to always be doing something quantifiable and useful.

Looping back, this is why sitting down to write – and finishing what I write – is one of my big goals for the year. Because I find writing fun! And I find it the best way to really think rigorously, to explore new thoughts and new ideas. There is much less of a need to do so much, to try so many projects, when I can read and think about something, writing about it to make something useful and enjoyable instead of making a huge product out of it.

I am not a Stoic but find Stoic thinking useful. Something I read over the holidays:

Let me then introduce you to three fundamental ideas of Stoicism – one theoretical, the other two practical – to explain why I’ve become what I call a secular Stoic. To begin with, the Stoics – a school of philosophers who flourished in the Greek and Roman worlds for several hundred years from the third century BCE – thought that, in order to figure out how to live our lives (what they called ethics), we need to study two other topics: physics and logic. “Physics” meant an understanding of the world, as best as human beings can grasp it, which is done by way of all the natural sciences as well as by metaphysics.

The reason that physics is considered so important is that attempting to live while adopting grossly incorrect notions about how the world works is a recipe for disaster. “Logic” meant not only formal reasoning, but also what we would today call cognitive science: if we don’t know how to use our mind correctly, including an awareness of its pitfalls, then we are not going to be in a position to live a good life.

Beyond reading and self-reflection, the best way to understand your life is to quantify it. Quantification is the best way to peer into the past and really cut through hazy memories that are full of holes. What did I really do? What did I really think? This isn’t an attempt at stricture or rigidity: it’s an attempt at radical self-knowledge. I’m fairly active at journaling, which is the first step, but I also keep track of what I eat and how I exercise using MyFitnessPal, books I read on Goodreads, movies I watch on Letterboxd, where I have been using my phone to track me, and science articles I read using Evernote (I used to be very active on Yelp but somehow lost track of that). Using these tools to look back on the past year is a great experience: “Oh yeah, I loved that movie!” or “Ugh, I can’t believe I read that whole book.” Or just reminding myself of pleasant memories from a short trip to LA.

I’d like to expand that this year to include some other relevant data – ‘skills’ I work on like playing piano to see whether I’m actually improving, TV I watch (because maybe I watch too much, or not enough!), what music I’m listening to, where I spend my money (I already avidly keep track of the fluctuations in how much I have month-to-month), and what important experiences I have (vacations; hikes; seeing exciting new art). There don’t seem to be any good apps for these things outside of Mint, so I have assembled a giant Google Sheet for all of these categories to make it easier to access and analyze the data, with a main Sheet that I can use every month to look back and make some qualitative observations. Oh, and I’m also building a bunch of arduinos that can sense temperature, humidity, light, and sound intensity to put in different rooms of my house to log those things (mostly because my house is always either too hot or too cold and the thermostat is meaningless and I want to figure out why, and partly because I want to make sweet visualizations of the activity in my house throughout the year).

So my lists!

These are the movies I watched in 2017 and to which I gave 5 stars (no particular order):

Embrace of the Serpent
American Honey
Victoria
Gentleman’s Agreement
T2: Trainspotting
Logan
Moana
While We’re Young
Moonlight

With honorable mentions to My Life As A Zucchini, Blade Runner 2049, and Singles.

These are the books I read in 2017 and liked the most:

The Invisibility Cloak (Ge Fei)
The Wind-up Bird Chronicle (Murakami)
Ficciones (Borges)
The Stars Are Legion (Hurley)
Redshirts (Scalzi)
Red Mars (Robinson)
We Are Legion (Taylor)
Permutation City (Egan)
Neuromancer (Gibson)

That is a lot more sci-fi than I normally read, and many of these are books that I had read previously.

Where was I (generated using this)?

There was an article a few years ago on the predictability of human movement. It turns out that people are pretty predictable! If you know where they are at one moment, you can guess where they will be the next. That’s not too surprising, though, is it? You’re mostly at work or at home. If you go to a bar, there is a higher than random probability that you’ll go home afterward.

The data that you can ask your Android phone to collect on you is unfortunately a bit impoverished. It doesn’t log everything you do but is biased toward times when you check your phone (lunch, when you’re the passenger in a long car ride home, etc). Still, it captures the broad features of the day.

I’ve been keeping track of the data for two years now so I downloaded the data and did a quick analysis about the entropy of my own life. How predictable is my location? If you bin the data into 1 sq. mile bins, entropy is a measure of how much uncertainty there is in where I was. 1 bit of entropy would mean you could guess where I was down to the mile with only one yes or no question; 2 bits of entropy would mean you could guess with two questions; and so on.
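Concretely, the entropy of a list of binned locations is just Shannon entropy over the empirical distribution. A quick sketch, with made-up location bins:

```python
import math
from collections import Counter

def location_entropy(locations):
    """Shannon entropy, in bits, of a list of binned locations."""
    n = len(locations)
    return sum((c / n) * math.log2(n / c) for c in Counter(locations).values())

# Made-up bins: a whole day at home is perfectly predictable (0 bits);
# 8 equally likely mile-square bins take 3 yes/no questions (3 bits).
print(location_entropy(["home"] * 24))                  # 0.0
print(location_entropy([f"bin{i}" for i in range(8)]))  # 3.0
```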

On any given day of the week, there are roughly 3 bits of entropy in my location (much less on weekends). But as you can see, it varies a lot by month depending on whether I am traveling or not.

In 2016 (the weird first month is because that’s when I started collecting data and only got a few days):

In 2017:

I will leave you with an image from the last thing I was reading in 2017, and which was consistently the weirdest thing I read: Battle Angel Alita.

Monday open question: can invertebrates be ‘cognitive’?

Janelia Farm, the research center of the Howard Hughes Medical Institute, recently announced its upcoming research focuses. One of them was controversial: mechanistic cognitive neuroscience. Here’s what they had to say about it:

How does the brain enable cognition? We are developing an integrated program in which tool-builders, biologists, and theorists collaborate to clear the technical, conceptual, and computational hurdles that have kept the most intriguing aspects of cognition beyond the purview of mechanistic investigation. The program will establish tight links across our existing genetic model systems —flies, fish, and rodents— and exploit their complementary strengths. We aim to make the fly the benchmark for reductionist explanations of neural processes underlying complex behavior, leveraging conceptual research by mammalian neuroscientists. The fly has strong potential as a model for rapid mechanistic insights, due to its small brain size, the likelihood of obtaining a complete wiring diagram of its brain, and increasingly powerful methods for measuring and manipulating genetically defined populations of cells in behaving animals. We expect this research to reveal strategies for better understanding the more sophisticated neural and behavioral features of vertebrates. In turn, we expect our vertebrate research to expose complex computational mechanisms, some of which we can study at a detailed level in the fly.

Why was this so controversial? This sentence: “In turn, we expect our vertebrate research to expose complex computational mechanisms, some of which we can study at a detailed level in the fly.” Yes, the humble fly may or may not have cognitive states.

What are some cognitive behaviors that a fly can perform? They use reinforcement learning, can attend to stimuli, and have visual place memory. Other invertebrates can recognize faces and perform complex path integration. On the other hand, flies have very poor linguistic abilities.

It’s a truth of biology that theories rarely survive contact with new types of data. There is a kind of clarity from knowing the exact neural circuitry and dynamics that a minimal neural circuit needs. If I were studying, say, attention in primates I would be interested in the precise mechanisms that another species uses to accomplish a task similar to what I’m studying. There’s no guarantee that it will be the same mechanism – but is it so unreasonable? Is there a reason that insects would not display cognitive behavior?

You should be using the Neuromethods slack

Ben Saunders has started a Slack for those of you in neuroscience who do, uh, neuroscience. The Neuromethods Slack is a place for scientists to discuss questions about experiments. There’s a channel for electrophysiology, a channel for the biophysics of rhodopsins, a channel for Drosophologists, a channel for data visualization, and so on. It is not the robust mix of science and nonsense that Twitter seems to generate but very much on-topic, and questions seem to get answers from other experts within a day or so. You should check it out!

Behavioral quantification: running is part of learning

One of the most accessible ways to study a nervous system is to understand how it generates behavior – its outputs. You can watch an animal and instantly get a sense of what it is doing and maybe even why it is doing it. Then you reach into the animal’s brain and try to find some connection that explains the what and the why.

Take the popular ‘eyeblink conditioning’ task that is used to study learning. You can puff a harmless bit of air at an animal and it will blink in response (wouldn’t you?). Like Pavlov’s dog, you can then pair the puff with another signal – a tone, a light, something like that – and the animal will slowly learn to associate the two. Eventually you just show the animal the other signal, flashing the light at them, and they will blink as if they were expecting an air puff. Simple enough, but obviously not every animal is the same. There is a lot of variability in the behavior, which could be due to any of a number of unexplored factors, from individual differences in experience to personality. If this is what we are using to investigate the underlying neuroscience, then that variability places a fundamental limit on what we can know about the nervous system.
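The pairing process above can be sketched with a textbook associative-learning model (a Rescorla–Wagner-style update; the learning rate and parameter values here are illustrative, not taken from the work discussed):

```python
def conditioning_curve(trials, alpha=0.3, lam=1.0):
    """Associative strength of the tone->puff pairing across trials.
    alpha: learning rate (made-up value), lam: maximum association.
    Each trial, the prediction error (lam - v) drives learning."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (lam - v)
        history.append(v)
    return history

# Association grows quickly at first, then saturates toward lam:
print(conditioning_curve(3))
```

The point of the section, though, is that real mice do not all follow the same tidy curve – and that the deviations themselves carry information.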

How can we neuroscientists overcome this? One very powerful technique has been to improve our behavioral quantification. I saw a fantastic example of this from Megan Carey when she visited Princeton earlier this year to talk about her work on cerebellum and learning. She had tons of interesting stuff but there was one figure she presented that simply blew me away.

First, a bit of history is in order (apologies if I get some of this a bit wrong; my notes here are hazy). When experimenters first tried to get eyeblink conditioning to work in mice, they had trouble. Even though it seems like such a simple reflex, the mice were performing very poorly on the task. Eventually, someone (?) found that allowing the mice to walk on a treadmill while experiencing the cues resulted in a huge increase in performance. Was this because they were unhappy being fixed in one place? Was it that they were unable to associate a puff of air to their eye with an environment that they could not manipulate?

But there is still a lot of variability. Where does it come from? What you can now do is measure as much about the behavior as possible. Not just how much the animal blinks its eye, but how much it moves and how fast it moves, and how much it does all sorts of other stuff. And it turns out that if you measure the speed that the animal is walking there is a clear linear correlation with how long it takes the animal to learn.
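To illustrate what finding such a relationship looks like (the speeds and learning times below are made-up numbers, not the actual data from this work), you can compute the correlation between walking speed and time-to-learn:

```python
import statistics

# Hypothetical per-mouse data: average walking speed (cm/s) and
# sessions needed to reach a learning criterion (illustrative only).
speeds = [2.0, 4.5, 6.0, 8.5, 10.0, 12.5]
sessions_to_learn = [14, 11, 10, 7, 6, 4]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Faster walkers learn in fewer sessions: a strong negative correlation.
print(round(pearson_r(speeds, sessions_to_learn), 3))
```

A correlation like this is only suggestive, which is exactly why the motorized-treadmill manipulation described next matters: controlling speed directly is what turns the correlation into a causal claim.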

Look at this figure – on the left, you can see how often each individual animal is responding to the air puff with an eyeblink (y-axis) as it is trained through time (x-axis). And on the right is how long it takes to reach some performance benchmark (y-axis) given the average speed the animal walks (x-axis).

So how do you test this? Make sure it is causation, not a meaningless correlation? Put the mice on a motorized treadmill and control the speed at which they walk. And BAM, most of the variability is gone! Look at the mess of lines in the behavior above and the clearly-delineated behavior below.

There’s a lesson here: when we study a ‘behavior’, there are a lot of other things that an animal is doing at the same time. We think they are irrelevant – we hope they are irrelevant – but often they are part of one bigger whole. If you want to study a behavior that an animal is performing, how else can you do it but by understanding as much about what the animal is doing as possible? How else but seeing how the motor output of the animal is linked together to become one complex form? Time and again, quantifying as many aspects of behavior as possible has revealed that it is in fact finely tuned but driven by some underlying variable that can be measured – once you figure out what it is.

What people mean when they say “maybe”

What is the probability that people perceive when they hear a word like ‘probably’ or ‘probably not’? Someone went and collected some data on this to get the actual probabilities!

Here is some old data:

[This is mostly a personal reminder so I can find this graph again]

Making MATLAB pretty

Alright all y’all haters, it’s MATLAB time.

For better or worse, MATLAB is the language that is used for scientific programming in neuroscience. But it, uh, has some issues when it comes to visualization. One major issue is the clusterfuck that is exporting graphics to vector files like eps. We have all exported a nice-looking image in MATLAB into a vectorized format that not only mangles the image but also ends up somehow needing thousands of layers, right?  Thankfully, Vy Vo pointed me to a package on github that is able to clean up these exported files.

Here is my favorite example (before, after):

If you zoom in or click the image, you can see the awful crosshatching in the before image. Even better, it goes from 11,775 layers before to just 76 after.

On top of this, gramm is a toolbox to add ggplot2-like visualization capabilities to MATLAB:

(Although personally, I like the new MATLAB default color-scheme – but these plotting functions are light-years better than the standard package.)

Update: Ben de Bivort shared his lab’s in-house preferred colormaps. I love ’em.

Update x2: Here’s another way to export your figures into eps nicely. Also, nice perceptually uniform color maps.