2017 in review (a quantified life)

I have always found it useful to take advantage of the New Year and reflect on what I have done over the past year. The day itself is a useful bookmark in life, inevitably trapped between leaving town for Christmas and coming back to town after the New Year begins. Because of the enforced downtime, what I happen to read has a strong influence on me – last year, I hopped on the Marie Kondo craze and really did manage to do a better job of keeping clean (kind of) but, more importantly, of organizing my clothes by rolling and folding them until they fit so perfectly in my drawers. So that was useful, I guess.

The last year has been okay. Not great, not terrible. Kind of middle-of-the-road as my life goes. There have been some big wins (organizing a fantastic workshop at Cosyne on neurobehavioral analysis and being awarded a Simons Foundation fellowship that lets me join a fantastic group of scientists) and some frustrations (mostly scientific work that goes slowly slowly slowly).

One thing that sticks out for me over this past year – over these past two years, actually – is how little time I have spent on this blog. Or rather, how little of what I have done has been published on this blog. It’s not for a lack of time! I have actually done a fair bit of writing but am constantly stuck after a paragraph or two, my motivation waning until it disappears completely. This is largely due to how I responded to some structural features in my life, mostly a long commute and a lot of things that I want to accomplish.

Last year I had the “clever” idea of creating a strict regimen of hourly and daily goals, both for work and for my life. Do this analysis from 3pm – 4pm. Debug that code from 4pm – 5pm. Play the piano from 8pm – 9pm. Things like that. Maybe this works for other people? But I end up overambitious, constantly adding things I need to do today, until I am rapidly switching from project to project and each slot is mangled into nonsense by the little new things that inevitably spring up on any given day. Micromanaging yourself is the worst kind of managing, especially when you don’t realize you are doing it.

This is where what I read over winter break made me think. One of the three articles that influenced me was about the nature of work:

For unlike someone devoted to the life of contemplation, a total worker takes herself to be primordially an agent standing before the world, which is construed as an endless set of tasks extending into the indeterminate future. Following this taskification of the world, she sees time as a scarce resource to be used prudently, is always concerned with what is to be done, and is often anxious both about whether this is the right thing to do now and about there always being more to do. Crucially, the attitude of the total worker is not grasped best in cases of overwork, but rather in the everyday way in which he is single-mindedly focused on tasks to be completed, with productivity, effectiveness and efficiency to be enhanced. How? Through the modes of effective planning, skilful prioritising and timely delegation. The total worker, in brief, is a figure of ceaseless, tensed, busied activity: a figure, whose main affliction is a deep existential restlessness fixated on producing the useful.

Yup, that pretty much sums up how I was trying to organize my life. In the hope of accomplishing more I ended up doing less. This year I am trying a less-is-more approach; have fewer, more achievable goals each day/month/time unit; have more unstructured time; read more widely; and so on. Instead of saying I need to learn piano and I need to make art and I need to play with arduinos and I need to memorize more poetry and finding more and more things that I need to do, just list some things I’m interested in doing. Look at that list every so often to remind myself and then allow myself to flow into the things I am most interested in rather than forcing it.

I was lucky enough in graduate school to join a lunch with Eve Marder. There are two types of scientists, she said: starters and finishers. Some people start a lot of projects; some people finish a few. This has always stuck with me. This past year I have been trying to maximize how many things I can work on – and it turns out that is a lot of different things. I want to spend this year doing a couple of things at a time and finishing them. Doing them well.

I have this memory of Wittgenstein declaring in the Tractatus that “the purpose of the Philosopher is to clarify.” I must have confabulated that quote because I could never find it again. Still, it’s my favorite thing that Wittgenstein ever said. For a scientist, the aphorism should be that “the purpose of the Scientist is to simplify.”

There was an article in the New York Times recently from an 88-year-old man looking back on the 18 years he has lived in the millennium:

I’m trying to break other habits in far more conventional ways. As in many long marriages, my wife and I enjoy spending time with the same friends, watch the same television programs, favor the same restaurants, schedule vacations to many of the same places, avoid activities that venture too far from the familiar.

We decided to become more adventurous, shedding some of those habits. European friends of ours always seem to find the time for an afternoon coffee or glass of wine, something we never did. Now, spontaneously, one of us will suggest going to a coffee shop or cafe just to talk, and we do. It’s hardly a lifestyle revolution, but it does encourage us to examine everything we do automatically, and brings some freshness to a marriage that started when Dwight Eisenhower was elected president.

The best memories can come from unexpected experiences. The best thoughts can come from exposure to unexpected ideas. Attempting to radically organize my life has left me without those little moments where my mind wanders from topic to topic. Efficiency. I have cut back on my reading for pleasure, most of which now comes on audiobook during my commute and somehow seems to prevent deep thinking. But the reason I am interested in science in the first place is the questions about who we are and how we behave that come out of thinking about the things I read! The solution, again, is to remove some of the structure I am imposing on my life, simplify, and force myself to let go of the need to always be doing something quantifiable and useful.

Looping back: this is why sitting down to write, and finishing what I write, is one of my big goals for the year. Because I find writing fun! And I find it the best way to really think rigorously, to explore new thoughts and new ideas. There is much less of a need to do so much, to try so many projects, when I can read and think about something and write about it, making something useful and enjoyable instead of turning it into a huge production.

I am not a Stoic but find Stoic thinking useful. Something I read over the holidays:

Let me then introduce you to three fundamental ideas of Stoicism – one theoretical, the other two practical – to explain why I’ve become what I call a secular Stoic. To begin with, the Stoics – a school of philosophers who flourished in the Greek and Roman worlds for several hundred years from the third century BCE – thought that, in order to figure out how to live our lives (what they called ethics), we need to study two other topics: physics and logic. “Physics” meant an understanding of the world, as best as human beings can grasp it, which is done by way of all the natural sciences as well as by metaphysics.

The reason that physics is considered so important is that attempting to live while adopting grossly incorrect notions about how the world works is a recipe for disaster. “Logic” meant not only formal reasoning, but also what we would today call cognitive science: if we don’t know how to use our mind correctly, including an awareness of its pitfalls, then we are not going to be in a position to live a good life.

Beyond reading and self-reflection, the best way to understand your life is to quantify it. Quantification is the best way to peer into the past and really cut through hazy memories that are full of holes. What did I really do? What did I really think? This isn’t an attempt at stricture or rigidity: it’s an attempt at radical self-knowledge. I’m fairly active at journaling, which is the first step, but I also keep track of what I eat and how I exercise using MyFitnessPal, books I read on Goodreads, movies I watch on letterboxd, where I have been by letting my phone track me, and science articles I read using Evernote (I used to be very active on yelp but somehow lost track of that). Using these tools to look back on the past year is a great experience: “Oh yeah, I loved that movie!” or “Ugh, I can’t believe I read that whole book.” Or just reminding myself of pleasant memories from a short trip to LA.

I’d like to expand that this year to include some other relevant data – ‘skills’ I work on like playing piano to see whether I’m actually improving, TV I watch (because maybe I watch too much, or not enough!), what music I’m listening to, where I spend my money (I already avidly keep track of the fluctuations in how much I have month-to-month), and what important experiences I have (vacations; hikes; seeing exciting new art). There don’t seem to be any good apps for these things outside of Mint, so I have assembled a giant Google Sheet for all of these categories to make it easier to access and analyze the data, with a main Sheet that I can use every month to look back and make some qualitative observations. Oh, and I’m also building a bunch of arduinos that can sense temperature, humidity, light, and sound intensity to put in different rooms of my house to log those things (mostly because my house is always either too hot or too cold and the thermostat is meaningless and I want to figure out why, and partly because I want to make sweet visualizations of the activity in my house throughout the year).
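In case it is useful to anyone doing something similar, here is a minimal sketch of the logging side of the sensor project, assuming the Arduino just prints one comma-separated reading ("temp,humidity,light,sound") over USB serial. The port name, baud rate, and field order are all assumptions for illustration, not a description of my actual setup.

```python
# Minimal serial-to-CSV logger for an Arduino that prints "temp,humidity,light,sound" lines.
import csv
import time

import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical; on Windows this might be "COM3"
BAUD = 9600
OUTFILE = "room_log.csv"

with serial.Serial(PORT, BAUD, timeout=5) as ser, open(OUTFILE, "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # timed out waiting for a reading
        fields = line.split(",")
        if len(fields) == 4:  # temp, humidity, light, sound
            writer.writerow([time.time()] + fields)
            f.flush()
```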

So my lists!

These are the movies I watched in 2017 and to which I gave 5 stars (no particular order):

Embrace of the Serpent
American Honey
Victoria
Gentleman’s Agreement
T2: Trainspotting
Logan
Moana
While We’re Young
Moonlight

With honorable mentions to My Life As A Zucchini, Blade Runner 2049, and Singles.

These are the books I read in 2017 and liked the most:

The Invisibility Cloak (Ge Fei)
The Wind-up Bird Chronicle (Murakami)
Ficciones (Borges)
The Stars Are Legion (Hurley)
Redshirts (Scalzi)
Red Mars (Robinson)
We Are Legion (Taylor)
Permutation City (Egan)
Neuromancer (Gibson)

That is a lot more scifi than I normally read, and many of these are books I had read before.

Where was I (generated using this)?

There was an article a few years ago on the predictability of human movement. It turns out that people are pretty predictable! If you know where they are at one moment, you can guess where they will be the next. That’s not too surprising, though, is it? You’re mostly at work or at home. If you go to a bar, there is a higher than random probability that you’ll go home afterward.

The data that you can ask your Android phone to collect on you is unfortunately a bit impoverished. It doesn’t log everything you do but is biased toward times when you check your phone (lunch, when you’re the passenger in a long car ride home, etc). Still, it captures the broad features of the day.

I’ve been keeping track of this data for two years now, so I downloaded it and did a quick analysis of the entropy of my own life. How predictable is my location? If you bin the data into 1-square-mile bins, entropy is a measure of how much uncertainty there is in where I was. One bit of entropy would mean you could guess where I was, down to the mile, with a single yes-or-no question; two bits would mean you could guess with two questions; and so on.
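For the curious, here is roughly how that calculation goes – a sketch assuming the location history has already been parsed into a table of latitude/longitude points with timestamps. The column names and the crude one-mile binning are placeholders, not the original analysis.

```python
import numpy as np
import pandas as pd

def location_entropy(df, bin_deg=0.015):
    """Shannon entropy (in bits) of binned locations.
    ~0.015 degrees of latitude is roughly one mile."""
    lat_bin = np.floor(df["latitude"] / bin_deg)
    lon_bin = np.floor(df["longitude"] / bin_deg)
    counts = df.groupby([lat_bin, lon_bin]).size().values  # visits per spatial bin
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# e.g. entropy by month:
# df["timestamp"] = pd.to_datetime(df["timestamp"])
# monthly = df.groupby(df["timestamp"].dt.to_period("M")).apply(location_entropy)
```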

On any given day of the week, there are roughly 3 bits of entropy in my location (much less on weekends). But as you can see, it varies a lot by month depending on whether I am traveling or not.

In 2016 (the weird first month is because that’s when I started collecting data and only got a few days):

In 2017:

I will leave you with an image from the last thing I read in 2017, which was also consistently the weirdest thing I read: Battle Angel Alita.


Monday open question: can invertebrates be ‘cognitive’?

Janelia Farm, the research center of the Howard Hughes Medical Institute, recently announced its upcoming research focuses. One of them was controversial: mechanistic cognitive neuroscience. Here’s what they had to say about it:

How does the brain enable cognition? We are developing an integrated program in which tool-builders, biologists, and theorists collaborate to clear the technical, conceptual, and computational hurdles that have kept the most intriguing aspects of cognition beyond the purview of mechanistic investigation. The program will establish tight links across our existing genetic model systems —flies, fish, and rodents— and exploit their complementary strengths. We aim to make the fly the benchmark for reductionist explanations of neural processes underlying complex behavior, leveraging conceptual research by mammalian neuroscientists. The fly has strong potential as a model for rapid mechanistic insights, due to its small brain size, the likelihood of obtaining a complete wiring diagram of its brain, and increasingly powerful methods for measuring and manipulating genetically defined populations of cells in behaving animals. We expect this research to reveal strategies for better understanding the more sophisticated neural and behavioral features of vertebrates. In turn, we expect our vertebrate research to expose complex computational mechanisms, some of which we can study at a detailed level in the fly.

Why was this so controversial? This sentence: “In turn, we expect our vertebrate research to expose complex computational mechanisms, some of which we can study at a detailed level in the fly.” Yes, the humble fly may or may not have cognitive states.

What are some cognitive behaviors that a fly can perform? They use reinforcement learning, can attend to things, have visual place memory. Other invertebrates can recognize faces and perform complex path integration. On the other hand, they have very poor linguistic abilities.

It’s a truth of biology that theories rarely survive contact with new types of data. There is a kind of clarity that comes from knowing the exact circuitry and dynamics of a minimal neural circuit. If I were studying, say, attention in primates, I would be interested in the precise mechanisms that another species uses to accomplish a task similar to the one I’m studying. There’s no guarantee that it will be the same mechanism – but is it so unreasonable? Is there a reason that insects would not display cognitive behavior?

You should be using the Neuromethods slack

Ben Saunders has started a Slack for those of you in neuroscience who do, uh, neuroscience. The Neuromethods Slack is a place for scientists to discuss questions about experiments. There’s a channel for electrophysiology, a channel for the biophysics of rhodopsins, a channel for Drosophologists, a channel for data visualization, and so on. It is not the robust mix of science and nonsense that Twitter seems to generate; it is very much on-topic, and questions seem to get answers from other experts within a day or so. You should check it out!

Behavioral quantification: running is part of learning

One of the most accessible ways to study a nervous system is to understand how it generates behavior – its outputs. You can watch an animal and instantly get a sense of what it is doing and maybe even why it is doing it. Then you reach into the animal’s brain and try to find some connection that explains the what and the why.

Take the popular ‘eyeblink conditioning’ task that is used to study learning. You can puff a harmless bit of air at an animal and it will blink in response (wouldn’t you?). Like Pavlov’s dog, you can then pair it with another signal – a tone, a light, something like that – and the animal will slowly learn to associate the two. Eventually you just show the animal the other signal, flashing the light at them, and they will blink as if they were expecting an air puff. Simple enough, but obviously not every animal is the same. There is a lot of variability in the behavior, which could be due to any number of unexplored factors, from individual differences in experience to personality. If this is what we are using to investigate the underlying neuroscience, then it places a fundamental limit on what we can know about the nervous system.

How can we neuroscientists overcome this? One very powerful technique has been to improve our behavioral quantification. I saw a fantastic example of this from Megan Carey when she visited Princeton earlier this year to talk about her work on cerebellum and learning. She had tons of interesting stuff but there was one figure she presented that simply blew me away.

First a bit of history is in order (apologies if I get some of this a bit wrong, my notes here are hazy). When experimenters first tried to get eyeblink conditioning to work with mice, they had trouble. Even though it seems like such a simple reflex, the mice were performing very poorly on the task. Eventually, someone (?) found that allowing the mice to walk on a treadmill while experiencing the cues resulted in a huge increase in performance. Was this because they were unhappy being fixed in one place? Was it that they were unable to associate a puff of air to their eye with an environment when they were unable to manipulate that environment?

But there is still a lot of variability. Where does it come from? What you can now do is measure as much about the behavior as possible. Not just how much the animal blinks its eye, but how much it moves and how fast it moves, and how much it does all sorts of other stuff. And it turns out that if you measure the speed that the animal is walking there is a clear linear correlation with how long it takes the animal to learn.
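The analysis itself is nothing exotic – something like the sketch below, with made-up numbers standing in purely to illustrate the shape of it (one average walking speed and one learning-time value per animal; these are not the paper's data).

```python
import numpy as np
from scipy import stats

# made-up per-animal summaries, purely to illustrate the shape of the analysis
mean_speed = np.array([2.1, 3.4, 1.2, 4.0, 2.8, 3.1, 1.9, 3.7])   # e.g. cm/s
sessions_to_criterion = np.array([9, 6, 12, 4, 7, 6, 10, 5])       # time to learn

r, p = stats.pearsonr(mean_speed, sessions_to_criterion)
fit = stats.linregress(mean_speed, sessions_to_criterion)  # slope of the relationship
print(f"r = {r:.2f}, p = {p:.3g}, slope = {fit.slope:.2f}")
```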

Look at this figure – on the left, you can see how often each individual animal is responding to the air puff with an eyeblink (y-axis) as it is trained through time (x-axis). And on the right is how long it takes to reach some performance benchmark (y-axis) given the average speed the animal walks (x-axis).

So how do you test this – make sure it is causation and not a meaningless correlation? Put them on a motorized treadmill and control the speed at which they walk. And BAM, most of the variability is gone! Look at the mess of lines in the behavior above and the clearly delineated behavior below.

There’s a lesson here: when we study a ‘behavior’, there are a lot of other things that an animal is doing at the same time. We think they are irrelevant – we hope they are irrelevant – but often they are part of one bigger whole. If you want to study a behavior that an animal is performing, how else can you do it but by understanding as much about what the animal is doing as possible? How else but seeing how the motor output of the animal is linked together to become one complex form? Time and again, quantifying as many aspects of behavior as possible has revealed that it is in fact finely tuned but driven by some underlying variable that can be measured – once you figure out what it is.

What people mean when they say “maybe”

What is the probability that people perceive when they hear a word like ‘probably’ or ‘probably not’? Someone went and collected some data on this to get the actual probabilities!

Here is some old data:

[This is mostly a personal reminder so I can find this graph again]

Making MATLAB pretty

Alright all y’all haters, it’s MATLAB time.

For better or worse, MATLAB is the language that is used for scientific programming in neuroscience. But it, uh, has some issues when it comes to visualization. One major issue is the clusterfuck that is exporting graphics to vector files like eps. We have all exported a nice-looking image in MATLAB into a vectorized format that not only mangles the image but also ends up somehow needing thousands of layers, right?  Thankfully, Vy Vo pointed me to a package on github that is able to clean up these exported files.

Here is my favorite example (before, after):

If you zoom in or click the image, you can see the awful crosshatching in the before image. Even better, it goes from 11,775 layers before to just 76 after.

On top of this, gramm is a toolbox to add ggplot2-like visualization capabilities to MATLAB:

(Although personally, I like the new MATLAB default color-scheme – but these plotting functions are light-years better than the standard package.)

Update: Ben de Bivort shared his lab’s in-house preferred colormaps. I love ’em.

Update x2: Here’s another way to export your figures into eps nicely. Also, nice perceptually uniform color maps.

Why does the eye care about the nose?

The ear, the nose, the eye: all of the neurons closest to the environment are doing one thing: attempting to represent the outside world as perfectly as possible. Total perfection is not possible – you can only make the eye so large, and you only need to see so much detail in order to live your life. But if you were to try to predict what the neurons in the retina or the ear are doing based on what could provide as much information as possible, you’d do a really good job. Once that information is in the nervous system, the neurons that receive it can do whatever they want with it, processing it further or turning it directly into a command to blink or jump or just stare into space.

Even though this is the story that all of us neuroscientists get told, it’s not the whole story. A while back, I posted that the retina receives input from other places in the brain. That just seems weird from this perspective. If the retina is focused on extracting useful information about the visual world, why would it care about how the world smells?

One simple explanation might be that the neurons only want to code for surprising information. Maybe the nose can help out with that? After all, if something is predictable then it is useless; you already know about it! No need to waste precious bits. This seems to be what certain feedback signals to the fly eye are for. A few recent papers have shown that neurons in the eye that respond to horizontal or vertical motion receive signals about how the animal is moving, so that when the animal moves to the left it expects leftward motion in the horizontal cells – and so only responds to leftward motion above and beyond what the animal itself is causing. But again – what could this have to do with smells?

Let’s think for a second about some times when the olfactory system uses non-olfactory information. The olfactory system should be trying to represent the smell-world as well as it can, just like the visual system is trying to represent the image-world. But the olfactory system is directly modulated depending on the needs of an animal at any given moment. For instance, a hungry fly will release a peptide that modifies how much a set of neurons that respond to particular odors can signal the rest of the brain. In other words, how hungry an animal is determines how well it can smell something!

These two stories – how the eye interacts with the motion of the body, how the nose interacts with hunger – might give us a hint about what is happening. The sensory systems aren’t just trying to represent as much information about the world as possible, they are trying to represent as much information about useful stuff as possible. The classical view of sensory systems is a fundamentally static one, that they have evolved to take advantage of the consistencies in the world to provide relevant information as efficiently as possible*. But the world is a dynamic place, and the needs of an animal at one time are different from the needs of the animal at another.

Take an example from tadpoles. When the tadpole is in a very dim environment, it has a harder time separating dark objects from the background. The world just has less contrast (try turning down the brightness on your screen and reading this – you’ll get the idea). One way that these tadpoles control their ability to increase or decrease contrast is through a neuromodulator that changes the resting potential of a cell (how responsive it is to stimuli), but only over relatively long timescales. This is not fast adaptation but slow adaptation to the changing world. The end result of this is that tadpoles are better able to see moving objects – but presumably at the expense of being worse at seeing something else. That seems like a pretty direct way of going from a need for the animal to code certain visual information more efficiently to the act of doing it. The point is not that this is driven by a direct behavioral need of the animal – I have no idea if this is due to a desire to hunt or avoid objects or what-have-you. Instead, it’s an example of how an animal could control certain information if it wanted to.

This kind of behavioral gating does occur via feedback to the retina. Male zebrafish use a combination of smell and sight when they decide how they want to interact with other zebrafish. Certain olfactory neurons that respond to a chemical involved in mating signal to neurons in the retina – making certain cells more or less responsive, in the same way that tadpoles control the contrast of their world (above). It appears as if the olfactory information sends a signal to the eye that either gates or enhances the visual information – the stripe detection or what-have-you – that the little fishies use when they want to court another animal.

The sensory system is not perfect. It must make trade-offs about which information is important to keep and which can be thrown away, about how much of its limited bandwidth to spend on one signal or another. A lot of the structure comes naturally from evolution, representing a long-term learning of the structure of the world. But animals have needs that fluctuate over other timescales – and may require more computation than can be provided directly in the sensory area. How else would the eye know that it is time to mate?

What this doesn’t answer is why the modulation is happening here; why not downstream?

 

* This is a major simplification, obviously, and a lot of work has been done on adaptation, etc in the retina.

 

Monday Open Question: what do you need to do to get a neuroscience job? (Updated)

A while back I asked for help obtaining information on people who had gotten a faculty job in 2016 – 2017. And it worked! With a lot of help, I managed to piece together a list of more than 70 people who got faculty jobs during this last year! I am sure it is incomplete (I keep getting new tips as of ten minutes ago…) but it is time to discuss some of the interesting features of the data.

First, the gender ratio: there are 44 men on the list to 33 women (57% men). Over at the neurorumblr, 62% of the people on the Postdoc List were men, which is roughly the same proportion.

To get more data, I focused on faculty hires who had a Google Scholar profile – it made it much easier to scrape data. It was suggested that people in National Academy of Sciences or HHMI labs may have a better chance of getting a faculty job. Out of the 51 people with a Google Scholar profile, 4 were in both NAS/HHMI labs, 8 were in HHMI-only labs, and 4 were in NAS-only labs. Only one person who was in an HHMI/NAS lab in grad school went to a non-HHMI/NAS lab. People also suggested that a prestigious fellowship (HHWF, Damon Runyon, Life Sciences, etc.) might help. It is hard to tell, but there did not seem to be a huge number of these people getting jobs last year.

The model organisms they use are:

(15) Humans

(13) Mouse

(6) Rat

(4) Monkey

(3) Drosophila

(3) Pure computational

+ assorted others

Where are they all from? Here is the distribution of institutions the postdocs came from (update: though see the bottom of the post for more information):

 

In case you hadn’t noticed, this is a pretty geographically-concentrated pool of institutions. Just add up the schools in the NYC+ area (NYC itself, plus Yale and Princeton), the Bay Area, Greater DC (Hopkins + Janelia), and ‘those Boston schools’, and you have accounted for most of the list. I’m not sure this accurately represents the geographic distribution of neuroscientists.

What about their publications? They had a mean H-index of 11.98 (standard deviation ~ 4.21).

We always hear that “you need a Cell/Nature/Science paper in order to get a job”. 29.4% (15/51) of this pool have a first- or second-author CNS paper. 68% (35/51) have a first- or second-author Nature Neuroscience/Neuron/Nature Methods paper. 78% (40/51) have some combination of these papers. It’s possible that faculty hires have CNS papers in the pipeline, but unless every single issue of CNS is dedicated to people who just got a faculty job this probably isn’t the big deal it’s always made out to be.

There’s a broader theory that I’ve heard from several people (outlined here) that the underlying requirement is really the cumulative impact factor. I have used the metric described in the link, where the approximate impact factor is taken from first-author publications and second-author publications are discounted 75% (reviews are ignored). Here are the CIFs for all 51 candidates over the past 7 years (red is the mean):
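In code, the metric as I applied it looks roughly like this; the publication data structure and the impact-factor lookup table are hypothetical stand-ins for the scraped Google Scholar data.

```python
def cumulative_impact_factor(papers, impact_factors):
    """papers: list of dicts like
       {'journal': 'Neuron', 'author_rank': 1, 'is_review': False}
       impact_factors: dict mapping journal name -> approximate impact factor."""
    cif = 0.0
    for p in papers:
        if p["is_review"]:
            continue  # reviews are ignored
        jif = impact_factors.get(p["journal"], 0.0)
        if p["author_rank"] == 1:
            cif += jif          # first-author papers count in full
        elif p["author_rank"] == 2:
            cif += 0.25 * jif   # second-author papers discounted 75%
    return cif
```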

I thought there might be a difference by model organism, but within imaginary error bars it looks roughly the same:

In terms of absolute IF of the publications, there is a clear bump in the two years prior to the candidate getting their job (though note all of the peaks in individual traces prior to that):

So far as I can tell, there is no strong signal in terms of the publications you had as a grad student. Basically, your graduate work and lab don’t seem to matter, except as a conduit to a postdoc position.

To sum up: you don’t need a CNS paper, though a Nature Neuroscience/Neuron/Nature Methods paper or two is going to help you quite a bit. Publish it in the year or two before you go on the job market.

Oh, and live in New York+ or the Bay Area.

 

Update: the previous city/institution analysis was done on a subset of individuals that had Google Scholar profiles. When I used all of the data, I got this list of institutions/cities:

Updated x2:

I thought it might be interesting to see which journals people commonly co-publish in. It turns out that, eh, it kind of is and it kind of isn’t. For all authors, here are the journals that they have jointly published in (where links represent the fact that someone has published in both journals):

And here are the journals they have published in as first authors:
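For anyone who wants to build a similar graph from their own list, the construction is simple. Here is a sketch using networkx, with made-up entries standing in for the real person-to-journal data.

```python
from itertools import combinations

import networkx as nx

# made-up entries standing in for the real person -> journals mapping
journals_by_person = {
    "person_A": {"Neuron", "Nature Neuroscience", "eLife"},
    "person_B": {"Neuron", "Journal of Neuroscience"},
}

G = nx.Graph()
for journals in journals_by_person.values():
    for j1, j2 in combinations(sorted(journals), 2):
        # increment the edge weight each time a person links two journals
        weight = G.get_edge_data(j1, j2, default={"weight": 0})["weight"]
        G.add_edge(j1, j2, weight=weight + 1)

# nx.draw(G) for a quick look, or export the graph to a nicer layout tool
```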


Behavioral quantification: mapping the neural substrates of behavior

A new running theme on the blog: cool uses of behavioral quantification.

One of the most exciting directions in behavioral science is the ongoing advance in behavioral quantification. Science often advances by being able to perform ever more precise measurements on ever-increasing amounts of data. Thanks to the increasing power of computers and advances in machine learning, we are now able to automatically extract massive amounts of behavioral data at a level of detail that was previously unobtainable.

A great example of this is a recently published paper out of Janelia Farm. Using an absolutely shocking 400,000 flies, the authors systematically activated small subsets of neurons and then observed what behaviors they performed. First, can you imagine a human scoring every moment of four hundred thousand animals as they behaved over fifteen minutes? That is 12.1 billion frames of data to sort through and classify.

Kristin Branson – the corresponding author on the paper – has been developing two pieces of software that allow for fast and efficient quantification of behavior. The first, Ctrax, tracks individual animals as they move around a small arena and assigns each a position, an orientation, and various postural features (for instance, since they are fruit flies we can extract the angle of each wing). The second, JAABA, then uses combinations of these features, such as velocity, interfly distance, and so on, to identify behaviors. Users annotate videos with examples of when an animal is performing a particular behavior, and the program then generates examples in other videos that it believes show the same behavior. An iterative back-and-forth between user and machine gradually narrows down what counts as a particular behavior and what doesn’t, eventually allowing fully automated classification of behavior in new videos.
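This is not JAABA itself, but the iterative idea can be sketched in a few lines: train a classifier on the frames a user has annotated so far, predict on everything else, and surface the least-confident frames for the next round of annotation. The feature names and the choice of classifier here are my stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def suggest_frames(features, labels, labeled_idx, n_suggest=50):
    """features: (n_frames, n_features) per-frame measurements (speed, wing angle, ...)
       labels:   behavior labels for the frames in labeled_idx"""
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(features[labeled_idx], labels)

    unlabeled_idx = np.setdiff1d(np.arange(len(features)), labeled_idx)
    proba = clf.predict_proba(features[unlabeled_idx])
    confidence = proba.max(axis=1)

    # least-confident frames are the most informative ones to annotate next
    return clf, unlabeled_idx[np.argsort(confidence)[:n_suggest]]
```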

Then once you have this pipeline you can just stick a bunch of animals into a plate under a camera, activate said neural populations, let them do whatever they feel like doing, and get gobs and gobs of data. This allows you to understand at neural precision which neurons are responsible for any arbitrary behavior you desire. This lets you build maps – maps that help you understand where information is flowing through the brain. And since you know which of these lines are producing which behaviors, you can then go and find even more specific subsets of neurons that let you identify precise neural pathways.

Here are two examples. Flies sometimes walk backwards (moonwalking!). If you look at the image below, you can see (on the bottom) all the different neurons labeled in a fly brain that had an effect on this backward locomotion, and in the upper-right the more specific areas where the neurons are most likely located. In fruit fly brains, the bulbous protrusions where these colors are found are the eyes of the animal, with a couple flecks in the central brain.

This turns out to be incredibly accurate. Some of this moonwalking circuit was recently dissected and a set of neurons from the eye into the brain was linked to causing this behavior. The neurons (in green below) are in exactly the place you’d expect from the map above! They link to a set of neurons known as the ‘moonwalker descending neurons’ which sends signals to the nerve (spinal) cord that cause the animal to walk backwards.

Of course, sometimes it can be more complicated. When a male fly is courting a female fly, he will extend one wing and vibrate it to produce a song. Here are the neurons related to that behavior (there are a lot):

There are two key points from this quantification. First, the sheer amount and quality of data we can now collect and score gives us immense statistical precision about when and in which contexts behaviors occur. Second, our capacity to find new things is increasing because we can be increasingly agnostic about what we are looking for (so it is easier to find surprises in the data!).

References

Mapping the Neural Substrates of Behavior. Robie et al 2017.

See also: Big behavioral data: psychology, ethology and the foundations of neuroscience. Gomez-Marin et al 2014.

 

Monday Open Question: does neuroscience have anything to offer AI?

A review was published this week in Neuron by DeepMind luminary Demis Hassabis and colleagues about Neuroscience-inspired Artificial Intelligence. As one would expect from a journal called Neuron, the article was pretty positive about the use of neurons!

There have been two key concepts from neuroscience that are ubiquitous in the AI field today: Deep Learning and Reinforcement Learning. Both are very direct descendants of research from the neuroscience community. In fact, saying that Deep Learning is an outgrowth of neuroscience understates the amount of influence neuroscience has had. It did not just gift the idea of connecting artificial neurons together to build a fictive brain, but much more technical ideas as well: convolutional neural networks that apply a single function repeatedly across their input, as the retina or visual cortex does; hierarchical processing in the way the brain goes from layer to layer; divisive normalization as a way to keep outputs within a reasonable and useful range. Similarly, Reinforcement Learning and all its variants have continued to expand and be developed by the cognitive community.
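To make one of those technical ideas concrete, here is a minimal sketch of divisive normalization in its standard form: each unit's driven response is divided by the pooled activity of its neighbors, which keeps the outputs of a layer within a usable range. The constants are arbitrary.

```python
import numpy as np

def divisive_normalization(responses, sigma=1.0, n=2.0):
    """Heeger-style normalization: r_i^n / (sigma^n + sum_j r_j^n)."""
    driven = np.abs(responses) ** n
    return driven / (sigma ** n + driven.sum())

raw = np.array([0.5, 3.0, 10.0, 0.1])
print(divisive_normalization(raw))  # large responses are squashed toward a common range
```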

Sounds great! So what about more recent inspirations? Here, Hassabis & co. offer up the roles of attention, episodic memory, working memory, and ‘continual learning’. But reading this, I became less inspired than morose (see this thread). Why? Well, look at the example of attention. Attention comes in many forms: automatic, voluntary, bottom-up, top-down, executive, spatial, feature-based, object-based, and more. It sometimes means a sharpening of the collection of things a neuron responds to, so that instead of being active in response to an edge oriented this way, that way, or another way, it is only active when it sees an edge oriented one particular way. But it sometimes means a narrowing of the area in space that a neuron responds to. Sometimes responses between neurons become more diverse (decorrelated).

But this is not really how ‘attention’ works in deep networks. All of these examples seem primarily motivated by the underlying psychology, not the biological implementation. Which is fine! But does that mean that the biology has nothing to teach us? Even at best, I am not expecting Deep Networks to converge precisely to mammalian-based neural networks, nor that everything the brain does should be useful to AI.
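For contrast, here is roughly what ‘attention’ usually amounts to in a deep network: a softmax over learned relevance scores that re-weights a set of feature vectors. A toy numpy sketch, with arbitrary dimensions and random values just to show the mechanics.

```python
import numpy as np

def soft_attention(query, keys, values):
    """query: (d,), keys: (n, d), values: (n, d_v) -> weighted sum of values."""
    scores = keys @ query / np.sqrt(len(query))  # relevance of each item to the query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax
    return weights @ values

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=4), rng.normal(size=(6, 4)), rng.normal(size=(6, 8))
print(soft_attention(q, K, V).shape)  # (8,)
```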

This leads to some normative questions: why hasn’t neuroscience contributed more, especially to Deep Learning? And should we even expect it to?

It could just be that the flow of information from neuroscience to AI  is too weak. It’s not exactly like there’s a great list of “here are all the equations that describe how we think the brain works”. If you wanted to use a more nitty-gritty implementation of attention, where would you turn? Scholarpedia? What if someone wants to move step-by-step through all the ways that visual attention contributes to visual processing? How would they do it? Answer: they would become a neuroscientist. Which doesn’t really help, time-wise. But maybe, slowly over time, these two fields will be more integrated.

More to the point, why even try? AI and neuroscience are two very different fields; one is an engineering discipline asking “how do we get this to work” and the other a scientific discipline asking “why does this work”. Who is to say that anything we learn from neuroscience would even be relevant to AI? Animals are bags of meat whose nervous systems have to solve all sorts of problems (wiring-length energy costs between neurons, physical transmission delays, the need to maintain blood osmolality, etc.) that AI has no real interest or need in including but that may be fundamental to how the nervous system has evolved. Is the brain the bird to AI’s airplane, accomplishing the same job but engineered in a totally different way?

Then in the middle of writing this, a tweet came through my feed that made me think I had a lot of this wrong (I also realized I had become too fixated on ‘the present’ section of their paper and less on ‘the past’ which is only a few years old anyway).

The ‘best paper’ award at the CVPR 2017 conference went to this paper which connects blocks of layers together, passing forward information from one to the next.

That looks a lot more like what cortex looks like! Though obviously sensory systems in biology are a bit more complicated:

And the advantages? “DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters”
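The dense-connectivity idea itself is simple enough to sketch in a few lines of numpy: each ‘layer’ receives the concatenated outputs of every earlier layer rather than only the previous one. Layer sizes and the random weights here are arbitrary stand-ins for a trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

def dense_block(x, n_layers=4, growth=8):
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features)                 # all previous outputs, concatenated
        W = rng.normal(size=(growth, len(inp))) * 0.1  # stand-in for learned weights
        features.append(np.maximum(0, W @ inp))        # ReLU "layer"
    return np.concatenate(features)

out = dense_block(np.ones(16))
print(out.shape)  # input plus 4 layers of 8 features each -> (48,)
```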

So are the other features of cortex useful in some way? How? How do we have to implement them to make them useful? What are the drawbacks?

Neuroscience is big and unwieldy, spanning a huge number of different fields. But most of these fields are trying to solve exactly the same problem that Deep Learning is trying to solve in very similar ways. This is an incredibly exciting opportunity – a lot of Deep Learning is essentially applied theoretical neuroscience. Which of our hypotheses about why we have attention are true? Which are useless?