Why does the eye care about the nose?

The ear, the nose, the eye: all of the neurons closest to the environment are doing one thing: attempting to represent the outside world as faithfully as possible. Total perfection is not possible – you can only make the eye so large, and you only need to see so much detail in order to live your life. But if you were to predict what the neurons in the retina or the ear are doing based on what would provide as much information as possible, you’d do a really good job. Once that information is in the nervous system, the neurons that receive it can do whatever they want with it, processing it further or turning it directly into a command to blink or jump or just stare into space.

Even though this is the story that all of us neuroscientists get told, it’s not the full story. A while back, I posted that the retina receives input from other places in the brain. That just seems weird from this perspective. If the retina is focused on extracting useful information about the visual world, why would it care about how the world smells?

One simple explanation might be that the neurons only want to code for surprising information. Maybe the nose can help out with that? After all, if something is predictable then it is useless; you already know about it! No need to waste precious bits. This seems to be the purpose of certain feedback signals to the fly eye. A few recent papers have shown that neurons in the eye that respond to horizontal or vertical motion receive signals about how the animal is moving: when the animal turns to the left, it should expect leftward motion, and so the horizontal cells only respond to leftward motion above and beyond what the animal itself is causing. But again – what could this have to do with smells?
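Before getting back to smells, here is a minimal sketch of that prediction-subtraction idea (a toy computation of my own, not the actual fly circuitry): the motion the animal expects to cause is subtracted from what the eye measures, and only the residual gets reported.

```python
import numpy as np

def residual_motion(measured_motion, self_motion_command, gain=1.0):
    """Toy 'subtract what you caused' computation: report only the motion
    that is not explained by the animal's own movement.

    measured_motion: motion signals seen by the eye, one value per time step
    self_motion_command: motion the animal expects its own turn to cause
    gain: how strongly the prediction is trusted (1.0 = fully subtract)
    """
    predicted = gain * np.asarray(self_motion_command, dtype=float)
    return np.asarray(measured_motion, dtype=float) - predicted

# A steady leftward turn predicts steady leftward motion; only the extra,
# externally caused motion survives the subtraction.
measured = np.array([1.0, 1.0, 2.5, 1.0])   # what the eye sees
command = np.array([1.0, 1.0, 1.0, 1.0])    # what the turn should cause
print(residual_motion(measured, command))   # -> [0.  0.  1.5 0. ]
```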

Let’s think for a second about some times when the olfactory system uses non-olfactory information. The olfactory system should be trying to represent the smell-world as well as it can, just like the visual system is trying to represent the image-world. But the olfactory system is directly modulated depending on the needs of an animal at any given moment. For instance, a hungry fly will release a peptide that modifies how strongly a set of neurons that respond to particular odors can signal to the rest of the brain. In other words, how hungry an animal is determines how well it can smell something!

These two stories – how the eye interacts with the motion of the body, how the nose interacts with hunger – might give us a hint about what is happening. The sensory systems aren’t just trying to represent as much information about the world as possible, they are trying to represent as much information about useful stuff as possible. The classical view of sensory systems is a fundamentally static one, that they have evolved to take advantage of the consistencies in the world to provide relevant information as efficiently as possible*. But the world is a dynamic place, and the needs of an animal at one time are different from the needs of the animal at another.

Take an example from tadpoles. When a tadpole is in a very dim environment, it has a harder time separating dark objects from the background. The world just has less contrast (try turning down the brightness on your screen and reading this – you’ll get the idea). One way these tadpoles control their contrast sensitivity is through a neuromodulator that changes the resting potential of a cell (how responsive it is to stimuli), but only over relatively long timescales. This is not fast adaptation but slow adaptation to a changing world. The end result is that tadpoles become better able to see moving objects – presumably at the expense of being worse at seeing something else. That seems like a pretty direct route from a need to code certain visual information more efficiently to the act of doing it. The point is not that this is driven by a specific behavioral goal – I have no idea whether it is about hunting or avoiding objects or what-have-you. Instead, it’s an example of how an animal could control which information it emphasizes if it wanted to.
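Here is a toy version of that resting-potential trick (my own cartoon with made-up numbers, not the tadpole data): nudging the resting potential closer to spike threshold lets weaker, lower-contrast stimuli drive the cell.

```python
import numpy as np

def responses(contrasts, resting_potential, threshold=-50.0, gain=2.0):
    """Cartoon neuron: it responds in proportion to how far its membrane
    potential (resting potential + gain * contrast) sits above a fixed
    spike threshold. All numbers are arbitrary 'mV-like' units."""
    potentials = resting_potential + gain * np.asarray(contrasts, dtype=float)
    return np.maximum(potentials - threshold, 0.0)

contrasts = np.array([1.0, 3.0, 5.0, 10.0])
print(responses(contrasts, resting_potential=-60.0))  # only the strongest contrast drives the cell
print(responses(contrasts, resting_potential=-54.0))  # a depolarizing neuromodulator lets weaker
                                                      # contrasts through as well
```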

This kind of behavioral gating does occur through feedback to the retina. Male zebrafish use a combination of smell and sight when they decide how they want to interact with other zebrafish. Certain olfactory neurons that respond to a chemical involved in mating send signals to neurons in the retina – making certain cells more or less responsive, much as the tadpoles above control the contrast of their world. It appears that this olfactory signal either gates or enhances the visual information – the stripe detection or what-have-you – that the little fishies use when they court another animal.

The sensory system is not perfect. It must make trade-offs about which information is important to keep and which can be thrown away, about how much of its limited bandwidth to spend on one signal or another. A lot of the structure comes naturally from evolution, representing a long-term learning of the structure of the world. But animals have needs that fluctuate over other timescales – and may require more computation than can be provided directly in the sensory area. How else would the eye know that it is time to mate?

What this doesn’t answer is why the modulation is happening here; why not downstream?

 

* This is a major simplification, obviously, and a lot of work has been done on adaptation, etc in the retina.

 


The skeletal system is part of the brain, too

A fact that seems uniformly forgotten: the brain is a biological organ just the same as your liver or your spleen or your bones. Its goal – like every other organ’s – is to keep your stupid collection of cells in one piece. You are one coherent organism, and just like any other collection of individuals, your parts need to communicate in order to work together.

Many different organs are sending signals to the brain. One is your gut, which is innervated by the enteric nervous system. This “other” nervous system contains more neurons (~500 million) than the spinal cord, and about ten times as many neurons as a mouse has in its whole brain. Imagine that: living inside of you is an autonomous nervous system with sensory inputs and motor outputs.

We like to forget this. We like to point to animals like the octopus and ask, what could life be like as an animal whose nervous system is distributed across its body? Well, look in the mirror. What is it like? We have multiple autonomous nervous systems; we have computational processing spread across our body. Have you ever wondered what the ‘mind’ of your gastrointestinal system must think of the mind in the other parts of your body?

The body’s computations about what to do about the world aren’t limited to your nervous system: they are everywhere. This totality is so complete that even your very bones are participating, submitting votes about how you should be interacting with the world. Bones (apparently) secrete neurohormones that directly interact with the brain. These hormones then travel through the blood to make a small set of neurons more excitable, more ready to respond to the world. These neurons then become ready and willing to tell the rest of the brain to eat less food.

This bone-based signaling is a new finding and totally and completely surprising. I don’t recall anyone postulating a bone-brain axis before. Yet it turns out that substantial computations are performed all throughout the body that affect how we think. Animals that are hungry make decisions in a fundamentally different way, willing to become riskier and riskier.

 

A lot of this extra-brain processing happens on much slower timescales than the fast neuronal processing in the brain: it integrates information over much longer stretches of time. This mix of fast and slow processing is ubiquitous in animals – classifying a stimulus is fast, while hormonal and metabolic signals accumulate slowly. The body is both fast and slow.

People seem to forget that we are not one silicon instantiation of neural architecture away from replicating humans: we are meat machines.

References

 

MC4R-dependent suppression of appetite by bone-derived lipocalin 2. Nature 2017.

Every spike matters, down to the (sub)millisecond

There was a time when the neuroscience world was consumed by the question of how individual neurons were coding information about the world. Was it in the average firing rate? Or did every precise spike matter, down to the millisecond? Was it, potentially, more complicated?

Like everything else in neuroscience, the answer was resolved in a kind of “it depends, it’s complicated” way. The most important argument against the role of precise spike timing is noise. There is the potential for noise in the sensory input, noise at every synapse, noise in every neuron. Why not make the system robust to this noise by taking some time average? On the other hand, if you want to respond quickly you can’t take too much time to average – you need to respond!

Much of the neural coding literature comes from sensory processing, where it is easy to control the input. Once you get deeper into the brain, it becomes less clear how much of what a neuron receives is sensory and how much is some shifting mass of internal activity.

The field has shifted a bit with the rise of calcium indicators, which allow imaging the activity of large populations of neurons at the expense of precise timing information. They also make it hard to get connectivity information. Plus, once you start thinking about networks, the nonlinear mess makes it hard to think about timing in general.

The straightforward way to decide whether a neuron is using the specific timing of each spike to mean something is to ask whether that timing contains any information. If you jitter the precise position of any given spike by 5 milliseconds, 1 millisecond, half a millisecond – does the response of the neuron become any more random at that moment in time than it was before?

Just show an animal some movie and record from a neuron that responds to vision. Now show that movie again and again and get a sense of how that neuron responds to each frame or each new visual scene. The information is just how stereotyped the response is at any given moment compared to normal – how much more certain you are at that moment than at any other. Now pick up a spike and move it over by a millisecond or so. Is the result still within the stereotyped range? Then timing at the millisecond scale probably isn’t conveying information. Does the response become more random? Then you’ve lost information.
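Here is what that logic looks like as a toy analysis (a stripped-down cousin of the direct information estimates in the papers below – no bias corrections, made-up spike times, every name my own): discretize each repeat into binary ‘words’, jitter the spikes by different amounts, and watch whether the word distribution becomes less stereotyped.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def word_entropy(trials, t_max_ms, bin_ms=1.0, word_bins=8):
    """Bin each trial into a 0/1 spike train, chop it into 'words' of
    word_bins bins, and return the entropy (bits) of the word distribution.
    Higher entropy = less stereotyped responses. This is a toy sketch, not
    a proper estimator (no bias correction, no data-size checks)."""
    n_bins = int(t_max_ms / bin_ms)
    counts = Counter()
    for spikes in trials:
        train = np.zeros(n_bins, dtype=int)
        idx = np.clip((np.asarray(spikes) / bin_ms).astype(int), 0, n_bins - 1)
        train[idx] = 1
        for start in range(0, n_bins - word_bins + 1, word_bins):
            counts[tuple(train[start:start + word_bins])] += 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

def jitter(trials, sigma_ms):
    """Move every spike by a random amount drawn from N(0, sigma_ms)."""
    return [np.asarray(t) + rng.normal(0.0, sigma_ms, size=len(t)) for t in trials]

# Fake data: a neuron that fires at nearly the same three times on every repeat.
trials = [np.array([12.0, 40.0, 41.5]) + rng.normal(0, 0.2, 3) for _ in range(200)]

for sigma in [0.0, 0.5, 1.0, 5.0]:
    shuffled = jitter(trials, sigma) if sigma > 0 else trials
    print(f"jitter {sigma} ms -> word entropy {word_entropy(shuffled, 100):.2f} bits")
# If the entropy has climbed noticeably by 1 ms of jitter, then millisecond
# timing was doing real work in the code.
```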

But these cold statistical arguments can be unpersuasive to a lot of people. It is nice if you can see a picture and just understand. So here is the experiment: songbirds have neurons which directly control the muscles for breathing (respiration). This provides us with a very simple input/output system, where the input is the time of a spike and the output is the air pressure exerted by the muscle. What happens when we provide just a few spikes and move the precise timing of one of these spikes?

The beautiful figure above is one of those that is going directly into my bag’o’examples. What it shows is a sequence of three induced spikes (upper right) where the time of the middle spike changes. The main curves show how the air pressure changes with the timing of that middle spike. You can’t get much clearer than that.

Not only does it show, quite clearly, that the precise time of a single spike matters, but also that it matters in a continuous fashion – almost certainly at a sub-millisecond level.

Update:

The twitter thread on this post ended up being useful, so let me clarify a few things. First, the interesting thing about this paper is not that the motor neurons can precisely control the muscle; it is that when they record the natural incoming activity, it appears to provide information on the order of ~1ms; and the over-represented patterns of spikes include the patterns in the figure above. So the point is that these motor neurons are receiving information on the scale of one millisecond and that the information in these patterns has behaviorally-relevant effects.

Some other interesting bits of discussion came up. What doesn’t use spike-timing information? Plenty of sensory systems do; I thought at first that maybe olfaction doesn’t, but of course I was wrong. Here’s a hypothesis: all sensory and motor systems do (e.g., everything facing the outside world). (Although, read these papers.) When would you expect spike timing not to matter? When the active input neurons are numerous and uncorrelated. Does spike timing make sense for deep networks, where the neurons implicitly represent firing rates? Here is a paper that breaks it down into rate and phase.

References

Srivastava KH, Holmes CM, Vellema M, Pack AR, Elemans CP, Nemenman I, & Sober SJ (2017). Motor control by precisely timed spike patterns. Proceedings of the National Academy of Sciences of the United States of America, 114 (5), 1171-1176 PMID: 28100491

Nemenman I, Lewen GD, Bialek W, & de Ruyter van Steveninck RR (2008). Neural coding of natural stimuli: information at sub-millisecond resolution. PLoS computational biology, 4 (3) PMID: 18369423

Studying the brain at the mesoscale

It is not entirely clear that we are going about studying the brain in the right way. Zachary Mainen, Michael Häusser and Alexandre Pouget have an alternative to our current model of (relatively) small groups of researchers pursuing their own idiosyncratic questions:

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually over time. But they would essentially be built from the ground up, with those involved encouraged to follow their own shared interests rather than responding to the strictures of funding sources or external directives…

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics. In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

Here are the concrete steps they suggest in order to form a successful ‘mesoscale’ project:

  1. Focus on a single brain function.
  2. Combine experimentalists and theorists.
  3. Standardize tools and methods.
  4. Share data.
  5. Assign credit in new ways.

Obviously, I am comfortable living on the internet a little more than the average person. But with the tools that are starting to proliferate for collaborations – Slack, github, and Skype being the most frequently used right now – there is really very little reason for collaborations to be limited to neighboring labs.

The real difficulties are two-fold. First, you must actually meet your collaborators at some point! Generating new ideas for a collaboration rarely happens without the kind of spontaneous discussions that arise when physically meeting people. When communities are physically spread out or do not meet in a single location, this can happen less than you would want. If nothing else, this proposal seems like a call for attending more conferences!

Second is the ad-hoc way data is collected. Calls for standardized datasets have been around for about as long as there has been science to collaborate on, and it does not seem like the problem will be solved any time soon. And even when datasets have been standardized, the questions they were collected to answer may be too specific to be of much use to even closely related researchers. This is why I left the realm of pure theory and became an experimentalist as well: theorists are rarely able to convince experimentalists to take time out of their experiments to test some wild new theory.

But these mesoscale projects really are the future. They are a way for scientists to be more than the sum of their parts, and to be part of an exciting community that is larger than one or two labs! Perhaps a solid step in this direction would be to utilize the tools that are available to initiate conversations within the community. Twitter does this a little, but where are the foraging Slack chats? Or amygdala, PFC, or evidence-accumulation communities?

Your eyes are your own

[Figure: retinal cone mosaics]

This blows my mind. There is a technique that allows us to map the distribution of cones in a person’s eyes. You would think that there would be some consistency from individual to individual, or that the cones would be distributed in some predictable manner. But no!

What happens when you show each of these people a flash of color and ask them to just give it a name? Something like you’d expect – the person who seems to have only red cones names just about everything either red or white. Those with a broader distribution of cones give a broader distribution of colors. You can even predict the colors that someone will name based solely on the tiling of these cones.
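A toy version of that prediction (nothing like the actual Bayesian model in the paper – just an illustration with made-up mosaics): let each tiny flash land on a few random cones and name the color by whichever cone classes it happens to hit.

```python
import numpy as np

rng = np.random.default_rng(1)

def name_flashes(mosaic, n_flashes=1000, patch=3):
    """Toy color naming: each flash hits `patch` randomly chosen cones.
    If more than one cone class is hit, call the flash 'white'; otherwise
    name it by the single class that was hit."""
    cone_to_color = {"L": "red", "M": "green", "S": "blue"}
    names = []
    for _ in range(n_flashes):
        hit = set(rng.choice(mosaic, size=patch))
        names.append("white" if len(hit) > 1 else cone_to_color[hit.pop()])
    return {c: names.count(c) / n_flashes for c in set(names)}

# An observer whose mosaic is almost entirely L ('red') cones versus a more balanced one.
skewed = np.array(["L"] * 95 + ["M"] * 4 + ["S"] * 1)
balanced = np.array(["L"] * 50 + ["M"] * 40 + ["S"] * 10)
print(name_flashes(skewed))    # mostly 'red' and 'white'
print(name_flashes(balanced))  # a broader spread of names
```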

And none of these people are technically color blind! What kind of a world is BS seeing??

Reference

Brainard, D., Williams, D., & Hofer, H. (2008). Trichromatic reconstruction from the interleaved cone mosaic: Bayesian model and the color appearance of small spots. Journal of Vision, 8(5). DOI: 10.1167/8.5.15

CSHL Vision Course


I have just returned from two weeks in Cold Spring Harbor at the Computational Neuroscience: Vision course. I was not entirely sure what to expect. Maybe two weeks of your standard lectures? No – this was two weeks of intense scientific discussion punctuated with the occasional nerf fight (sometimes during lectures, sometimes not), beach bonfire, or table tennis match.

It was not just the material that was great, but the people. Every day brought in a fresh rotation of scientists ready to spend a couple of days discussing their work – and the work of the field – and to just hang out. I learned as much or more at the dinner table as I did in the seminar room. And it wasn’t just the senior scientists who were exhilarating – the other students were, too. It is a bit intimidating seeing how much talent exists in the field… and how great they are as people.

I also found that the graduate students at CSHL have the benefit of attending these courses (for free). It was great to meet people from all of the labs and hear about the cool stuff going on. Of course, they live pretty well, too. Here is the back patio of my friend’s house:

[Photo: the back patio of a friend’s house at CSHL]

I think I could get used to that?

Anyway, this is all a long-winded way of saying: if you get the chance, attend one of these courses! And being there motivated me to start making more of an effort to update the blog again. I swear, I swear…


Sleep – what is it good for (absolutely nothing?)

Sleep can often feel like a fall into annihilation and rebirth. One moment you have all the memories and aches of a long day behind you, the next you wake up from nothingness into the start of something new. Or: a rush from some fictional slumber world into an entirely different waking world. What is it that’s happening inside your head? Why this rest?

Generally the answer that we are given is some sort of spring cleaning and consolidation, a removal of cruft and a return to what is important (baseline, learned). There is certainly plenty of evidence that the brain is doing this while you rest. One of the most powerful of these ‘resetting’ mechanisms is homeostatic plasticity. Homeostatic plasticity often feels like an overlooked form of learning, despite the gorgeous work being done on it (Gina Turrigiano and Eve Marder’s papers have been some of my all-time favorite work for forever).

One simple experiment that you can do to understand homeostatic plasticity is to take a slice of a brain and dump TTX on it to block sodium channels, and thus spiking. When you wash it out days later, the neurons will be spiking like crazy. Slowly, they will return to their former firing rate. It seems as if every neuron knows what its average firing rate should be, and tries to get back to it.
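Here is a cartoon of that set-point idea (my own toy numbers and a made-up multiplicative scaling rule, nothing like the real biophysics): the neuron’s input gain drifts slowly so as to push its firing rate back toward a target, so blocking activity makes the gain creep up, and when activity returns the rate overshoots and then relaxes back.

```python
def simulate(days=30, target_rate=5.0, tau_days=5.0, drive=5.0):
    """Cartoon of firing-rate homeostasis: rate = gain * drive, and the gain
    drifts slowly to push the rate back toward a set point. Days 5-14 mimic
    a TTX block (no drive, so no spiking is possible)."""
    gain, rates = 1.0, []
    for day in range(days):
        d = 0.0 if 5 <= day < 15 else drive
        rate = gain * d
        # slow homeostatic update toward the set point
        gain += (target_rate - rate) / (target_rate * tau_days)
        rates.append(rate)
    return rates

print([round(r, 1) for r in simulate()])
# Drive returns on day 15: the rate overshoots ("spiking like crazy"),
# then slowly relaxes back toward the 5 Hz set point.
```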

But when does it happen? I would naively think that it should happen while you are asleep, while your brain can sort out what happened during the day, reorganize, and get back where it wants to be. Let’s test that idea.

Take a rat and, at a certain age, blind one eye. Then just watch how the visual neurons change their overall firing rate, like so: [Figure: firing rate over the days following deprivation]

At first the firing rate goes down. There is no input! Why should they be doing anything? Then, slowly but surely the neuron goes back to doing what it did before it was blinded. Same ol’, same ol’. Let’s look at what it’s doing when the firing rate is returning to its former life:

[Figure: firing-rate recovery split by sleep and wake states]

This is something of a WTF moment. Nothing during sleep, nothing at all? Only when it is awake and – mostly – behaving? What is going on here?

Here’s my (very, very) speculative possibility: something like efference copy. When an animal is asleep, it’s getting nothing new. It doesn’t know that anything is ‘wrong’. Homeostatic plasticity may be ‘returning to baseline’, but it may also be ‘responding to signals the same way on average’. And when it is asleep, what signals are there? But when it is moving – ah, that is when it gets new signals.

When the brain generates a motor signal, telling the body to move, it also sends signals back to the sensory areas of the brain to let them know what is going on. It is much easier to keep things stable when you already know how the world is going to move. Perhaps – perhaps – when the animal is moving, the sensory areas are getting the largest error signals, the largest listen to me signals, and that is exactly when homeostatic plasticity should happen: when a neuron knows it has something to return to baseline with respect to.

Reference

Hengen, K., Torrado Pacheco, A., McGregor, J., Van Hooser, S., & Turrigiano, G. (2016). Neuronal Firing Rate Homeostasis Is Inhibited by Sleep and Promoted by Wake. Cell, 165(1), 180-191. DOI: 10.1016/j.cell.2016.01.046

Brain Prize 2016

The Brain Prize, a thing I don’t think I knew existed, just gave $1,000,000 to three neuroscientists for their work on LTP. As with most prizes, the best part is the motivation to go back and read classic papers!

The best winner was Richard Morris because he kind of revolutionized the memory field with this figure:

[Figure: the Morris water maze]

Yes, he created the Morris Water Maze, used to study learning and memory in a seemingly-infinite number of papers.


When was the last time you went back and actually read the original Morris Water Maze paper? I know I had not ever read it before today: but I should have.

No less important was the work of Timothy Bliss (and Terje Lomo, who did not win) illustrating the induction of LTP. Most of us have probably heard “neurons that fire together, wire together” and this is the first real illustration of the phenomenon (in 1973):

[Figure: LTP induction from Bliss and Lomo (1973)]

Bliss and Lomo were able to induce long-lasting changes in the strength of connections between two neurons by a “tetanic stimulation protocol”. The above figure is seared into my brain from my first year of graduate school, where Jeff Isaacson dragged us through paper after paper that used variations on this protocol to investigate the properties of LTP.

The final winner was Graham Collingridge who demonstrated that hippocampal LTP was induced via NMDA receptors. I don’t think this was the paper that demonstrated it, but I always found his 1986 paper on slow NMDA receptors quite beautiful:

[Figure: responses with and without the NMDA receptor blocker APV]

Here, he has blocked NMDA receptors with APV and sees no spiking after repeated stimulation. However, when this blocker is washed out, you see spiking only after receiving several inputs because of the slow timescale of the receptors.

While historically powerful, the focus on NMDA receptors can be misleading. LTP can be induced in many different ways depending on the specific neural type and brain region! For my money, I have always been a fan of the more generalized form, STDP. Every neuroscientist should read and understand the Markram et al (1997) paper that demonstrates it and the Bi and Poo (1998) paper that has this gorgeous figure:

[Figure: the spike-timing-dependent plasticity curve from Bi and Poo (1998)]
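The usual way to summarize that curve is with a pair of exponentials (textbook-style parameters below, not the exact fit from the paper): pre-before-post spike pairs strengthen the synapse, post-before-pre pairs weaken it, and both effects decay with the interval between the two spikes.

```python
import numpy as np

def stdp(delta_t_ms, a_plus=0.8, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    """Standard exponential STDP window (illustrative parameters, not the
    exact Bi & Poo fit). delta_t = t_post - t_pre in milliseconds: positive
    delta_t (pre before post) potentiates, negative delta_t depresses."""
    dt = np.asarray(delta_t_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.array([-40.0, -10.0, -2.0, 2.0, 10.0, 40.0])
for dt, dw in zip(dts, stdp(dts)):
    print(f"delta t = {dt:+5.0f} ms -> weight change {dw:+.2f}")
```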

Read about the past, and remember where your science came from.

How to create neural circuit diagrams (updated)

[Figure 3 from the Nature Methods piece on drawing neural circuit diagrams]

My diagrams are always a mess, but maybe I could start following this advice a little more carefully?

Diagrams of even simple circuits are often unnecessarily complex, making understanding brain connectivity maps difficult…Encoding several variables without sacrificing information, while still maintaining clarity, is a challenge. To do this, exclude extraneous variables—vary a graphical element only if it encodes something relevant, and do not encode any variables twice…

For neural circuits such as the brainstem auditory circuits, physical arrangement is a fundamental part of function. Another topology that is commonly necessary in neural circuit diagrams is the laminar organization of the cerebral cortex. When some parts of a circuit diagram are anatomically correct, readers may assume all aspects of the figure are similarly correct. For example, if cells are in their appropriate layers, one may assume that the path that one axon travels to reach another cell is also accurate. Be careful not to portray misleading information—draw edges clearly within or between layers, and always clearly communicate any uncertainty in the circuit.

Update: Andrew Giessel pointed me to this collection of blog posts from Nature Methods on how to visualize biological data more generally. Recommended!

Genetically-encoded voltage sensors (Ace edition)

One of the biggest advances in recording from neurons has been the development of genetically-encoded calcium indicators. These allow neuroscientists to record the activity of large numbers of neurons belonging to a specific subclass. Whether the targets are fast-spiking interneurons in cortex or single, identified neurons in worms and flies, genetically-encoded calcium indicators have brought a lot to the field.

Obviously, calcium is just an indirect measure of what most neuroscientists really care about: voltage. We want to see the spikes. A lot of work has been put into making genetically-encoded voltage indicators, though the signal-to-noise has always been a problem. That is why I was so excited to see this paper from Mark Schnitzer’s lab:


I believe they are calling this voltage indicator Ace. It looks pretty good but, as they say, time will tell.

The chatter is that it bleaches quickly (usable on the order of a minute) and still has low signal to noise – look at the scale bar ~1% above. I have also heard there may be lots of photodamage. But, hey, those are spikes.