Sleep – what is it good for (absolutely nothing?)

Sleep can often feel like a fall into annihilation and rebirth. One moment you have all the memories and aches of a long day behind you, the next you wake up from nothingness into the start of something new. Or: a rush from some fictional slumber world into an entirely different waking world. What is it that’s happening inside your head? Why this rest?

Generally the answer that we are given is some sort of spring cleaning and consolidation, a removal of cruft and a return to what is important (baseline, learned). There is certainly plenty of evidence that the brain is doing this while you rest. One of the most powerful of these ‘resetting’ mechanisms is homeostatic plasticity. Homeostatic plasticity often feels like an overlooked form of learning, despite the gorgeous work being done on it (Gina Turrigiano and Eve Marder’s papers have been some of my all-time favorite work for forever).

One simple experiment you can do to understand homeostatic plasticity is to take a slice of brain and dump TTX on it to block sodium channels, and thus spiking. When you wash the TTX out days later, the neurons will be spiking like crazy. Slowly, they will return to their former firing rate. It seems like every neuron knows what its average spiking rate should be, and tries to return to it.
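If you like seeing ideas as code, here is a toy sketch of that logic (my own illustration with made-up numbers, not a model from any of these papers): a neuron multiplicatively scales its synaptic input so that its firing rate drifts back toward a set point, even after its drive gets cut.

```python
import numpy as np

# Toy sketch of firing-rate homeostasis via multiplicative synaptic scaling.
# Illustrative only: each step, the neuron scales its input so that its rate
# drifts back toward a fixed set point, even after its drive is cut.

rng = np.random.default_rng(0)
target_rate = 5.0      # set-point firing rate (Hz), arbitrary
tau = 300.0            # slow homeostatic time constant (steps)
scale = 1.0            # multiplicative scaling of synaptic input

drive = np.r_[rng.gamma(5.0, 1.0, 2000),        # normal input
              0.2 * rng.gamma(5.0, 1.0, 4000)]  # "deprived" input (e.g. eye closed)

rates = np.empty_like(drive)
for t, x in enumerate(drive):
    rate = scale * x
    # slow multiplicative update toward the set point
    scale += (scale / tau) * (target_rate - rate) / target_rate
    rates[t] = rate

print("rate before deprivation:     ", rates[1500:2000].mean())
print("rate just after deprivation: ", rates[2000:2500].mean())
print("rate long after deprivation: ", rates[-500:].mean())
```

The key ingredient is the slow time constant: the scaling integrates over long stretches of activity rather than chasing every fluctuation, which is why the rate sags and only gradually climbs back.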

But when does it happen? I would naively think that it should happen while you are asleep, when your brain can sort out what happened during the day, reorganize, and get back to where it wants to be. Let's test that idea.

Take a rat and at a certain age, blind one eye. Then just watch how visual neurons change their overall firing rate. Like so:

[Figure: average firing rate over days after deprivation, recorded across dark and light]

At first the firing rate goes down. There is no input! Why should they be doing anything? Then, slowly but surely, the neuron goes back to doing what it did before the eye was blinded. Same ol', same ol'. Let's look at what the animal is doing while the firing rate is returning to its former level:

[Figure: firing-rate recovery split by sleep and wake states]

This is something of a WTF moment. Nothing during sleep, nothing at all? Only when it is awake and – mostly – behaving? What is going on here?

Here’s my (very, very) speculative possibility: something like efference copy. When an animal is asleep, it’s getting nothing new. It doesn’t know that anything is ‘wrong’. Homeostatic plasticity may be ‘returning to baseline’, but it may also be ‘responding to signals the same way on average’. And when it is asleep, what signals are there? But when it is moving – ah, that is when it gets new signals.

When the brain generates a motor signal, telling the body to move, it also sends signals back to the sensory areas of the brain to let them know what is going on. Makes it much easier to keep things stable when you already know that the world is going to move in a certain way. Perhaps – perhaps – when it is moving, it is getting the largest error signals from the brain, the largest 'listen to me' signals, and that is exactly when homeostatic plasticity should happen: when it knows what it has to return to baseline with respect to.

Reference

Hengen, K., Torrado Pacheco, A., McGregor, J., Van Hooser, S., & Turrigiano, G. (2016). Neuronal Firing Rate Homeostasis Is Inhibited by Sleep and Promoted by Wake. Cell, 165(1), 180–191. DOI: 10.1016/j.cell.2016.01.046

Brain Prize 2016

The Brain Prize, a thing I don’t think I knew existed, just gave $1,000,000 to three neuroscientists for their work on LTP. As with most prizes, the best part is the motivation to go back and read classic papers!

The best winner was Richard Morris because he kind of revolutionized the memory field with this figure:

[Figure: the Morris water maze]

Yes, he created the Morris Water Maze, used to study learning and memory in a seemingly-infinite number of papers.

[Figure: more from the original water maze paper]

When was the last time you went back and actually read the original Morris Water Maze paper? I know I had never read it before today, but I should have.

No less important was the work of Timothy Bliss (and Terje Lomo, who did not win) illustrating the induction of LTP. Most of us have probably heard “neurons that fire together, wire together” and this is the first real illustration of the phenomenon (in 1973):

[Figure: LTP induction (Bliss and Lomo, 1973)]

Bliss and Lomo were able to induce long-lasting changes in the strength of connections between two neurons by a “tetanic stimulation protocol”. The above figure is seared into my brain from my first year of graduate school, where Jeff Isaacson dragged us through paper after paper that used variations on this protocol to investigate the properties of LTP.

The final winner was Graham Collingridge who demonstrated that hippocampal LTP was induced via NMDA receptors. I don’t think this was the paper that demonstrated it, but I always found his 1986 paper on slow NMDA receptors quite beautiful:

[Figure: slow NMDA receptor responses (Collingridge, 1986)]

Here, NMDA receptors are blocked with APV and there is no spiking after repeated stimulation. When the blocker is washed out, spiking appears only after several inputs in a row, because of the slow timescale of the receptors.

While historically powerful, the focus on NMDA receptors can be misleading. LTP can be induced in many different ways depending on the cell type and brain region! For my money, I have always been a fan of the more generalized form, STDP. Every neuroscientist should read and understand the Markram et al. (1997) paper that demonstrates it and the Bi and Poo (1998) paper that has this gorgeous figure:

[Figure: the Bi and Poo (1998) spike-timing-dependent plasticity curve]
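If you have never plotted it yourself, the Bi and Poo curve is usually summarized as a pair of exponentials: pre-before-post potentiates, post-before-pre depresses. Here is a sketch with illustrative parameter values (not the ones fit in the paper):

```python
import numpy as np

def stdp_window(dt, a_plus=0.8, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    """Canonical exponential STDP window (illustrative parameters only).

    dt = t_post - t_pre in ms. Positive dt (pre fires before post) potentiates,
    negative dt (post fires before pre) depresses.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.linspace(-100, 100, 9)
for dt, dw in zip(dts, stdp_window(dts)):
    print(f"dt = {dt:6.1f} ms  ->  relative weight change {dw:+.3f}")
```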

Read about the past, and remember where your science came from.

How to create neural circuit diagrams (updated)

[Figure 3 from the Nature Methods piece on circuit diagrams (nmeth.3777)]

My diagrams are always a mess, but maybe I could start following this advice a little more carefully?

Diagrams of even simple circuits are often unnecessarily complex, making understanding brain connectivity maps difficult…Encoding several variables without sacrificing information, while still maintaining clarity, is a challenge. To do this, exclude extraneous variables—vary a graphical element only if it encodes something relevant, and do not encode any variables twice…

For neural circuits such as the brainstem auditory circuits, physical arrangement is a fundamental part of function. Another topology that is commonly necessary in neural circuit diagrams is the laminar organization of the cerebral cortex. When some parts of a circuit diagram are anatomically correct, readers may assume all aspects of the figure are similarly correct. For example, if cells are in their appropriate layers, one may assume that the path that one axon travels to reach another cell is also accurate. Be careful not to portray misleading information—draw edges clearly within or between layers, and always clearly communicate any uncertainty in the circuit.

Update: Andrew Giessel pointed me to this collection of blog posts from Nature Methods on how to visualize biological data more generally. Recommended!

Genetically-encoded voltage sensors (Ace edition)

One of the biggest advances in recording from neurons has been the development of genetically-encoded calcium indicators. These allow neuroscientists to record the activity of large numbers of neurons belonging to a specific subclass. Be they fast-spiking interneurons in cortex or single, identified neurons in worms and flies, genetic encoding of calcium indicators has brought a lot to the field.

Obviously, calcium is just an indirect measure of what most neuroscientists really care about: voltage. We want to see the spikes. A lot of work has been put into making genetically-encoded voltage indicators, though the signal-to-noise has always been a problem. That is why I was so excited to see this paper from Mark Schnitzer’s lab:

[Figure: spikes recorded with the Ace voltage indicator]

I believe they are calling this voltage indicator Ace. It looks pretty good but, as they say, time will tell.

The chatter is that it bleaches quickly (usable on the order of a minute) and still has low signal to noise – look at the scale bar ~1% above. I have also heard there may be lots of photodamage. But, hey, those are spikes.

 

Stimulating deeper with near-infrared

Interesting idea in this post at Labrigger:

Compared to visible (vis) light, near infrared (NIR) wavelength scatters less and is less absorbed in brain tissue. If your fluorescent target absorbs vis light, then one way to use NIR is to flood the area with molecules that will absorb NIR and emit vis light. The process is called “upconversion“, since it is in contrast to the more common process of higher energy (shorter wavelength) light being converted into lower energy (longer wavelength) light. The effect looks superficially similar to two-photon absorption, but it’s a very different physical process.

Apparently you can now do this with optogenetics, and it works at much longer wavelengths than Chrimson or ReaChR. Reminds me a bit of Miesenbock's Trp/P2X2 optogenetics system from 2003.

Also on Labrigger: open source intrinsic imaging

[Invertebrate note: since ReaChR allows Drosophila researchers to activate neurons in intact, freely behaving flies, perhaps this might give us a method to inactivate with Halorhodopsin?]

Behold, The Blue Brain

The Blue Brain Project releases their first major paper today and boy, it's a doozy. Including supplements, it's over 100 pages long, with 40 figures and 6 tables. To properly understand everything in the paper, you have to go back and read a bunch of other papers they have released over the years that detail their methods. This is not a scientific paper: it's a goddamn philosophical treatise on The Nature of Neural Reconstruction.

The Blue Brain Project – or should I say Henry Markram? It is hard to say where the two diverge – aims to simulate absolutely everything in a complete mammalian brain. Except right now it sits at a middle ground: other simulations have replicated more neurons (Izhikevich had a model with 10^11 neurons of 21 subtypes). At the other extreme, MCell has completely reconstructed everything about a single neuron – down to the diffusion of single molecules – in a way that Blue Brain does not.

The focus of Blue Brain right now is a certain level of simulation that derives from a particular mindset in neuroscience. You see, people in neuroscience work at all levels: from individual molecules to flickering ion channels to single neurons up to networks and then whole brain regions. Markram came out of Bert Sakmann's lab (where he discovered STDP) and has his eye on the 'classical' tradition that stretches back to Hodgkin and Huxley. He is measuring distributions of ion channels and spiking patterns and extending the basic Hodgkin-Huxley model into tinier compartments and ever more fractal branching patterns. In a sense, this is swimming against the current of contemporary neuroscience. While plenty of people are still doing single-cell physiology, new tools that allow imaging of many neurons simultaneously in behaving animals have reshaped the direction of the field – and what we can understand about neural networks.
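For readers who have not met it, the 'classical tradition' here is the Hodgkin-Huxley model: a patch of membrane with voltage-gated sodium and potassium conductances, which Blue Brain then stitches into thousands of coupled compartments per neuron. A textbook-style single-compartment sketch (standard squid-axon parameters, nothing taken from the paper) looks like this:

```python
import numpy as np

# Minimal single-compartment Hodgkin-Huxley neuron, forward-Euler integration.
# Textbook squid-axon parameters; purely an illustration of the level of
# description that compartmental models like Blue Brain's build on.

C = 1.0                                   # membrane capacitance, uF/cm^2
g_na, g_k, g_l = 120.0, 36.0, 0.3         # max conductances, mS/cm^2
e_na, e_k, e_l = 50.0, -77.0, -54.4       # reversal potentials, mV

def alpha_beta(v):
    """Voltage-dependent gating rates for m, h, n (v in mV)."""
    am = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    bm = 4.0 * np.exp(-(v + 65) / 18)
    ah = 0.07 * np.exp(-(v + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(v + 35) / 10))
    an = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    bn = 0.125 * np.exp(-(v + 65) / 80)
    return am, bm, ah, bh, an, bn

dt, t_max, i_ext = 0.01, 50.0, 10.0       # ms, ms, uA/cm^2
v, m, h, n = -65.0, 0.05, 0.6, 0.32       # resting initial conditions
spikes = 0
for _ in range(int(t_max / dt)):
    am, bm, ah, bh, an, bn = alpha_beta(v)
    i_ion = (g_na * m**3 * h * (v - e_na)
             + g_k * n**4 * (v - e_k)
             + g_l * (v - e_l))
    dv = (i_ext - i_ion) / C
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    prev_v, v = v, v + dt * dv
    if prev_v < 0.0 <= v:                 # crude upward-crossing spike count
        spikes += 1

print(f"{spikes} spikes in {t_max} ms at {i_ext} uA/cm^2")
```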

Some very deep questions arise here: is this enough? What will this tell us and what can it not tell us? What do we mean when we say we want to simulate the brain? How much is enough? We don’t really know – though the answer to the first question is assuredly no – and we assuredly don’t know enough to even begin to answer the second set of questions.

[Figure: morphological types (m-types)]

The function of the new paper is to collate in one place all of the data that they have been collecting – and it is a doozy. They report having recorded and labeled >14,000 (!!!!!) neurons from somatosensory cortex of P14 rats, with full reconstruction of more than 1,000 of these neurons. That's, uh, a lot. And they use a somewhat convoluted terminology to describe all of these, throwing around terms like 'm-type' (morphology), 'e-type' (electrophysiology), and 'me-type' (both) in order to classify the neurons. It's something, I guess.

[Figure: electrical types (e-types)]

Since the neurons were taken from different animals at different times, they do a lot of inference to determine connectivity, ion channel conductance, etc. And that’s a big worry because – how many parameters are being fit here? How many channels are being missed? You get funny sentences in the paper like:

[We compared] in silico (ed – modeled) PSPs with the corresponding in vitro (ed – measured in a slice prep) PSPs. The in silico PSPs were systematically lower (ed – our model was systematically different from the data). The results suggested that reported conductances are about 3-fold too low for excitatory connections, and 2-fold too low for inhibitory connections.

And this worries me a bit; are they not trusting their own measurements when it suits them? Perhaps someone who reads the paper more closely can clarify these points.

They then proceed to run these simulated neurons and perform ‘in silico experiments’. They first describe lowering the extracellular calcium level and finding that the network transitions from a regularly spiking state to a variable (asynchronous) state. And then they go, and do this experiment on biological neurons and get the same thing! That is a nice win for the model; they made a prediction and validated it.

On the other hand you get statements like the following:

We then used the virtual slice to explore the behavior of the microcircuitry for a wide range of tonic depolarization and Ca2+ levels. We found a spectrum of network states ranging from one extreme, where neuronal activity was largely synchronous, to another, where it was largely asynchronous. The spectrum was observed in virtual slices, constructed from all 35 individual instantiations of the microcircuit and all seven instantiations of the average microcircuit.

In other words, it sounds like they might be able to find everything in their model.

But on the other hand…! They fix their virtual networks and ask: do we see the specific changes in our network that experiments have seen in the past? And yes, generally they do. Are we allowed to wonder how many experiments and predictions they tried that did not pan out? It would have been great to see a full-blown failure, to understand where the model still needs to be improved.

I don't want to understate the sheer amount of work that was done here, or the wonderful collection of data that they now have available. The models that they make will be (already are?) available for anyone to download, and this is going to be an invaluable resource. This is a major paper, and rightly so.

On the other hand – what did I learn from this paper? I’m not sure. The network wasn’t really doing anything, it just kind of…spiked. It wasn’t processing structured information like an animal’s brain would, it was just kind of sitting there, occasionally having an epileptic fit (note that at one point they do simulate thalamic input into the model, which I found to be quite interesting).

This project has metamorphosed into a bit of a social conundrum for the field. Clearly, people are fascinated – I had three different people send me this paper prior to its publication, and a lot of others were quite excited and wanted access to it right away. And the broader Blue Brain Project has had a somewhat unhappy political history. A lot of people – like me! – are strong believers in computation and modeling, and would really like to see it succeed. Yet the chunk of neuroscience that they have bitten off, and the way they have gone about it, lead many to worry. The field has been holding its breath a bit to see what Blue Brain was going to release – and I think it will need to hold its breath a bit longer.

Reference

Markram et al. (2015). Reconstruction and Simulation of Neocortical Microcircuitry. Cell.


Not your typical science models


Cell has an article showcasing other animal model candidates beyond the typical C. elegans, Drosophila, mice, etc. Really it is just a list of people using other models explaining why they use them, but it is pretty cool to learn about what they are doing. They list Thellungiella sp., axolotls, naked mole rats, lampreys (which look terrifying), honey bees, bats, mouse lemurs (with which Tony Movshon famously trolled all rodent vision scientists), turquoise killifish, and songbirds. Because I love bees, here is the bee explanation:

Honey bees (Apis mellifera) provide remarkable opportunities for understanding complex behavior, with systems of division of labor, communication, decision making, and social aging/immunity. They teach us how social behaviors develop from solitary behavioral modules, with only minor "tweaking" of molecular networks. They help us unravel the fundamental properties of learning, memory, and symbolic language. They reveal the dynamics of collective decision making and how social plasticity can change epigenetic brain programming or reverse brain aging. They show us the mechanistic basis of trans-generational immune priming in invertebrates, perhaps facilitating the first vaccines for insects.

These processes and more can be studied across the levels of biological complexity—from genes to societies and over multiple timescales—from action potential to evolutionary. As models in neuroscience and animal behavior, honey bees have batteries of established research tools for brain/behavioral patterns, sensory perception, and cognition. Genome sequence, molecular tools, and a number of functional genomic tools are also available. With a relatively large-sized body (1.5 cm) and brain (1 mm³), this fascinating animal is, additionally, easy to work with for students of all ages.

Beekeeping practices date back as early as the Minoan Civilization, where the bee symbolized a Mother Goddess. Today, we increasingly value honey bees as essential pollinators of commercial crops and for their ecosystem services. Honey bees have been called keepers of the logic of life. They truly are.

I would add mosquitoes, ants, deer mice, (prairie/etc) voles, cuttlefish, jellyfish and of course marmosets to the list.

10 years of neural opsins

Just in time for Nature Neuroscience's Optogenetics 10-year anniversary retrospective, Ed Boyden has announced the first (?) time the FDA has approved optogenetics for human testing.

The set of retrospective pieces that NN published are quite interesting. For instance:

(Deisseroth) It seems unlikely that the initial experiments described here would have been fundable, as such, by typical grant programs focusing on a disease state, on a translational question, or even on solidly justified basic science…In this way, progress over the last ten years has revealed not only much about the brain, but also something about the scientific process.

(Boyden) The study, which originated in ideas and experiments generated by Karl Deisseroth and myself, collaborating with Georg Nagel and Ernst Bamberg and later with the assistance of Feng Zhang, was not immediately a smash hit. Rejected by Science, then Nature, the discovery perhaps seemed too good to be true. Could you really just express a single natural algal gene, channelrhodopsin-2 (ChR2), in neurons to make them light-activatable?

These are from the history and future of optogenetics summaries, respectively. Many people looking back on it had similar thoughts:

(Josselyn) I thought the data were interesting, but likely not replicable and definitely not generalizable. I thought optogenetics would not work reliably and, even if it did, the technique would be so complicated as to be out of reach for most neuroscience labs. My initial impression was that optogenetics would be highly parameter-sensitive and would take lots of fiddling to get any kind of effect. I was definitely in the camp that didn’t think it would have an impact on my kind of neuroscience.

Think about their perspective at the time:

So why did it take time to develop and apply methods for placing these proteins into different classes of neurons in behaving animals? As mentioned above, the development of optogenetics was a biological three-body problem in which it was hard to resolve (or, even more importantly, to motivate attempts to resolve) any one of the three challenges without first addressing the other components. For example, microbial rhodopsin photocurrents were predicted to be exceedingly small, suggesting a difficult path forward even if efficient delivery and incorporation of the all-trans retinal chromophore were possible in adult non-retinal brain tissue, and even in the event of safe and correct trafficking of these evolutionarily remote proteins to the surface membrane of complex metazoan neurons. For these weak membrane conductance regulators to work, high gene-expression and light-intensity levels would have to be attained in living nervous systems while simultaneously attaining cell-type specificity and minimizing cellular toxicity. All of this would have to be achieved even though neurons were well known to be highly vulnerable to (and often damaged or destroyed by) overexpression of membrane proteins, as well as sensitive to side effects of heat and light. Motivating dedicated effort to exploration of microbial opsin-based optical control was difficult in the face of these multiple unsolved problems, and the dimmest initial sparks of hope would turn out to mean a great deal.

And the important thing to remember:

(Soltesz) But what made the rise of optogenetics so fast? I believe it was more than just the evident usefulness of the technology itself. Indeed, in my opinion, it is to the credit of Deisseroth and Boyden that they had recognized early that by freely sharing the reagents and methods they can make optogenetics as much of a basic necessity in neuroscience labs as PCs, iPhones and iPads came to be in the lives of everyday citizens. This is a part of their genius that made optogenetics spread like wildfire. The open-source philosophy that they adopted stands in stark contrast to numerous other techniques where the developers tightly control all material and procedural aspects of their methodology for short-term gain, which in most, albeit not all, cases has proven to be a rather penny-wise, pound-foolish attitude in the long run.

 

Go read this Q&A with many of the pioneers of the field. Stay through to the end of the “What was your first reaction when optogenetics came onto the scene 10 years ago?” question at least.

Here is the original paper. Don't forget that Miesenbock's group had optogenetics work that preceded Boyden et al. (2005) but never quite "made it".

The silent majority (of neurons)

Kelly Clancy has yet another fantastic article explaining a key idea in theoretical neuroscience (here is another):

Today we know that a large population of cortical neurons are “silent.” They spike surprisingly rarely, and some do not spike at all. Since researchers can only take very limited recordings from inside human brains (for example, from patients in preparation for brain surgery), they have estimated activity rates based on the brain’s glucose consumption. The human brain, which accounts for less than 2 percent of the body’s mass, uses 20 percent of its calorie budget, or three bananas worth of energy a day. That’s remarkably low, given that spikes require a lot of energy. Considering the energetic cost of a single spike and the number of neurons in the brain, the average neuron must spike less than once per second. Yet the cells typically recorded in human patients fire tens to hundreds of times per second, indicating a small minority of neurons eats up the bulk of energy allocated to the brain.

There are two extremes of neural coding: Perceptions might be represented through the activity of ensembles of neurons, or they might be encoded by single neurons. The first strategy, called the dense code, would result in a huge storage capacity: Given N neurons in the brain, it could encode 2^N items—an astronomical figure far greater than the number of atoms in the universe, and more than one could experience in many lifetimes. But it would also require high activity rates and a prohibitive energy budget, because many neurons would need to be active at the same time. The second strategy—called the grandmother code because it implies the existence of a cell that only spikes for your grandmother—is much simpler. Every object in experience would be represented by a neuron in the same way each key on a keyboard represents a single letter. This scheme is spike-efficient because, since the vast majority of known objects are not involved in a given thought or experience, most neurons would be dormant most of the time. But the brain would only be able to represent as many concepts as it had neurons.

Theoretical neuroscientists struck on a beautiful compromise between these ideas in the late ’90s. In this strategy, dubbed the sparse code, perceptions are encoded by the activity of several neurons at once, as with the dense code. But the sparse code puts a limit on how many neurons can be involved in coding a particular stimulus, similar to the grandmother code. It combines a large storage capacity with low activity levels and a conservative energy budget.
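To put rough numbers on that trade-off, here is a back-of-the-envelope comparison using a toy population of 1,000 neurons and a sparse code with at most 10 cells active at once (the numbers are mine and purely illustrative):

```python
from math import comb, log10

n_neurons = 1000      # toy population size (purely illustrative)
k_active = 10         # cap on simultaneously active cells in the sparse code

dense_log10 = n_neurons * log10(2)                       # log10 of 2**N patterns
grandmother = n_neurons                                  # one cell per concept
sparse = sum(comb(n_neurons, k) for k in range(1, k_active + 1))

print(f"dense code:       ~10^{dense_log10:.0f} patterns")
print(f"grandmother code: {grandmother} patterns")
print(f"sparse code:      ~10^{log10(sparse):.0f} patterns")
```

Even with that tight cap on activity, the sparse code keeps an astronomically larger vocabulary than one cell per concept.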

 

 

She goes on to discuss the sparse coding work of Bruno Olshausen, specifically this famous paper. This should always be read in the context of Bell & Sejnowski which shows the same thing with ICA. Why are the independent components and the sparse coding result the same? Bruno Olshausen has a manuscript explaining why this is the case, but the general reason is that both are just Hebbian learning!
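If you want a feel for sparse coding, a toy version fits on a page: alternate between inferring a sparse code for each input (soft-thresholded gradient steps) and nudging each dictionary element toward the inputs it helps reconstruct, which is where the Hebbian flavor comes in. This is a sketch on random data with made-up parameters, not Olshausen's actual algorithm or code:

```python
import numpy as np

# Toy sparse-coding loop in the spirit of Olshausen & Field: alternate between
# inferring sparse codes (ISTA-style steps) and a Hebbian-looking dictionary
# update. Data and parameters are made up for illustration.

rng = np.random.default_rng(1)
n_features, n_atoms, n_samples = 64, 32, 500
X = rng.standard_normal((n_samples, n_features))   # stand-in for image patches
X -= X.mean(axis=0)

D = rng.standard_normal((n_atoms, n_features))
D /= np.linalg.norm(D, axis=1, keepdims=True)
lam, lr_code, lr_dict = 0.1, 0.05, 0.01

def soft_threshold(a, t):
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

for epoch in range(30):
    # infer sparse codes A for the fixed dictionary D (a few ISTA steps)
    A = np.zeros((n_samples, n_atoms))
    for _ in range(30):
        resid = X - A @ D
        A = soft_threshold(A + lr_code * resid @ D.T, lr_code * lam)
    # Hebbian-style dictionary update: move atoms toward what they explain
    D += lr_dict * A.T @ (X - A @ D) / n_samples
    D /= np.linalg.norm(D, axis=1, keepdims=True)

resid = X - A @ D
print("mean squared reconstruction error:", np.mean(resid**2))
print("fraction of nonzero code entries: ", np.mean(A != 0))
```

Run on whitened natural image patches instead of noise and the dictionary elements come out looking like oriented, localized edge filters, which is the punchline of the original paper.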

She ends by asking: why are some neurons sparse and some so active? Perhaps these are two separate coding strategies? But they need not be: for codes to be sparse in general, it could be that a few specific neurons need to be highly active.

#Cosyne2015, by the numbers

 

[Figure: Cosyne 2015 posters per author]

Another year, another Cosyne. Sadly, I will be there only in spirit (and not, you know, reality). But I did manage to get my hands all over the Cosyne abstract author data… I can now tell you everyone who has had a poster or talk presented there, and who it was with. Did you know Steven Pinker was a coauthor on a paper in 2004?!

This year, the 'most posters' award (aka the Hierarch of Cosyne) goes to Carlos Brody. Carlos has been developing high-throughput technology to really bang away at the hard problem of decision-making in rodents, and now all that work is coming out at once. Full disclosure: his lab sits above mine and they are all doing really awesome work.

Here are the Hierarchs, historically:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody

[Figure: total Cosyne posters per author, 2004–2015]

Above is the total number of posters/abstracts by author. There are prolific authors, and there is Liam Paninski. Congratulations Liam, you maintain your iron grip as the Pope of Cosyne.

As a technical note, I took 'unique' names by associating the first letter of the first name with the last name. I'm pretty sure X. Wang is at least two or three different people, and some names (especially those with an umlaut or, for some reason, Paul Schrater) are especially likely to change spelling from year to year. I tried correcting a bit, but fair warning.
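Concretely, the name-collapsing step looked something like this (a rough sketch with hypothetical example names, not my exact script):

```python
# Rough sketch of the name-collapsing step described above: key each author
# by first initial plus last name. Example names are hypothetical.
import unicodedata

def author_key(name: str) -> str:
    """Collapse 'Liam Paninski', 'L. Paninski', 'Paninski, Liam' to 'l paninski'."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))  # drop umlauts etc.
    if "," in name:                       # 'Last, First' -> 'First Last'
        last, first = [p.strip() for p in name.split(",", 1)]
        name = f"{first} {last}"
    parts = name.replace(".", " ").split()
    return f"{parts[0][0].lower()} {parts[-1].lower()}"

authors = ["Liam Paninski", "L. Paninski", "Paninski, Liam", "Paul Schrater", "P. Schrater"]
print({author_key(a) for a in authors})   # -> {'l paninski', 'p schrater'}
```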

[Figure: distribution of posters per author, 2004–2015]

 

As I mentioned last year, the distribution of posters follows a power law.
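If you want to check that claim yourself, dump the posters-per-author counts on log-log axes and look for a straight-ish line. A quick sketch with stand-in data (and the usual caveat that really distinguishing a power law from other heavy-tailed distributions takes more care than a least-squares fit):

```python
import numpy as np
from collections import Counter

# Hypothetical poster counts per author; in practice this comes from the
# scraped Cosyne author lists.
rng = np.random.default_rng(0)
posters_per_author = rng.zipf(2.0, size=2000)          # stand-in heavy-tailed data

counts = Counter(posters_per_author)
ks = np.array(sorted(counts))
freq = np.array([counts[k] for k in ks], dtype=float)

# crude slope estimate from a straight-line fit in log-log space
slope, intercept = np.polyfit(np.log(ks), np.log(freq), 1)
print(f"log-log slope ~ {slope:.2f} (a power law shows up as a straight line)")
```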

But now we have the network data and it is pretty awesome to behold. I was surprised that if we just look at this year’s posters, there is tons of structure (click here for a high-res, low-size PDF version):
[Figure: Cosyne 2015 co-authorship network]

When you include both 2014 and 2015, things get even more connected (again, PDF version):

[Figure: Cosyne 2014–2015 co-authorship network]

Beyond this it starts becoming a mess. The community is way too interconnected and lines fly about every which way. If anyone has an idea of a good way to visualize all the data (2004-2015), I am all ears. And as I said, I have the full connectivity diagram so if anyone wants to play around with the data, just shoot me an email at adam.calhoun at gmail.
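For anyone who wants to play: once you have an author list per abstract, building the co-authorship graph is only a few lines; the hard part is the layout. A sketch with hypothetical toy data, using networkx:

```python
import itertools
import networkx as nx

# Hypothetical input: one list of (already de-duplicated) author keys per abstract.
abstracts = [
    ["l paninski", "j pillow", "e simoncelli"],
    ["c brody", "a zador"],
    ["l paninski", "l abbott"],
]

G = nx.Graph()
for authors in abstracts:
    # connect every pair of coauthors; accumulate an edge weight per shared abstract
    for a, b in itertools.combinations(sorted(set(authors)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# a force-directed layout is a reasonable starting point for drawing
pos = nx.spring_layout(G, weight="weight", seed=0)
print(nx.number_of_nodes(G), "authors,", nx.number_of_edges(G), "coauthor pairs")
```

From there, running community detection or pruning to the largest connected component before drawing anything would probably tame the hairball.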

Any suggestions for further analyses?