Studying the brain at the mesoscale

It is not entirely clear that we are going about studying the brain in the right way. Zachary Mainen, Michael Häusser and Alexandre Pouget have an alternative to our current focus on (relatively) small groups of researchers pursuing their own idiosyncratic questions:

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually over time. But they would essentially be built from the ground up, with those involved encouraged to follow their own shared interests rather than responding to the strictures of funding sources or external directives…

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics. In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

Here are the concrete steps they suggest in order to form a successful ‘mesoscale’ project:

  1. Focus on a single brain function.
  2. Combine experimentalists and theorists.
  3. Standardize tools and methods.
  4. Share data.
  5. Assign credit in new ways.

Obviously, I am comfortable living on the internet a little more than the average person. But with the tools that are starting to proliferate for collaborations – Slack, GitHub, and Skype being the most frequently used right now – there is really very little reason for collaborations not to extend beyond neighboring labs.

The real difficulties are two-fold. First, you must actually meet your collaborators at some point! Generating new ideas for a collaboration rarely happens without the kind of spontaneous discussions that arise when physically meeting people. When communities are physically spread out or do not meet in a single location, this can happen less than you would want. If nothing else, this proposal seems like a call for attending more conferences!

Second is the ad-hoc way data is collected. Calls for standardized datasets have been around about as long as there has been science to collaborate on, and it does not seem like the problem will be solved any time soon. And even when datasets have been standardized, the questions they were collected to answer may be too specific for the data to be of much utility even to closely related researchers. This is why I left the realm of pure theory and became an experimentalist as well: theorists are rarely able to convince experimentalists to take time out of their experiments to test some wild new theory.

But these mesoscale projects really are the future. They are a way for scientists to be more than the sum of their parts, and to be part of an exciting community that is larger than one or two labs! Perhaps a solid step in this direction would be to utilize the tools that are available to initiate conversations within the community. Twitter does this a little, but where are the foraging Slack chats? Or amygdala, PFC, or evidence-accumulation communities?

Your eyes are your own

Retinal mosaic

This blows my mind. There is a technique that allows us to map the distribution of cones in a person’s eyes. You would think that there would be some consistency from individual to individual, or that the cones would be distributed in some predictable manner. But no!

What happens when you show each of these people a flash of color and ask them to name it? Something like what you’d expect – the person who seems to have only red cones names just about everything either red or white. Those with a broader distribution of cones give a broader distribution of colors. You can even predict the colors that someone will name based solely on the tiling of these cones.
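To see the flavor of that last point, here is a cartoon predictor – this is not the Bayesian model of Brainard et al., and the cone counts and cone-to-name mapping below are made-up assumptions purely for illustration:

```python
# Cartoon of predicting color naming from a cone mosaic. NOT the
# Brainard et al. (2008) Bayesian model; the cone counts and the
# cone-to-name mapping are invented for illustration only.
NAME_FOR_CONE = {"L": "red", "M": "green", "S": "blue"}

def predicted_names(mosaic):
    """Naive predictor: name frequencies proportional to cone counts."""
    total = sum(mosaic.values())
    return {NAME_FOR_CONE[cone]: n / total for cone, n in mosaic.items()}

# An observer whose mosaic is almost all L ("red") cones...
l_dominated = predicted_names({"L": 90, "M": 6, "S": 4})
# ...versus one with a broader mix of cone types.
balanced = predicted_names({"L": 45, "M": 40, "S": 15})
print(l_dominated["red"], balanced["red"])
```

Even this crude proportionality captures the qualitative result: the L-dominated observer’s predicted names collapse onto one color, while the broader mosaic spreads them out.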

And none of these people are technically color blind! What kind of a world is BS seeing??

Reference

Brainard, D., Williams, D., & Hofer, H. (2008). Trichromatic reconstruction from the interleaved cone mosaic: Bayesian model and the color appearance of small spots. Journal of Vision, 8(5). DOI: 10.1167/8.5.15

CSHL Vision Course

[photo: first day at CSHL]

I have just returned from two weeks in Cold Spring Harbor at the Computational Neuroscience: Vision course. I was not entirely sure what to expect. Maybe two weeks of your standard lectures? No – this was two weeks of intense scientific discussion punctuated with the occasional nerf fight (sometimes during lectures, sometimes not), beach bonfire, or table tennis match.

It was not just the material that was great but the people. Every day brought in a fresh rotation of scientists ready to spend a couple of days discussing their work – and the work of the field – and to just hang out. I learned as much or more at the dinner table as I did in the seminar room. And it wasn’t just the senior scientists who were exhilarating; so were the other students. It is a bit intimidating seeing how much talent exists in the field… and how great they are as people.

I also found that the graduate students at CSHL have the benefit of attending these courses (for free). It was great to meet people from all of the labs and hear about the cool stuff going on. Of course, they live pretty well, too. Here is the back patio of my friend’s house:

[photo: the back patio of a CSHL house]

I think I could get used to that?

Anyway, this is all a long-winded way of saying: if you get the chance, attend one of these courses! And being there motivated me to start making more of an effort to update the blog again. I swear, I swear…

[photo: last day at CSHL]

Sleep – what is it good for (absolutely nothing?)

Sleep can often feel like a fall into annihilation and rebirth. One moment you have all the memories and aches of a long day behind you, the next you wake up from nothingness into the start of something new. Or: a rush from some fictional slumber world into an entirely different waking world. What is it that’s happening inside your head? Why this rest?

Generally the answer that we are given is some sort of spring cleaning and consolidation, a removal of cruft and a return to what is important (baseline, learned). There is certainly plenty of evidence that the brain is doing this while you rest. One of the most powerful of these ‘resetting’ mechanisms is homeostatic plasticity. Homeostatic plasticity often feels like an overlooked form of learning, despite the gorgeous work being done on it (Gina Turrigiano and Eve Marder’s papers have been some of my all-time favorite work for forever).

One simple experiment that you can do to understand homeostatic plasticity is to take a slice of brain and dump TTX on it to block sodium channels, and thus spiking. When you wash the TTX out days later, the neurons will be spiking like crazy. Slowly, they will return to their former firing rate. It seems like every neuron knows what its average spiking rate should be, and tries to reach it.
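The logic of that experiment can be written down as a toy rate model – my own cartoon, not from any paper; the set-point rate, time constant, and post-washout rate below are made up:

```python
# Toy cartoon of firing-rate homeostasis (illustrative assumptions):
# each neuron relaxes its rate back toward an assumed set point.
TARGET_RATE = 5.0  # Hz, assumed set point
TAU = 10.0         # hours, assumed homeostatic time constant

def homeostatic_recovery(rate, hours, dt=0.1):
    """Euler-integrate the firing rate back toward the set point."""
    trajectory = [rate]
    for _ in range(int(hours / dt)):
        rate += dt / TAU * (TARGET_RATE - rate)
        trajectory.append(rate)
    return trajectory

# After TTX washout the neurons fire like crazy (say 20 Hz) and then
# drift back down to their former baseline over a couple of days.
trajectory = homeostatic_recovery(20.0, hours=48)
print(round(trajectory[-1], 1))
```

The point of the sketch is just the shape of the curve: an exponential relaxation toward a stored set point, which is what the slice experiment looks like.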

But when does it happen? I would naively think that it should happen while you are asleep, while your brain can sort out what happened during the day, reorganize, and get back where it wants to be. Let’s test that idea.

Take a rat and, at a certain age, blind one eye. Then just watch how visual neurons change their overall firing rate. Like so:

[figure: firing-rate homeostasis across dark and light]

At first the firing rate goes down. There is no input! Why should they be doing anything? Then, slowly but surely the neuron goes back to doing what it did before it was blinded. Same ol’, same ol’. Let’s look at what it’s doing when the firing rate is returning to its former life:

[figure: firing-rate recovery across sleep and wake]

This is something of a WTF moment. Nothing during sleep, nothing at all? Only when it is awake and – mostly – behaving? What is going on here?

Here’s my (very, very) speculative possibility: something like efference copy. When an animal is asleep, it’s getting nothing new. It doesn’t know that anything is ‘wrong’. Homeostatic plasticity may be ‘returning to baseline’, but it may also be ‘responding to signals the same way on average’. And when it is asleep, what signals are there? But when it is moving – ah, that is when it gets new signals.

When the brain generates a motor signal, telling the body to move, it also sends signals back to the sensory areas of the brain to let them know what is going on. It is much easier to keep things stable when you already know that the world is going to move in a certain way. Perhaps – perhaps – when the animal is moving, it is getting the largest error signals, the largest listen-to-me signals, and that is exactly when homeostatic plasticity should happen: when it knows what it should return to baseline with respect to.

Reference

Hengen, K., Torrado Pacheco, A., McGregor, J., Van Hooser, S., & Turrigiano, G. (2016). Neuronal Firing Rate Homeostasis Is Inhibited by Sleep and Promoted by Wake. Cell, 165(1), 180-191. DOI: 10.1016/j.cell.2016.01.046

Brain Prize 2016

The Brain Prize, a thing I don’t think I knew existed, just gave €1,000,000 to three neuroscientists for their work on LTP. As with most prizes, the best part is the motivation to go back and read classic papers!

The best winner was Richard Morris because he kind of revolutionized the memory field with this figure:

Morris Water Maze

Yes, he created the Morris Water Maze, used to study learning and memory in a seemingly-infinite number of papers.

water maze 2

When was the last time you went back and actually read the original Morris Water Maze paper? I know I had not ever read it before today: but I should have.

No less important was the work of Timothy Bliss (and Terje Lomo, who did not win) illustrating the induction of LTP. Most of us have probably heard “neurons that fire together, wire together” and this is the first real illustration of the phenomenon (in 1973):

LTP induction

Bliss and Lomo were able to induce long-lasting changes in the strength of connections between two neurons with a “tetanic stimulation protocol”. The above figure is seared into my brain from my first year of graduate school, where Jeff Isaacson dragged us through paper after paper that used variations on this protocol to investigate the properties of LTP.

The final winner was Graham Collingridge who demonstrated that hippocampal LTP was induced via NMDA receptors. I don’t think this was the paper that demonstrated it, but I always found his 1986 paper on slow NMDA receptors quite beautiful:

NMDA LTP

Here, he has blocked NMDA receptors with APV and sees no spiking after repeated stimulation. When the blocker is washed out, however, spiking appears only after several inputs have arrived, because of the slow timescale of the receptors.

While historically powerful, the focus on NMDA receptors can be misleading. LTP can be induced in many different ways depending on the specific neuron type and brain region! For my money, I have always been a fan of the more generalized form, STDP. Every neuroscientist should read and understand the Markram et al. (1997) paper that demonstrated it and the Bi and Poo (1998) paper that has this gorgeous figure:

[figure: the Bi & Poo (1998) STDP window]
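The asymmetric exponential window that makes that figure so gorgeous takes only a few lines to write down. The amplitudes and time constants here are generic textbook-style assumptions, not Bi & Poo’s fitted values:

```python
import math

# Sketch of an STDP learning window of the kind reported by Bi & Poo
# (1998). A_PLUS, A_MINUS, and the time constants are illustrative
# assumptions, not the paper's fitted parameters.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # ms

def stdp_dw(dt_ms):
    """Weight change for a spike-time difference dt = t_post - t_pre."""
    if dt_ms > 0:   # pre fires before post: potentiation
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    else:           # post fires before pre: depression
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

print(stdp_dw(10))   # pre leads post by 10 ms: strengthen
print(stdp_dw(-10))  # post leads pre by 10 ms: weaken
```

The sign flip at zero is the whole story of “fire together, wire together” made temporally precise: causality (pre before post) strengthens, anti-causality weakens, and both effects decay as the spikes move apart in time.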

Read about the past, and remember where your science came from.

How to create neural circuit diagrams (updated)

[figure 3 from the Nature Methods circuit-diagram piece]

My diagrams are always a mess, but maybe I could start following this advice a little more carefully?

Diagrams of even simple circuits are often unnecessarily complex, making understanding brain connectivity maps difficult…Encoding several variables without sacrificing information, while still maintaining clarity, is a challenge. To do this, exclude extraneous variables—vary a graphical element only if it encodes something relevant, and do not encode any variables twice…

For neural circuits such as the brainstem auditory circuits, physical arrangement is a fundamental part of function. Another topology that is commonly necessary in neural circuit diagrams is the laminar organization of the cerebral cortex. When some parts of a circuit diagram are anatomically correct, readers may assume all aspects of the figure are similarly correct. For example, if cells are in their appropriate layers, one may assume that the path that one axon travels to reach another cell is also accurate. Be careful not to portray misleading information—draw edges clearly within or between layers, and always clearly communicate any uncertainty in the circuit.

Update: Andrew Giessel pointed me to this collection of blog posts from Nature Methods on how to visualize biological data more generally. Recommended!

Genetically-encoded voltage sensors (Ace edition)

One of the biggest advances in recording from neurons has been the development of genetically-encoded calcium indicators. These allow neuroscientists to record the activity of large numbers of neurons belonging to a specific subclass. Be they fast-spiking interneurons in cortex or single, identified neurons in worms and flies, genetic encoding of calcium indicators has brought a lot to the field.

Obviously, calcium is just an indirect measure of what most neuroscientists really care about: voltage. We want to see the spikes. A lot of work has been put into making genetically-encoded voltage indicators, though the signal-to-noise has always been a problem. That is why I was so excited to see this paper from Mark Schnitzer’s lab:

[figure: Ace voltage-indicator traces from the paper]

I believe they are calling this voltage indicator Ace. It looks pretty good but, as they say, time will tell.

The chatter is that it bleaches quickly (usable on the order of a minute) and still has low signal-to-noise – look at the ~1% scale bar above. I have also heard there may be lots of photodamage. But, hey, those are spikes.


Stimulating deeper with near-infrared

Interesting idea in this post at Labrigger:

Compared to visible (vis) light, near infrared (NIR) wavelength scatters less and is less absorbed in brain tissue. If your fluorescent target absorbs vis light, then one way to use NIR is to flood the area with molecules that will absorb NIR and emit vis light. The process is called “upconversion”, since it is in contrast to the more common process of higher energy (shorter wavelength) light being converted into lower energy (longer wavelength) light. The effect looks superficially similar to two-photon absorption, but it’s a very different physical process.

Apparently you can now do this with optogenetics, and it works at much longer wavelengths than Chrimson or ReaChR. Reminds me a bit of Miesenbock’s Trp/P2X2 optogenetic system from 2003.
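The energy bookkeeping behind upconversion is easy to check with E = hc/λ. The 980 nm pump and 490 nm emission wavelengths below are illustrative assumptions, not values from any particular indicator:

```python
# Back-of-envelope check on upconversion energetics. Wavelengths are
# illustrative assumptions (a typical NIR pump and its half-wavelength
# emission), not from any specific upconversion material.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(wavelength_nm):
    """Energy of a single photon at the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

nir = photon_energy_eV(980)  # one low-energy NIR photon
vis = photon_energy_eV(490)  # one higher-energy visible photon

# Two NIR photons together carry the energy of one emitted vis photon,
# which is why upconversion must absorb more than one photon per
# emission event - superficially like two-photon excitation.
print(round(2 * nir, 2), round(vis, 2))
```

This is the sense in which upconversion “looks like” two-photon absorption from the outside: the energy books only balance if multiple NIR photons go in for each visible photon that comes out.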

Also on Labrigger: open source intrinsic imaging

[Invertebrate note: since ReaChR allows Drosophila researchers to activate neurons in intact, freely behaving flies, perhaps this might give us a method to inactivate with Halorhodopsin?]

Behold, The Blue Brain

The Blue Brain project releases their first major paper today and boy, it’s a doozy. Including supplements, it is over 100 pages long, with 40 figures and 6 tables. To properly understand everything in it, you have to go back and read a bunch of other papers they have released over the years that detail their methods. This is not a scientific paper: it’s a goddamn philosophical treatise on The Nature of Neural Reconstruction.

The Blue Brain Project – or should I say Henry Markram? It is hard to say where the two diverge – aims to simulate absolutely everything in a complete mammalian brain. Right now it sits at a middle ground: other simulations have replicated more neurons (Izhikevich had a model with 10^11 neurons of 21 subtypes), while at the other extreme MCell has completely reconstructed everything about a single neuron – down to the diffusion of single molecules – in a way that Blue Brain does not.

The focus of Blue Brain right now is a certain level of simulation that derives from a particular mindset in neuroscience. You see, people in neuroscience work at all levels: from the individual molecules to flickering ion channels to single neurons up to networks and then whole brain regions. Markram came out of Bert Sakmann’s lab (where he discovered STDP) and has his eye on the ‘classical’ tradition that stretches back to Hodgkin and Huxley. He is measuring distributions of ion channels and spiking patterns and extending the basic Hodgkin-Huxley model into tinier compartments and ever more fractal branching patterns. In a sense, this is swimming against the headwinds of contemporary neuroscience. While plenty of people are still doing single-cell physiology, new tools that allow imaging of many neurons simultaneously in behaving animals have reshaped the direction of the field – and what we can understand about neural networks.

Some very deep questions arise here: is this enough? What will this tell us and what can it not tell us? What do we mean when we say we want to simulate the brain? How much is enough? We don’t really know – though the answer to the first question is assuredly no – and we assuredly don’t know enough to even begin to answer the second set of questions.

[figure: morphological types (m-types)]

The function of the new paper is to collate in one place all of the data they have been collecting – and it is a trove. They report having recorded and labeled >14,000 (!!!!!) neurons from the somatosensory cortex of P14 rats, with full reconstruction of more than 1,000 of these neurons. That’s, uh, a lot. And they use a somewhat convoluted terminology to describe all of this, throwing around terms like ‘m-type’ and ‘e-type’ and ‘me-type’ to classify the neurons. It’s something, I guess.

[figure: electrical types (e-types)]

Since the neurons were taken from different animals at different times, they do a lot of inference to determine connectivity, ion channel conductance, etc. And that’s a big worry because – how many parameters are being fit here? How many channels are being missed? You get funny sentences in the paper like:

[We compared] in silico (ed: modeled) PSPs with the corresponding in vitro (ed: measured in a slice prep) PSPs. The in silico PSPs were systematically lower (ed: our model was systematically different from the data). The results suggested that reported conductances are about 3-fold too low for excitatory connections, and 2-fold too low for inhibitory connections.

And this worries me a bit; are they not trusting their own measurements when it suits them? Perhaps someone who reads the paper more closely can clarify these points.

They then proceed to run these simulated neurons and perform ‘in silico experiments’. They first describe lowering the extracellular calcium level and finding that the network transitions from a regularly spiking state to a variable (asynchronous) state. And then they go and do this experiment on biological neurons and get the same thing! That is a nice win for the model: they made a prediction and validated it.

On the other hand you get statements like the following:

We then used the virtual slice to explore the behavior of the microcircuitry for a wide range of tonic depolarization and Ca2+ levels. We found a spectrum of network states ranging from one extreme, where neuronal activity was largely synchronous, to another, where it was largely asynchronous. The spectrum was observed in virtual slices, constructed from all 35 individual instantiations of the microcircuit and all seven instantiations of the average microcircuit.

In other words, it sounds like they might be able to find everything in their model.

But on the other hand…! They fix their virtual networks and ask: do we see the specific changes in our network that experiments have seen in the past? And yes, generally they do. Are we allowed to wonder how many of the experiments and predictions they tried did not pan out? It would have been great to see a full-blown failure, to understand where the model still needs to be improved.

I don’t want to understate the sheer amount of work that was done here, or undersell the wonderful collection of data that they now have available. The models they make will be (already are?) available for anyone to download, and this is going to be an invaluable resource. This is a major paper, and rightly so.

On the other hand – what did I learn from this paper? I’m not sure. The network wasn’t really doing anything, it just kind of…spiked. It wasn’t processing structured information like an animal’s brain would, it was just kind of sitting there, occasionally having an epileptic fit (note that at one point they do simulate thalamic input into the model, which I found to be quite interesting).

This project has metamorphosed into a bit of a social conundrum for the field. Clearly, people are fascinated – I had three different people send me this paper prior to its publication, and a lot of others were quite excited and wanted access to it right away. And the broader Blue Brain Project has had a somewhat unhappy political history. A lot of people – like me! – are strong believers in computation and modeling, and would really like to see it succeed. Yet the chunk of neuroscience they have bitten off, and the way they have gone about it, lead many to worry. The field has been holding its breath a bit to see what Blue Brain was going to release – and I think it will need to hold its breath a bit longer.

Reference

Markram et al. (2015). Reconstruction and Simulation of Neocortical Microcircuitry. Cell.


Not your typical science models


Cell has an article showcasing animal model candidates beyond the typical C. elegans, Drosophila, mice, etc. Really it is just a list of people who use other models explaining why, but it is pretty cool to learn about what they are doing. They list Thellungiella sp., axolotls, naked mole rats, lampreys (which look terrifying), honey bees, bats, mouse lemurs (with which Tony Movshon famously trolled all rodent vision scientists), turquoise killifish, and songbirds. Because I love bees, here is the bee explanation:

Honey bees (Apis mellifera) provide remarkable opportunities for understanding complex behavior, with systems of division of labor, communication, decision making, and social aging/immunity. They teach us how social behaviors develop from solitary behavioral modules, with only minor “tweaking” of molecular networks. They help us unravel the fundamental properties of learning, memory, and symbolic language. They reveal the dynamics of collective decision making and how social plasticity can change epigenetic brain programming or reverse brain aging. They show us the mechanistic basis of trans-generational immune priming in invertebrates, perhaps facilitating the first vaccines for insects.

These processes and more can be studied across the levels of biological complexity—from genes to societies and over multiple timescales—from action potential to evolutionary. As models in neuroscience and animal behavior, honey bees have batteries of established research tools for brain/behavioral patterns, sensory perception, and cognition. Genome sequence, molecular tools, and a number of functional genomic tools are also available. With a relatively large-sized body (1.5 cm) and brain (1 mm3), this fascinating animal is, additionally, easy to work with for students of all ages.

Beekeeping practices date as early as the Minoan Civilization, where the bee symbolized a Mother Goddess. Today, we increasingly value honey bees as essential pollinators of commercial crops and for their ecosystem services. Honey bees have been called keepers of the logic of life. They truly are.

I would add mosquitoes, ants, deer mice, (prairie/etc) voles, cuttlefish, jellyfish and of course marmosets to the list.