Monday Open Question: what are the current controversies in neuroscience?

Shamelessly stolen from Marina Picciotto, who asked on twitter: what are the current controversies in neuroscience? The easy and eternal answer seems to be, are we doing fMRI properly?

Here are some other suggestions:

Do mirror neurons exist, and do they do what we think? Is the DSM useful? Are there real (biological) sex differences and should they be studied? Should we be using Bayesian statistics? What does sleep actually do? What is the role of parvalbumin- and somatostatin-positive interneurons? What is the role of hippocampus?

How many smells can a smelly person smell?

Who cares about invertebrates if they don’t even have a cortex?

Who cares about cerebellum if it isn’t even cortex? Also, does cerebellar LTD mediate motor learning? (TIL this is a controversy; I’m paying attention to the wrong cortex.)

What does the ACC do? (This is important for cognition. Probably.)

What does LIP do? Is it involved in decision-making?

People can survive with basically no cortex and appear fine. So what does cortex actually do?

Is the brain Bayesian? Should we care about the Bayesian Brain hypothesis?

Is the brain actually noisy or is that all signal?

How should we mathematically model the brain, and behavior?

Should we use animal models of whole disorders or just specific symptoms?

Does PKMzeta actually have a role in memory (does it even have a real role in LTP)?


Genetically-encoded voltage sensors (Ace edition)

One of the biggest advances in recording from neurons has been the development of genetically-encoded calcium indicators. These allow neuroscientists to record the activity of large numbers of neurons belonging to a specific subclass – be they fast-spiking interneurons in cortex or single, identified neurons in worms and flies. Genetic encoding of calcium indicators has brought a lot to the field.

Obviously, calcium is just an indirect measure of what most neuroscientists really care about: voltage. We want to see the spikes. A lot of work has been put into making genetically-encoded voltage indicators, though the signal-to-noise has always been a problem. That is why I was so excited to see this paper from Mark Schnitzer’s lab:

[Figure: voltage-imaging traces from the Ace indicator]

I believe they are calling this voltage indicator Ace. It looks pretty good but, as they say, time will tell.

The chatter is that it bleaches quickly (usable on the order of a minute) and still has a low signal-to-noise ratio – note the ~1% scale bar above. I have also heard there may be lots of photodamage. But, hey, those are spikes.

 

Friday Fun: Guess the correlation!

For all you who work with data, guess that correlation!

[Link: the Guess the Correlation game]

I am so bad that I keep dying in this game. Never trust me to blindly guesstimate a correlation by eye.
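If you want to practice without the website, it is easy to roll your own round. Here is a minimal sketch in Python – my own toy version using numpy and matplotlib, nothing to do with the actual game’s code:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng()

# Pick a hidden target correlation, then sample from a 2D Gaussian with that r
r_true = rng.uniform(-1, 1)
x, y = rng.multivariate_normal([0, 0], [[1, r_true], [r_true, 1]], size=200).T

plt.scatter(x, y, s=10)
plt.title("Guess the correlation, then close this window")
plt.show()

print(f"True r = {r_true:.2f}; sample r = {np.corrcoef(x, y)[0, 1]:.2f}")
```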

Happy Friday, everyone.

(via Konrad Kording)

Should small labs do fMRI experiments?

Over at Wiring The Brain, Kevin Mitchell asks whether it is worth it for small labs to do fMRI:

For psychiatric conditions like autism or schizophrenia I don’t know of any such “findings” that have held up. We still have no diagnostic or prognostic imaging markers, or any other biomarkers for that matter, that have either yielded robust insights into underlying pathogenic mechanisms or been applicable in the clinic.

A number of people suggested that if neuroimaging studies were expected to have larger samples and to also include replication samples, then only very large labs would be able to afford to carry them out. What would the small labs do? How would they keep their graduate students busy and train them?

I have to say I have absolutely no sympathy for that argument at all, especially when it comes to allocating funding. We don’t have a right to be funded just so we can be busy. If a particular experiment requires a certain sample size to detect an effect size in the expected and reasonable range, then it should not be carried out without such a sample. And if it is an exploratory study, then it should have a replication sample built in from the start – it should not be left to the field to determine whether the finding is real or not…. Such studies just pollute the literature with false positives.

At the end of the day, you are doing rigorous science or you are not.

I do have a silly little theory – which I keep meaning to write up – on the economics of science. In some cases, it may be worth doing underpowered studies as a cost-effective way to generate hypotheses. However, this depends on the cost of the experiment – and fMRI seems to fall way too far into the “too expensive per data point” category to be worth it.
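For a sense of the numbers involved, here is a quick back-of-envelope power calculation – my sketch, framed as a simple two-sample t-test; the effect sizes are illustrative guesses, not estimates from any particular fMRI literature:

```python
# Back-of-envelope power analysis (two-sample t-test, alpha = 0.05).
# Effect sizes are illustrative guesses, not estimates from real fMRI studies.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for d in (0.8, 0.5, 0.2):  # "large", "medium", "small" Cohen's d
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: need ~{n:.0f} subjects per group for 80% power")

# ...and the power that a typical small-lab sample actually buys you:
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20)
print(f"n = 20 per group at d = 0.5: power = {achieved:.2f}")
```

Small effects push the required samples into the hundreds of subjects – exactly the range where per-subject fMRI costs bite hardest.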

Commentary on a comment

If you want to see a masterclass in dissecting a paper, go read Tal Yarkoni’s discussion of “The dACC is selective for pain”:

That conclusion rests almost entirely on inspection of meta-analytic results produced by Neurosynth, an automated framework for large-scale synthesis of results from thousands of published fMRI studies. And while I’ll be the first to admit that I know very little about the anterior cingulate cortex, I am probably the world’s foremost expert on Neurosynth*—because I created it.

…The basic argument L&E make is simple, and largely hangs on the following observation about Neurosynth data: when we look for activation in the dorsal ACC (dACC) in various “reverse inference” brain maps on Neurosynth, the dominant associate is the term “pain”…The blue outline in panel A is the anatomical boundary of dACC; the colorful stuff in B is the Neurosynth map for ‘dACC’…As you can see, the two don’t converge all that closely. Much of the Neurosynth map sits squarely inside preSMA territory rather than in dACC proper…That said, L&E should also have known better, because they were among the first authors to ascribe a strong functional role to a region of dorsal ACC that wasn’t really dACC at all… Much of the ongoing debate over what the putative role of dACC is traces back directly to this paper.

…Localization issues aside, L&E clearly do have a point when they note that there appears to be a relatively strong association between the posterior dACC and pain. Of course, it’s not a novel point…Of course, L&E go beyond the claims made in Yarkoni et al (2011)—and what the Neurosynth page for pain reveals—in that they claim not only that pain is preferentially associated with dACC, but that “the clearest account of dACC function is that it is selectively involved in pain-related processes.”…Perhaps the most obvious problem with the claim is that it’s largely based on comparison of pain with just three other groups of terms, reflecting executive function, cognitive conflict, and salience**. This is, on its face, puzzling evidence for the claim that the dACC is pain-selective.

etc. etc. Traditionally, this type of critique would slowly be drafted as a short rebuttal to PNAS. But isn’t this better? Look how deep the critique is, look how well everything is defined and explained. And what is stopping the authors from directly interacting with the author of the critique to really get at the problem? The only thing left is some way for PubMed or Google Scholar to link these critiques directly to the paper.

Go read the whole thing and be learned.

Why are fish brains so small?

I’ll take “questions I didn’t realize I was interested in”. The deeper you go in the ocean, the smaller brains get. From the abstract:

Here, we test three hypotheses of brain size evolution using marine teleost fishes: the direct metabolic constraints hypothesis (DMCH), the expensive tissue hypothesis and the temperature-dependent hypothesis. Our analyses indicate that there is a robust positive correlation between encephalization and basal metabolic rate (BMR) that spans the full range of depths occupied by teleosts from the epipelagic (< 200 m), mesopelagic (200-1000 m) and bathypelagic (> 1000 m). Our results disentangle the effects of temperature and metabolic rate on teleost brain size evolution, supporting the DMCH. Our results agree with previous findings that teleost brain size decreases with depth; however, we also recover a negative correlation between trophic level and encephalization within the mesopelagic zone, a result that runs counter to the expectations of the expensive tissue hypothesis. We hypothesize that mesopelagic fishes at lower trophic levels may be investing more in neural tissue related to the detection of small prey items in a low-light environment.

In other words, there are metabolic constraints at greater ocean depths over and above the temperature dependence. And interestingly, fish that are lower on the food chain (at lower trophic levels) have relatively larger brains, possibly because it takes more difficult sensory computations to find prey in a sensory-deficient environment:

Although encephalization in marine fishes of the mesopelagic was partially explained by trophic level (Tables 2 and 3), this finding disagrees with expectations under the expensive tissue hypothesis. Rather than finding an increase in encephalization at higher trophic positions, our analysis supported an inverse relationship. This trend of increased brain size relative to body size at lower trophic positions may be partially explained by the increased sensory needs of planktonic feeders at depths below 200 m… Plankton feeders in particular tend to have greater eye and lateral line modifications in order to detect more minute prey quantities (Bleckmann, 1986; Coombs et al., 1988). While changes in brain morphology have been associated with epipelagic fishes living in turbid water…
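An aside on the method: “encephalization” here is essentially relative brain size – the residual from a log-log brain-versus-body regression. A toy sketch of that computation on fake data (the actual paper uses phylogenetically corrected regressions; this version ignores phylogeny entirely):

```python
# Toy encephalization analysis on FAKE data; illustrates the residual method only.
import numpy as np

rng = np.random.default_rng(0)
body = rng.lognormal(mean=3.0, sigma=1.0, size=100)   # fake body masses
depth = rng.uniform(0, 4000, size=100)                 # fake capture depths (m)
# Fake brains: allometric scaling with body, shrinking with depth, plus noise
brain = 0.07 * body**0.6 * np.exp(-2e-4 * depth) * rng.lognormal(0, 0.1, size=100)

# Encephalization = residual from the log-log allometric fit
slope, intercept = np.polyfit(np.log(body), np.log(brain), 1)
encephalization = np.log(brain) - (slope * np.log(body) + intercept)

print("correlation of encephalization with depth:",
      round(np.corrcoef(encephalization, depth)[0, 1], 2))
```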

(ht Neuroskeptic)

Neuroscience podcasts

I have a long drive to work each day so I listen to a lot of podcasts. I have been enjoying the new Unsupervised Thinking podcast from some computational neuroscience graduate students at Columbia. So far they have discussed: Blue Brain, Brain-Computer Interfaces, and Neuromorphic Computing. Where else would you find that?

I also found out that I got a shout-out on the Data Skeptic podcast (episode: Neuroscience from a Data Scientist’s perspective).

Update: I should also mention that I quite like the Neurotalk podcast. The grad students (?) interview neuroscientists who come to give talks at Stanford. Serious stuff. Raw Data was also recommended to me as up-my-alley but I have not yet had a chance to listen to it. YMMV.

Ben Carson is not a neuroscientist

[Photo by Gage Skidmore]

As every neuroscientist can tell you, most people don’t understand the difference between neuroscientists, neurosurgeons, and neurologists.

Neurologist: A medical doctor who diagnoses and treats diseases of the nervous system

Neurosurgeon: A medical doctor who slices your brain up in order to heal it

Neuroscientist: A scientist who studies how the nervous systems of all animals work. Most work at a level so abstract it seems pointless (but it isn’t!)

What does this mean? A neurologist listens to your symptoms and will try to figure out what has gone wrong in your brain; a neuroscientist tries to understand how the brain and nervous system work down to the finest detail, no matter how useless-seeming that detail might be; a neurosurgeon specializes in performing very technically challenging surgical procedures to cure disorders of the nervous system.

Ben Carson, a neurosurgeon and presidential candidate, published a tweet showcasing what he knows about neuroscience:

…the brain can process two million bits of information per second. It remembers everything you’ve ever seen, everything you’ve ever heard…

This is, if not wrong, then just plain old made up.

Let’s break this down:

“the brain can process two million bits of information per second”

Now two million bits per second certainly sounds like a lot! 2 million bits is what you might know as 2 megabits, or roughly 250 kilobytes (2,000,000 bits ÷ 8 = 250,000 bytes). For comparison, here is a picture of a Corgi in a Mario costume that is a little more than 200KB:

[Image: a Corgi in a Mario costume, a little over 200KB]

Is that too much information for you? Does it blow your mind??

It is actually really hard to calculate how much information a nervous system is ‘processing’. In fact, I can only find one paper that even attempts to answer a small part of that question: how much does the eye tell the brain? By recording from neurons in the retina, these scientists were able to estimate that one retina will transmit ~800KB/sec. This may be a bit of an overstatement [see (1) below for discussion], but obviously – the visual system is transmitting a lot of information.

But your eye is not the only thing that transmits information to the brain! You have ears, you can touch, you can sense how hungry you are or how sick you feel. All the while you are making decisions and thinking about the past and the future. Your brain is computing a lot.

In other words, while it may seem at first like ‘the brain can process two million bits of information per second’ is an overstatement, it is actually an understatement. And probably by a lot. But more importantly: we don’t know, we don’t have any clue or guess, and I have no idea where Ben Carson pulled this number from. It is plain old made up.
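For those keeping score, here is the arithmetic. The only inputs are the claim itself and the retinal estimate quoted above:

```python
# Sanity-check the numbers above (pure arithmetic).
claim_bits = 2_000_000                 # "two million bits per second"
print(f"The claim: {claim_bits / 8 / 1000:.0f} KB/s")          # ~250 KB/s

retina_bits = 800 * 1000 * 8           # ~800 KB/s per retina, from the paper above
print(f"One retina: {retina_bits / 1e6:.1f} Mbit/s")           # 6.4 Mbit/s
print(f"One retina exceeds the claim by {retina_bits / claim_bits:.1f}x")
```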

“It remembers everything you’ve ever seen, everything you’ve ever heard”

This makes everyone sound a bit like Santa Claus! In reality, what we know points in the opposite direction. Although it is popular to describe the brain as a computer, it is not. From the moment of perception, the brain begins by filtering filtering filtering. Your eye receives a barrage of light – and much of this is filtered away. This image gets sent to the brain – and much of this is filtered away. Your mind does its best to infer what is occurring in the world – and in the process, much is filtered away or simply assumed. It is tragically easy to force someone to perceive something that is not there. In other words, right now you cannot remember everything that you saw two seconds ago! It is simply not available to your conscious mind.

But we can be a little generous – what about memories? Could we at least recall everything we consciously perceived? Everyone knows that is not true: who can remember being a baby? And even as adults we do not remember everything. Memories are not just photos to be peered at; they are a dense web of associations. These associations can be activated together, but always in the context of whatever else is going on in the brain (and this is not even getting into totally false memories).

This is a particular problem with eyewitness testimony. It is pretty well-known at this point that eyewitness memory is unreliable and prone to manipulation. Simply asking a witness to describe someone seems to modify the memory – leaving the original gone forever.

The Inside Out view of memory as a discrete collection of little movies is wrong – though even in this movie they know that we can forget things forever! – and is based on an incorrect view of how the brain works. Memories are not crystalline balls ready to be sent up to consciousness at any moment, but a web of connections that can easily be rewired. Based on what we know about learning and memory, Ben Carson’s quote is wrong, and almost certainly made up.

From everything we know, Ben Carson is a phenomenal neurosurgeon. But Ben Carson is not a neuroscientist.

Addendum:

(1) How much information can the nervous system process? This is a really interesting question! It may seem straightforward, but this question actually has a lot of different interpretations. Let’s just take the example of our two eyes, each looking out onto the world. The eyes do not see totally different parts of the world, but an overlapping scene; just close each eye in succession and you will see that much of the view is the same.

In a sense, this means they are processing the same information about the world. Both can see the laptop (or phone/etc) in front of you and so much of what they see is redundant. If the left eye sees, say, 1MB of visual information, and the other does as well, does that mean the two eyes are processing 2MB of information? Or are they simply processing 1.25MB in parallel (the other 0.75 MB being the same thing in each eye – redundant overall)?
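The bookkeeping in that last question is just inclusion-exclusion – written out, with the made-up numbers from the paragraph above:

```python
# Inclusion-exclusion on the made-up numbers from the paragraph above (MB/s)
left, right, shared = 1.0, 1.0, 0.75

print(f"Counting the overlap twice: {left + right:.2f} MB")           # 2.00
print(f"Counting each bit once:     {left + right - shared:.2f} MB")  # 1.25
```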

Stimulating deeper with near-infrared

Interesting idea in this post at Labrigger:

Compared to visible (vis) light, near infrared (NIR) wavelength scatters less and is less absorbed in brain tissue. If your fluorescent target absorbs vis light, then one way to use NIR is to flood the area with molecules that will absorb NIR and emit vis light. The process is called “upconversion“, since it is in contrast to the more common process of higher energy (shorter wavelength) light being converted into lower energy (longer wavelength) light. The effect looks superficially similar to two-photon absorption, but it’s a very different physical process.

Apparently you can now do this with optogenetics, at much longer wavelengths than Chrimson or ReaChR use. Reminds me a bit of Miesenbock’s Trp/P2X2 optogenetics system from 2003.
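A back-of-envelope on why upconversion takes more than one photon: a single NIR photon simply doesn’t carry enough energy to make a visible one. The 980 nm / 540 nm wavelengths below are my assumptions (typical values for lanthanide upconversion materials), not numbers from the Labrigger post:

```python
# Photon energy check: E = h * c / wavelength.
# 980 nm (NIR in) and 540 nm (green out) are assumed, typical values.
H = 6.626e-34   # Planck constant (J*s)
C = 3.0e8       # speed of light (m/s)
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

nir, vis = photon_energy_ev(980), photon_energy_ev(540)
print(f"NIR photon: {nir:.2f} eV; visible photon: {vis:.2f} eV")
print(f"Visible/NIR energy ratio: {vis / nir:.1f} -> need at least 2 NIR photons")
```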

Also on Labrigger: open source intrinsic imaging

[Invertebrate note: since ReaChR allows Drosophila researchers to activate neurons in intact, freely behaving flies, perhaps this might give us a method to inactivate with Halorhodopsin?]