Should small labs do fMRI experiments?

Over at Wiring The Brain, Kevin Mitchell asks whether it is worth it for small labs to do fMRI:

For psychiatric conditions like autism or schizophrenia I don’t know of any such “findings” that have held up. We still have no diagnostic or prognostic imaging markers, or any other biomarkers for that matter, that have either yielded robust insights into underlying pathogenic mechanisms or been applicable in the clinic.

A number of people suggested that if neuroimaging studies were expected to have larger samples and to also include replication samples, then only very large labs would be able to afford to carry them out. What would the small labs do? How would they keep their graduate students busy and train them?

I have to say I have absolutely no sympathy for that argument at all, especially when it comes to allocating funding. We don’t have a right to be funded just so we can be busy. If a particular experiment requires a certain sample size to detect an effect size in the expected and reasonable range, then it should not be carried out without such a sample. And if it is an exploratory study, then it should have a replication sample built in from the start – it should not be left to the field to determine whether the finding is real or not….Such studies just pollute the literature with false positives.

At the end of the day, you are doing rigorous science or you are not.

I do have a silly little theory – which I keep meaning to write up – on the economics of science. In some cases, it may be worth doing underpowered studies as a cost-effective way to generate hypotheses. However, that calculus depends on the cost of the experiment – and fMRI falls too squarely into the “too expensive per data point” category for the trade-off to work.
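To make that trade-off concrete, here is a minimal back-of-the-envelope sketch (my numbers, not anyone’s actual budget): a standard power calculation for a two-group comparison at a modest effect size, multiplied by hypothetical per-subject costs for scanner time versus a cheap behavioral session.

```python
# Back-of-the-envelope sketch of the "cost per data point" argument.
# The effect size and per-subject costs are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)

cost_per_scan = 600     # assumed cost of one fMRI session, in dollars
cost_per_behav = 15     # assumed cost of one behavioral session, in dollars

print(f"~{n_per_group:.0f} subjects per group for 80% power at d = 0.5")
print(f"fMRI version:       ~${2 * n_per_group * cost_per_scan:,.0f}")
print(f"behavioral version: ~${2 * n_per_group * cost_per_behav:,.0f}")
```

At those (made-up) prices, the same statistical power costs roughly forty times more per hypothesis tested in the scanner, which is the whole point of the “cost per data point” framing.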


Commentary on a comment

If you want to see a masterclass in dissecting a paper, go read Tal Yarkoni’s discussion of “The dACC is selective for pain”:

That conclusion rests almost entirely on inspection of meta-analytic results produced by Neurosynth, an automated framework for large-scale synthesis of results from thousands of published fMRI studies. And while I’ll be the first to admit that I know very little about the anterior cingulate cortex, I am probably the world’s foremost expert on Neurosynth*—because I created it.

…The basic argument L&E make is simple, and largely hangs on the following observation about Neurosynth data: when we look for activation in the dorsal ACC (dACC) in various “reverse inference” brain maps on Neurosynth, the dominant associate is the term “pain”…The blue outline in panel A is the anatomical boundary of dACC; the colorful stuff in B is the Neurosynth map for ‘dACC’…As you can see, the two don’t converge all that closely. Much of the Neurosynth map sits squarely inside preSMA territory rather than in dACC proper…That said, L&E should also have known better, because they were among the first authors to ascribe a strong functional role to a region of dorsal ACC that wasn’t really dACC at all… Much of the ongoing debate over what the putative role of dACC is traces back directly to this paper.

…Localization issues aside, L&E clearly do have a point when they note that there appears to be a relatively strong association between the posterior dACC and pain. Of course, it’s not a novel point…Of course, L&E go beyond the claims made in Yarkoni et al (2011)—and what the Neurosynth page for pain reveals—in that they claim not only that pain is preferentially associated with dACC, but that “the clearest account of dACC function is that it is selectively involved in pain-related processes.”…Perhaps the most obvious problem with the claim is that it’s largely based on comparison of pain with just three other groups of terms, reflecting executive function, cognitive conflict, and salience**. This is, on its face, puzzling evidence for the claim that the dACC is pain-selective.

etc. etc. Traditionally, this type of critique would slowly be drafted as a short rebuttal in PNAS. But isn’t this better? Look how deep the critique goes, and how well everything is defined and explained. And what is stopping the authors from interacting directly with the author of the critique to really get at the problem? The only thing left is some way for PubMed or Google Scholar to link critiques like this directly to the paper.

Go read the whole thing and be learned.

Monday open question: does fMRI activation have a consistent meaning?

Reports from fMRI rely, somewhat implicitly, on a rate-coding model of populations of neurons in the brain: more neural activity means more activation, and similar activation is assumed to mean roughly the same thing across contexts. Useful, but misleading. How much should we trust the interpretation that an area showing similar activation during two different behaviors is doing the same thing? Neuroskeptic covers one such finding:

The authors are Choong-Wan Woo and colleagues of the University of Colorado, Boulder. Woo et al. say that, based on a new analysis of fMRI brain scanning data, they’ve found evidence inconsistent with the popular theory that the brain responds to the ‘pain’ of social rejection using the same circuitry that encodes physical pain. Rather, it seems that although the two kinds of pain do engage broadly the same areas, they do so in very different ways.

Roughly, they use a cool new statistical technique to measure activity in a more oblique way: patterns of activity across voxels carry information that the overall level of activation in a region does not.
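As a toy illustration of that idea – not the authors’ actual pipeline, just simulated data – here is a sketch in which two conditions produce identical mean activation in a ‘region’ but different spatial patterns, so a simple pattern classifier can tell them apart while the regional average cannot.

```python
# Toy demonstration: same mean "activation", different multivoxel pattern.
# Simulated data only; this is not the analysis used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
labels = np.repeat([0, 1], n_trials // 2)

# Condition 1 raises half the voxels and lowers the other half by the
# same amount, so the regional mean is unchanged relative to condition 0.
delta = np.where(np.arange(n_voxels) < n_voxels // 2, 0.5, -0.5)
data = rng.normal(1.0, 1.0, size=(n_trials, n_voxels)) + labels[:, None] * delta

print("regional mean, condition 0 vs 1:",
      data[labels == 0].mean().round(2), data[labels == 1].mean().round(2))
print("pattern-classifier accuracy:",
      cross_val_score(LogisticRegression(max_iter=1000), data, labels, cv=5).mean().round(2))
```

The averages are indistinguishable while the classifier is near-perfect – which is exactly why “same area, same activation” does not have to mean “same computation.”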

The basic question here is: given that we know small regions can have multiple ‘cognitive’ meanings depending on which parts of the wider network – or which specific neurons within the region itself – are active, how much can we compare ‘activity’ signals between (or even within!) behaviors?

Obviously sometimes it will be entirely fine. Other times it won’t. Is there an obvious line?

MRI now for dopamine?

The Jasanoff lab has been working on improving MRI for a while, using such cool terms as ‘molecular fMRI’. They are really attempting to push the technology by designing molecular agents to help with the imaging. For instance, they have sensors that can respond to kinase activity or to amines like dopamine.

MRI works by applying powerful magnetic fields to a tissue such as the brain and measuring the time it takes for molecules in that tissue to ‘relax’ back to their previous state. To detect molecules such as dopamine, they modified magnetic proteins to bind specifically to those molecules. The relaxation occurs in a specific ‘communication channel’ called T1, as opposed to the T2 ‘channel’ that is used to detect changes in blood flow in conventional fMRI. Since the proteins relax at different rates depending on whether they are bound or unbound, MRI can be used to measure when there is more or less dopamine in the tissue.
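A minimal sketch of how that could translate into a measurable signal, assuming the standard relaxivity model (relaxation rates add linearly with agent concentration) and entirely illustrative numbers for the sequence parameters and the bound/unbound relaxivities:

```python
# Sketch of a relaxivity-based sensor's effect on a T1-weighted signal.
# All numbers are illustrative assumptions, not values from the paper.
import numpy as np

TR = 0.05            # repetition time, seconds (assumed)
T1_tissue = 1.5      # baseline tissue T1, seconds (assumed)
r1_unbound = 1.0     # sensor relaxivity when free, 1/(mM*s) (assumed)
r1_bound = 0.2       # lower relaxivity when dopamine occupies the site (assumed)
sensor_conc = 0.5    # local sensor concentration, mM (assumed)

def t1w_signal(r1_sensor):
    """Saturation-recovery T1-weighted signal: S ~ 1 - exp(-TR * R1)."""
    R1 = 1.0 / T1_tissue + r1_sensor * sensor_conc   # relaxation rates add
    return 1.0 - np.exp(-TR * R1)

low_da, high_da = t1w_signal(r1_unbound), t1w_signal(r1_bound)
print(f"signal with little dopamine: {low_da:.4f}")
print(f"signal with more dopamine:   {high_da:.4f} ({100 * (high_da - low_da) / low_da:+.1f}%)")
```

The same toy model also makes the concentration issue below obvious: the size of the change scales with sensor_conc, so uneven delivery of the sensor changes the apparent dopamine signal.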

Although I’ve been hearing about these sensors for a few years (the dopamine one came out in a paper four years ago, the kinase one six), I hadn’t seen a paper that really used them until now. The Jasanoff lab has now shown that if you stimulate the nerves that release dopamine, their sensor can indeed detect it. The problem is that they have to inject the sensor directly into the brain. This means, first of all, that they probably can’t measure dopamine activity across the whole brain with this technique. I’m not sure, but I imagine they can image how much sensor is present at any given point? Still, that concentration is going to affect the signal they get. Further, someone suggested that because the sensor is large and polar, it won’t cross the blood-brain barrier, so it isn’t a plausible way to image dopamine release in humans. The field will just have to stick with PET imaging for now.

Finally, a personal complaint: they kept claiming they were measuring ‘phasic’ (i.e., transient) dopamine activity. Although they were stimulating the dopamine neurons phasically, I didn’t see any control measuring the tonic level of dopamine! I’m not sure I would have let them get away with that if I were a reviewer. Still, it’s a cool technique with a lot of potential in the years ahead. It should be exciting to see how it gets developed.

Unrelated, but the Jasanoff lab page claims they are doing MRI in flies. In flies! But I can’t find any papers that do this; anyone know about that?

Can a machine tell us about beauty?

Trapped on a plane flying to Salt Lake City, I got to thinking about the recent article on how ‘the same brain centers that appreciate art were being activated by beautiful maths’. In a caffeine-fueled binge, I started writing a purple prose-filled essay on the subject. Clark Ashton Smith would be proud:

Our brain works through a series of chemical messaging systems: payloads of neurotransmitters cross synapses, ions whizz through directly-connected gap junctions, molecular cascades tumble through cells. And on a gross level we have large chunks of grubby grey matter whose fluctuating electrical potentials draw in blood when we see beauty. Yet the phenomenon of beauty is not solely based on the level of blood flow in our brains; rather, it is the precise matrix of neurons and proteins and peptides that are in flux at the right moment that creates our emergent feelings of aesthetics. The beauty of a sunset is not the beauty of literature is not the beauty of an equation, despite what our burbling blood whispers to the thrumming MRI machines.

Anyway, despite the ‘poetics’, the point is real. There is a lot of cynicism in certain corners of the neuroscience community about the utility of fMRI. It certainly isn’t helped by dead-salmon studies of the ilk that Neuroskeptic or Neurocritic often point out. But that doesn’t mean fMRI isn’t useful! Because of the reward function in science, labs are motivated to oversell their findings – and the media et al. help them get away with it, because they don’t really understand what’s going on and like pretty pictures of brains. Yet even when the result is simply that some area of the brain ‘lights up’ to some stimulus, that still tells us something about the underlying circuitry, and where to go looking for more detail.

RIP Jack Belliveau, first person to use fMRI to measure brain activity

Sad news that Jack Belliveau has passed away at age 55:

Dr. Belliveau tried a different approach. He had developed a technique to track blood flow, called dynamic susceptibility contrast, using an M.R.I. scanner that took split-second images, faster than was usual at the time. This would become a standard technique for assessing blood perfusion in stroke patients and others, but Dr. Belliveau thought he would try it to spy on a normal brain in the act of thinking or perceiving.

“He went out to RadioShack and bought a strobe light, like you’d see in a disco,” said Dr. Bruce Rosen, director of the Martinos Center and one of Dr. Belliveau’s advisers at the time. “He thought the strobe would help image the visual areas of the brain, where there was a lot of interest.”

Dr. Belliveau took images of the brain while volunteers watched the strobe, then compared those readings with images taken while the strobe was off, subtracting one from the other.

“It didn’t work,” Dr. Rosen said. “He got nothing.”

He tried again, finding new volunteers and this time outfitting them with goggles that displayed a checkerboard pattern. “That did it,” Dr. Rosen said. “The visual areas lighted up beautifully.”

On Nov. 1, 1991, the journal Science published the findings in a paper by Dr. Belliveau and his colleagues at Massachusetts General Hospital.

So young, and to die from complications of gastroenteritis is scary.

fMRI gets a lot of flak because it has a tendency to be oversold. I just wrote an overwrought piece on Medium about that very topic, and why for all the criticisms it is a crucial piece in the neuroscience puzzle (more on that tomorrow.)

Deciding about deciding

In the field of decision-making, a typical laboratory experiment goes something like this: give a subject a choice between two options, let them decide, then make them do it again. Put a novel variation on the way the decision is made and BAM, you’ve got yourself a little paper! Mostly the decisions are something akin to choosing between a picture of a cake and a picture of moldy cheese. But a more realistic decision process might involve deciding whether the cake and the moldy cheese are the best one can get; maybe you should look for something better! One (might) call this a foraging decision, something that has been studied extensively in other contexts. Let’s look at how the brain represents this decision.

The setup of the first experiment is a bit tricky. Subjects were shown two rewards they could choose between, alongside a set of other rewards from which new options could be drawn at random. In the initial ‘foraging’ round, they decided whether to keep the two rewards or to get two new (random) ones for a small price. This was repeated until they were satisfied with the two options, at which point they moved to a ‘decision’ round where they chose between the two rewards. It is not terribly surprising that subjects required a higher expected value from the ‘foraging’ option in order to choose it. The authors call this their ‘foraging readiness’, but it would be more accurate to call it their level of risk aversion: it has been known for a long time that people prefer sure options over risky ones, even when a risk-neutral rational agent would be indifferent. I guess ‘risk aversion’ is a less sexy phrase, though.
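To see why risk aversion alone predicts that premium, here is a minimal sketch with an assumed concave utility function and made-up payoffs: a gamble with the same expected value as a sure option has lower expected utility, so the risky ‘forage’ option has to offer more before it gets chosen.

```python
# Toy risk-aversion calculation; utility function and payoffs are assumptions.
import numpy as np

def utility(x):
    return np.sqrt(x)   # any concave utility gives the same qualitative result

sure_reward = 10.0
gamble = np.array([2.0, 18.0])   # 50/50 gamble with the same expected value

print("expected values:   ", sure_reward, gamble.mean())
print("expected utilities:", round(utility(sure_reward), 2), round(utility(gamble).mean(), 2))
# The gamble's expected utility is lower, so a risk-averse chooser demands
# a higher expected value before picking the risky 'forage' option.
```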

The authors zeroed in on the anterior cingulate cortex (ACC). Like pretty much everything that comes out of fMRI and cognitive studies, there is a lot of controversy about what exactly the ACC is doing (this isn’t a ding on fMRI or cognitive studies; it’s just really hard). Here, the researchers found that activity in ACC was positively correlated with the expected value of the foraging option and negatively correlated with the expected value of the binary decision. The BOLD signal in ACC predicted how many times a subject would keep searching, as well as how the subject weighted the expected value of the foraging option. And that last point is important! Even though the researchers knew that the two new options would be drawn with equal probability, the subjects did not know that. Or at least, they did not know they could trust that information from researchers, who are notoriously unreliable in what they tell their subjects. So the signal probably represents some measure of the subjects’ posterior probability distribution over options, as well as how much they valued risky gains and losses, all convolved with the expected reward of each option.

Another recent paper looked at a visual task in monkeys and skipped the fMRI step entirely, putting electrodes directly into the dorsal region of ACC (dACC). Monkeys could saccade between patches that gave a continuous reward that decreased over time. They then faced a real foraging decision: when do you leave a depleted patch to find a new source of reward? Neurons in dACC increased their firing rate as the monkeys approached this decision. The speed at which the firing rate rose was related to the travel time to a new patch (the cost of moving to that patch). The increase continued until it reached a threshold related to the relative value of leaving the patch.
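Read as a rise-to-threshold process, that description has a simple cartoon form. In the sketch below (illustrative parameters only, not fit to the recordings), the ramp is shallower when the next patch is farther away, so the threshold is reached later and the monkey stays in the current patch longer – the usual patch-foraging prediction.

```python
# Cartoon of a rise-to-threshold account of patch leaving.
# The threshold, baseline, and slope rule are illustrative assumptions.
threshold = 30.0    # firing rate (spikes/s) at which the patch is left (assumed)
baseline = 10.0     # firing rate at patch entry (assumed)

def time_to_leave(travel_time_s, k=20.0):
    """Assume the ramp's slope shrinks with travel time to the next patch."""
    slope = k / travel_time_s              # spikes/s per second spent in the patch
    return (threshold - baseline) / slope  # time for the ramp to hit threshold

for travel in (2.0, 5.0, 10.0):
    print(f"travel time {travel:4.1f} s -> leave after ~{time_to_leave(travel):.1f} s in the patch")
```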

The authors are clear that the dACC signal by itself is not sufficient for a leaving decision; an observer would have to get information from other regions to determine where the threshold for leaving sits. But the data strongly suggest that dACC is coding the relative value of leaving a patch.

So what do the two studies together tell us about how ACC helps us make a decision? The first paper tells us that ACC represents the predicted cost of finding new options. That calculation probably folds in the predicted probability distribution over the available options, along with how many times (how long) someone is willing to keep searching for a better one. The second paper is in broad agreement: dACC represents the relative expected value of leaving, though that signal on its own is not enough to tell the brain when to act. It does, however, track the maximum cost the brain is currently willing to bear to find a new option, just as the fMRI study shows.

These two papers are great together: they show (1) how fMRI can be useful and (2) how differently the same question gets framed in different subfields of neuroscience.

References

Neural mechanisms of foraging.  Kolling, Behrens, Mars, Rushworth.  Science (2012).  DOI: 10.1126/science.1216930

Neuronal basis of sequential foraging decisions in a patchy environment.  Hayden, Pearson, Platt.  Nature Neuroscience (2011).  DOI: 10.1038/nn.2856
