MRI now for dopamine?

The Jasanoff lab has been working on improving MRI for a while, using such cool terms as ‘molecular fMRI’. They are really attempting to push the technology by designing molecular agents to help with the imaging. For instance, they have sensors that can respond to kinase activity or to amines like dopamine.

MRI works by applying powerful magnetic fields to a tissue such as the brain and measuring how long it takes for the molecules in that tissue to ‘relax’ back to their previous state. In order to detect molecules such as dopamine, they modified magnetic proteins to bind specifically to those molecules. The relaxation they measure occurs in a specific ‘communication channel’ called T1, as opposed to the T2 ‘channel’ that is used to detect changes in blood flow in fMRI. Since the proteins have different relaxation times depending on whether they are bound or unbound, MRI can be used to measure when there is more or less dopamine in the tissue.
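To make the logic concrete, here is a minimal sketch, in Python, of how a binding-dependent sensor would change a T1-weighted signal. The relaxivity numbers, sensor concentration, and repetition time are all invented for illustration; the only thing taken from the description above is that the bound and unbound sensor relax the tissue at different rates.

```python
import math

# Hypothetical longitudinal relaxation contributions -- these numbers are
# made up purely for illustration, not taken from the paper.
R1_TISSUE = 1.0    # baseline tissue relaxation rate (1/s)
R1_UNBOUND = 1.2   # contribution per mM of dopamine-free sensor
R1_BOUND = 0.4     # contribution drops when dopamine occupies the binding site

def t1_weighted_signal(bound_fraction, sensor_conc_mM=0.5, TR=0.5):
    """Predicted T1-weighted signal for a given fraction of dopamine-bound sensor."""
    # Relaxation rates add: tissue baseline plus bound and unbound sensor pools.
    r1 = (R1_TISSUE
          + sensor_conc_mM * bound_fraction * R1_BOUND
          + sensor_conc_mM * (1.0 - bound_fraction) * R1_UNBOUND)
    # Standard saturation-recovery signal equation for a T1-weighted acquisition.
    return 1.0 - math.exp(-TR * r1)

for frac in (0.0, 0.5, 1.0):
    print(f"bound fraction {frac:.1f} -> signal {t1_weighted_signal(frac):.3f}")
```

With these made-up numbers, more dopamine (a larger bound fraction) means a smaller measured signal; the point is just that the binding state shows up in the T1-weighted image.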

Although I’ve been hearing about these sensors for a few years (the dopamine one came out in a paper four years ago, the kinase one six), I hadn’t seen a paper that really used them until now. The Jasanoff lab has now shown that if you stimulate the nerves that release dopamine, their sensor can indeed detect it. The problem is that they have to inject the sensor directly into the brain. This means, first of all, that they probably can’t measure dopamine activity across the whole brain with this technique. I imagine they can image how much sensor is present at any given point, but that local concentration is also going to affect the signal they get. Further, someone suggested that because the sensor is large and polar it won’t cross the blood-brain barrier, so this isn’t a plausible way to image dopamine release in humans. The field will just have to stick with PET imaging for now.

Finally, a personal complaint: they kept claiming they were measuring ‘phasic’ (i.e., transient) dopamine activity. Although they were stimulating the dopamine neurons phasically, I didn’t see any control measuring the tonic level of dopamine! I’m not sure I would have let them get away with that if I were a reviewer. Still, it’s a cool technique with a lot of potential in the years ahead. It should be exciting to see how it gets developed.

Unrelated, but the Jasanoff lab page claims they are doing MRI in flies. In flies! But I can’t find any papers that do this; anyone know about that?


Monday question: Do neuromodulators have a unifying role?

We would never say that glutamate or GABA, the “basic” neurotransmitters, have a particular function. So why do we attempt to give modulators like dopamine or oxytocin a defined role? Dopamine, for instance, is not only spread across the brain, but is also found in the retina!

Or take oxytocin, “the love molecule”. It is involved not only in social behaviors but also in cross-modal plasticity and in modulating hippocampal fast-spiking interneurons.

Does either dopamine or oxytocin – or any other neuromodulator, for that matter – have a unifying function? Or did the brain evolve multiple independent uses for these modulators?

The sheep and the lion: dopamine differences between animals

There was a recent comment in Nature Reviews Neuroscience on differences in dopaminergic systems between animals. It has a few great paragraphs that look like this:

Species differences should also be considered when discussing dopaminergic projections to the cerebral cortex (Fig. 1b). In rodents, mesocortical dopamine projections arise almost exclusively from the ventral tegmental area (VTA) and terminate mostly in prefrontal regions. The primary motor cortex, where a large population of PT neurons is located, is only sparsely innervated by these fibres. By contrast, primates (including humans) have gained a substantial dopaminergic innervation of M1 and related motor cortices; this innervation has arisen in large part from the substantia nigra pars compacta (SNc) and the retrorubral area (RRA), thereby suggesting a prominent role of cortical dopamine on PT neurons in the M1 of primates, but not in rodents.

So if that’s your thing, you should go read it.

Decision Theory Journal Club: Our brains are perfect machines

A few of us have started a Decision Theory journal club where we plan on reading papers from a variety of fields that examine how decisions are made. We have people from neuroscience, economics, and cognitive science participating (so far), including people joining through Google+ hangouts, which will hopefully lead to some productive discussions. I’m a couple of papers behind, but I hope to post summaries of what we have been reading.

Our first paper follows an idea that is common in the psychological literature on how someone accumulates evidence: the noisy evidence accumulator (diffusion to a boundary). Let’s say you hear a loud noise and have to decide whether to look to your left or your right. If the noise is almost directly behind you, it can be difficult to tell which way to look. Both of your ears are going to be hearing something loud, and as the sound waves crash about the room the evidence gets even noisier: sometimes one ear will be louder than the other. But one ear is usually louder than the other, and when you’ve accumulated enough evidence that one ear is hearing something louder, your head swivels and your decision is made.
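Here is a minimal sketch of that idea, assuming nothing beyond the description above: noisy samples of ‘left minus right’ evidence are summed until the running total crosses a bound, and that bound crossing is the decision. The drift, noise, and bound values are arbitrary.

```python
import random

def diffusion_to_bound(drift=0.1, noise=1.0, bound=10.0, max_steps=10_000):
    """Accumulate noisy evidence until it crosses +bound ('left') or -bound ('right')."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        # Each time step contributes the true signal (drift) plus Gaussian noise.
        evidence += drift + random.gauss(0.0, noise)
        if evidence >= bound:
            return "left", step
        if evidence <= -bound:
            return "right", step
    return "undecided", max_steps

choices = [diffusion_to_bound()[0] for _ in range(1000)]
print("fraction 'left':", choices.count("left") / len(choices))
```

With a small positive drift (the sound really is slightly louder on the left), most runs end at the ‘left’ bound, but the noise means some decisions are wrong and the time to decide varies from trial to trial.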

We can do essentially the same thing with rats. They can be put into a chamber where clicks randomly come from speakers to their left and to their right, and if they turn in the direction with the most clicks, they get a reward. Rats are fairly good at this – as are humans. One interesting difference, though, is that when humans are certain, they will always go in the direction of the most clicks. Rats, on the other hand, top out at about 90%; I guess they don’t trust the experiment as much as people do and want to explore their environment more!

But we’re interested in how this decision is made, so we can go back to our noisy evidence accumulator and see if that can explain how well the decision is made. We can also throw in all sorts of options: is the rats’ memory a bit leaky? Is there internal noise in the brain? Is there noise in the environment? And so on. It turns out that the headline of the paper says it all: rats and humans are optimal evidence accumulators. There is no internal noise. There is no forgetting. Every bit of evidence that is given to the animals is in there, waiting to be used.
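To see what those ‘options’ look like as model parameters, here is a sketch of a click-by-click accumulator with a memory leak and separate sensory and accumulator noise terms, in the spirit of the model in the paper. The parameter names and values are placeholders of my own, not the fitted ones.

```python
import random

def accumulate_clicks(left_clicks, right_clicks, leak=0.0,
                      sensory_noise=0.0, accumulator_noise=0.0, dt=0.01):
    """Run a leaky, noisy accumulator over trains of left/right click times (seconds)."""
    a = 0.0  # accumulated evidence: positive favors 'right', negative favors 'left'
    t = 0.0
    end = max(left_clicks + right_clicks, default=0.0) + dt
    while t < end:
        # Leak: with leak > 0 old evidence decays; leak = 0 is perfect memory.
        a += -leak * a * dt + random.gauss(0.0, accumulator_noise) * dt ** 0.5
        # Each click adds +/-1 of evidence, corrupted by per-click sensory noise.
        for c in right_clicks:
            if t <= c < t + dt:
                a += 1.0 + random.gauss(0.0, sensory_noise)
        for c in left_clicks:
            if t <= c < t + dt:
                a -= 1.0 + random.gauss(0.0, sensory_noise)
        t += dt
    return "right" if a > 0 else "left"

# The 'optimal' accumulator is the special case: no leak, no internal noise.
print(accumulate_clicks(left_clicks=[0.1, 0.3], right_clicks=[0.2, 0.4, 0.5]))
```

Fitting which of these parameters need to be non-zero to explain the behavior is, roughly, how you can ask whether the animals are leaky, noisy, or (as the headline claims) essentially perfect.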

Fortunately, results from a different paper can explain to us what might be happening.  There are direct connections between cortical auditory neurons and neurons in the striatum – an area of the brain that receives dopamine and is involved in selecting the best action to take.  Activating these auditory neurons signals the striatum and makes the animal more likely to go in whichever direction the experimenter wants.  Inhibiting these neurons has the opposite effect.  It’s quite possible that the auditory input is interacting with this dopamine system to keep track of where an animal wants to go – and what decision it wants to make.

As for the optimality of the animals, well, that’s at least the headline, and it would be great if that were always true.  In actuality, there’s a large population of rats that behave as sub-optimal evidence accumulators.  Although they don’t discuss this in the paper, to me this is the most exciting result (although the lack of neural noise ranks up there, too; our brains are machines).  Of course you’d expect evolution to produce animals that, well, make good decisions.  So why are there any animals that do show significant neural noise? Why is there such large variability in forgetfulness?  Although the majority of animals are almost perfect, many are not.  Hopefully in the future we will be able to explain why it’s good to not always be perfect.

References
Brunton, B., Botvinick, M., & Brody, C. (2013). Rats and Humans Can Optimally Accumulate Evidence for Decision-Making Science, 340 (6128), 95-98 DOI: 10.1126/science.1233912

Znamenskiy, P., & Zador, A. (2013). Corticostriatal neurons in auditory cortex drive decisions during auditory discrimination Nature, 497 (7450), 482-485 DOI: 10.1038/nature12077


Neuroscience is useful: NBA edition

[Photo: Antoine Walker]

Although I wasn’t able to attend it, Yonatan Loewenstein apparently gave a talk at a Cosyne workshop about decision-making that related it to NBA players.  I was curious to find the paper, and while ultimately I could not, I did find a different one of his that was interesting.  One of the most commonly used methods in neuroscience to model learning is reinforcement learning.  In reinforcement learning, you learn from the consequences of your actions; intuitively, a reward will act to reinforce a behavior.  Although inspired by psychological theories of learning, it has gained support in neuroscience from the activity patterns of dopamine cells, which provide exactly the learning error signal you’d expect.
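The core of the idea fits in a few lines: the estimated value of an action is nudged toward the outcome in proportion to a prediction error, and that error term is what dopamine cells are thought to report. A minimal sketch with an arbitrary learning rate and a made-up reward sequence:

```python
def update_value(value, reward, learning_rate=0.1):
    """One step of simple reinforcement learning: move the estimated value
    toward the received reward, in proportion to the prediction error."""
    prediction_error = reward - value   # the signal dopamine neurons are thought to carry
    return value + learning_rate * prediction_error

v = 0.0
for reward in [1, 0, 1, 1, 0]:   # an invented sequence of outcomes
    v = update_value(v, reward)
    print(round(v, 3))
```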

Basketball is a dynamic game where players are constantly evaluating their chance of making a shot, and whether they should pass or put up a 2- or 3-point field goal attempt (FGA).  One of the most contentious issues in basketball (statistics) is the ‘hot hand effect’: if you’ve successfully made a 3-point shot, are you more likely to make the next one?  Maybe it’s just one of those nights where you are on, your form is perfect, and every shot will sink.  The problem is, statistically speaking there is no evidence for it.

But the players sure think it exists!  Now look at the figure to the right.  Here, the blue line represents how likely a player is to attempt a 3-point field goal if their last (0, 1, 2, 3) shots were made 3-pointers.  In general, players shoot 3-pointers about ~40% of the time.  If they made their last 3-pointer, they now have a ~50% chance of shooting a 3-pointer on their next attempt.  And if they make that one?  They have a 55% chance of shooting a 3-pointer.  Similarly, the red line follows the probability of shooting a 3-pointer if the last few shots were missed 3-pointers.
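This is the kind of conditional frequency you could compute yourself from a play-by-play shot log. A sketch, assuming a hypothetical log of (was_three, was_made) pairs rather than the actual data from the paper:

```python
def p_three_after_streak(shots, streak_length):
    """P(next attempt is a 3) given the previous `streak_length` shots
    were all made 3-pointers. `shots` is a list of (was_three, was_made)."""
    attempts_after, threes_after = 0, 0
    for i in range(streak_length, len(shots)):
        recent = shots[i - streak_length:i]
        if all(is3 and made for is3, made in recent):
            attempts_after += 1
            threes_after += shots[i][0]   # True counts as 1
    return threes_after / attempts_after if attempts_after else float("nan")

# Toy shot log: (was_three, was_made) -- invented data, not from the paper.
log = [(True, True), (True, True), (False, True), (True, False), (True, True)]
print(p_three_after_streak(log, streak_length=1))
```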

Okay, so basketball players believe in the hot hand, and act like it.  Why do they act like it?  If we take our model of the learning process, reinforcement learning, and apply it to the data, we actually get a great prediction of how likely a player is to shoot a 3-pointer!  The internal machinery we use for learning the value of an action is also a good model for learning the value of taking a 3-pointer – and making a 3-pointer will only reinforce (get it?) the idea of taking the next shot from behind the arc!
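Concretely, the kind of model being fit looks something like this: a running value for ‘take a 3’ that is updated after each made or missed 3-pointer and pushed through a logistic choice rule. The learning rate, bias, and slope below are placeholder values of mine, not the parameters estimated in the paper.

```python
import math

def predicted_3pt_attempt_prob(outcomes, learning_rate=0.2, bias=-0.4, slope=2.0):
    """Modeled probability of attempting a 3 after a sequence of previous
    3-point outcomes (+1 = made, -1 = missed)."""
    value = 0.0
    for outcome in outcomes:
        # Reinforcement-learning update: made 3s raise the value of shooting
        # another 3, missed 3s lower it.
        value += learning_rate * (outcome - value)
    return 1.0 / (1.0 + math.exp(-(bias + slope * value)))  # logistic choice rule

print(predicted_3pt_attempt_prob([]))         # baseline attempt probability
print(predicted_3pt_attempt_prob([+1]))       # after one made 3
print(predicted_3pt_attempt_prob([+1, +1]))   # after two made 3s
print(predicted_3pt_attempt_prob([-1]))       # after one missed 3
```

With these placeholder numbers the modeled attempt probability sits around 40% at baseline, rises to roughly 50% after one made 3 and higher still after two, which is the qualitative pattern described above.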

Alas, this type of behavior doesn’t help anything; a player who makes a 3-pointer is 6% less likely to make his next 3 than if he had missed his last 3-pointer.  In fact, if we take our reinforcement learning model and see how each player behaves, we can estimate how susceptible that player is to this learning.  Some players won’t change how they shoot (unsusceptible) and some players will learn a lot from each shot, with the history of made and missed shots having huge effects on how likely they are to shoot another 3.  And believe it or not, the players that are least susceptible to this learning are the ones who get the most points out of each 3-point attempt.  Unless you are Antoine Walker; then you will just shoot a lot of bad 3-pointers for the hell of it.

Finding non-existent ‘hidden patterns’ in noise is a natural human phenomenon, and a natural outgrowth of learning from past experiences.  So tell your parents!  Learning: not always good for you.

References

Neiman, T., & Loewenstein, Y. (2011). Reinforcement learning in professional basketball players Nature Communications, 2 DOI: 10.1038/ncomms1580

Never make a decision on an empty stomach… or a full stomach…

You are hungry already and dinner is hours away.  You’re getting irritable and making stupid decisions that you normally wouldn’t.  Or maybe you just had a big meal and you’re sated.  Your friend who is seated next to you turns and asks for a favor; you pleasantly agree and sink into your chair sleepily.  What’s going on?

An underappreciated fact about the neuromodulatory system is that release of these molecules can have diffuse and widespread effects all across the brain.  Take dopamine and leptin.  Dopamine is a chemical that drives decision-making – among other things, but it really does have an important role in this – while leptin is generally thought to signal satiety.  Leptin is released from the fat cells of the body and we typically think of it acting on the hypothalamus, an area responsible for many metabolic behaviors.  When more leptin is circulating in the bloodstream, you eat less food and expend more energy, which made it a natural candidate for yet another failed diet pill.  Since leptin interacts with the motivation to eat food, an alternative set of areas it could interact with are the dopamine regions.  And in those regions, the striatum in particular, the response to food and to pictures of food is reduced when there is more leptin.

It would be nice to know mechanistically how the two systems interact.  One way of going about this is to activate dopamine release through a stress pathway: by keeping pain at a constant self-reported score, a robust and constant amount of dopamine is released.  Yes, for some reason people actually volunteer for these experiments.  Now we can exploit the fact that there are known variants in the gene responsible for leptin, LEP.  If you look at how people with these variants respond, you get large differences in dopamine release, which seem to preferentially affect the D2/3 receptors.  Although different researchers seem to disagree on which specific regions of the striatum are modified by leptin, a good guess is that this is highly task-dependent, and that leptin changes the amount of dopamine available to those areas.

What effect might this have on behavior?  One behavior that these D2/3 receptors are involved in is risky decision-making.  We all have our own preferences for risky bets.  Some people prefer small bets that are guaranteed, whereas others prefer the risky option (these are the compulsive gamblers).  But it’s a bit more complicated than that.  Sure, you’d take a risky bet when the option was between a sure 5 cents and a “risky” $1.  But maybe you wouldn’t if you were guaranteed $100 with a risky option of $2000 or nothing.  How sensitive you are to these bets turns out to depend on the concentration of D2/3 receptors in the dorsal striatum.  Putting two and two together, we can bet that the leptin that affects dopamine levels also affects how willing you are to take a risk as the stakes get larger.
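One way to see why the two gambles feel different is to assume a concave utility function, so each extra dollar is worth a little less than the last. The exponential form and the scale parameter below are assumptions for illustration, not anything measured in these papers.

```python
import math

def utility(amount, scale=100.0):
    """Concave (exponential) utility: extra dollars matter less as amounts grow.
    Both the functional form and the scale are assumptions for illustration."""
    return 1.0 - math.exp(-amount / scale)

def prefers_gamble(sure_amount, gamble_amount, p_win=0.5):
    """True if the expected utility of the gamble beats the sure payoff."""
    return p_win * utility(gamble_amount) > utility(sure_amount)

print(prefers_gamble(0.05, 1.00))   # True: take the 'risky' $1 over a sure 5 cents
print(prefers_gamble(100, 2000))    # False: the guaranteed $100 now looks better
```

Shifting the scale parameter moves the point where people switch from gambling to playing it safe, which is the kind of individual difference in risk sensitivity the receptor story is about.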

This means that the whole body is linked with the state of the world.  Periods of hunger or bounty will cause people to behave in very different ways, with behavior tied to the body’s hormone signaling.  Particularly relevant here is that hormones generally thought of as responding purely to food may have a broader role in signaling to the body how to respond properly to all sorts of situations.

References

Burghardt, P., Love, T., Stohler, C., Hodgkinson, C., Shen, P., Enoch, M., Goldman, D., & Zubieta, J. (2012). Leptin Regulates Dopamine Responses to Sustained Stress in Humans Journal of Neuroscience, 32 (44), 15369-15376 DOI: 10.1523/JNEUROSCI.2521-12.2012

Cocker, P., Dinelle, K., Kornelson, R., Sossi, V., & Winstanley, C. (2012). Irrational Choice under Uncertainty Correlates with Lower Striatal D2/3 Receptor Binding in Rats Journal of Neuroscience, 32 (44), 15450-15457 DOI: 10.1523/JNEUROSCI.0626-12.2012

Dunn, J., Kessler, R., Feurer, I., Volkow, N., Patterson, B., Ansari, M., Li, R., Marks-Shulman, P., & Abumrad, N. (2012). Relationship of Dopamine Type 2 Receptor Binding Potential With Fasting Neuroendocrine Hormones and Insulin Sensitivity in Human Obesity Diabetes Care, 35 (5), 1105-1111 DOI: 10.2337/dc11-2250


A mechanics of depression

There are many reactions that can be taken in response to the world going crazy on you, and depression is one of them.  Even though it is (rightly) seen as perhaps not the greatest illness to have, there is a case to be made that depression is an energetically efficient response to overwhelming stress; it can be better to shrink back and conserve your energy than to fight it.  Think about it like this: you probably know some people who are super laid back, who take things as they come and don’t seem to stress out.  And you also probably know some people who freak out at stress, work really hard, and just seem to be stressed out all the time.  These are two different strategies for dealing with stress, and one seems more likely to lead to depression.  At the same time, that same strategy looks like the person is fighting harder to get out of the stressful situation.  How does the brain do something like that?

It is thanks to tools from the lab of Karl Deisseroth that we are finally able to begin to really, mechanistically, understand what is going on in the brain.  And fortunately, Deisseroth is both a research scientist and a psychiatrist who is interested in helping people with mental diseases.  There are three (!) papers published in Nature over the last month with his name on them, and they shed a lot of light on the mechanisms that are at work.

No one is really sure what it is about the brain that causes depression, although we have some hints: antidepressant drugs tend to work by modifying the release of the neuromodulators serotonin and dopamine (and norepinephrine).  We also know that an area called the prefrontal cortex (PFC) is highly linked to all sorts of psychiatric disorders; the PFC is an area that receives inputs from all over the place and then sends outputs right back out.  It’s the boss, the one that hears everything people have to say and then directs other areas in order to coordinate the brain to accomplish internal goals.  You can imagine what might happen if you have a bad boss: your brain gets out of sync, things don’t get coordinated properly, and then BAM, schizophrenia and depression.

As you might imagine, these things are all interconnected in the brain: the PFC talks to the serotonin and dopamine areas, and the serotonin and dopamine areas talk to the PFC.  And these connections are particularly important.  Take the connection between the PFC and an area that releases serotonin, the dorsal raphe nucleus (DRN).  This connection is required to motivate an animal to avoid escapable stress.  A mouse that is in a position to escape from stress will clearly do so.  However, if you inactivate the PFC, the mouse will not escape from stress and its release of serotonin will look the same as if it were in a stressful situation it can’t escape from.  If you disable the PFC in a stressful situation that it can’t escape from?  No change: the PFC seems to control motivated behavior only in escapable situations, and without it the escape doesn’t happen.

That’s exactly what one of the recent Deisseroth papers examined.  They were able to directly activate only the PFC neurons that send information to the DRN and by doing so they found a way to escape from a learned kind of helplessness.  Rats that are stuck in a cup of water will struggle for a while, attempting to escape.  After a while they learn that struggling isn’t getting them anywhere and they just kind of give up.  But if you activate the PFC connections to the DRN?  The rats launch back into the struggle again!  But this doesn’t happen if you just activate all the neurons of the PFC or all the neurons of the DRN: there is a specific pathway through both of these brain regions that motivates an escape from helplessness.

Release of dopamine can help motivate escape from a helpless condition as well, although it comes from a different part of the brain.  The ventral tegmental area (VTA) is one of the main release sites of dopamine and is often called the brain’s ‘pleasure’ signal, although it is perhaps more accurate to say that it is the primary signal of motivation.  And if you stimulate the neurons in the VTA, you get an increase in motivation to escape a depressing circumstance, just as you’d expect from that area.  Specifically, this is because of a release of dopamine from the VTA to another area of the brain, the nucleus accumbens (NAcc).  What is likely happening is that the VTA is sending a motivating signal to an action and learning center of the brain (the NAcc), that center helps decide what to do next, and what to do next is to get the heck out of there.  Something exciting happens here: if you now go and record the neurons in the NAcc after additional dopamine is released from the VTA, they respond to different things.  The whole way that an action is represented in the brain changes, and in a way that emphasizes escape.

But this ability to learn escape can have a negative side.  Take the example of another method of stressing out mice, chronic social defeat.  What you do here is force mice to get defeated in battle again and again.  Yes, this is actually a commonly studied behavior; these poor guys are basically given PTSD.  But it turns out that some mice are resilient to this stress: they can withstand it and not get depressed.  If you look at the neurons in the VTA, the susceptible animals show an increased number of bursts of activity (technically: phasic firing) during stress, while the resilient animals just hum along with no change of activity in the VTA at all!  This natural increase in firing can be simulated in the resilient animals by artificially increasing VTA firing.  Then, when you test whether they have acquired PTSD?  Well, it turns out that they have.  This makes a certain kind of sense: dopamine reinforces behavior, so susceptible animals are seeing more dopamine and hence more reinforcement of defeat than the resilient animals are.  Again, though, different internal pathways have different effects in the brain: if you activate the neurons that send information to the ‘pleasure center’, the nucleus accumbens, then you are more vulnerable to stress.  And if you inhibit the activity of neurons that send information to the PFC, then you also become more vulnerable to stress.

The same sets of neurons that can help you escape stress (the VTA to NAcc connection) are the ones that will cause you to be more depressed in the future.  This suggests that there might be a tradeoff in life: you can be stressed out but really motivated to escape stress, or you can put up with a whole bunch of stress and be laid back about it in the future.  But it seems like it might be hard to be both.  There are of course in-betweens: the PFC, the boss, has specific circuits dedicated to telling the VTA and the DRN what to do, and can tell them to do opposite things.  And of course, the more you try to escape stress and fail, the more you learn it is futile to escape stress, triggering a terrible feedback cycle.  But if you want to be learning, you better be trying.

References

Chaudhury, D., Walsh, J., Friedman, A., Juarez, B., Ku, S., Koo, J., Ferguson, D., Tsai, H., Pomeranz, L., Christoffel, D., Nectow, A., Ekstrand, M., Domingos, A., Mazei-Robison, M., Mouzon, E., Lobo, M., Neve, R., Friedman, J., Russo, S., Deisseroth, K., Nestler, E., & Han, M. (2012). Rapid regulation of depression-related behaviours by control of midbrain dopamine neurons Nature, 493 (7433), 532-536 DOI: 10.1038/nature11713
Tye, K., Mirzabekov, J., Warden, M., Ferenczi, E., Tsai, H., Finkelstein, J., Kim, S., Adhikari, A., Thompson, K., Andalman, A., Gunaydin, L., Witten, I., & Deisseroth, K. (2012). Dopamine neurons modulate neural encoding and expression of depression-related behaviour Nature, 493 (7433), 537-541 DOI: 10.1038/nature11740
Warden, M., Selimbeyoglu, A., Mirzabekov, J., Lo, M., Thompson, K., Kim, S., Adhikari, A., Tye, K., Frank, L., & Deisseroth, K. (2012). A prefrontal cortex–brainstem neuronal projection that controls response to behavioural challenge Nature DOI: 10.1038/nature11617
Amat, J., Baratta, M., Paul, E., Bland, S., Watkins, L., & Maier, S. (2005). Medial prefrontal cortex determines how stressor controllability affects behavior and dorsal raphe nucleus Nature Neuroscience, 8 (3), 365-371 DOI: 10.1038/nn1399


Unrelated to all that (January 16th edition)

Wait, there’s a paper with ‘neuroecology’ in the title?  I’m sold! Well a review of a paper, really, but they did it better and more thoroughly than I could.

That’s…that’s a lot of dopamine and depression.  Scicurious has a series of articles on the link between dopamine and depression.

See, schizophrenia isn’t all that bad, you should be thankful really.  And really, the culture that you live in shapes your schizophrenia.

See, being a psychopath isn’t all that bad, you should be thankful really.  This is more evidence for the importance of ‘neurodiversity’.

Maybe if they were psychopaths they just wouldn’t want more friends.  On the Dunbar number, and why we can only have so many friends.

Learning: positive and negative

Reward and punishment operate through two very different pathways in the human brain.  The general idea is that these two types of learning – positive and negative – operate through distinct types of dopamine receptors.  The D1 receptors (D1R) are generally ‘positive’ receptors, while the D2 receptors (D2R) are ‘negative’.  Specifically, D1Rs generally tend to increase the concentration of CaMKII and D2Rs decrease it; this means that they are going to have opposite effects on downstream pathways such as receptor plasticity, intrinsic voltage-gated channels, and so on.

How are the D1 and D2 pathways distinct in terms of learning?  The hypothesis has been that among striatal projection neurons, D1R-expressing direct-pathway medium spiny neurons (dMSNs) mediate reinforcement and D2R-expressing indirect-pathway neurons (iMSNs) mediate punishment.  Kravitz et al. expressed channelrhodopsin selectively in dMSNs or iMSNs so they could use light to activate only one type of neuron at a time.  They figured that the striatum would be a good place to start looking for the effects of these neurons.  After all, it is a primary site of reinforcement and action selection (also, they probably tried a few other places and didn’t get great results…?).  These transgenic mice were then placed in a box with two triggers, one of which would deliver the light stimulation while the other did nothing.  So the mice are in this box, able to turn their own neurons on and off if they want to.  I wonder how that feels?

When the mice were able to activate their D1R (positively reinforcing) neurons, they were much more likely to keep pressing that trigger.  The D2R (negatively reinforcing) mice were more likely to press the other trigger.  But that’s not all!  By the third day, the effects of activating the D2R pathway had worn off – the mice no longer cared about it.  You can see this on the graph to the left, where 50% is chance.  The preference for the D1R pathway persisted, however.  Even on short time scales of 15–30 seconds, the mice kept their preference for stimulating D1R reward cells over D2R aversion cells.  In the figure to the right, this is shown with YFP as a control (it should have no effect); whereas activating the dMSN pathway differs from YFP across the full first 30 seconds, the iMSN pathway only shows a (statistically) significant difference over the first 15 seconds.

The authors conclude that the dMSN pathway is sufficient for persistent reinforcement, while iMSNs are sufficient for transient punishment.  This is a nice finding: the D1R pathway really is doing some positive reinforcement, the D2R pathway is doing negative reinforcement, and one is more effective in the long term than the other.  Remember this when raising your kids!

References
Kravitz AV, Tye LD, & Kreitzer AC (2012). Distinct roles for direct and indirect pathway striatal neurons in reinforcement. Nature neuroscience PMID: 22544310

Posts for the week

Bonobo genome sequenced, secrets of sex soon to be unlocked

Beetles are good parents!  And they’re social and talk!

On dopamine and being old

EO Wilson says war is inevitable, that’s just the way we are, someone else disagrees

Vampire electronics on the way

More about our microbiome

On Kuhn.  I wish I could see more of a discussion on the Kuhnian conception of science versus what we actually do in Biology

What men and women focus on when watching porn.  Surprisingly safe for work.

About being an adolescent:

We know from many human functional MRI, or FMRI studies, that the social brain is a network of brain regions that is consistently activated whenever adults think about other people. There are about three different regions in the brain, one in medial prefrontal cortex, and two other regions in the temporal lobe: the posterior-superior temporal sulcus, and the anterior temporal cortex. It doesn’t really matter about the names, but the point is that that network of brain regions in adults is consistently active whenever you think about other people or think about interacting with other people, or think about their mental states or their emotions.

Adolescents use the same network, the social brain network, to a very similar extent, but what seems to happen is that activity shifts from the anterior region, the medial prefrontal cortex region, to the posterior regions, the anterior temporal cortex or the superior temporal sulcus region, as they go through adolescence. In other words, when they’re thinking about other people, adolescents seem to be using this prefrontal cortex, right at the front region, more than adults do, and adults seem to be using the temporal regions more than adolescents do.