The frontal cortex, freeing you from the straightjacket of genes

Robert Sapolsky – possibly the best neurobiologist science writer – has an article on teenagers and the krazy stuff they do:

Around the onset of adolescence, the frontal cortex is the only brain region that has not reached adult levels of grey matter, made up of neuronal cell bodies. It would seem logical that gray matter levels would increase thereafter. But no, over the course of adolescence, frontal cortical gray matter volume decreases.

These traits are exacerbated when adolescents are around peers. In one study, Laurence Steinberg of Temple University discovered that adolescents and adults, when left on their own, don’t differ in the risks they take in a driving simulator. Add peers egging them on and rates don’t budge in adults but become significantly higher in teens. When the study is carried out in a brain scanner, the presence of peers (egging on by intercom) lessens frontal cortical activity and enhances activity in the limbic dopamine system in adolescents, but not in adults….As has been said, the greatest crime-fighting tool available to society is a 30th birthday.

So what is the adaptive advantage of human brain development evolving this way? Potentially, there is no advantage…No, I think that the genetic program of brain development has evolved to help free the frontal cortex from the straightjacket of genes. If the frontal cortex is the last part of the brain to fully mature, it is by definition the brain region least shaped by that genome and most sculpted by experience. With each passing day, the frontal cortex is more the creation of what life has thrown at you, and thus who you become.

The frontal cortex, responsible for higher cognitive functions and decision-making, is like a ruthless lumberjack. The forest of neurons grows and, at the time of puberty, the lumberjack marches in and starts trimming anything that doesn’t seem useful. As Sapolsky suggests, what happens in adolescence is indelibly marked on your life. This suggests a curious possibility: does the age at which you go through puberty affect your future behavior? Children are going through puberty at younger and younger ages these days, and those who go through early or late puberty will have vastly different experiences and cultural environments surrounding them. Since the age at which you go through puberty has some impact on your behavior – what are the differences in what you are learning during that time?

I couldn’t find any good references but if anyone knows anything, please let me know!

Update: I forgot to mention how this is a great example of genes putting you in a place where you have the opportunity to develop some phenotype. As in: perhaps early puberty does not cause children to be more wild than average, say, but it may more often put a child in an environment that makes the behavior more attractive.


The public sphere of neuroscience

I have complained in the past about the lack of a blogosphere in neuroscience. And it’s not just bad for the community – it’s bad for the scientists, too. Here is a short selection from a piece on how twitter and blogs are not just an add-on to academic research:

A lot of early career scholars, in particular, worry that exposing their research too early, in too public a manner, will either open them to ridicule, or allow someone else to ‘steal’ their ideas.  But in my experience, the most successful early career humanists have already started building a form of public dialogue in to their academic practise – building an audience for their work, in the process of doing the work itself…

Perhaps the best example of this is Ben Schmidt, and his hugely influential blog: Sapping Attention. His blog posts contributed to his doctorate, and will form part of his first book. In doing this, he has crafted one of the most successful academic careers of his generation – not to mention the television consultation business, and world-wide intellectual network. Or Helen Rogers, who maintains two blogs: Conviction: Stories from a Nineteenth-Century Prison – on her own research; and also the collaborative blog, Writing Lives, created as an outlet for the work of her undergraduates…The Many Headed Monster, the collective blog authored by Brodie Waddell, Mark Hailwood, Laura Sangha and Jonathan Willis, is rapidly emerging as one of the sites where 17th century British history is being re-written. While Jennifer Evans is writing her next book via her blog, Early Modern Medicine.

The most impressive thing about these blogs (and the academic careers that generate them), is that there is no waste – what starts as a blog, ends as an academic output, and an output with a ready-made audience, eager to cite it…But as importantly, blogs are part of establishing a public position, and contributing to a debate. Twitter is in some ways the same – or at least, like blogging, Twitter is good for making communities, and finding collaborators; and letting other people know what you are doing.  But, it also has another purpose.

Really, go read it all, it’s great.

Social media isn’t just a place to joke around and have fun – it’s a place to get into discussions and get your ideas out there. It’s a place to have an outsized voice if you have an outsized opinion. Papers are one way to get your ideas out there – but social media is more powerful. And a Latourian reading of science is that if your ideas don’t get out there, they don’t exist.

Although not in the influential category of the examples above, let me offer myself as an example. I often write about things that are on my mind. I put my thoughts and ideas out there to try to get them into a coherent form. And people interact and discuss my ideas with me, and help me refine them (even if they don’t know it!). I even found out that someone gave a lab meeting on one of my blog posts! Even more, I’ve found that over the past year, people will come up to me at conferences and tell me that they read my blog…which is honestly really weird for me (but it’s fine!). The point is: just being willing to talk on the internet has real-world consequences for your scientific ideas.

Someone published a comment in Genome Biology today proposing a Kardashian Index: how many social media followers you have above what you’d expect from your number of scientific citations. It’s true to a certain extent: pop the word “professor” into your Twitter profile and you seem to get an automatic boost in followers. But they make having an outsized following out to be a bad thing! To me, that sounds like you’re doing it right.
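For the curious, the index is simple to compute. A minimal sketch, using the expected-follower fit F = 43.3 × C^0.32 reported in the Genome Biology comment (the example numbers are made up):

```python
def kardashian_index(followers, citations):
    """K-index: ratio of actual Twitter followers to the number
    'expected' from citation count, using the fit
    F_expected = 43.3 * C^0.32 from the Genome Biology comment."""
    expected = 43.3 * citations ** 0.32
    return followers / expected

# A hypothetical scientist with 2000 followers and 100 citations:
# expected followers ~ 43.3 * 100**0.32 ~ 189, so K ~ 10.6 --
# well over the K > 5 threshold the comment jokes about.
print(kardashian_index(2000, 100))
```

Anyone above the threshold is, by this logic, tweeting more than their papers "entitle" them to – which is exactly the part I'd dispute.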

Are silly superstitions useful because they are silly?

(Attention warning: massive speculation ahead.)

Auguries often seem made up, useless. Is that why they are useful?

Dove figured that the birds must be serving as some kind of ecological indicator. Perhaps they gravitated toward good soil, or smaller trees, or some other useful characteristic of a swidden site. After all, the Kantu’ had been using bird augury for generations, and they hadn’t starved yet. The birds, Dove assumed, had to be telling the Kantu’ something about the land. But neither he, nor any other anthropologist, had any notion of what that something was.

He followed Kantu’ augurers. He watched omen birds. He measured the size of each household’s harvest. And he became more and more confused. Kantu’ augury is so intricate, so dependent on slight alterations and is-the-bird-to-my-left-or-my-right contingencies that Dove soon found there was no discernible correlation at all between Piculets and Trogons and the success of a Kantu’ crop. The augurers he was shadowing, Dove told me, ‘looked more and more like people who were rolling dice’.

Stumped, he switched dissertation topics. But the augury nagged him. He kept thinking about it for ‘a decade or two’. And then one day he realised that he had been looking at the question the wrong way all the time. Dove had been asking whether Kantu’ augury imparted useful ecological information, as opposed to being random. But what if augury was useful precisely because it was random?

It’s actually pretty hard for people to be random – famously, if you ask someone to write down a string of random numbers, what they produce will be very far from actual randomness. But there are some decisions that are better made randomly. For instance: in competitive environments, if you are predictable then you are more easily beaten. If you need to determine how good something is, a randomized controlled trial helps eliminate bias. And so on.
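The classic failure mode is alternation bias: human "random" sequences switch between outcomes too often. A simple runs count makes this visible (the ‘human’ sequence below is made up for illustration; 20 fair coin flips average about 10.5 runs):

```python
import random

def count_runs(seq):
    """Number of maximal runs of identical symbols, e.g. 'HHTH' -> 3."""
    return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

# An invented 'human' attempt at 20 random coin flips -- note how
# rarely the same face repeats:
human = "HTHTHHTHTTHTHHTHTHTH"
print(count_runs(human))   # 17 runs, far above the ~10.5 expected by chance

# An actual random sequence for comparison:
random.seed(0)
machine = "".join(random.choice("HT") for _ in range(20))
print(count_runs(machine))
```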

The problem is that when people (and animals) make decisions, they are really bad at figuring out which of their actions were helpful and which were useless. There is a video, somewhere, of a mouse learning to push a bar around in order to get a reward. But the mouse doesn’t know exactly what got it the reward, so it develops strange, superstitious movements before each push of the lever. Some bit of all that was useful, but its mouse-brain doesn’t know which.

And it’s quite possible that this is what the augurers are doing: using a seemingly random collection of movements that seemed useful but, really, aren’t. Dove (above) argued that there is ultimately a utility to these rituals: they generate randomness.

But a recent paper suggests another possibility. In a standard reinforcement-learning task, a person is asked to learn the value of some arbitrary symbols by choosing between different options. When a choice is rewarded, the person values it more; when it isn’t rewarded, the person values it less. Yet when a rewarded option is freely chosen – instead of chosen for the person – it is reinforced more strongly. People learn more quickly when they make the choice themselves than when other people – or other entities – make it for them.

What if this is the reason auguries and superstitions are useful? Not because they generate randomness, but because they prevent learning? When the environment is random and uncorrelated, learning quickly will overfit the noise. If you are choosing how much of your crop to plant each season, you want to prepare for disaster: even after a few great seasons, there is still the chance of drought. What if these ‘silly’ superstitions are useful exactly because they are silly – because there are some aspects of the environment that you don’t want to learn from?
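A toy sketch of that intuition (my own illustration, not the model from the paper): a simple Rescorla-Wagner learner tracking a reward stream that is, in truth, purely random. A high learning rate chases the noise; a low one settles near the base rate.

```python
import random

def track_value(alpha, p=0.5, trials=2000, seed=1):
    """Rescorla-Wagner update V <- V + alpha*(r - V) against a purely
    random reward with true rate p. Returns trial-by-trial estimates."""
    rng = random.Random(seed)
    v, history = 0.5, []
    for _ in range(trials):
        r = 1.0 if rng.random() < p else 0.0
        v += alpha * (r - v)
        history.append(v)
    return history

def spread(history):
    """Variance of the estimates -- how much the learner's belief jitters."""
    m = sum(history) / len(history)
    return sum((x - m) ** 2 for x in history) / len(history)

# A fast learner overfits the noise; a slow learner converges on p.
fast = spread(track_value(alpha=0.5))
slow = spread(track_value(alpha=0.02))
print(fast > slow)   # True: high learning rate = higher-variance beliefs
```

In a world like this, anything that slows learning down – a ritual interposed between outcome and update, say – leaves you closer to the true base rate.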

References

Cockburn, J., Collins, A., & Frank, M. (2014). A reinforcement learning mechanism responsible for the valuation of free choice. Neuron. DOI: 10.1016/j.neuron.2014.06.035

Watch ALL the neurons in a brain: Ahrens and Freeman continue their reign of terror

Okay, not quite all of them. But it looks like Misha Ahrens and Jeremy Freeman are going to continue their reign of terror, imaging the whole zebrafish brain as if it’s no big deal. Yeah they’ve got almost every neuron of a vertebrate, so what?

Besides figuring out that not shooting light at the eyes might be a good idea (I think it may have been a little more complicated than that…), they released software for analyzing these kinds of big data sets. Beyond Ahrens and Freeman, I know of at least two other labs using the same type of microscope to image the whole fly brain, and can count five labs doing the same in worms. That’s probably a huge undercount – and just the leading edge of a coming tidal wave of massively large neural data sets. This is so important that DARPA is throwing huge amounts of money at it (or at least wants to).

Their software, called thunder, is open source and freely available at a really slick website, which includes a great tutorial for analyzing the data and making sweet figures. This kind of openness is Science Done Right.
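To give a flavor of what these pipelines do – this is plain numpy on toy data, not thunder’s actual API – the core operation is often just “map one computation over every pixel’s time series,” for instance correlating each pixel with a stimulus regressor:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_time = 10_000, 200
stimulus = np.sin(np.linspace(0, 8 * np.pi, n_time))   # assumed regressor
data = rng.standard_normal((n_pixels, n_time))          # noise everywhere
data[:100] += 2 * stimulus                              # 100 'tuned' pixels

# Pearson correlation of every pixel with the stimulus, in one
# vectorized pass (z-score both, then take the dot product):
dz = (data - data.mean(1, keepdims=True)) / data.std(1, keepdims=True)
sz = (stimulus - stimulus.mean()) / stimulus.std()
corr = dz @ sz / n_time

tuned = np.flatnonzero(corr > 0.5)
print(len(tuned))   # recovers the tuned pixels planted above
```

The real data sets are too big for one machine’s memory, which is where thunder’s cluster-computing layer comes in; the per-pixel logic stays this simple.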

Seriously, look at these bad boys:

[Figure: Mice running make neurons go fast.]

[Figure: Neural activity floats around in its own not-so-metaphorical dimensions.]

[Figure: Neurons across the whole zebrafish brain are tuned for motion direction, with different colors representing different motions.]

[via]

References

Freeman, J., Vladimirov, N., Kawashima, T., Mu, Y., Sofroniew, N., Bennett, D., Rosen, J., Yang, C., Looger, L., & Ahrens, M. (2014). Mapping brain activity at scale with cluster computing Nature Methods DOI: 10.1038/nmeth.3041

Vladimirov, N., Mu, Y., Kawashima, T., Bennett, D., Yang, C., Looger, L., Keller, P., Freeman, J., & Ahrens, M. (2014). Light-sheet functional imaging in fictively behaving zebrafish Nature Methods DOI: 10.1038/nmeth.3040

PBS series on neuroscience: Janelia, schizophrenia, creativity

PBS had three segments on neuroscience during the last week. They were:

– What studying fruit flies and zebrafish can reveal about the human brain

– Pinpointing genetic links to schizophrenia may open doors to better treatment

– Connecting strength and vulnerability of the creative brain

The first one (on Janelia!) was especially good, and it’s always nice to see invertebrate neurobiology get some recognition.

The series also had an article on creativity published in The Atlantic:

As I began interviewing my subjects, I soon realized that I would not be confirming my schizophrenia hypothesis. If I had paid more attention to Sylvia Plath and Robert Lowell, who both suffered from what we today call mood disorder, and less to James Joyce and Bertrand Russell, I might have foreseen this. One after another, my writer subjects came to my office and spent three or four hours pouring out the stories of their struggles with mood disorder—mostly depression, but occasionally bipolar disorder. A full 80 percent of them had had some kind of mood disturbance at some time in their lives, compared with just 30 percent of the control group—only slightly less than an age-matched group in the general population. (At first I had been surprised that nearly all the writers I approached would so eagerly agree to participate in a study with a young and unknown assistant professor—but I quickly came to understand why they were so interested in talking to a psychiatrist.)

So far, this study—which has examined 13 creative geniuses and 13 controls—has borne out a link between mental illness and creativity similar to the one I found in my Writers’ Workshop study. The creative subjects and their relatives have a higher rate of mental illness than the controls and their relatives do (though not as high a rate as I found in the first study), with the frequency being fairly even across the artists and the scientists. The most-common diagnoses include bipolar disorder, depression, anxiety or panic disorder, and alcoholism.

As in the first study, I’ve also found that creativity tends to run in families, and to take diverse forms. In this arena, nurture clearly plays a strong role. Half the subjects come from very high-achieving backgrounds, with at least one parent who has a doctoral degree. The majority grew up in an environment where learning and education were highly valued.

The article also reports that the ‘more creative’ are more likely to take risks, show higher activation in ‘association cortices’, and be more persistent in the face of rejection. There are some issues with the article but it’s still a nice read.

Unrelated to all that, 7/26 edition

The limits of animal life on Tatooine

This anecdote about filming a sci-fi movie in the pre-CGI era becomes a lot more important if you’re trying to take Star Wars semi-literally, as an accounting of alien worlds and the animals and sentient beings that live there. From this perspective, there are at least 15 animal species native to desert-covered Tatooine plus another five whose origins are either otherworldly or unclear. (The two most-famous beasties — the Rancor and the Sarlacc — aren’t actually natives.) Most of these animals are megafauna, big enough that a human could ride them. And you can probably guess what I’m going to say: This is scientifically unrealistic. But not necessarily because of the heat. Get too hung up on whether big animals can survive under hot and dry conditions, and you’ll miss the major reason scientists raise an eyebrow at Tatooine’s fauna.

Continue reading

No, we cannot control matter with our minds: another science myth debunked #lucymovie

“I’ve accessed 28% of my cranial capacity. I can feel every living thing.”

So says Scarlett Johansson in the new movie Lucy. Unfortunately, this movie is perpetuating a common scientific myth, one that every neuroscientist reading this blog should feel offended by: that people have the power of psychokinesis.

Now, it is understandable if you are one of the many people who believe in the power to control space-time with your mind. After all, up to 47 million Americans believe that psychic powers exist, and 1 in 10 people believes in psychokinesis – just go to any random physics forum to see how prevalent this belief is. And you could be one of them! Yet despite the many convincing videos available on YouTube, the wikiHow tutorials, and this scientific-seeming experiment on weather control, psychokinesis is a myth.

Remember, these powers have never been demonstrated scientifically! James Randi has offered a million dollars to anyone who can prove their powers to his satisfaction, but no one has done so yet.

Fact #1: No part of the brain has been shown to light up and cause things to move (besides other parts of your body)

Fact #2: Telekinesis is probably inconsistent with the laws of physics

Fact #3: There is only one way to control space-time, and you probably don’t have access to it

Fact #4: I can’t believe we have to have this discussion, but there you go

So kids: when you go see Lucy – which you should do, because it looks totally sweet – remember that it is full of neurobunk: there is no such thing as telekinetic powers. The idea is completely silly, and the fact that someone would make a movie around it is comical.

Oh, there’s also something about using only a fraction of the brain, but I go by Wikipedia article length, and it seems like people care way more about psychic powers than about how much of their brain they use.

Why the new paper by Christakis and Fowler on friendship makes me queasy

I am a neuroscientist, and as a neuroscientist I have a strange belief that most of who we are comes from our brains. My entire career is based around understanding the neural basis of behavior which, I think, is pretty justifiable.

So when I see a paper looking at the genetics of behavior, I expect to see at least one or two genes that are directly involved in neural function. A dopamine receptor, probably, or maybe some calcium channels acting up. And in one recent paper looking at schizophrenia, that’s exactly what we find! A D2-like dopamine receptor and some glutamate genes. My world is consistent.

But then we get a paper about friendship from Christakis and Fowler, who find that friends are more likely to be genetically related to you than chance would predict. So that means that your close friend? Basically a fourth cousin. What Christakis and Fowler have found is a few sets of genes that seem like they might influence friendship. The most important is an olfactory gene, which just reeks of pheromones (or possibly hygiene). But the next most important genes? They have to do with linoleic acid metabolism and immune processes!

Now what am I, as a neuroscientist, supposed to do with that? How do I reconcile my neural view of the world with one where metabolic processes are influencing decisions?

Perhaps I can quiet my mind a little. In a past blog post, I wrote about how social status causes changes in genes related to immune processes. So maybe I can squint and say that okay, really this is an epiphenomenon relating to social status.

But if I’m going to understand behavior – what do I have to know? Do I have to understand literally all of biology? That traits and choices are being affected by what seem to be totally non-brain factors? That my philosophical position of the extended mind is maybe true? That makes me a little queasy.

(End massively speculative rant.)

References

Christakis NA, & Fowler JH (2014). Friendship and natural selection. Proceedings of the National Academy of Sciences of the United States of America, 111 (Supplement 3), 10796-10801 PMID: 25024208

Schizophrenia Working Group of the Psychiatric Genomics Consortium (Ripke, S., et al.) (2014). Biological insights from 108 schizophrenia-associated genetic loci. Nature, 511(7510), 421-427. DOI: 10.1038/nature13595

Can we predict evolution?

Is evolution random, or predictable?

But Gould had a deeper question in mind as he wrote his book. If you knew everything about life on Earth half a billion years ago, could you predict that humans would eventually evolve?

Gould thought not. He even doubted that scientists could safely predict that any vertebrates would still be on the planet today. How could they, he argued, when life is constantly buffeted by random evolutionary gusts? Natural selection depends on unpredictable mutations, and once a species emerges, its fate can be influenced by all sorts of forces, from viral outbreaks to continental drift, volcanic eruptions and asteroid impacts. Our continued existence, Gould wrote, is the result of a thousand happy accidents.

If Gould were right, the pattern of evolution on each island would look nothing like the pattern on the other islands. If evolution were more predictable, however, the lizards would tend to repeat the same patterns…For the most part, though, lizard evolution followed predictable patterns. Each time lizards colonized an island, they evolved into many of the same forms. On each island, some lizards adapted to living high in trees, evolving pads on their feet for gripping surfaces, along with long legs and a stocky body. Other lizards adapted to life among the thin branches lower down on the trees, evolving short legs that help them hug their narrow perches. Still other lizards adapted to living in grass and shrubs, evolving long tails and slender trunks. On island after island, the same kinds of lizards have evolved.

The article also discusses Lenski’s work with the evolution of E. coli. He has a fantastic blog that you should be reading if you care about evolution at all.

A big theme in behavior right now is prediction – how well can we guess what an animal will do based on what it’s done in the past, and what it’s experienced? It turns out on an individual level, you can do a lot better than you’d think.

As a butterfly flaps its wings in Tokyo, a neuron in your head veers slightly heavenward…

When you look at the edge of a table, there is a neuron in your head that goes from silence to pop pop pop. As you extend your arm, a nerve commanding the muscle does the same thing. Your retina has neurons whose firing rate goes up or down depending on whether it detects a light spot or a dark spot. The traditional view of the nervous system descends from experiments that have supported this view of neural activity. And perhaps it is true at the outer edges of the nervous system, near the sensory inputs and the motor outputs. But things get murkier once you get inside.

Historically, people began thinking about the brain in terms of how single neurons represent the physical world. The framework they settled on had neurons responding to a specific set of things out in the world, with the activity of those neurons increasing when they saw those specific things and decreasing when they saw their opposite. Over time, this picture became entangled with questions about whether what matters is the overall activity level or the precise timing of individual spikes.

When it comes to multiple neurons, a similar view has generally prevailed: activity levels go up or down. Perhaps each neuron has some (noisy) preference for something in the world; think of the population as the conjunction of their activities, so that the combination of all the neurons is less noisy than any individual one. But still: it’s all about activity going up or down. Our current generation of tools for manipulating neural activity unconsciously echoes this idea of how the nervous system functions. Optogenetics cranks the activity of cells – though often specific subpopulations of cells – up or down in aggregate.

An alternate view, pushed primarily by Krishna Shenoy and Mark Churchland, takes a dynamic perspective on neural activity, and I think comes from taking a premotor view of the nervous system. Generally, nervous activity is designed to control our physical behavior: moving, shouting, breathing, looking, remaining silent. But that is a lot to have to control, and selecting the correct set of behaviors has to take a huge number of factors into account and has a lot to prepare for. What have I seen? How much do I like that? What am I afraid of? How hungry am I? This means that premotor cortical activity is probably representing many things simultaneously in order to choose among them.

The problem can be approached by looking at the population of activity and asking how many different things it could represent, without necessarily knowing what those are. Perhaps the population is considering six different things at the same time (a noted mark of genius)! Now that’s a slightly different perspective: it’s not about the up or down of overall activity, but how that activity flows through possibilities on the level of the whole population.
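One common way to ask "how many things is the population considering" is to look at the spectrum of the population covariance. A toy sketch with simulated data (my own illustration): 50 neurons that are all noisy readouts of just 3 latent signals, with the dimensionality recovered by PCA:

```python
import numpy as np

rng = np.random.default_rng(0)
latents = rng.standard_normal((3, 1000))      # 3 hidden processes
mixing = rng.standard_normal((50, 3))         # each neuron's readout weights
activity = mixing @ latents + 0.1 * rng.standard_normal((50, 1000))

# PCA via the eigenvalues of the neuron-by-neuron covariance:
activity -= activity.mean(axis=1, keepdims=True)
cov = activity @ activity.T / activity.shape[1]
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = np.cumsum(eigvals) / eigvals.sum()

# How many dimensions capture 95% of the variance?
print(np.searchsorted(explained, 0.95) + 1)   # 3
```

Fifty neurons, but only three dimensions of activity – and you can count them without knowing what any of the latent signals mean.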

These streams of possible action must converge into a river somewhere. There are many possible options for how this could happen. They could be lying in wait, just below threshold, building up until they overcome the dam holding their behavior at bay. They could also be gated off, allowed to turn on when some other part of the system decides to allow movement.

But when we stop and consider the dynamics required in movement, in behavior, another possibility emerges. Perhaps there is just a dynamical system churning away, evolving to produce some reaching or jumping. Then these streams of preparatory activity could be pushing the state of the dynamical system in one direction or another to guide its later evolution. Its movement, its decision.
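A toy version of that idea (a sketch, not anyone's fitted model): fix one rotational dynamics rule, and let the preparatory state be the initial condition. The same machinery then unrolls different trajectories – different movements – depending only on where it starts.

```python
import numpy as np

# One fixed dynamics rule: x_{t+1} = A x_t, with A a rotation --
# the kind of rotational structure reported in motor cortex.
theta = 0.2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def unroll(x0, steps=40):
    """Let the dynamical system evolve from a preparatory state x0."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(A @ xs[-1])
    return np.array(xs)

reach_a = unroll([1.0, 0.0])    # one preparatory state...
reach_b = unroll([0.0, -1.0])   # ...a different preparatory state

# Identical dynamics, different initial conditions, different movements:
print(np.allclose(reach_a, reach_b))   # False
```

Preparation, in this picture, is just steering the state to the right starting point; the dynamics do the rest.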

Churchland and Shenoy have worked on providing evidence for this happening in motor cortex as well as prefrontal cortex: neurons there may be tuned to move their activity in some large space, where only the joint activity of all the neurons is meaningful. In this context, we cannot think usefully about the individual neuron but instead must consider the whole population simultaneously. It is not the cog that matters, but the machine.

References

Kaufman MT, Churchland MM, Ryu SI, & Shenoy KV (2014). Cortical activity in the null space: permitting preparation without movement. Nature neuroscience, 17 (3), 440-8 PMID: 24487233

Mante V, Sussillo D, Shenoy KV, & Newsome WT (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503 (7474), 78-84 PMID: 24201281

Churchland, M., Cunningham, J., Kaufman, M., Foster, J., Nuyujukian, P., Ryu, S., & Shenoy, K. (2012). Neural population dynamics during reaching Nature DOI: 10.1038/nature11129

Shenoy KV, Sahani M, & Churchland MM (2013). Cortical control of arm movements: a dynamical systems perspective. Annual review of neuroscience, 36, 337-59 PMID: 23725001

Ames KC, Ryu SI, & Shenoy KV (2014). Neural dynamics of reaching following incorrect or absent motor preparation. Neuron, 81 (2), 438-51 PMID: 24462104

Churchland, M., Cunningham, J., Kaufman, M., Ryu, S., & Shenoy, K. (2010). Cortical Preparatory Activity: Representation of Movement or First Cog in a Dynamical Machine? Neuron, 68 (3), 387-400 DOI: 10.1016/j.neuron.2010.09.015