Communication by virus

‘Some half-baked conceptual thoughts about neuroscience’ alert

In the book Snow Crash, Neal Stephenson explores a future world that is being infected by a kind of language virus. Words and ideas have power beyond their basic physical form: they have the ability to cause people to do things. They can infect you, like a song you just can’t get out of your head. They can make you transmit them to other people. And the book supposes a language so primal and powerful it can completely take you over.

Obviously that is just fiction. But communication in the biological world is complicated! It is not only about transmitting information but also about convincing the receiver of something. Humans communicate by language and by gesture. Animals sing and hiss and hoot. Bacteria communicate by sending signaling molecules to each other. Often these signals are not just to let someone know something but also to persuade them to do something. Buy my book, a person says; stay away from me, I’m dangerous, the rattlesnake says; come over here and help me scoop up some nutrients, a bacterium signals.

And each of these organisms is made up of smaller things that are also communicating with each other. Animals have brains made up of neurons and glia and other meat, and these cells talk to each other. Neurons send chemicals across synapses to signal that they have gotten some information, processed it, and, just so you know, here is what they computed. The signals they send aren’t always simple. They can be excitatory to another neuron or inhibitory, a kind of integrating set of pluses and minuses for the other neuron to work on. But they can also be peptides and hormones that, in the right set of other neurons, will set new machinery to work, machinery that fundamentally changes how the neuron computes. In all of these scenarios, the neuron that receives the signal has some sort of receiving protein – a receptor – that is specially designed to detect those signaling molecules.

This being biology, it turns out that the story is even more complicated than we thought. Neurons are cells, and just like every other cell they have internal machinery that uses mRNAs to provide instructions for building the protein machinery needed to operate. If a neuron needs more of one thing, it will transcribe more of the corresponding mRNA and translate it into new protein. Roughly, the more mRNA you have, the more of that protein – tiny little machines that live inside the cell – you will produce.

This transcription and translation is behind much of how neurons learn. The saying goes that neurons that fire together wire together: when they respond to things at the same time (such as being in one location at the same time you feel sad) they will tend to strengthen the link between them, creating memories. The physical manifestation of this is synthesizing more of a specific receptor protein (say), so that the same signal will now activate more receptors and result in a stronger link.
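
To make “fire together, wire together” concrete, here is a minimal sketch of a Hebbian weight update in Python. The names and numbers are mine and purely illustrative; the weight is just a stand-in for how much receptor protein the downstream cell has installed at a synapse.

```python
import numpy as np

# Minimal Hebbian plasticity sketch (illustrative, not a biophysical model).
# w[i] is the "strength" of synapse i -- a stand-in for how much receptor
# protein the postsynaptic cell has installed at that synapse.

rng = np.random.default_rng(0)
n_synapses = 5
w = np.full(n_synapses, 0.2)   # initial synaptic weights
eta = 0.01                     # learning rate

for _ in range(300):
    pre = (rng.random(n_synapses) < 0.5).astype(float)  # presynaptic spikes this step
    post = float(pre[0] == 1.0 and pre[1] == 1.0)        # cell fires when inputs 0 and 1 are co-active
    w += eta * pre * post                                # Hebb: strengthen synapses active with the cell
    w = np.clip(w, 0.0, 1.0)                             # keep weights bounded

print(w.round(2))  # the two co-active synapses (0 and 1) end up strongest
```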

And that was pretty much the story so far. But it turns out that there is a new wrinkle: neurons can directly ship mRNAs into each other in a virus-like fashion, avoiding the need for receptors altogether. There is a gene called Arc which is involved in many different pieces of the plasticity puzzle. Looking at the sequence of the gene, it turns out that a portion of the code creates a virus-like structure that can encapsulate RNAs and burrow through other cells’ membranes. The RNA is then released into the other cell. And this mechanism works: Arc-mediated signaling actually causes strengthening of synapses.

Who would have believed this? That the building blocks for little machines are being sent directly into another cell? If classic synaptic transmission is kind of like two cells talking, this is like just stuffing someone else’s face with food or drugs. This isn’t in the standard repertoire of how we think about communication; this is more like an intentional mind-virus.

There is this story in science about how the egg was traditionally perceived to be a passive receiver during fertilization. In reality, eggs are able to actively choose which sperm they accept – they have a choice!

The standard way to think about neurons is as somewhat passive. Yes, they can excite or inhibit the neurons they communicate with but, at the end of the day, they are passively relaying whatever information they contain. This is true not only of biological neurons but also of artificial neural networks. The neuron at the other end of the system is free to do whatever it wants with that information. Perhaps a reconceptualization is in order. Are neurons more active at persuasion than we had thought? Not just a selfish gene but selfish information from selfish neurons? Each neuron, less interested in maintaining its own information than in maintaining – directly or homeostatically – properties of the whole network? Neurons do not simply passively transmit information: they actively try to guide it.

Brain Prize 2016

The Brain Prize, a thing I don’t think I knew existed, just gave €1,000,000 to three neuroscientists for their work on long-term potentiation (LTP). As with most prizes, the best part is the motivation to go back and read classic papers!

The best winner was Richard Morris because he kind of revolutionized the memory field with this figure:

[Figure: the Morris water maze]

Yes, he created the Morris Water Maze, used to study learning and memory in a seemingly-infinite number of papers.

[Figure: water maze]

When was the last time you went back and actually read the original Morris Water Maze paper? I know I had not ever read it before today: but I should have.

No less important was the work of Timothy Bliss (and Terje Lomo, who did not win) illustrating the induction of LTP. Most of us have probably heard “neurons that fire together, wire together” and this is the first real illustration of the phenomenon (in 1973):

[Figure: LTP induction]

Bliss and Lomo were able to induce long-lasting changes in the strength of connections between two neurons by a “tetanic stimulation protocol”. The above figure is seared into my brain from my first year of graduate school, where Jeff Isaacson dragged us through paper after paper that used variations on this protocol to investigate the properties of LTP.

The final winner was Graham Collingridge who demonstrated that hippocampal LTP was induced via NMDA receptors. I don’t think this was the paper that demonstrated it, but I always found his 1986 paper on slow NMDA receptors quite beautiful:

[Figure: NMDA receptors and LTP]

Here, he has blocked NMDA receptors with APV and sees no spiking after repeated stimulation. When the blocker is washed out, spiking appears only after the cell has received several inputs, because of the slow timescale of the receptors.

While historically powerful, the focus on NMDA receptors can be misleading: LTP can be induced in many different ways depending on the cell type and brain region! For my money, I have always been a fan of the more generalized form, spike-timing-dependent plasticity (STDP). Every neuroscientist should read and understand the Markram et al. (1997) paper that demonstrates it and the Bi and Poo (1998) paper that has this gorgeous figure:

[Figure: the spike-timing-dependent plasticity window from Bi and Poo (1998)]
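
If you have never worked through it, the STDP rule itself fits in a few lines. Here is a minimal sketch of the usual exponential timing window; the amplitudes and time constants are illustrative values of my own, not numbers taken from the Bi and Poo paper.

```python
import numpy as np

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of spike timing (illustrative parameters).

    delta_t_ms = t_post - t_pre. Pre-before-post (positive delta_t) potentiates,
    post-before-pre (negative delta_t) depresses, each decaying exponentially
    with the time difference.
    """
    if delta_t_ms > 0:
        return a_plus * np.exp(-delta_t_ms / tau_plus)    # potentiation side
    return -a_minus * np.exp(delta_t_ms / tau_minus)      # depression side

# The characteristic asymmetric curve: big changes near zero lag, shrinking as |delta_t| grows.
for dt in [-80, -40, -10, -1, 1, 10, 40, 80]:
    print(f"dt = {dt:+4d} ms  ->  dw = {stdp_dw(dt):+.5f}")
```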

Read about the past, and remember where your science came from.

Psychohydraulics

[Figure: Lorenz’s psychohydraulic model]

On Twitter, @mnxmnkmnd pointed me to Lorenz’s model of ‘psychohydraulics’ as a theory of behavior. Wut?

From a book chapter (I can’t figure out which book):

Lorenz introduced the (artificial) concept of an action-specific energy, accumulating in a tank with a valve. In this model, the level of action-specific energy is raised as a result of the passage of time (if the behavior is not being executed), leading to the eventual opening of the valve, and the flow of action-specific energy into a bucket with several holes on different levels, representing different aspects of the behavior in question. The flow of action-specific behavior into the bucket can also be increased by external factors, represented by weights on a scale, connected to the valve by means of a string. As the energy flows into the bucket, the low-threshold parts of the behavior are immediately expressed, and higher-threshold aspects are expressed if the level of energy reaches sufficiently high. Before proceeding with a simple set of equations for this model, one should note that the modern view of motivation is more complex than the simple feedback model just described.

Wut? Here’s some equations, because that makes everything easier to understand:

[Figure: equations for the psychohydraulic model]

Remember, they’re talking about animal motivation. This is what happens when you win a Nobel prize.
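
For what it’s worth, here is what that verbal description boils down to if you take it literally: a toy simulation of the tank-and-valve picture. The variable names, thresholds, and rates are all mine, invented for illustration; this is not Lorenz’s actual formalism.

```python
# Toy simulation of the "psychohydraulic" picture described in the quote above.
# Action-specific energy accumulates over time; external stimuli add pull on the
# valve; once the drive is high enough, low-threshold components of the behavior
# are expressed first, then higher-threshold ones. All numbers are invented.

def simulate(steps=40, accumulation=1.0, stimulus=0.0, drain=0.5):
    energy = 0.0                       # level in the reservoir
    thresholds = {"orient": 2.0,       # low-threshold component appears first
                  "approach": 6.0,
                  "full display": 12.0}
    for t in range(steps):
        energy += accumulation                     # builds up while the behavior is not expressed
        drive = energy + stimulus                  # internal pressure plus the external "weight" on the valve
        expressed = [name for name, th in thresholds.items() if drive >= th]
        energy = max(0.0, energy - drain * len(expressed))   # expression drains the reservoir
        if t % 10 == 0:
            print(f"t={t:2d}  energy={energy:5.1f}  expressed={expressed}")

simulate(stimulus=0.0)   # the behavior eventually appears even with no stimulus
simulate(stimulus=5.0)   # a strong external stimulus brings it out much sooner
```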

Here is more explanation and digressions.

Orangutan facts

They’re surprisingly smart:

“They say that if you give a chimpanzee a screwdriver, he’ll break it; if you give a gorilla a screwdriver, he’ll toss it over his shoulder; but if you give an orangutan a screwdriver, he’ll open up his cage and walk away.”

At Camp Leakey, the orangutans had plenty of opportunity to observe and imitate people. They soon developed a habit of stealing canoes, paddling them downriver, and abandoning them at their destinations. Even triple and quadruple knots in the ropes securing the canoes to the dock did not deter the apes. Over the years, they have also learned to brush their teeth, bathe themselves, wash clothes, weed pathways, wield saws and hammers, and soak rags in water in order to cool their foreheads with them. And they have done all of this without any instruction.

They’re also social:

But it turns out that adult female relatives stick together: they have overlapping ranges and periodically interact. “I grew up in rural Saskatchewan,” Russon, who now works and teaches at York University, in Toronto, told me. “And, for me, that is exactly what orangutan social life is like. There are communities, but they are very broadly dispersed. It might be fifteen miles to your cousin’s place, or another twenty miles to the next nearest relative, but everybody knows everybody.” Adolescent orangutans—curious and audacious—regularly make new friends. These wandering youngsters, vaulting from one tree to the next, are likely the torchbearers of orangutan culture.

Here is a paper on the social behavior of orangutans:

As they grew older males increasingly spent less time making physical contact, but the amount of time they spent in proximity (within arm’s length) to others increased. Adult females regularly played with other group members. Contact, allogrooming, and social play showed nonrandom relationships between individuals. Adult females showed the most allogrooming and contact, adolescent and subadult males the most play. There was no obvious dominance hierarchy. One adult male spent about 10% of his time walking around the perimeter of the island. One-year-old infants rarely interacted with other individuals apart from their own and the other infant’s mother. While orangutans lead relatively solitary lives in nature, it was concluded that the opportunities for social contact and play provided by the SZG orangutan island were beneficial to this species in captivity.

[Photo by George]

Why do we language?

Aeon has an article on how the genetics that contribute to language are actually part of a much larger system:

But over the years, it became clear that the truth about language origins was not quite as simple as a “language gene” or well-defined language module. Further study revealed that the FOXP2 gene is relevant to multiple mental abilities and is not strictly a language gene at all. In a 2009 paper, for example, Max Planck Institute geneticist Wolfgang Enard exploited the fact that just three amino acids distinguish the human version of the FOXP2 protein from that of mice. When he engineered the FOXP2 genes of mice to produce proteins with the two human FOXP2 amino acids, it resulted in functional differences in brain areas critical for carrying out fine motor tasks and controlling muscle movements, as well as altered function in regions involved in sending and receiving reward signals.

The same gene that regulated language so strongly also regulated other mental faculties, so its very existence appeared to contradict rather than strengthen the idea that language commands its own territory separate from other areas of the brain. As Enard points out, the language-as-island idea is also inconsistent with the way evolution typically works. “What I don’t like about the ‘module’ is the idea that it evolved from scratch somehow. In my view, it’s more that existing neural circuits have been adapted for language and speech.”

In a great commentary on mice humanized with FOXP2, Bjoern Brembs makes a similar point:

It adds weight to the so-called ‘motor-learning hypothesis’ that came up some time around 2006/7 or thereabout. This hypothesis posits that FoxP2 is mainly involved in the motor, or speech component of language, i.e., learning to control the muscles in the lips, tongue, voice chords, etc. in order to articulate syllables and words. The movements of these organs have to become stereotypic in order to reliably produce understandable language and the main experimental paradigms for this stereotypization of behavior (independent of language) have been procedural learning and habit formation. This work provides further evidence that indeed FoxP2 is an important component of the learning process that leads to automatic, stereotypic behavior.

In particular, it suggests that FoxP2 is involved in the control of the process of stereotypization, i.e., at what point the behavior shifts from being flexible, to becoming more rigid. Until this work, the evidence from vertebrates and invertebrates has pointed to FoxP genes to be involved in the automatization of behavior. Now, this evidence is extended to also – at least in mammals – include the negotiation process, which I don’t think anybody had on the radar thus far.

Besides the genetics, anthropology can help us understand why language evolved. From a perspective piece on an exciting article about talking around campfires:

[Figure: how many hours we need to talk per day]

The longstanding assumption, dating back at least a century, has been to assume that language evolved to facilitate the transmission of technical knowledge (“this is how you make an arrowhead”), a view that has been generalized more recently to encompass the social transmission of cultural knowledge (again, mainly with a directly ecological purpose). An alternative view has been that language evolved, at least in the first instance, to facilitate community bonding (to allow more effective communal solutions to ecological problems).

In fact, Wiessner’s data suggest that fire and language may be more closely related than conventional views assume. Whatever may have been the original reason why humans acquired control over fire, it seems that it came to play a central role in two crucial respects. First, it effectively extended the active day…

Stories are important in all societies because they provide the framework that holds the community together: we share this set of cultural knowledge because we are who we are, and that is why we are different from the folks that live over the hill.

And look at what we talk about during the day vs at night:

[Figure: what we talk about during the day vs. at night]

What has neuroscience done for machine intelligence? (Updated)

Today on the twitters, Michael Hendricks asked, “Why do AI people bother with how animal brains work? Most good inventions work by doing things totally unlike how an animal would.”

The short answer is that animal brains can already solve the problems that AI researchers want to solve; so why not look into how they are accomplishing it?

The long answer is that the algorithms we ultimately use may end up being dramatically different – but we need a starting point somewhere. Looking at some of the algorithms that have a neural inspiration, it is clear that thinking about how the nervous system works has helped machine learning/AI researchers come up with solutions to their problems:

  1. Neural networks. In the 1940s and 50s, McCulloch, Pitts, and Hebb all contributed to modeling how a nervous system might work. In some sense, neural nets are trapped in this 1940s view of the nervous system; but why not? At an abstract level, it’s close…ish.
  2. Deep learning. Currently the Hot Shit in machine learning, these are like “neural networks 2.0”. Some quick history: traditionally, neural networks were done one layer at a time, with strict feedforward connectivity. One form of recurrent neural network, proposed by Hopfield, can be used to memorize patterns, or create ‘memories’. A variant on this, proposed by (computational neuroscientist) Terry Sejnowski and Geoff Hinton, is the Boltzmann machine. If you combine multiple layers of Boltzmann machines with ideas from biological development, you get Deep Learning (and you publish it in the journal Neural Computation!).
  3. Independent Component Analysis. Although this story is possibly apocryphal, one of the earliest algorithms for computing ICA was developed – by Tony Bell and Terry Sejnowski (again) – by thinking about how neurons maximize their information about the physical world.
  4. Temporal difference learning. To quote from the Scholarpedia page: “This line of research work began with the exploration of Klopf’s 1972 idea of generalized reinforcement which emphasized the importance of sequentiality in a neuronal model of learning”
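
That last one is easy to make concrete. Below is a minimal tabular TD(0) sketch on a toy chain of states; the task and parameters are mine, purely for illustration, and the prediction-error term delta is the heart of the algorithm.

```python
import numpy as np

# Minimal tabular TD(0) sketch on a toy 5-state chain (illustrative, not from any paper).
# The agent drifts left to right and receives a reward of +1 only at the final state.
# V learns to predict that upcoming reward earlier and earlier along the chain.

n_states = 5
V = np.zeros(n_states)      # value estimates
alpha, gamma = 0.1, 0.9     # learning rate, discount factor

for episode in range(200):
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # TD error: reward received plus discounted value of the next state,
        # minus what we predicted from the current state.
        delta = reward + gamma * V[s_next] - V[s]
        V[s] += alpha * delta
        s = s_next

print(V.round(2))   # values rise toward the rewarded end of the chain
```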

Additionally, companies like Qualcomm and the Brain Corporation are attempting to use ideas from spiking neural networks to make much more energy efficient devices.

In the other direction, neuroscientists sometimes find that the brain appears to be implementing already-known ML algorithms (see this post on Nicole Rust). Many ideas and many biological specifics will be useless – but the hope of research is to find the tiny fraction of an idea that is useful for a new problem.

Updated:

Over on reddit, downtownslim offers two more examples:

Neocognitron was the foundation for the ConvNet. Fukushima came up with the model, LeCun figured out how to train it.

Support Vector Machines: This last one is quite interesting; not many people outside the neural computation community know that support vector machines were influenced by the neural network community. They were originally called Support Vector Networks.

What is intelligence?

You may have heard that a recent genome-wide association study (GWAS) found three genes for heritable intelligence, though with tiny effects. There was a great quote in a Nature News article on the topic:

“We haven’t found nothing,” he says.

Yeah, you don’t want that to be your money quote.

Kevin Mitchell has been tweeting about the study – I hope he storifies it! – and linked to an old post of his suggesting that the genetics of intelligence are really the genetics of stupidity: it’s not that these genes are making you smarter, but that they’re making you less dumb (as I gather, a lot of evidence suggests that ‘intelligence’ is related to overall health.)

Anyway, the SNPs that the GWAS identified are in KCNMA1, NRXN1, POU2F3, and SCRT, which are all involved in glutamate neurotransmission. This is always troubling to my tiny brain, because I never quite understand how ‘intelligence’ works. People like to think that it is some kind of learning, so if we can just learn better we’ll be smarter. And that’s what the authors of the article hint at.

But how does that even make sense? Learning faster is, in a way, like being hyperreactive to the world. There’s a reason that overlearning (overfitting) is a problem in machine learning! There is presumably an optimal rate of learning that evolution has stuck us with. So is the supposition that being more reactive to the world, more conforming to its stimuli, is somehow good? Or is it that the modern world favors it whereas historically it would not have? Or what?

Addiction and free will

[Figure: free will]

Bethany Brookshire, aka Scicurious, has an awesome article on how we think of addiction:

None of these views are wrong. But none of them are complete, either. Addiction is a disorder of reward, a disorder of learning. It has genetic, epigenetic and environmental influences. It is all of that and more. Addiction is a display of the brain’s astounding ability to change — a feature called plasticity  — and it showcases what we know and don’t yet know about how brains adapt to all that we throw at them.

…Addiction involves pleasure and pain, motivation and impulsivity. It has roots in genetics and in environment. Every addict is different, and there are many, many things that scientists do not yet know. But one thing is certain: The only overall explanation for addiction is that the brain is adapting to its environment. This plasticity takes place on many levels and impacts many behaviors, whether it is learning, reward or emotional processing.  If the question is how we should think of addiction, the answer is from every angle possible.

But this Aeon piece on addiction is still stuck on the mind-body problem:

In an AA meeting, such setbacks are often seen as an ego out of control, a lack of will. Yet research describes a powerful chemical inertia that can begin early in life. In 96.5 per cent of cases, addiction risk is tied to age; using a substance before the age of 21 is highly predictive of dependence because of the brain’s vulnerability during development. And childhood trauma drives substance use in adolescence. A study of 8,400 adults, published in 2006 in the Journal of Adolescent Health, found that enduring one of several adverse childhood experiences led to a two- to three-fold increase in the likelihood of drinking by age 14.

…Her multiple relapses, according to recent science, are no ethical or moral failing – no failure of will. Instead, they are the brain reigniting the neurological and chemical pathways of addiction.

Is will not the result of chemicals, or do we believe in souls again? Here is a recent interview with Daniel Dennett on neuroscience and free will:

Given that we now know — and can even perturb — some of the brain mechanisms of morality, and we see perhaps more clearly than ever that this is biological, what are the implications for blame, credit and free will to us, to everyday people?

First, it’s no news that your mind is your brain, and that every decision you make and every thought you have and every memory you recall is somehow lodged in your brain and involves brain activity. Up until now, we haven’t been able to say much more than that. Now, it’s getting to the point where we can. But it has almost no implications for morality and free will.

…Somebody wrote a book called ‘My Brain Made Me Do It,’ and I thought, ‘What an outrageous title! Unless it’s being ironic.’ Of course my brain made me do it! What would you want, your stomach to make you do it?

If you said, ‘My mind made me do it,’ then people would say, ‘Yes, right.’ In other words, you’re telling me you did this on purpose, you knew what you were doing. Well, if you do something on purpose and you know what you’re doing and you did it for reasons good, bad or indifferent, then your brain made you do it, of course. It doesn’t follow that you were not the author of that deed. Why? Because you are your embodied brain.

What should we be allowed to forget?

Should we be dampening the emotional aspect of memory?

Two decades ago, scientists began to wonder if they could weaken traumatic memories by suppressing the hormonal rush that accompanies their formation. They turned to propranolol, which was already on the market as a treatment for hypertension and blocks the activity of hormones like epinephrine and norepinephrine….Next, in 2002, neuroscientists reported that emergency room patients who took propranolol within 6 hours of a traumatic event were less likely to experience the heightened emotions and arousal associated with PTSD one month later, compared with people who took placebos.

The hitch was that in order to interfere with memory consolidation, propranolol needed to be given within hours of a trauma, long before doctors knew whether someone would go on to develop PTSD. But around the same time, studies began to show that memories can once again become fragile when they are recalled…Perhaps, researchers hypothesized, propranolol could weaken emotional memories if PTSD patients took the drug after they conjured up the details of a painful experience. By blocking the effects of norepinephrine and epinephrine upon recall, propranolol might dampen down activity in the amygdala and disrupt reconsolidation.

I liked this comment someone left on the article:

If the memory of my trauma were to be removed, I would make no sense to myself.

If we could edit our memories, what is important to who we are? Is there a threshold of pain beyond which we should not be forced to endure a memory for our entire lives? What was adaptive one hundred thousand years ago may not be adaptive in modern society.

Chimps stick grass in their ears to be cool: notes on cultural transmission

[Figure: grass-in-ear behavior]

1. In 2010, a female chimpanzee named Julie began repeatedly stuffing a stiff blade of grass into her ear. This Grass-in-ear behavior has affectionately been dubbed “GIEB” by the scientists who observed it.

2. Out of a group of twelve chimpanzees, eight engaged in GIEB. In three other groups of chimpanzees found in other locations in the same forest, only one was ever seen to GIEB.

3. The more that an individual associated with Julie, the more likely they were to GIEB.

4. After the inventor of GIEB died – if one could be said to invent a thing like putting grass in one’s ear – two chimpanzees continued to engage in the activity. They were never seen to do it together, let alone put grass in the other one’s ear.

5. A young monkey named Imo once noticed that her sweet potatoes were covered in sand and that if she dunked them in the water they would become clean. Within a few years, every monkey on her island was dunking sweet potatoes. She later learned that if she dunked them in the ocean instead, plunging them in after every bite, they would taste even better. The lesson? Monkeys love seasoned potatoes.

[Figure: Japanese macaque stone handling]

6. Some Japanese macaques like to play with stones, clicking and clacking them, rolling them along their hands, cuddling or pushing or throwing them. This was first invented by a young female monkey named Glance in 1979. Her playmates learned it first, followed by theirs. What began as transmission among friends has transformed into transmission among generations: now babies learn it from their mothers.

7. There are at least 11 mutations of this stone handling behavior, including “Rub With Hands”, “Grasp Walk”, and “Flinting”. These variations appear to be transmitted between tribes of monkeys when males migrate from one to another. Additionally, each generation appears to add complexity as each individual inadvertently contributes some new idea.

8. Monkeys are not the only animals with social transmission of ideas; many other animals have it too, though it may not necessarily be for the best. When young guppies are learning where to eat, they follow an older fellow to a source of food. Slowly, they learn from the older guppy which route to take to their food. As time goes on, one guppy learns from another and a route is set. However, this can be maladaptive when there is a faster route available: guppies follow the group even if they know there is a quicker way.

9. One can digitize animals and ask how their theoretical equivalents toss around cultural traits. What causes these electronic cultures to die out? Simple: small groups, high mortality, poor transmission, and costly traits. Prestigious traits, or traits with group consensus, die out just as quickly as any other. In other words, a culture held in high esteem is just as mortal as any other.

10. The connections between members of a group aren’t uniformly random. Instead, they tend to form small worlds, where any two members are just a few steps away from each other. Thank you, Kevin Bacon. In any random network, as the number of connections reaches half the number of members, a “percolation” process causes many small groups to begin congealing into one large group (there is a small simulation sketch after this list). Much can be learned about sociality and culture using these ideas.

11. It is possible to classify social learning mechanisms in ten distinct ways: stimulus enhancement, local enhancement, observational conditioning, social enhancement of food preferences, response facilitation, social facilitation, contextual imitation, production imitation, observational response-stimulus learning, and emulation.

12. A computer tournament revealed that even indiscriminate copying is often better than trial-and-error learning. Copied individuals tended to be performing the best behavior available to them, and the better the behavior, the more likely they were to survive. Thus, survival itself made observed behavior a non-random sample of the best behavior. Individuals were themselves highly useful filters of information, waiting to be copied.
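
To see the percolation effect from note 10, here is a small simulation sketch (standard library only, my own toy construction): add random connections to a group one at a time and watch the largest connected cluster take off once the number of connections passes roughly half the number of members.

```python
import random

# Grow a random network one connection at a time and track the biggest connected
# cluster, using a simple union-find. Around edges ~ members/2 (average degree ~1),
# a "giant" cluster starts to emerge. Parameters are arbitrary illustrations.

def largest_cluster_over_time(n_members=1000, max_edges=1500, seed=1):
    random.seed(seed)
    parent = list(range(n_members))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    history = {}
    for edge in range(1, max_edges + 1):
        a, b = random.sample(range(n_members), 2)
        parent[find(a)] = find(b)           # merge the two clusters
        if edge % 250 == 0:
            counts = {}
            for i in range(n_members):
                root = find(i)
                counts[root] = counts.get(root, 0) + 1
            history[edge] = max(counts.values())
    return history

for edges, biggest in largest_cluster_over_time().items():
    print(f"{edges:4d} connections -> largest cluster: {biggest:4d} of 1000 members")
```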

References

Huffman, M., Nahallage, C., & Leca, J. (2008). Cultured monkeys: Social learning cast in stones. Current Directions in Psychological Science, 17(6), 410–414. DOI: 10.1111/j.1467-8721.2008.00616.x

van Leeuwen, E., Cronin, K., & Haun, D. (2014). A group-specific arbitrary tradition in chimpanzees (Pan troglodytes). Animal Cognition. DOI: 10.1007/s10071-014-0766-8

Laland, K., & Williams, K. (1998). Social transmission of maladaptive information in the guppy. Behavioral Ecology, 9(5), 493–499. DOI: 10.1093/beheco/9.5.493

Nunn, C., Thrall, P., Bartz, K., Dasgupta, T., & Boesch, C. (2009). Do transmission mechanisms or social systems drive cultural dynamics in socially structured populations? Animal Behaviour, 77(6), 1515–1524. DOI: 10.1016/j.anbehav.2009.02.023

Stocker, R., Green, D. G., & Newth, D. (2001). Consensus and cohesion in simulated social networks. Journal of Artificial Societies and Social Simulation, 4(4).

Rendell, L., Fogarty, L., Hoppitt, W. J., Morgan, T. J., Webster, M. M., & Laland, K. N. (2011). Cognitive culture: Theoretical and empirical insights into social learning strategies. Trends in Cognitive Sciences, 15(2), 68–76. PMID: 21215677