aka what scientists really care about. Priorities, people, priorities.
There is a genetic basis to the food that we enjoy eating. Some people – whom I call strange people – think cilantro has a strange, soapy taste, at least partially because of a particular polymorphism in an odor receptor gene (OR6A2).
The question of why we enjoy certain foods and flavors is not solely a genetic one, but also a conceptual one. Take the question of why we like spicy food. Other animals do not: they will eat spicy food, but they would rather not, thanks. If you ask people what the best spiciness level is, they will tell you that it is whatever sits right below their pain threshold. A smidgen too much and it is unbearably hot; a smidgen too little and it is as bland as they come. The molecule that gives food its spiciness is capsaicin, which stimulates the same receptors that signal the warmth of food. It is possible, then, that our taste for spice is a byproduct of our adaptation to prefer cooked food: food that has been roasted is digested more quickly and provides more calories.
But knowledge of genetics can give us insight into those we do not have direct experience with. We now have genomic sequence data from one Denisovan and two Neanderthals. Do they experience food similarly to modern humans?
In many ways, yes. One change that probably occurred after the invention of cooking is a reduction in certain masticatory muscles: once you can cook, your need to chew really, really hard is reduced. A gene responsible for this, MYH16, is expressed in chimpanzees (no fire) but not in humans (plenty of fire). It turns out that MYH16 is also not expressed in the Denisovan and Neanderthal samples.
We can also look at a taste receptor such as TAS2R38, which responds to phenylthiocarbamide (PTC). This is a flavor that, depending on your genetic makeup, you either cannot taste at all or find intensely bitter. There is variation across populations: 98% of people indigenous to the Americas can taste it, while only 42% of those indigenous to Australia and New Guinea can. Interestingly, chimpanzees can also taste it, but they do so in a different manner.
Neither the Denisovan nor the Neanderthals had the human mutation that allows PTC-tasting. But that shouldn’t have stopped them from tasting it: one of the Neanderthals had a mutation on the gene that differs from both the human and the chimpanzee versions. This is convergent evolution at work, people.
Even more interestingly, AMY1 is the gene responsible for the enzyme that starts the digestion of starch. Starch accounts for something like 70% of the calories in human agricultural populations. The more copies of this gene we have, the more of the enzyme we have in our saliva. Chimpanzees have two copies; humans have around six or seven. And these Denisovans and Neanderthals? Only two!
You are what you eat, and what you eat is influenced by what you are. It’s pretty fun that we can get at what a Neanderthal enjoyed eating by looking at the genetics of their taste receptors…
Perry, G., Kistler, L., Kelaita, M., & Sams, A. (2015). Insights into hominin phenotypic and dietary evolution from ancient DNA sequence data. Journal of Human Evolution. DOI: 10.1016/j.jhevol.2014.10.018
This month neuroscience lost one of its great masters: Vernon B. Mountcastle, who first discovered the columnar organization of the cerebral cortex. His pioneering work won many prizes and laid the foundations for much contemporary research in the field (including my PhD). Many excellent articles have already been written about him, but I wanted to pay my personal tribute to this great explorer of the brain. Here is how he would have appeared in Neurocomic, reaching new peaks of scientific discovery:
UCSD started one of the first (the first?) computational neuroscience departments. But when I started graduate school there, it was being folded into the general Neuroscience department; now it is just a specialization within the department. Why? Because we won. Because people who used to be computational neuroscientists are now just – neuroscientists. I could tell there was a change at UCSD when people trained in electrical engineering instead of biology didn’t even feel the need to join the specialization. What used to be a unique skill is becoming more and more common.
I have been thinking about this for the last few days after news trickled out about acceptances and rejections at Cosyne (note: I did not submit an abstract to the Cosyne main meeting.) The rejection rate this year was around 40%. Think about this for a minute: nearly half of the people who had wanted to present what they had been working on to their peers were not able to do so.
Now, people go to conferences for a wide variety of reasons. Some go to socialize, some to hear talks, some for a vacation. But the most important reason is to communicate your new research to your peers. And it’s a serious problem when half of the community just can’t do that.
Cosyne fills the very important role of bringing together the Computational and Systems fields of neuroscience (hence, CoSyNe). But when it was founded in 2004, this was not a big group of people. Perhaps the field has just gotten too big to accommodate everyone in one medium-sized conference; either the conference must grow or people need to flee to more specialized grounds – and repeat the process of growth and rebirth.
At dinner recently, I mentioned that it may be time for some smaller conferences to split off from Cosyne. Heads nodded in agreement; it’s not just me being contrary. There are other computational conferences – CNS, NIPS, SAND, RLDM. But none of them reside in the niche of Cosyne, none of them bring together experimentalists and theorists in the same way. The closest is RLDM which occupies a kind of intersection of Cosyne and Machine Learning. (edit: there is also CodeNeuro, though I don’t yet have a sense of the community there.)
We need more of that.
Put your money where your mouth is, as they say:
The goal of this competition is to facilitate the derivation of models that can capture the classical choice anomalies (including Allais, St. Petersburg, and Ellsberg paradoxes, and loss aversion) and provide useful forecasts of decisions under risk and ambiguity (with and without feedback).
The rules of the competition are described in http://departments.agri.huji.ac.il/cpc2015. The submission deadline is May 17, 2015. The prize for the winners is an invitation to be a co-author of the paper that summarizes the competition (the first part can be downloaded from http://departments.agri.huji.ac.il/economics/teachers/ert_eyal/CPC2015.pdf)…
Our analysis of these 90 problems (see http://departments.agri.huji.ac.il/cpc2015) shows that the classical anomalies are robust, and that the popular descriptive models (e.g., prospect theory) cannot capture all the phenomena with one set of parameters. We present one model (a baseline model) that can capture all the results, and challenge you to propose a better model.
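To give a feel for the kind of descriptive model the competition is about, here is a minimal sketch of prospect theory's value and probability-weighting functions, using the parameter estimates from Tversky & Kahneman (1992). This is my own illustrative code in the simpler separable form (original prospect theory, not full cumulative weighting), not the competition's baseline model; all function names are mine.

```python
def value(x, alpha=0.88, lam=2.25, beta=0.88):
    """Prospect-theory value function (Tversky & Kahneman 1992 estimates):
    concave for gains, convex and steeper (loss-averse) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(outcomes):
    """Subjective value of a gamble given (outcome, probability) pairs.
    Separable form: weights each outcome independently."""
    return sum(weight(p) * value(x) for x, p in outcomes)

# Loss aversion with one fixed parameter set: a 50/50 shot at +100/-100
# looks strictly worse than getting nothing for sure
print(prospect_value([(100, 0.5), (-100, 0.5)]))  # negative
print(prospect_value([(0, 1.0)]))                 # 0.0
```

The competition's point is precisely that one fixed parameter set like this fails somewhere across the 90 problems; the challenge is to find a model that doesn't.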
There was a competition recently that asked people to predict seizures from EEG activity; the public blew the neuroscientists out of the water. How will the economists do?
“Register” by April 1. The submission deadline is May 17!
Left or right? Apple or orange? Selma or Birdman? One way to make these decisions is precisely what intuition tells us it should be: we weigh up the pros and cons of each choice. Then, when we have sufficient evidence for one over the other, we go ahead and make that choice.
How this might be represented in the brain is quite straightforward: the firing of neurons would go up or down as evidence for one choice or the other became clear and, when the firing had reached some fixed threshold – when the neurons had fired enough – a decision would be made.
The difficulty has been in figuring out precisely where the information is being encoded; in determining which neurons were increasing their activity in line with the evidence. In fact, multiple regions seem to be participating in the process.
So let us say that you are a little rodent who hears sound from the left and the right; little click click clicks. And you need to decide which side the most clicks are coming from. Every click on one side gives you a smidgen of evidence that that side will have the most, while a click on the other side will make it less likely. You don’t know when the clicks will end – so you have to stay ready.
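A toy version of this task makes the accumulator idea concrete. The sketch below is not the actual model from the Brody lab papers (which fits sensory noise, leak, and click adaptation); it just draws random clicks on each side at different rates and tallies them, deciding from the sign of the total when the stimulus ends:

```python
import random

def simulate_trial(rate_left=20, rate_right=30, duration=0.5, dt=0.001):
    """Tally clicks: +1 per right click, -1 per left click. The decision is
    the sign of the total when the stimulus ends (no fixed bound, since the
    animal cannot know when the clicks will stop)."""
    evidence = 0
    for _ in range(int(duration / dt)):
        if random.random() < rate_left * dt:   # a left click this millisecond
            evidence -= 1
        if random.random() < rate_right * dt:  # a right click this millisecond
            evidence += 1
    return "right" if evidence > 0 else "left"

random.seed(1)
choices = [simulate_trial() for _ in range(1000)]
print(choices.count("right") / len(choices))  # well above 0.5: 30 Hz beats 20 Hz
```

Even this crude tally gets most trials right, which is why the interesting question is not whether accumulation works but where in the brain the running total lives.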
Now there are two interesting areas of the brain that we can look at: the frontal orienting fields (FOF), which probably guide how you will orient your little snout (to the left? to the right?), and the posterior parietal cortex (PPC), which integrates diverse information from throughout the brain. Here is what the activity of these neurons looks like if you plot how fast the neurons are firing, separated out by ‘accumulated value’ (how much evidence you have for one side or another; I will refer to this as left or right but it is actually more like ipsilateral or contralateral):
It looks like PPC, the cortical integrator, fires progressively faster the more evidence the animal has to go left, and progressively slower the more evidence it has to go right. In other words, it is exactly the evidence accumulator we had been hoping for. The orienting region (FOF) has a different pattern, though. Its firing is separated into two clusters: low if there is a lot of evidence to go left, and high if there is a lot of evidence to go right. In other words, it is prepared to make a decision any second, like a spring ready to be released.
It is interesting that this is implemented by sharpening how tightly tuned neurons in each region are for the decision, going from something like a linear response to something more like a step function:
This is consistent with an idea from Anne Churchland’s lab that the PPC is integrating information from diverse sources to provide evidence for many different decisions. This information can then be easily ‘read out’ by drawing a straight line to separate the left from the right, a task that is trivial for a nervous system to accomplish in one step – say, from PPC to FOF.
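That readout idea can be sketched in a few lines. Everything here is hypothetical (the units, tuning slopes, and readout weights are all made up for illustration): a population with graded, roughly linear tuning to accumulated evidence can be turned into a categorical, step-like signal by a single weighted sum and threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
# 50 hypothetical PPC-like units whose firing is a random linear function
# of the accumulated evidence (left-favoring evidence < 0 < right-favoring)
slopes = rng.normal(size=50)
evidence = np.linspace(-10, 10, 201)
ppc_rates = np.outer(evidence, slopes)      # graded, roughly linear responses

# One linear readout step (weights matched to the tuning slopes) converts the
# graded PPC-like code into a categorical, FOF-like low/high signal
readout = ppc_rates @ slopes
fof_like = (readout > 0).astype(float)      # step function of evidence

print(fof_like[:5], fof_like[-5:])  # all 0s (leftward), then all 1s (rightward)
```

The design point is the one in the paragraph above: separating left from right in a linearly tuned population only takes a straight line, i.e., one synaptic step.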
And yet – there are mysteries. You could test the idea that FOF provides the information for left or right by removing it, or by just silencing it. If it were standing ready to make a decision, only the most recent neural activity would matter. Indeed, ablating the region or silencing it for just a couple hundred milliseconds has the same effect: biasing the decision to the left or right. But it is only a bias – the information for the decision is still in the system somewhere!
Even more baffling is that the FOF begins to respond about 100 milliseconds after hearing a click – but PPC doesn’t start responding until 200 milliseconds after a click. So how is FOF getting the information? Is FOF actually sending the information to PPC?
Decisions are hard. It is not a “step 1. hear information, step 2. add up pro/cons, step 3. make decision” kind of process. A linear 1, 2, 3 would be too simple for the real world. There are many different areas of the brain getting information, processing it and adding their own unique twist, sending their evidence to other areas, and processing it again. Again: even simple decisions are hard.
Hanks, T., Kopec, C., Brunton, B., Duan, C., Erlich, J., & Brody, C. (2015). Distinct relationships of parietal and prefrontal cortices to evidence accumulation. Nature. DOI: 10.1038/nature14066
Brunton, B., Botvinick, M., & Brody, C. (2013). Rats and Humans Can Optimally Accumulate Evidence for Decision-Making. Science, 340 (6128), 95-98. DOI: 10.1126/science.1233912
And science steadily advances. It was only last July that I posted a video showing the activity of all the neurons in a brain. But that animal was stuck in place – not moving freely (though it was in virtual reality).
Jeffrey Nguyen and Andrew Leifer just uploaded their manuscript detailing their work imaging the whole brain of an animal that is freely moving. The animal is just locomoting around like nobody is their boss. That’s important as a lot of evidence points to neural activity being different when an animal is restrained and when it is allowed to move of its own volition. This technical feat is particularly exciting because the animal is C. elegans, which means that we know how all of the neurons are connected (we have the connectome). Here’s a video:
What you are seeing is a wormlike animal bend its nose from right to left (see the green lines moving out from the center mass? Those are processes sent to the sensory receptors at the very tip of the nose of the animal). I assume the animal is moving during this, but the whole image is stabilized.
There’s a great new Machine Learning podcast out called Talking Machines. They only have two episodes out but they are quite serious. They have traveled to NIPS and interviewed researchers, they have discussed A* sampling, and more.
On the most recent episode, they interviewed Ilya Sutskever on Deep Learning. He had two interesting things to say.
First, that DL works well (now) partially because we have figured out the appropriate initialization conditions: weights between units should be small, but not too small (specifically, the eigenvalues of the weight matrix should be ~1). This is what allows backpropagation to work. Given that real neural networks don’t use backprop, how much thought should neuroscientists give to this? We know that homeostasis and plasticity keep things in a balanced range – you don’t want epilepsy, after all.
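To see why that scaling matters, here is a quick sketch (my own illustration, not anything from the interview): for an n-by-n matrix of i.i.d. Gaussian weights, the largest eigenvalue magnitude grows like the standard deviation times sqrt(n), so dividing the standard deviation by sqrt(n) keeps it near 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Naive i.i.d. unit-variance weights: largest |eigenvalue| grows like sqrt(n)
w_bad = rng.normal(0.0, 1.0, (n, n))

# Scaling the standard deviation by 1/sqrt(n) puts the spectral radius near 1,
# so repeated applications neither explode nor shrink activity (or gradients)
w_good = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))

def spectral_radius(w):
    return np.abs(np.linalg.eigvals(w)).max()

print(spectral_radius(w_bad))   # roughly sqrt(500), i.e. around 22
print(spectral_radius(w_good))  # roughly 1
```

Repeatedly multiplying activity (or backpropagated error) by a matrix with spectral radius far from 1 makes signals explode or die out, which is the intuition behind “small, but not too small.”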
Second, that recurrence in artificial networks is mostly interesting for temporal sequences. Recurrent connections – such as those to the thalamus – always seem to be understudied (or at least, I don’t pay enough attention to them).
Invisibilia is the other great new podcast that may interest people. It’s been pushed pretty heavily by NPR and the first two episodes are generally pretty good.
The first focused on thoughts – in particular, thoughts about thoughts. They told the story of Martin Pistorius, a man who, through sickness, went into a coma only to wake up four years later – but locked in. He couldn’t move. Slowly he gained control of his body, but it took years before he could communicate. All he could do was sit there – and think. Now that he’s out, dude is about as smart and funny as they come, and all I want to do is read his book. But his case has got to be about as close to a “brain in a vat” experiment as you can get, right?
The second episode interviewed a patient of Damasio’s who cannot feel fear because of Urbach-Wiethe disease. The disease slowly calcifies, and hence lesions, the amygdala. So this woman could feel fear at one point – but now can’t. My biggest thought: how does she remember fear? What does it ‘feel’ like if you don’t have the circuitry to generate it?
Here’s a great set of experiments that they performed on her:
SPIEGEL: Somewhere in her teens, somewhere between the catfish and walking into Damasio’s office, SM’s ability to experience fear just slowly faded out and the world around her became benign, a place populated by people and things that only seemed to wish her well. Damasio and the other scientists who have studied SM know this because they’ve done all kinds of tests that prove it’s true. They’ve exposed her to the most terrifying animals that they could find, snakes.
DAMASIO: She had to be restrained from playing with the ones that would actually be quite dangerous to her.
SPIEGEL: They’ve tried to condition a fear response into her by randomly assaulting her with the sound of a loud, jarring horn – nothing. She just seems emotionally blind to the experience of fear.
And here’s the story of her life:
SM: OK. I was walking to the store, and I saw this man on a park bench. He said, come here please. So I went over to him. I said, what do you need? He grabbed me by the shirt, and he held a knife to my throat and told me he was going to cut me. I told him – I said, go ahead and cut me. And I said, I’ll be coming back, and I’ll hunt your ass. Oops. Am I supposed to say that? I’m sorry.
TRANEL: That’s OK. It’s an intense situation. How did you feel when that happened?
SM: I wasn’t afraid. And for some reason, he let me go. And I went home.
Anyway, the podcast is worth a listen and may have some ideas that I’ll come back to and post about.