#sfn16 starts tomorrow in San Diego

I know many of you will be there! I am giving a talk on Sunday morning at 8am. Since everyone will be fresh-faced and excited for the conference, I am sure it will not be a problem to be there that early, right? Right? The talk will be in room SDCC 30B, on a new method for unsupervised analysis of behavior which I am really, really excited about (we have gone beyond what the abstract says, so ignore that).


Illusions are life

[Image: the grey-and-white crosses illusion]

Just adding the right combination of grey and white crosses really screws things up, doesn’t it? It seems likely that the illusion comes from the perceived illumination (it probably helps that these are essentially Gabors).

There’s a nice reminder in Science this week that we are not the only animals subject to illusions – here is one in yeast (from the abstract):

We systematically monitored growth of yeast cells under various frequencies of oscillating osmotic stress. Growth was severely inhibited at a particular resonance frequency, at which cells show hyperactivated transcriptional stress responses. This behavior represents a sensory misperception—the cells incorrectly interpret oscillations as a staircase of ever-increasing osmolarity. The misperception results from the capacity of the osmolarity-sensing kinase network to retrigger with sequential osmotic stresses. Although this feature is critical for coping with natural challenges—like continually increasing osmolarity—it results in a tradeoff of fragility to non-natural oscillatory inputs that match the retriggering time.

In other words, a very non-natural stimulus – a periodic change in salt concentration – leads the yeast to instead ‘see’ a constant increase in the concentration. Pretty cool.
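To make the 'staircase' misperception concrete, here is a minimal sketch of the idea (my own toy model, not the paper's: I assume a sensor that responds only to increases in osmolarity and retriggers on every rising edge):

```python
import numpy as np

def perceived_osmolarity(osmolarity):
    """Toy 'retriggering' sensor: it responds to every increase in osmolarity
    but ignores decreases, so an oscillating input is read out as an
    ever-increasing staircase rather than as a wave."""
    perceived = np.zeros_like(osmolarity)
    for t in range(1, len(osmolarity)):
        rise = max(osmolarity[t] - osmolarity[t - 1], 0.0)  # only rising edges count
        perceived[t] = perceived[t - 1] + rise
    return perceived

# Square-wave osmotic stress whose period matches the sensor's retriggering time.
time = np.arange(200)
osmolarity = (np.sin(2 * np.pi * time / 16) > 0).astype(float)
print(perceived_osmolarity(osmolarity)[-1])  # keeps climbing with every cycle
```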

(via Kyle Hill)

Rethinking fast and slow

Everyone except homo economicus knows that our brains have multiple processes to make decisions. Are you going to make the same decision when you are angry as when you sit down and meditate on a question? Of course not. Kahneman and Tversky have famously reduced this to ‘thinking fast’ (intuitive decisions) and ‘thinking slow’ (logical inference) (1).

Breaking these decisions up into ‘fast’ and ‘slow’ makes it easy to design experiments that can disentangle whether people use their lizard brains or their shiny silicon engines when making any given decision. Here’s how: give someone two options, let’s say a ‘greedy’ option or an ‘altruistic’ option. Now simply look at how long it takes them to choose each option. Is it fast or slow? Congratulations, you have successfully found that greed is intuitive while altruism requires a person to sigh, restrain themselves, think things over, clip some coupons, and decide on the better path.

This method actually is a useful way of investigating how the brain makes decisions; harder decisions really do take longer to be processed by the brain, and we have the neural data to prove it. But there’s the rub. When you make a decision, it is not simply a matter of intuitive versus deliberative. It is also a matter of how hard the question is. And this really depends on the person. Not everyone values money in the same way! Or even in the same way at different times! I really want to have a dollar bill on me when it is hot, humid, and I am in front of a soda machine. I care about a dollar bill a lot less when I am at home in front of my fridge.

So let’s go back to classical economics; let’s pretend we can measure how much someone values money with a utility curve. Measure everyone’s utility curve and find their indifference point – the point at which they don’t care about making one choice over the other. Now you can ask about the relative speed. If you make each decision 50% of the time but one decision is still faster, then you can say something about the relative reaction times and ways of processing.
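A rough sketch of that logic, with invented data and my own helper names (this is not the authors’ analysis code, just the shape of the argument): estimate each subject’s indifference point from their choice curve, then compare reaction times only on trials near that point, where difficulty is matched.

```python
import numpy as np

def indifference_point(offers, chose_selfish):
    """Find the offer at which this subject picks the 'selfish' option half the
    time, by interpolating their choice curve (assumes choices get more selfish
    as the offer grows)."""
    offers, chose_selfish = np.asarray(offers, float), np.asarray(chose_selfish, float)
    levels = np.unique(offers)
    p_selfish = np.array([chose_selfish[offers == x].mean() for x in levels])
    return np.interp(0.5, p_selfish, levels)

def rt_near_indifference(offers, chose_selfish, rts, window=1.0):
    """Mean reaction time for selfish vs. altruistic choices, restricted to
    trials within `window` of the subject's own indifference point."""
    ip = indifference_point(offers, chose_selfish)
    offers, rts = np.asarray(offers, float), np.asarray(rts, float)
    near, selfish = np.abs(offers - ip) <= window, np.asarray(chose_selfish, bool)
    return rts[near & selfish].mean(), rts[near & ~selfish].mean()
```

If the two reaction times still differ once difficulty is matched this way, that difference is fair to interpret; if not, the original ‘fast = intuitive’ inference was confounded.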

[Figure: dictator game, fast vs. slow]

And what do you find? In some of the classic experiments – nothing! People make each decision equally often and equally quickly! Harder decisions require more time, and that is what is being measured here. People have heterogeneous preferences, and you cannot accurately measure decisions without taking this into account subject by subject. No one cares about the population average: we only care what an individual will do.

[Figure: temporal discounting, fast vs. slow]

But this is a fairly subtle point. This simple one-dimensional metric – how fast you respond to something – may not be able to rule out the possibility that those who use their ‘lizard brain’ simply have a greater utility for money (this is where brain imaging would come in to save the day).

No one is arguing that there are not multiple systems of decision-making in the brain – some faster and some slower, some that will come up with one answer and some that will come up with another. But we must be very, very careful when attempting to measure which is fast and which is slow.

(1) this is still ridiculously reductive but still miles better than the ‘we compute utility this one way’ style of thinking

Reference

Krajbich, I., Bartling, B., Hare, T., & Fehr, E. (2015). Rethinking fast and slow based on a critique of reaction-time reverse inference. Nature Communications, 6. DOI: 10.1038/ncomms8455

How well do we understand how people make choices? Place a bet on your favorite theory

Put your money where your mouth is, as they say:

The goal of this competition is to facilitate the derivation of models that can capture the classical choice anomalies (including Allais, St. Petersburg, and Ellsberg paradoxes, and loss aversion) and provide useful forecasts of decisions under risk and ambiguity (with and without feedback).

The rules of the competition are described in http://departments.agri.huji.ac.il/cpc2015. The submission deadline is May 17, 2015. The prize for the winners is an invitation to be a co-author of the paper that summarizes the competition (the first part can be downloaded from http://departments.agri.huji.ac.il/economics/teachers/ert_eyal/CPC2015.pdf)…

Our analysis of these 90 problems (see http://departments.agri.huji.ac.il/cpc2015) shows that the classical anomalies are robust, and that the popular descriptive models (e.g., prospect theory) cannot capture all the phenomena with one set of parameters. We present one model (a baseline model) that can capture all the results, and challenge you to propose a better model.

There was a competition recently that asked people to predict seizures from EEG activity; the public blew the neuroscientists out of the water. How will the economists do?

Register by April 1. The submission deadline is May 17!

 

How do we integrate information?

Left or right? Apple or orange? Selma or Birdman? One way to make these decisions is precisely what intuition tells us it should be: we weigh up the pros and cons of each choice. Then, when we have sufficient evidence for one over the other, we go ahead and make that choice.

How this could be represented in the brain is quite straightforward: the firing of neurons goes up or down as evidence for one choice or the other accumulates and, when the firing reaches some fixed threshold – when the neurons have fired enough – a decision is made.
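That verbal model is essentially a drift-diffusion process, and it is easy to simulate. A minimal sketch (the parameters are illustrative, not fit to any experiment):

```python
import numpy as np

def accumulate_to_bound(drift=0.1, noise=1.0, bound=10.0, dt=1.0,
                        max_steps=10_000, rng=None):
    """Integrate noisy evidence until it hits +bound ('choose A') or -bound
    ('choose B'); return the choice and the number of steps taken (decision time)."""
    rng = np.random.default_rng() if rng is None else rng
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if abs(evidence) >= bound:
            return ("A" if evidence > 0 else "B"), step
    return None, max_steps   # ran out of time without committing

# Weaker evidence (smaller drift) gives both more errors and longer decision times.
trials = [accumulate_to_bound(drift=0.05) for _ in range(1000)]
print(np.mean([c == "A" for c, _ in trials]),   # accuracy
      np.mean([t for _, t in trials]))          # mean decision time
```

Lower the drift (weaker evidence) and decisions get both slower and less accurate – which is the sense in which harder decisions take longer.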

The difficulty has been in figuring out precisely where this information is encoded – in determining which neurons increase their activity in line with the evidence. In fact, multiple regions seem to participate in the process.

So let us say that you are a little rodent who hears sound from the left and the right; little click click clicks. And you need to decide which side the most clicks are coming from. Every click on one side gives you a smidgen of evidence that that side will have the most, while a click on the other side will make it less likely. You don’t know when the clicks will end – so you have to stay ready.

Now there are two interesting areas of the brain that we can look at: the frontal orienting fields (FOF), which probably guide how you will orient your little snout (to the left? to the right?), and the posterior parietal cortex (PPC), which integrates diverse information from throughout the brain. Here is what the activity of these neurons looks like if you plot how fast the neurons are firing, separated out by ‘accumulated value’ (how much evidence you have for one side or the other; I will refer to this as left or right, but it is actually more like ipsilateral or contralateral):

[Figure: firing rate by accumulated value, PPC vs. FOF]

It looks like PPC, the cortical integrator, fires progressively faster the more evidence the animal has to go left, and progressively slower the more evidence it has to go right. In other words, it is exactly the evidence accumulator we had been hoping for. The orienting region (FOF) has a different pattern, though. Its firing is separated into two clusters: low if there is a lot of evidence to go left, and high if there is a lot of evidence to go right. In other words, it is prepared to make a decision any second, like a spring ready to be released.

It is interesting that this is implemented by sharpening how tightly tuned neurons in each region are for the decision, going from something like a linear response to something more like a step function:

[Figure: accumulated value vs. firing rate]

This is consistent with an idea from Anne Churchland’s lab that the PPC is integrating information from diverse sources to provide evidence for many different decisions. This information can then be easily ‘read out’ by drawing a straight line to separate the left from the right, a task that is trivial for a nervous system to accomplish in one step – say, from PPC to FOF.
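To see why that read-out is plausibly a one-step operation, here is a cartoon of the two coding schemes (my own toy illustration, not the authors’ model): a graded, roughly linear code for accumulated evidence, and a categorical, step-like code that can be produced from it with a single weighted threshold.

```python
import numpy as np

def ppc_like(evidence, gain=1.0):
    """Graded code: firing rate rises roughly linearly with accumulated evidence."""
    return gain * evidence

def fof_like(evidence, slope=20.0):
    """Categorical code: a steep sigmoid, close to a step function around zero."""
    return 1.0 / (1.0 + np.exp(-slope * evidence))

def one_step_readout(ppc_rate, weight=20.0, threshold=0.0):
    """A single thresholded weighted sum of the graded signal reproduces the
    step-like signal - the 'straight line' separating left from right."""
    return 1.0 / (1.0 + np.exp(-weight * (ppc_rate - threshold)))

evidence = np.linspace(-1, 1, 9)   # net clicks favoring right (+) vs. left (-)
print(np.allclose(one_step_readout(ppc_like(evidence)), fof_like(evidence)))  # True
```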

And yet – there are mysteries. You could test the idea that FOF provides the information for left or right by removing it or just silencing it. If it were standing ready to make a decision, you would only care about the most recent neural activity. Indeed, ablating the region or just silencing it for a couple hundred milliseconds has the same effect: biasing the decision to the left or right. But it is only a bias – the information for the decision is still in the system somewhere!

Even more baffling is that the FOF begins to respond about 100 milliseconds after hearing a click – but PPC doesn’t start responding until 200 milliseconds after a click. So how is FOF getting the information? Is FOF actually sending the information to PPC?

Decisions are hard. It is not a “step 1. hear information, step 2. add up pro/cons, step 3. make decision” kind of process. A linear 1, 2, 3 would be too simple for the real world. There are many different areas of the brain getting information, processing it and adding their own unique twist, sending their evidence to other areas, and processing it again. Again: even simple decisions are hard.

References

Hanks, T., Kopec, C., Brunton, B., Duan, C., Erlich, J., & Brody, C. (2015). Distinct relationships of parietal and prefrontal cortices to evidence accumulation. Nature. DOI: 10.1038/nature14066

Brunton, B., Botvinick, M., & Brody, C. (2013). Rats and humans can optimally accumulate evidence for decision-making. Science, 340(6128), 95-98. DOI: 10.1126/science.1233912

Talking Machines (part 2: The Animals – Invisibilia)

Invisibilia is the other great new podcast that may interest people. It’s been pushed pretty heavily by NPR and the first two episodes are generally pretty good.

The first focused on thoughts – in particular, thoughts about thoughts. They told the story of Martin Pistorius, a man who, through sickness, went into a coma only to wake up four years later – but locked in. He couldn’t move. Slowly he gained control of his body but it took years before he could communicate. All he could do was sit there – and think. Now that he’s out, dude is about as smart and funny as they come, and all I want to do is read his book. But his case has got to be about as close to a “brain in a vat” experiment as you can get, right?

I don’t know anything about the cognitive science of thinking, but here are two places that might get you started.

The second episode interviewed a patient of Damasio’s who cannot feel fear because of Urbach-Wiethe disease. The disease slowly calcifies, and hence lesions, the amygdala. So this woman could feel fear at one point – but now can’t. My biggest thought: how does she remember fear? What does it ‘feel’ like if you don’t have the circuitry to generate it?

Here’s a great set of experiments that they performed on her:

SPIEGEL: Somewhere in her teens, somewhere between the catfish and walking into Damasio’s office, SM’s ability to experience fear just slowly faded out and the world around her became benign, a place populated by people and things that only seemed to wish her well. Damasio and the other scientists who have studied SM know this because they’ve done all kinds of tests that prove it’s true. They’ve exposed her to the most terrifying animals that they could find, snakes.

DAMASIO: She had to be restrained from playing with the ones that would actually be quite dangerous to her.

SPIEGEL: They’ve tried to condition a fear response into her by randomly assaulting her with the sound of a loud, jarring horn – nothing. She just seems emotionally blind to the experience of fear.

And here’s the story of her life:

SM: OK. I was walking to the store, and I saw this man on a park bench. He said, come here please. So I went over to him. I said, what do you need? He grabbed me by the shirt, and he held a knife to my throat and told me he was going to cut me. I told him – I said, go ahead and cut me. And I said, I’ll be coming back, and I’ll hunt your ass. Oops. Am I supposed to say that? I’m sorry.

TRANEL: That’s OK. It’s an intense situation. How did you feel when that happened?

SM: I wasn’t afraid. And for some reason, he let me go. And I went home.

Anyway, the podcast is worth a listen and may have some ideas that I’ll come back to and post about.

The brain-in-itself: Kant, Schopenhauer, cybernetics and neuroscience

Artem Kaznatcheev pointed me to this article on Kant, Schopenhauer, and cybernetics (emphasis added):

Kant introduced the concept of the thing-in-itself for that which will be left of a thing if we take away everything that we can learn about it through our sensations. Thus the thing-in-itself has only one property: to exist independently of the cognizant subject. This concept is essentially negative; Kant did not relate it to any kind or any part of human experience. This was done by Schopenhauer. To the question ‘what is the thing-in-itself?’ he gave a clear and precise answer: it is will. The more you think about this answer, the more it looks like a revelation. My will is something I know from within. It is part of my experience. Yet it is absolutely inaccessible to anybody except myself. Any external observer will know about myself whatever he can know through his sense organs. Even if he can read my thoughts and intentions — literally, by deciphering brain signals — he will not perceive my will. He can conclude about the existence of my will by analogy with his own. He can bend and crush my will through my body, he can kill it by killing me, but he cannot in any way perceive my will. And still my will exists. It is a thing-in-itself.

Let us examine the way in which we come to know anything about the world. It starts with sensations. Sensations are not things. They do not have reality as things. Their reality is that of an event, an action. Sensation is an interaction between the subject and the object, a physical phenomenon. Then the signals resulting from that interaction start their long path through the nervous system and the brain. The brain is a tremendously complex system, created for a very narrow goal: to survive, to sustain the life of the individual creature, and to reproduce the species. It is for this purpose and from this angle that the brain processes information from sense organs and forms its representation of the world.

In neuroscience, what is the thing-in-itself when it comes to the brain? What is ‘the will’? Perhaps this is straining the analogy, but what do you have when you take away the sensory input and look at what directs movement and action? The rumbling, churning activity of the brain: the dynamics which are scaffolded by transcription of genes and experience with the environment. That which makes organisms more than a simple f(sensation) = action.

Then as neuroscience advances and we learn more about how the dynamics evolve, how genetic variation reacts to the environment – does the brain-in-itself become more constrained, more knowable? In a certain sense, ‘will’ is qualia; but in another it is that which feels uncaused but is in reality a consequence of our physical life. Will is not diminished by its predictability.

Just some thoughts from a snowy day before Thanksgiving. But Kant and Schopenhauer are worth thinking about…

Behavior is as much about environment as it is about cognition

Over at TalkingBrains, Greg Hickok points to a review on embodied cognition that has several neat examples of how distinct behavior arises just by placing an agent in the correct environment:

Robots with two sensors situated at 45 degree angles on the robot’s “head” and a simple program to avoid obstacles detected by the sensors will after a while tidy a room full of randomly distributed cubes into neat piles:

and

Female crickets need to find male crickets to breed with. Females prefer to breed with males who produce the loudest songs… Female crickets have a pair of eardrums, one on each front leg, which are connected to each other via a tube. It so happens that the eardrums connect to a small number of interneurons that control turning; female crickets always turn in the direction specified by the more active interneuron. Within a species of cricket, these interneurons have a typical activation decay rate. This means that their pattern of activation is maximized by sounds with a particular frequency. Male cricket songs are tuned to this frequency, and the net result is that, with no explicit computation or comparison required, the female cricket can orient toward the male of her own species producing the loudest song. The analysis of task resources indicates that the cricket solves the problem by having a particular body (eardrum configuration and interneuron connections) and by living in a particular environment (where male crickets have songs of particular frequencies).

(Emphasis added.)

This, of course, is a perfect example of why we need ethology in order to understand the nervous system – behaviors only make sense in the context of the ecology that they operate in!
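The cricket example above practically writes itself as a Braitenberg-style controller: two ears, two interneurons with a species-typical decay rate, and a rule to turn toward the more active side. Here is a minimal sketch under those assumptions (the tuning function and all the numbers are invented for illustration):

```python
import math

def song_drive(song_period, decay_rate):
    """How strongly an interneuron with a given decay rate is driven by a song
    with a given inter-pulse period (toy resonance: maximal when the period
    matches 1/decay_rate)."""
    return math.exp(-abs(song_period - 1.0 / decay_rate))

def cricket_turn(left_level, right_level, left_period, right_period, decay_rate=20.0):
    """Turn toward whichever side's interneuron is more active. Loudness and
    species-typical song period both matter, with no explicit comparison step."""
    left = left_level * song_drive(left_period, decay_rate)
    right = right_level * song_drive(right_period, decay_rate)
    return "left" if left > right else "right"

# A quieter conspecific song (period 0.05) beats a louder heterospecific one (period 1.0).
print(cricket_turn(left_level=0.6, right_level=1.0,
                   left_period=0.05, right_period=1.0))   # -> "left"
```

The ‘computation’ of which song is conspecific never appears anywhere in the controller; it falls out of the match between the interneurons’ decay rate and the male’s song.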

Psychohydraulics

[Image: Lorenz’s psychohydraulic model]

On twitter, @mnxmnkmnd pointed me to Lorenz’ model of ‘psychohydraulics‘ as a theory of behavior. Wut?

From a book chapter (I can’t figure out which book):

Lorenz introduced the (artificial) concept of an action-specific energy, accumulating in a tank with a valve. In this model, the level of action-specific energy is raised as a result of the passage of time (if the behavior is not being executed), leading to the eventual opening of the valve, and the flow of action-specific energy into a bucket with several holes on different levels, representing different aspects of the behavior in question. The flow of action-specific behavior into the bucket can also be increased by external factors, represented by weights on a scale, connected to the valve by means of a string. As the energy flows into the bucket, the low-threshold parts of the behavior are immediately expressed, and higher-threshold aspects are expressed if the level of energy reaches sufficiently high. Before proceeding with a simple set of equations for this model, one should note that the modern view of motivation is more complex than the simple feedback model just described.

Wut? Here’s some equations, because that makes everything easier to understand:

[Image: equations for the psychohydraulic model]
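For a feel of what those equations amount to, here is a toy discretization of the tank-and-valve description above (my own sketch, not Lorenz’s or the chapter’s actual equations): action-specific energy builds while the behavior is not performed, a valve opens in proportion to stored energy times external stimulus strength, and behavior components with different thresholds are released as the outflow rises.

```python
def psychohydraulic_step(energy, stimulus, buildup=1.0, valve_gain=0.1,
                         thresholds=(1.0, 2.0, 4.0)):
    """One time step of a toy 'hydraulic' motivation model: energy accumulates
    while the behavior is not expressed, the valve opens in proportion to stored
    energy x external stimulus, and lower-threshold behavior components are
    expressed first as the outflow rises."""
    energy += buildup                         # energy builds with the passage of time
    outflow = valve_gain * energy * stimulus  # valve opened jointly by internal level and stimulus
    energy = max(energy - outflow, 0.0)       # expressing the behavior drains the reservoir
    expressed = [i for i, threshold in enumerate(thresholds) if outflow >= threshold]
    return energy, outflow, expressed

energy = 0.0
for t in range(50):
    stimulus = 0.8 if t == 49 else 0.0        # a weak stimulus arrives only after a long wait
    energy, outflow, expressed = psychohydraulic_step(energy, stimulus)
print(expressed)  # -> [0, 1, 2]: after long deprivation even a weak stimulus releases everything
```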

Remember, they’re talking about animal motivation. This is what happens when you win a Nobel prize.

Here is more explanation and digressions.

Emodiversity: why a mix of emotions is good for you

Neuroskeptic covered a paper (pdf) that postulates that it is healthiest to have a mix of emotions:

It turns out that emotional diversity was a good thing (in terms of being associated with less depression etc.) for both positive and for negative emotions. This seems a little counter-intuitive. You might have expected that feeling many negative emotions would be worse than only feeling one of them – but in fact, it’s better.

The authors speculate that it could be due to the same resilience that biodiversity confers. They also suggest:

experiencing many different specific emotional states (e.g., anger, shame, and sadness) may have more adaptive value than experiencing fewer and/or more global states (e.g., feeling bad), as these specific emotions provide richer information about which behavior in one’s repertoire is more suited for dealing with a given affective situation
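If you want to put a number on ‘emodiversity’, the natural choice is a biodiversity-style (Shannon) index over how often each emotion is experienced – a minimal sketch, not necessarily the paper’s exact formula:

```python
import math

def emodiversity(emotion_counts):
    """Shannon-style diversity over how often each emotion was experienced:
    higher when many different emotions occur in relatively even proportions."""
    total = sum(emotion_counts.values())
    probs = [count / total for count in emotion_counts.values() if count > 0]
    return -sum(p * math.log(p) for p in probs)

# Feeling many specific negative emotions scores higher than one global 'bad' state.
print(emodiversity({"anger": 3, "shame": 4, "sadness": 3}))   # ~1.09
print(emodiversity({"feeling bad": 10}))                      # 0.0
```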

One way to think about this is to begin by asking, what are emotions for? Emotions provide instant and powerful information when making decisions. They have access to long-term experience as well as the internal state of the animal (think: HANGRY!). Famously, patients with mPFC damage are ‘overly-logical’ and, consequently, make very poor decisions; there’s just not enough information in the world to make Spock-like decisions all the time!

In a review we published recently (pdf), we discussed the possibility that emotions are a way of properly responding to information in the world. When you’re in a good mood, you’re more responsive to positive stimuli. Conversely, when you’re in a bad mood, you’re more responsive to negative stimuli. Therefore, if you want to respond to the world optimally, you’ll need the right mix of moods for the right environment: emodiversity.