The replicability of science

  1. What is the importance of a single experiment to science? Very little.
  2. A massive study attempted to replicate 100 psychology experiments. 36% still had ‘significant’ p-values and 47.4% still had a similar effect size.
  3. Replicability of science.
  4. This is a description of one attempt at replication:

    “The study he finally settled on was originally run in Germany. It looked at how ‘global versus local processing influenced the way participants used priming information in their judgment of others.’… Reinhard had to figure out how to translate the social context, bringing a study that ran in Germany to students at the University of Virginia. For example, the original research used maps from Germany. “We decided to use maps of one of the states in the US, so it would be less weird for people in Virginia,” he said…Another factor: Americans’ perceptions of aggressive behavior are different from Germans’, and the study hinged on participants scoring their perceptions of aggression. The German researchers who ran the original study based it on some previous research that was done in America, but they changed the ratings scale because the Germans’ threshold for aggressive behavior was much higher…Now Reinhard had to change them back — just one of a number of variables that had to be manipulated.”

  5. In the simplest “hypothesis-test” experiment, everything is held constant except one key parameter (word color; monetary endowment; stereotype threat). In reality, this is never true. You arrive at a laboratory tired, bored, stressed, content.
  6. Experiments are meant to introduce major variations into already noisy data. The hope is that these variations are larger than anything else that occurs during the experiment. Are they?
  7. Even experiments that replicate can quite often turn out to be false.
  8. Even experiments that fail to be replicated can turn out to have grains of truth.
  9. The important regression is the likelihood of replication given n independent replications already in existence. If 66% of experiments fail to replicate when contained in only one publication, what percent fail when contained in two? Three? Science is a society, not the sum of individuals.
  10. Given time and resource constraints, what is the optimal number of experiments that you expect to replicate? What if it should be lower?

Optogenetics patents that I did not realize existed

  1. Use of biological photoreceptors as directly light-activated ion channels (Bamberg, Hegemann, Nagel)
  2. Light-activated cation channel and uses thereof (Deisseroth, Boyden)
  3. Use of light sensitive genes [for the manufacture of a medicament for the treatment of blindness and a method for expressing said cell specific fashion] (Balya, Lagali, Muench, Roska; Novartis)
  4. Channelrhodopsins for optical control of cells (Klapoetke, Chow, Boyden, Wong, Cho)
  5. Heterologous stimulus-gated ion channels and methods of using same [especially TRPV1, TRPM8 or P2X2] (Miesenbock, Zemelman)
  6. Optically-controlled CNS dysfunction (Tye, Fenno, Deisseroth)
  7. Optogenetic control of reward-related behaviors (Deisseroth, Witten)
  8. Control and characterization of memory function (Goshen, Deisseroth)

10 years of neural opsins

Just in time for Nature Neuroscience’s Optogenetics 10-year anniversary retrospective, Ed Boyden has announced the first (?) time the FDA has approved optogenetics for human testing.

The set of retrospective pieces that NN published are quite interesting. For instance:

(Deisseroth) It seems unlikely that the initial experiments described here would have been fundable, as such, by typical grant programs focusing on a disease state, on a translational question, or even on solidly justified basic science…In this way, progress over the last ten years has revealed not only much about the brain, but also something about the scientific process.

(Boyden) The study, which originated in ideas and experiments generated by Karl Deisseroth and myself, collaborating with Georg Nagel and Ernst Bamberg and later with the assistance of Feng Zhang, was not immediately a smash hit. Rejected by Science, then Nature, the discovery perhaps seemed too good to be true. Could you really just express a single natural algal gene, channelrhodopsin-2 (ChR2), in neurons to make them light-activatable?

These are from the history and future of optogenetics summaries, respectively. Many people looking back on it had similar thoughts:

(Josselyn) I thought the data were interesting, but likely not replicable and definitely not generalizable. I thought optogenetics would not work reliably and, even if it did, the technique would be so complicated as to be out of reach for most neuroscience labs. My initial impression was that optogenetics would be highly parameter-sensitive and would take lots of fiddling to get any kind of effect. I was definitely in the camp that didn’t think it would have an impact on my kind of neuroscience.

Think about their perspective at the time:

So why did it take time to develop and apply methods for placing these proteins into different classes of neurons in behaving animals? As mentioned above, the development of optogenetics was a biological three-body problem in which it was hard to resolve (or, even more importantly, to motivate attempts to resolve) any one of the three challenges without first addressing the other components. For example, microbial rhodopsin photocurrents were predicted to be exceedingly small, suggesting a difficult path forward even if efficient delivery and incorporation of the all-trans retinal chromophore were possible in adult non-retinal brain tissue, and even in the event of safe and correct trafficking of these evolutionarily remote proteins to the surface membrane of complex metazoan neurons. For these weak membrane conductance regulators to work, high gene-expression and light-intensity levels would have to be attained in living nervous systems while simultaneously attaining cell-type specificity and minimizing cellular toxicity. All of this would have to be achieved even though neurons were well known to be highly vulnerable to (and often damaged or destroyed by) overexpression of membrane proteins, as well as sensitive to side effects of heat and light. Motivating dedicated effort to exploration of microbial opsin-based optical control was difficult in the face of these multiple unsolved problems, and the dimmest initial sparks of hope would turn out to mean a great deal.

And the important thing to remember:

(Soltesz) But what made the rise of optogenetics so fast? I believe it was more than just the evident usefulness of the technology itself. Indeed, in my opinion, it is to the credit of Deisseroth and Boyden that they had recognized early that by freely sharing the reagents and methods they can make optogenetics as much of a basic necessity in neuroscience labs as PCs, iPhones and iPads came to be in the lives of everyday citizens. This is a part of their genius that made optogenetics spread like wildfire. The open-source philosophy that they adopted stands in stark contrast to numerous other techniques where the developers tightly control all material and procedural aspects of their methodology for short-term gain, which in most, albeit not all, cases has proven to be a rather penny-wise, pound-foolish attitude in the long run.

 

Go read this Q&A with many of the pioneers of the field. Stay through to the end of the “What was your first reaction when optogenetics came onto the scene 10 years ago?” question at least.

Here is the original paper. Don’t forget that Miesenbock’s group had optogenetics work that preceded Boyden et al. (2005) but never quite “made it”.

Unrelated to all that, 8/15 edition

This picture shows the top referring sites to someone’s latest study.


(The study is here)

A questionable use of animals


This is definitely a game that I would want to play


‘Crime and Punishment’ to become a board game

“Literature lovers will soon be able to retrace Raskolnikov’s steps around St. Petersburg by playing a new board game.”

The real question: why?!

The Obscure Neuroscience Problem That’s Plaguing VR

But there’s another, less obvious flaw that could add to that off-kilter sensation: an eye-focusing problem called vergence-accommodation conflict. It’s only less obvious because, well, you rarely experience it outside of virtual reality…

Okay okay, so what’s the big deal with the vergence-accommodation conflict? Two things happen when you simply “look” at an object. First, you point your eyeballs. If an object is close, your eyes naturally converge on it; if it’s far, they diverge. Hence, vergence. If your eyes don’t line up correctly, you end up seeing double.

The second thing that happens is the lenses inside your eyes focus on the object, aka accommodation. Normally, vergence and accommodation are coupled…Strap on an Oculus Rift or Samsung Gear VR, though, and all bets are off.

The Neuron’s Secret Partner

When we speak of brain cells we usually mean neurons: those gregarious, energetic darlings of cell biology that intertwine their many branches in complex webs and constantly crackle with their own electric chatter. But neurons make up only half the cells in the brain. The rest, known as neuroglia or simply glia, have long lived in the neuron’s shadow.

Welcome to Liberland, the World’s Newest Country (Maybe)

‘‘No, no, you cannot,’’ the ambassador said. ‘‘It is not safe, it is impossible to take Uber here, you will not be safe. There is no time, and we must take the Metro. It will be quick. We do not even have to change the train.’’

The President relented. But he would draw the official line at being filmed carrying his own bag. Pillar, however, would not be arriving in Paris for another seven hours; the President’s assistant had once again failed to check him in, and he had been left at Heathrow.

The President turned to me with a look of anguish. He understood that this was a violation of propriety, but he also very strongly did not want to be filmed by French television carrying his own bag. I felt sorry for him and accepted a short tenure as his bag man.

Jedlicka was briefly mollified, but he was still very embarrassed to be seen on camera taking the Metro at all. Perhaps to compensate for the indignity, Boitel proceeded to cut a very long line of people to buy tickets, claiming that he was traveling on urgent official business.

Red algae are the “also-ran” of evolution. How come they didn’t make it?

Though they are by far the most diverse seaweeds in the ocean, they rarely occur in freshwater and never on land, and so almost no one has ever heard of them (though if you’ve ever eaten sushi, you’ve certainly had an intimate red algal encounter).

Why this might be has long been a mystery. But a team of European scientists discovered in 2013 that they have shockingly few genes for a multicellular organism – far fewer even than several single-celled green algae. And this may explain why such a diverse and abundant group of algae never packed their bags for land and why, when you look outside your window, you see a sea of green and not red. What happened to the poor red algae?

This Man Has Been Trying to Live Life as a Goat


I have nothing to say about this.

How a Kalman filter works, in pictures

You can use a Kalman filter in any place where you have uncertain information about some dynamic system, and you can make an educated guess about what the system is going to do next. Even if messy reality comes along and interferes with the clean motion you guessed about, the Kalman filter will often do a very good job of figuring out what actually happened. And it can take advantage of correlations between crazy phenomena that you maybe wouldn’t have thought to exploit!

Kalman filters are ideal for systems which are continuously changing. They have the advantage that they are light on memory (they don’t need to keep any history other than the previous state), and they are very fast, making them well suited for real time problems and embedded systems.
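To make the predict-and-update loop concrete, here is a minimal 1-D sketch in Python/numpy (my own toy example with made-up noise parameters, not code from the linked post):

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter sketch (toy example).
# State x = [position, velocity]; we only measure position.

dt = 1.0
F = np.array([[1, dt],
              [0, 1]])          # state transition: position += velocity*dt
H = np.array([[1, 0]])          # measurement picks out position
Q = 0.01 * np.eye(2)            # process noise covariance (assumed)
R = np.array([[1.0]])           # measurement noise covariance (assumed)

x = np.array([[0.0], [1.0]])    # initial state estimate
P = np.eye(2)                   # initial state uncertainty

def kalman_step(x, P, z):
    # Predict: propagate the estimate and its uncertainty forward in time
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with measurement z, weighted by the
    # Kalman gain (more weight to whichever is less uncertain)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Noisy measurements of an object moving at ~1 unit per step
for t in range(1, 11):
    z = np.array([[t + np.random.randn()]])
    x, P = kalman_step(x, P, z)
    print(f"t={t:2d}  estimated position={x[0,0]:5.2f}  velocity={x[1,0]:5.2f}")
```

The whole trick is in the gain K: it automatically weights the prediction against the new measurement according to their respective uncertainties.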

Detecting Betrayal in Diplomacy Games

Working from these linguistic cues, a computer program could peg future betrayal 57 percent of the time. That might not sound like much, but it was better than the accuracy of the human players, who never saw it coming. And remember that by definition, a betrayer conceals the intention to betray; the breach is unexpected (that whole trust thing). Given that inherent deceit, 57 percent isn’t so bad.

 

What economics really is

Economics as a whole is really a combination of two kinds of people: those who are very practically oriented and those who are more like mathematical philosophers. The mathematical philosophers get most of the attention. They deal with the big unanswerable questions. Labor economists try to be more scientific: looking for very specific predictions and trying to test these as carefully as possible. The mathematical philosophers get very frustrated by labor economists. They come up with a broad general theory, and we tell them it doesn’t fit the evidence.

(via Noah Smith)

Tangerine

This is the best movie released this year that I have seen.

The paper rejection repository

Nobody likes to receive a letter from the editor of your favorite journal letting you know that your paper was rejected. Some journals have begun including reviewers’ comments with accepted papers to make the views of experts available to the reader. However, often the paper has been submitted to several journals and rejected before it is finally accepted. The rejection letters and comments are equally useful in helping to judge what kind of papers might be acceptable to a journal, and what kind of comments lead to rejections. Rather than hiding these low points in the trajectory of a scientific paper, this forum offers a place to publish these letters and comments to educate others.

[via Prerana Shrestha]

A ‘curated’ list of data science blogs

[via Chris Said]

How to study dopamine in humans


The last days of the polymath

In the first half of 1802 a physician and scientist called Thomas Young gave a series of 50 lectures at London’s new Royal Institution, arranged into subjects like “Mechanics” and “Hydrodynamics”. By the end, says Young’s biographer Andrew Robinson, he had pretty much laid out the sum of scientific knowledge. Robinson called his book “The Last Man Who Knew Everything”.

Young’s achievements are staggering. He smashed Newtonian orthodoxy by showing that light is a wave, not just a particle; he described how the eye can vary its focus; and he proposed the three-colour theory of vision. In materials science, engineers dealing with elasticity still talk about Young’s modulus; in linguistics, Young studied the grammar and vocabulary of 400 or so languages and coined the term “Indo-European”; in Egyptology, Jean-François Champollion drew on his work to decode the Rosetta stone. Young even tinkered around with life insurance.

When Young was alive the world contained about a billion people. Few of them were literate and fewer still had the chance to experiment on the nature of light or to examine the Rosetta stone. Today the planet teems with 6.7 billion minds. Never have so many been taught to read and write and think, and then been free to choose what they would do with their lives. The electronic age has broken the shackles of knowledge. Never has it been easier to find something out, or to get someone to explain it to you.

The silent majority (of neurons)

Kelly Clancy has yet another fantastic article explaining a key idea in theoretical neuroscience (here is another):

Today we know that a large population of cortical neurons are “silent.” They spike surprisingly rarely, and some do not spike at all. Since researchers can only take very limited recordings from inside human brains (for example, from patients in preparation for brain surgery), they have estimated activity rates based on the brain’s glucose consumption. The human brain, which accounts for less than 2 percent of the body’s mass, uses 20 percent of its calorie budget, or three bananas worth of energy a day. That’s remarkably low, given that spikes require a lot of energy. Considering the energetic cost of a single spike and the number of neurons in the brain, the average neuron must spike less than once per second. Yet the cells typically recorded in human patients fire tens to hundreds of times per second, indicating a small minority of neurons eats up the bulk of energy allocated to the brain.

There are two extremes of neural coding: Perceptions might be represented through the activity of ensembles of neurons, or they might be encoded by single neurons. The first strategy, called the dense code, would result in a huge storage capacity: Given N neurons in the brain, it could encode 2^N items—an astronomical figure far greater than the number of atoms in the universe, and more than one could experience in many lifetimes. But it would also require high activity rates and a prohibitive energy budget, because many neurons would need to be active at the same time. The second strategy—called the grandmother code because it implies the existence of a cell that only spikes for your grandmother—is much simpler. Every object in experience would be represented by a neuron in the same way each key on a keyboard represents a single letter. This scheme is spike-efficient because, since the vast majority of known objects are not involved in a given thought or experience, most neurons would be dormant most of the time. But the brain would only be able to represent as many concepts as it had neurons.

Theoretical neuroscientists struck on a beautiful compromise between these ideas in the late ’90s. In this strategy, dubbed the sparse code, perceptions are encoded by the activity of several neurons at once, as with the dense code. But the sparse code puts a limit on how many neurons can be involved in coding a particular stimulus, similar to the grandmother code. It combines a large storage capacity with low activity levels and a conservative energy budget.
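To put rough numbers on that capacity argument, here is a back-of-the-envelope comparison (my own illustration with made-up values of N and k, not figures from the article): a dense code over N neurons has 2^N possible patterns, a grandmother code has N, and a sparse code with only k neurons active has "N choose k" — still astronomically large while keeping activity low.

```python
from math import comb

# Back-of-the-envelope coding-capacity comparison (illustrative numbers only).
N = 1000   # neurons available (toy number, far fewer than a real brain)
k = 10     # active neurons allowed per item in a sparse code (assumed)

dense = 2 ** N              # any subset of neurons may be active
grandmother = N             # one dedicated neuron per item
sparse = comb(N, k)         # exactly k of N neurons active per item

print(f"dense code:       2^{N} ≈ 10^{len(str(dense)) - 1} items")
print(f"grandmother code: {grandmother} items")
print(f"sparse code:      C({N},{k}) ≈ {sparse:.3e} items, "
      f"with only {k/N:.1%} of neurons active at a time")
```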

She goes on to discuss the sparse coding work of Bruno Olshausen, specifically this famous paper. This should always be read in the context of Bell & Sejnowski, which shows the same thing with ICA. Why are the independent components and the sparse coding result the same? Bruno Olshausen has a manuscript explaining why this is the case, but the general reason is that both are just Hebbian learning!

She ends by asking: why are some neurons sparse and some so active? Perhaps these are two separate coding strategies? But they need not be: for codes to be sparse in general, it may be that a few specific neurons need to be highly active.

How a neural network can create music

Playing chess, composing classical music, __: computer programmers love creating ‘AIs’ that can do this stuff. Music, especially, is always fun: there is a long history of programs that can create new songs so good that they fool professional musicians (who cannot tell the difference between a Chopin piece and a generated one – listen to some here; here is another video).

I do not know how these have worked; I would guess a genetic algorithm, hidden Markov model, or neural network of some sort. Thankfully, Daniel Johnson has just created such a neural network and laid out the logic behind it in beautiful detail:

Music composing neural network

The power of this is that it enables the network to have a simple version of memory, with very minimal overhead. This opens up the possibility of variable-length input and output: we can feed in inputs one-at-a-time, and let the network combine them using the state passed from each time step.

One problem with this is that the memory is very short-term. Any value that is output in one time step becomes input in the next, but unless that same value is output again, it is lost at the next tick. To solve this, we can use a Long Short-Term Memory (LSTM) node instead of a normal node. This introduces a “memory cell” value that is passed down for multiple time steps, and which can be added to or subtracted from at each tick. (I’m not going to go into all of the details, but you can read more about LSTMs in the original paper.)…

However, there is still a problem with this network. The recurrent connections allow patterns in time, but we have no mechanism to attain nice chords: each note’s output is completely independent of every other note’s output. Here we can draw inspiration from the RNN-RBM combination above: let the first part of our network deal with time, and let the second part create the nice chords. But an RBM gives a single conditional distribution of a bunch of outputs, which is incompatible with using one network per note.

The solution I decided to go with is something I am calling a “biaxial RNN”. The idea is that we have two axes (and one pseudo-axis): there is the time axis and the note axis (and the direction-of-computation pseudo-axis). Each recurrent layer transforms inputs to outputs, and also sends recurrent connections along one of these axes. But there is no reason why they all have to send connections along the same axis!
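The full post is worth reading for the biaxial details; as a minimal illustration of the “memory cell” idea described above, here is a single generic LSTM step written out in numpy (a textbook LSTM cell with toy random parameters, not Daniel Johnson’s actual implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a generic LSTM cell (textbook form, for illustration only).

    x      : current input vector
    h_prev : previous hidden state (the short-term output)
    c_prev : previous memory cell (the value carried across time steps)
    W, U, b: dicts of parameters for the input (i), forget (f), output (o)
             gates and the candidate update (g)
    """
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])   # what to write
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])   # what to keep
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])   # what to reveal
    g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])   # candidate content
    c = f * c_prev + i * g        # memory cell: added to / subtracted from each tick
    h = o * np.tanh(c)            # hidden state passed on to the next time step
    return h, c

# Tiny usage example with random toy parameters
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = {k: rng.normal(size=(n_hid, n_in)) for k in 'ifog'}
U = {k: rng.normal(size=(n_hid, n_hid)) for k in 'ifog'}
b = {k: np.zeros(n_hid) for k in 'ifog'}

h = c = np.zeros(n_hid)
for t in range(5):                      # feed inputs one at a time
    x = rng.normal(size=n_in)
    h, c = lstm_step(x, h, c, W, U, b)
print("hidden state after 5 steps:", np.round(h, 2))
```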

What blows me away – and yes, I am often blown away these days – is how relatively simple all these steps are. By using logical, standard techniques for neural networks (and these are not deep), the programmer on the street can create programs that are easily able to do things that were almost unfathomable a decade ago. This is not just pattern separation, but also generation.

Clarity of thought, clarity of language

 

Clarity of language leads to clarity of thought: this is the lesson of applying mathematics and logic to science. But even when we don’t have those tools, we can be careful about the words that we use when describing behavior and the brain. Words can be ambiguous, can mean different things to different people, or can just plain be misused. Here is a list of 50 terms not to use. Here are some that I like:

(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public

(16) Love molecule. Over 6000 websites have dubbed the hormone oxytocin the “love molecule” (e.g., Morse, 2011). Others have named it the “trust molecule” (Dvorsky, 2012), “cuddle hormone” (Griffiths, 2014), or “moral molecule” (Zak, 2013). Nevertheless, data derived from controlled studies imply that all of these appellations are woefully simplistic (Wong, 2012; Jarrett, 2015; Shen, 2015). Most evidence suggests that oxytocin renders individuals more sensitive to social information (Stix, 2014), both positive and negative.

(19) No difference between groups. Many researchers, after reporting a group difference that does not attain conventional levels of statistical significance, will go on to state that “there was no difference between groups.” Similarly, many authors will report that a non-significant correlation between two variables means that “there was no association between the variables.” But a failure to reject the null hypothesis does not mean that the null hypothesis, strictly speaking, has been confirmed.

(27) The scientific method. Many science textbooks, including those in psychology, present science as a monolithic “method.” Most often, they describe this method as a hypothetical-deductive recipe, in which scientists begin with an overarching theory, deduce hypotheses (predictions) from that theory, test these hypotheses, and examine the fit between data and theory. If the data are inconsistent with the theory, the theory is modified or abandoned. It’s a nice story, but it rarely works this way

(35) Comorbidity. This term, which has become ubiquitous in publications on the relations between two or more mental disorders (appearing in approximately 444,000 citations in Google Scholar), refers to the overlap between two diagnoses, such as major depression and generalized anxiety disorder…Nevertheless, “comorbidity” can mean two quite different things. It can refer to either the (a) covariation (or correlation) between two diagnoses within a sample or the population or (b) co-occurrence between two diagnoses within an individual

(45) Scientific proof. The concepts of “proof” and “confirmation” are incompatible with science, which by its very nature is provisional and self-correcting (McComas, 1996). Hence, it is understandable why Popper (1959) preferred the term “corroboration” to “confirmation,” as all theories can in principle be overturned by new evidence.

 

And some quibbles –

(4) Brain region X lights up. Many authors in the popular and academic literatures use such phrases as “brain area X lit up following manipulation Y”…Hence, from a functional perspective, these areas may be being “lit down” rather than “lit up.”

I will actually go to bat for “brain region X lights up”, despite its uninformed use in the popular press. It may sound amateurish to a professional audience, but it has a clear meaning: a change in the blood-oxygen-level-dependent (BOLD) signal.

(9) Genetically determined. Few if any psychological capacities are genetically “determined”; at most, they are genetically influenced. Even schizophrenia, which is among the most heritable of all mental disorders, appears to have a heritability of between 70 and 90% as estimated by twin designs

I thought that we had all agreed that nothing is 100% genetic, and that “genetically determined” was equivalent to saying genetics have a “strong” impact on some behavior.

(18) Neural signature. One group of authors, after observing that compliance with social norms was associated with activations in certain brain regions (lateral orbitofrontal cortex and right dorsolateral cortex), referred to the “neural signature” of social norm compliance…Nevertheless, identifying a genuine neural signature would necessitate the discovery of a specific pattern of brain responses that possesses nearly perfect sensitivity and specificity for a given condition or other phenotype.

Is this the meaning of neural signature? I would never have used neural signature in this way. To me, a neural signature is a response that contains information about some stimulus or behavior.

(47) Empirical data. “Empirical” means based on observation or experience. As a consequence, with the possible exception of information derived from archival sources, all psychological data are empirical (what would “non-empirical” psychological data look like?).

Data from models is not empirical…

Before using a word, there are many things you must take into account: your audience, the way other words constrain the meaning of the chosen word, and so on. Even if I disagree on the meaning of, say, ‘neural signature’, I would not use it because it has such a multiplicity of meanings! Academic writing should always define its terms clearly and carefully; but lay writing must be equally careful not to allow the reader to infer things that are not there. Be careful.

References

Lilienfeld SO, et al. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology.

Dance the fine line between successful courtship and death

Peacock spider video game

I would kickstart the shit out of this game.

For those unaware, peacock spiders are some of the best dancers you’ve ever seen:

Unrelated to all that, 8/2 edition

Oh, how time flies.

Hitchhiking robot makes it to Philadelphia before getting dismembered because…Philly

Here is a before and after photo. Some enterprising /r/philadelphia redditors are hoping to find and repair him, but no one has a clue where he is (though there are other suggestions).

Dabbawalas: Mumbai’s lunchbox carriers

Studied by consultants and business schools for the secrets of their proclaimed near-flawless efficiency, the dabbawalas have been feted by British royals (Prince Charles) and titans of industry (Richard Branson) alike. Even FedEx, which supposedly knows something about logistics, has paid them a visit. In 2010, the Harvard Business Review published a study of the dabbawala system entitled “On-Time Delivery, Every Time”. In it, the authors asserted that the dabbawalas operate to Six Sigma standards even though they have few special skills, charge a minimal fee (around $10-$13 a month) and use no IT…

Rishi Khiani, a serial entrepreneur, speaks a different language. His office is all new India — swipe cards at the entrance, bright young things at open-plan desks and green tea for guests. Khiani has recently acquired a company called Meals on Wheels, which he has jazzed up with the name Scootsy and kitted out with brightly coloured motorbikes. His deliverymen, who will earn slightly more than dabbawalas, are armed with Android devices and an app that allows customers to follow their orders on their smartphones. “They’re giving us pings back on our CRM,” says Khiani, using the acronym for “customer relationship management” tool as he flips through his PowerPoint presentation. “That tells us where any person is at any given time.” Scootsy won’t just deliver takeaways from QSRs, he says, meaning quick service restaurants. It will soon branch out into other categories — groceries, flowers, electronic goods. “It’s masspirational,” he says. “We’re going to be the Uber for everything.”

The man who studies everyday evil

The “bug crushing machine” offered the perfect way for Paulhus and colleagues to test whether that reflected real life behaviour. Unknown to the participants, the coffee grinder had been adapted to give insects an escape route – but the machine still produced a devastating crushing sound to mimic their shells hitting the cogs. Some were so squeamish they refused to take part, while others took active enjoyment in the task. “They would be willing not just to do something nasty to bugs but to ask for more,” he says, “while others thought it was so gross they didn’t even want to be in the same room.” Crucially, those individuals also scored very highly on his test for everyday sadism.

Really the best introduction to machine learning/decision trees that you will find


Tom Insel offers suggestions for what to do with all the data we will get from the brain

While we don’t have a unified field theory of the brain, some of the early projects in the BRAIN Initiative are providing models of how behavior emerges from brain activity. One of the first grants issued by the BRAIN Initiative supported scientists at NIMH and the University of Maryland to understand how the activity of individual neurons is integrated into larger patterns of brain activity. This work builds on the observation that in nature, order sometimes emerges out of the chaos of individual interacting elements.

His main suggestion: ¯\_(ツ)_/¯

This is what it will look like when nature reclaims a city

Rethinking fast and slow

Everyone except homo economicus knows that our brains have multiple processes to make decisions. Are you going to make the same decision when you are angry as when you sit down and meditate on a question? Of course not. Kahneman and Tversky have famously reduced this to ‘thinking fast’ (intuitive decisions) and ‘thinking slow’ (logical inference) (1).

Breaking these decisions up into ‘fast’ and ‘slow’ makes it easy to design experiments that can disentangle whether people use their lizard brains or their shiny silicon engines when making any given decision. Here’s how: give someone two options, let’s say a ‘greedy’ option or an ‘altruistic’ option. Now simply look at how long it takes them to choose each option. Is it fast or slow? Congratulations, you have successfully found that greed is intuitive while altruism requires a person to sigh, restrain themselves, think things over, clip some coupons, and decide on the better path.

This method actually is a useful way of investigating how the brain makes decisions; harder decisions really do take longer to be processed by the brain, and we have the neural data to prove it. But there’s the rub. When you make a decision, it is not simply a matter of intuitive versus deliberative. It is also how hard the question is. And this really depends on the person. Not everyone values money in the same way! Or even in the same way at different times! I really want to have a dollar bill on me when it is hot, humid, and I am in front of a soda machine. I care about a dollar bill a lot less when I am at home in front of my fridge.

So let’s go back to classical economics; let’s pretend like we can measure how much someone values money with a utility curve. Measure everyone’s utility curve and find their indifference point – the point at which they don’t care about making one choice over the other. Now you can ask about the relative speed. If someone makes each decision 50% of the time but one decision is still faster, then you can say something about the relative reaction times and ways of processing.

[figure: dictator game, fast and slow]
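Here is a toy simulation of the confound (my own sketch with invented numbers, not the paper’s model): reaction time depends only on how far a choice is from the subject’s indifference point, yet if most subjects lean one way, the raw reaction times make that option look like the ‘fast’ intuitive one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy reverse-inference confound. Each simulated subject faces a
# dictator-style choice: keep the money ("greedy") or split it ("altruistic").
n_subjects = 10000

# Heterogeneous preferences: v > 0 leans greedy, v < 0 leans altruistic.
# Most simulated subjects lean greedy on average (assumed).
v = rng.normal(loc=0.5, scale=1.0, size=n_subjects)

# Choice is a noisy readout of the preference.
choice_is_greedy = (v + rng.normal(scale=0.5, size=n_subjects)) > 0

# Reaction time depends only on difficulty, i.e. distance from indifference,
# not on which option is chosen.
rt = 1.0 + 2.0 * np.exp(-np.abs(v)) + rng.normal(scale=0.1, size=n_subjects)

print("mean RT, greedy choices:    ", rt[choice_is_greedy].mean().round(3))
print("mean RT, altruistic choices:", rt[~choice_is_greedy].mean().round(3))
# Greedy choices come out "faster" even though RT never depended on the
# option chosen, only on how far each subject was from indifference.

# Restrict to near-indifferent subjects (roughly the fix described above):
# the spurious fast/slow difference disappears.
near = np.abs(v) < 0.25
print("near indifference, greedy:    ", rt[near & choice_is_greedy].mean().round(3))
print("near indifference, altruistic:", rt[near & ~choice_is_greedy].mean().round(3))
```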

And what do you find? In some of the classic experiments – nothing! People make each decision equally as often and equally quickly! Harder decisions require more time, and that is what is being measured here. People have heterogeneous preferences, and you cannot accurately measure decisions without taking this into account subject by subject. No one cares about the population average: we only care what an individual will do.

[figure: temporal discounting, fast and slow]

But this is a fairly subtle point. This simple one-dimensional metric – how fast you respond to something – may not be able to disentangle the possibility that those who use their ‘lizard brain’ may simply have a greater utility for money (this is where brain imaging would come in to save the day).

No one is arguing that there are not multiple systems of decision-making in the brain – some faster and some slower, some that will come up with one answer and one that will come up with another. But we must be very very careful when attempting to measure which is fast and which is slow.

(1) This is still ridiculously reductive, but still miles better than the ‘we compute utility this one way’ style of thinking.

Reference

Krajbich, I., Bartling, B., Hare, T., & Fehr, E. (2015). Rethinking fast and slow based on a critique of reaction-time reverse inference. Nature Communications, 6. DOI: 10.1038/ncomms8455