Whither experimental economics?

When I was applying to graduate school, I looked at three options: neuroscience, computer science, and economics. I had, effectively, done an economics major as an undergrad and had worked at an economic consulting firm. But the lack of controlled experimentation in economics kept me from applying and I ended up as a neuroscientist. (There is, of course, a very experimental non-human economics which goes by the name of ecology, though I did not recognize it at the time.)

I confess to being confused by the lack of experimentation in economics, especially for a field that constantly tries to defend its status as a science. (Well, I understand: incentives, existing capabilities, and all that.)

A recent dissertation on the history of experimental economics – as opposed to behavioral economics – is enlightening:

“We were describing this mechanism and Vernon says, ‘You know, I can test this whether it works or not.’ I said, ‘What do you mean?’ And he says, ‘I’ll run an experiment.’ I said, ‘What the heck are you talking about? What do you do?’ And so he hauls seven or eight graduate students into a classroom. He ran the mechanism and it didn’t work. It didn’t converge to the equilibrium. It didn’t produce the outcomes the theory said it would produce. And I thought, okay. So back to [doing] theory. I don’t care; this doesn’t bother me.

It bothered Vernon a lot because we sat around that evening talking and he says, ‘Oh, I know what I did wrong.’ And he went back the next day and he hauled the students back in the room, changed the rules just a little bit in ways that the theory wouldn’t notice the difference. From our theory point of view, it wouldn’t have mattered. But he changed the rules a little bit and bang, that thing zapped in and converged.”

The difference between the two experiments was the information shared with the test subjects. The first time around, the subjects wrote their numbers down on pieces of paper and then Smith wrote them up on the board. Then he asked the subjects to send another message, and if the messages were the same twice in a row he would stop, since that stability would be interpreted as having reached equilibrium. But the first time Smith had run the experiment, a day earlier, the messages had not stopped…

The fact that the experiment did not converge on the first attempt, but did on the second after changing only one rule (the information structure available to the participants), a change the theory did not require in order to make its prediction, made a lasting impact on Ledyard.

And this is exactly why we do experiments:

[T]he theory didn’t distinguish between those two rules, but Vernon knew how to find a rule that would lead to an equilibrium. It meant he knew something that I didn’t know and he had a way of demonstrating it that was really neat.

In psychology and neuroscience, there are many laboratories doing animal experiments that test some sort of economic decision-making hypothesis, though it is debatable how much of that work has filtered into the economics profession. What the two fields could really use, though, are economic ideas about more than just basic decision-making. Much of economics is about markets and mass decisions; there is very little animal experimentation on these questions.

(via Marginal Revolution)

How well do we understand how people make choices? Place a bet on your favorite theory

Put your money where your mouth is, as they say:

The goal of this competition is to facilitate the derivation of models that can capture the classical choice anomalies (including Allais, St. Petersburg, and Ellsberg paradoxes, and loss aversion) and provide useful forecasts of decisions under risk and ambiguity (with and without feedback).

The rules of the competition are described at http://departments.agri.huji.ac.il/cpc2015. The submission deadline is May 17, 2015. The prize for the winners is an invitation to be a co-author of the paper that summarizes the competition (the first part can be downloaded from http://departments.agri.huji.ac.il/economics/teachers/ert_eyal/CPC2015.pdf)…

Our analysis of these 90 problems (see http://departments.agri.huji.ac.il/cpc2015) shows that the classical anomalies are robust, and that the popular descriptive models (e.g., prospect theory) cannot capture all the phenomena with one set of parameters. We present one model (a baseline model) that can capture all the results, and challenge you to propose a better model.

There was a competition recently that asked people to predict seizures from EEG activity; the public blew the neuroscientists out of the water. How will the economists do?

Register by April 1. The submission deadline is May 17!

You can still change your mind when there is no news

The Beautiful Data Set – an economist uses soccer data:

Data from soccer can also illuminate one of the most prominent theories of the stock market: the efficient-market hypothesis. This theory posits that the market incorporates information so completely and so quickly that any relevant news is integrated into a stock’s price before anyone has a chance to act on it. This means that unless you have insider information, no stock is a better buy (i.e., undervalued) when compared with any other.

If this theory is correct, the price of an asset should jump up or down when news breaks and then remain perfectly flat until there is more news. But to test this in the real world is difficult. You would need to somehow stop the flow of news while letting trading continue. That seems impossible, since everything that happens in the real world, however boring or uneventful, counts as news…

The break in play at halftime provided a golden opportunity to study market efficiency because the playing clock stopped but the betting clock continued. Any drift in halftime betting values would have been evidence against market efficiency, since efficient prices should not drift when there is no news (or goals, in this case). It turned out that when goals arrived within seconds of the end of the first half, betting continued heavily throughout halftime — but the betting values remained constant, a necessary condition to prove that those markets were indeed efficient.

There is an extremely strong assumption here about how information about the world is extracted: in essence, that there is no thinking. This may be true in soccer, if some linear combination of score, time of possession, etc. is the best predictor of the eventual outcome. Yet not all markets (or decisions) are like this. When you consider whether to take some action, you may have some initial idea of what you want to do that gets firmer over time – but sometimes a new thought, a new possibility, pops into your head.

We can formalize this using concepts from computer science, which has a handy way of characterizing how long a given algorithm will take. Some things – like linear models – can be computed quickly. Other models take longer: think of chess algorithms, where the longer they run, the more options they can consider.

[Figure: alpha-beta search with shallow pruning of a game tree]
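To make that concrete, here is a minimal sketch of depth-limited alpha-beta search – my own illustration, with a hypothetical game interface (`children` and `evaluate` are placeholders, not any particular chess library). The `depth` parameter is exactly the "longer they take, the more options they consider" knob: each extra ply of search examines more of the game tree.

```python
# Minimal sketch of depth-limited alpha-beta search.
# Hypothetical interface: children(state) yields successor states;
# evaluate(state) returns a heuristic score for the maximizing player.
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    succ = list(children(state))
    if depth == 0 or not succ:          # out of "thinking time" or terminal
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:           # prune: opponent will avoid this line
                break
        return value
    value = float("inf")
    for child in succ:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# e.g. alphabeta(root, 4, float("-inf"), float("inf"), True, children, evaluate)
```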

It is not at all clear to me on what timescale markets should change in response to a pulse of new information. If prediction is easy, it will presumably happen nearly instantly. If prediction is hard, you would expect the market to shift as it makes new predictions. But even then, it is not obvious how it will shift! If the space of possibilities changes smoothly, you would expect predictions to get more accurate over time. The range of plausible options shrinks and market actors face less risk. But as in chess, the search space may vary wildly: you sacrifice that queen and the board changes in dramatic ways. Then the markets could fluctuate suddenly up or down and still be efficient integrators of information.
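Here is a toy sketch of the two regimes (my own illustration, not from the article): an estimate that converges smoothly as computation narrows in on the truth, versus one that sits still until a new line of reasoning is discovered and then jumps. The parameters (`noise`, `p_insight`) are arbitrary.

```python
import random

# Two ways a market estimate might evolve after a single pulse of news at t = 0.

def smooth_updating(steps, truth=1.0, noise=0.5):
    """Each step of computation averages in a noisy read of the truth,
    so the estimate converges gradually (a running mean)."""
    est, path = 0.0, []
    for t in range(1, steps + 1):
        est += (truth + random.gauss(0, noise) - est) / t
        path.append(est)
    return path

def jumpy_updating(steps, truth=1.0, p_insight=0.1):
    """The estimate sits still until a new possibility is discovered
    ("you sacrifice that queen"), then jumps."""
    est, path = 0.0, []
    for _ in range(steps):
        if random.random() < p_insight:   # a new thought pops into your head
            est = truth + random.gauss(0, 0.2)
        path.append(est)
    return path

print(smooth_updating(10))
print(jumpy_updating(10))
```

Both processes end up near the truth; only the first looks like the textbook picture of efficient, smoothly converging prices.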

I would be curious to see a psychology experiment along these lines (they’re probably out there, but I don’t know the references): a subject is forced to choose between two options, A and B, but doing so requires working out something cognitively difficult. At several points along the way they are asked what they guess the answer is and how confident they are in it. Does that confidence always vary smoothly? Do large individual-level fluctuations in confidence average out?

And yes, this is a call for integrating computational complexity and algorithmic analysis with economics and psychology.

Nash equilibrium and computation

In Beyond Nash Equilibrium: Solution Concepts for the 21st Century, Joseph Halpern cites three problems with the idea of Nash equilibrium that are inspired by computer science. These – and here I’m roughly quoting – are that the equilibria do not deal with “faulty” or “unexpected” behavior, that they do not deal with computational concerns, and that they assume players have common knowledge of the structure of the game. I think the first and third can be roughly summed up as “Nash players are assumed to be all-knowing rationalists, but that is not always a useful assumption”.

The most immediately interesting to me is the need to take computation into account. The first example he gives is:

You are given an n-bit number x. You can guess whether it is prime, or play safe and say nothing. If you guess right, you get $10; if you guess wrong, you lose $10; if you play safe, you get $1. There is only one Nash equilibrium in this 1-player game: giving the right answer. But if n is large, this is almost certainly not what people will do.

This is what I would call a problem that relies on pseudoperfect agents – ones that know what a prime number is and how to calculate whether a number is prime, but not, immediately, whether a given number is prime. A more typical layperson could simply rely on the probability distribution of prime numbers, as in the sketch below. And of course, the cost of calculating the primality of a number – in both physical and opportunity costs – needs to be included in the final payoff.
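As a back-of-envelope illustration (my own, not from Halpern’s paper): by the prime number theorem, a random n-bit number is prime with probability roughly 1/(n ln 2), so a player who always guesses “composite” – computing nothing at all – already beats playing safe for even modest n.

```python
import math

# Back-of-envelope sketch: payoff of the zero-computation strategy.
# By the prime number theorem, a random n-bit number is prime with
# probability roughly 1 / (n * ln 2).
def expected_payoffs(n):
    p_prime = 1 / (n * math.log(2))                       # crude density estimate
    guess_composite = 10 * (1 - p_prime) - 10 * p_prime   # +$10 right, -$10 wrong
    return {"guess composite": round(guess_composite, 2), "play safe": 1.0}

for n in (8, 64, 1024):
    print(n, expected_payoffs(n))
# Even at n = 8, guessing "composite" yields about $6.40 in expectation,
# dominating the $1 safe option – with no primality computation whatsoever.
```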

But really: the computational complexity of a given situation adds implicit costs to the strategies, and that does need to be taken into account. How often does that actually happen?