Data from soccer can also illuminate one of the most prominent theories of the stock market: the efficient-market hypothesis. This theory posits that the market incorporates information so completely and so quickly that any relevant news is integrated into a stock’s price before anyone has a chance to act on it. This means that unless you have insider information, no stock is a better buy (i.e., undervalued) when compared with any other.
If this theory is correct, the price of an asset should jump up or down when news breaks and then remain perfectly flat until there is more news. But to test this in the real world is difficult. You would need to somehow stop the flow of news while letting trading continue. That seems impossible, since everything that happens in the real world, however boring or uneventful, counts as news…
The break in play at halftime provided a golden opportunity to study market efficiency because the playing clock stopped but the betting clock continued. Any drift in halftime betting prices would have been evidence against market efficiency, since efficient prices should not drift when there is no news (or goals, in this case). It turned out that when goals arrived within seconds of the end of the first half, betting continued heavily throughout halftime, yet prices held constant: exactly the behavior an efficient market should exhibit. (Constant prices are consistent with efficiency rather than proof of it, but drifting prices would have refuted it.)
There is an extremely strong assumption here about how information about the world is extracted. In essence, it says that there is no thinking: the implications of an event are available the moment the event occurs. This may be true in soccer, if some linear combination of score, time of possession, and so on is the best predictor of the eventual outcome. Yet not all markets (or decisions) are like this. When you consider whether to take some action, you may have an initial idea of what you want to do that firms up over time, but sometimes a new thought, a new possibility, pops into your head.
We can formalize this using concepts from computer science, which has a handy framework, computational complexity, for determining how long a given algorithm will take. Some predictions, like those of linear models, can be computed quickly. Other models take longer: think of chess engines, which consider more options the longer they are allowed to search.
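A toy sketch of the contrast (all functions and numbers here are hypothetical illustrations, not real trading or chess code): a linear model's prediction is a single dot product, linear in the number of features, while a chess-style search must visit a number of positions that grows exponentially with the search depth.

```python
# Linear model: one pass over the features, O(d) work.
def linear_predict(weights, features):
    return sum(w * f for w, f in zip(weights, features))

# Game-tree search: with branching factor b and depth d, a full
# search visits O(b^d) positions. Here we just count them.
def minimax_nodes(branching, depth):
    if depth == 0:
        return 1  # a leaf position
    return 1 + branching * minimax_nodes(branching, depth - 1)

print(minimax_nodes(10, 1))  # 11 positions: barely any thinking
print(minimax_nodes(10, 4))  # 11111 positions: each extra ply multiplies the work
```

The point is only the shape of the curves: the linear predictor is effectively instantaneous, while every additional ply of lookahead multiplies the search cost, so "thinking time" becomes a real variable.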
It is not at all clear to me on what timescale markets should respond to a pulse of new information. If prediction is easy, it will presumably happen almost instantly. If prediction is hard, you would expect the market to move as it makes new predictions. But even then, it is not obvious how it will move! If the space of possibilities changes smoothly, you would expect predictions to become more accurate over time: the range of plausible outcomes shrinks, and market actors face less risk. But as in chess, the search space may vary wildly: you sacrifice that queen and the board changes in dramatic ways. Then prices could lurch suddenly up or down and still be efficient integrators of information.
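A minimal sketch of that discontinuity, using a hand-built toy game tree (the tree, the values, and the single-sided evaluation are all made up for illustration; this is not real minimax): a "sacrifice" line looks terrible at shallow depth but best once the search looks one move deeper, so the estimate jumps rather than drifts.

```python
# A node is (static_value, children). Values are from the searcher's
# point of view throughout (a simplification of true minimax).
sacrifice = (-9, [(+5, []), (+4, [])])  # give up material now, win later
quiet     = (+1, [(0, []), (-1, [])])   # safe move, little upside

def evaluate(node, depth):
    """Anytime evaluation: deeper search can reverse the verdict."""
    value, children = node
    if depth == 0 or not children:
        return value
    return max(evaluate(child, depth - 1) for child in children)

# Shallow search: the sacrifice looks like a blunder.
print(evaluate(sacrifice, 0), evaluate(quiet, 0))  # -9 1
# One ply deeper: the verdict flips discontinuously.
print(evaluate(sacrifice, 1), evaluate(quiet, 1))  # 5 0
```

If a market's participants are all running searches like this, a sudden price swing with no external news could simply mean the collective search got one ply deeper.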
I would be curious to see a psychology experiment along these lines (such studies probably exist, but I don't know the references): a subject is forced to choose between two options, A and B, where the right choice hinges on something cognitively difficult to compute. At different points in time they are asked which option they currently favor and how confident they are in it. Does that confidence always vary smoothly? Do large individual-level fluctuations in confidence average out across subjects?
And yes, this is a call for integrating computational complexity/algorithmic analysis with economics and psychology…