Mathematicians on planes: be careful of your sorcerous ways


Guido Menzio, an economist at UPenn, was on a plane, obsessively deriving some mathematical formulae, when…:

She decided to try out some small talk.

Is Syracuse home? she asked.

No, he replied curtly.

He similarly deflected further questions. He appeared laser-focused — perhaps too laser-focused — on the task at hand, those strange scribblings.

Rebuffed, the woman began reading her book. Or pretending to read, anyway. Shortly after boarding had finished, she flagged down a flight attendant and handed that crew-member a note of her own…

this quick-thinking traveler had Seen Something, and so she had Said Something.

That Something she’d seen had been her seatmate’s cryptic notes, scrawled in a script she didn’t recognize. Maybe it was code, or some foreign lettering, possibly the details of a plot to destroy the dozens of innocent lives aboard American Airlines Flight 3950. She may have felt it her duty to alert the authorities just to be safe. The curly-haired man was, the agent informed him politely, suspected of terrorism.

The curly-haired man laughed.

He laughed because those scribbles weren’t Arabic, or some other terrorist code. They were math.

Yes, math. A differential equation, to be exact.

…His nosy neighbor had spied him trying to work out some properties of the model of price setting he was about to present. Perhaps she couldn’t differentiate between differential equations and Arabic.

Somehow, this is not from The Onion.

Your friends determine your economy

What is it that distinguishes economies that take advantage of new products from those that don’t?

Matthew Jackson visited Princeton last week and gave a seminar on “Information and Gossip in Networks”. It was sadly lacking in any good gossip (if you have any, please send it to me), but he gave an excellent talk on how a village’s social network directly affects its economy.

He was able to collect data from a microfinance institution in India that began offering credit in 75 villages in Karnataka. Yet despite the villages being relatively homogeneous – all small, poor, widely dispersed, and within a single Indian state – there was a large amount of variability in how many people in each village participated in the program. What explains this?

Quite simply, the social connections do. When the microfinance institution entered a village, it did so by approaching the village leaders and telling them about the program, its advantages, and why they should participate. These village leaders were then responsible for informing the people in their village about the program.

[Figure: a village social network]

Jackson’s team was able to compile the complete social networks of everyone in these villages. They knew who went to temple with whom, whom people trusted enough to lend money to, who they considered their friends, and so on. It is quite an impressive bit of work; unfortunately I cannot find any of his examples online anywhere. They found, for instance, incredible segregation by caste (not surprising, but nice that it falls out of the data so naturally).

What determined the participation rate was how connected the leaders were to the rest of their village. Not just how many friends they had, but how many friends their friends had, and so on. To get an even better fit, they modeled the decision as a diffusion from the leaders out to their friends. They would slowly, randomly tell some of their friends, who would tell some of their other friends, and so on.

[Figure: diffusion through the network]

Jackson said that he got a ρ² of 0.3 using traditional centrality measures and 0.47 (roughly a 50% improvement) using his new model. The main difference with his new model (‘diffusion centrality’) appears to be time, which makes sense. When a program has been in a village for longer, more people will have taken advantage of it; people do not all rush out to get the Hot New Thing on the first day they can.
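To make the idea concrete, here is a minimal sketch of a diffusion-centrality-style calculation (my own toy illustration, not Jackson’s code): each period, information passes along every edge with probability q, and a node’s centrality is the expected number of times its message is heard after T periods. The adjacency matrix, q, and T below are all made-up inputs.

```python
import numpy as np

def diffusion_centrality(A, q, T):
    """Expected number of times each node's message is heard when
    information passes along each edge with probability q for T rounds."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = np.zeros((n, n))
    power = np.eye(n)
    for _ in range(T):
        power = power @ (q * A)   # (qA)^t
        total += power
    return total @ np.ones(n)     # row sums: each node's expected reach

# Toy 4-node network: node 0 is a hub connected to everyone else.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
print(diffusion_centrality(A, q=0.3, T=3))  # the hub scores far higher
```

The time horizon T is exactly the ingredient the traditional centrality measures lack: run it for one period and you recover something close to degree; run it longer and the friends-of-friends structure starts to matter.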

Village leaders are not the only people they could have told. It would be nice if they could find more central individuals – people even better connected than the leaders. Impressively, they find that they can simply ask a random adult who the best person in the village to tell would be, and there is a good chance that they will know. This is exciting – it means people implicitly know about the social network structure of their world.

The moral of the story is that in order to understand economic processes, you need to understand the structure of the economy and you need to understand the dynamics. Static processes are insufficient – or at least, are much, much noisier.

References

Banerjee, A., Chandrasekhar, A. G., Duflo, E., & Jackson, M. O. (2014). Gossip: Identifying central individuals in a social network. SSRN Electronic Journal. DOI: 10.2139/ssrn.2425379

Banerjee, A., Chandrasekhar, A. G., Duflo, E., & Jackson, M. O. (2013). The diffusion of microfinance. Science, 341(6144). PMID: 23888042

Optogenetics patents that I did not realize existed

  1. Use of biological photoreceptors as directly light-activated ion channels (Bamberg, Hegemann, Nagel)
  2. Light-activated cation channel and uses thereof (Deisseroth, Boyden)
  3. Use of light sensitive genes [for the manufacture of a medicament for the treatment of blindness and a method for expressing said cell specific fashion] (Balya, Lagali, Muench, Roska; Novartis)
  4. Channelrhodopsins for optical control of cells (Klapoetke, Chow, Boyden, Wong, Cho)
  5. Heterologous stimulus-gated ion channels and methods of using same [especially TRPV1, TRPM8 or P2X2] (Miesenbock, Zemelman)
  6. Optically-controlled CNS dysfunction (Tye, Fenno, Deisseroth)
  7. Optogenetic control of reward-related behaviors (Deisseroth, Witten)
  8. Control and characterization of memory function (Goshen, Deisseroth)

Rethinking fast and slow

Everyone except homo economicus knows that our brains have multiple processes to make decisions. Are you going to make the same decision when you are angry as when you sit down and meditate on a question? Of course not. Kahneman and Tversky have famously reduced this to ‘thinking fast’ (intuitive decisions) and ‘thinking slow’ (logical inference) (1).

Breaking these decisions up into ‘fast’ and ‘slow’ makes it easy to design experiments that can disentangle whether people use their lizard brains or their shiny silicon engines when making any given decision. Here’s how: give someone two options, let’s say a ‘greedy’ option or an ‘altruistic’ option. Now simply look at how long it takes them to choose each option. Is it fast or slow? Congratulations, you have successfully found that greed is intuitive while altruism requires a person to sigh, restrain themselves, think things over, clip some coupons, and decide on the better path.

This method actually is a useful way of investigating how the brain makes decisions; harder decisions really do take longer for the brain to process, and we have the neural data to prove it. But there’s the rub. When you make a decision, it is not simply a matter of intuitive versus deliberative. It is also a matter of how hard the question is. And this really depends on the person. Not everyone values money in the same way! Or even in the same way at different times! I really want to have a dollar bill on me when it is hot, humid, and I am in front of a soda machine. I care about a dollar bill a lot less when I am at home in front of my fridge.

So let’s go back to classical economics; let’s pretend we can measure how much someone values money with a utility curve. Measure everyone’s utility curve and find their indifference point – the point at which they don’t care about making one choice over the other. Now you can ask about relative speed. If someone makes each decision 50% of the time but one decision is still consistently faster, then you can say something about the relative reaction times and ways of processing.
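Here is a made-up sketch of that analysis (illustrative only; the numbers, the logistic choice rule, and the variable names are all invented, not taken from the paper): estimate a subject’s indifference point from their choices, then compare reaction times only for offers near that point, where the decision is equally hard either way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subject: chooses the 'selfish' option more often as the
# selfish payoff grows; reaction time peaks near indifference (hard trials).
offers = rng.uniform(0, 10, 500)              # selfish payoff on each trial
p_selfish = 1 / (1 + np.exp(-(offers - 4)))   # true indifference near 4
chose_selfish = rng.random(500) < p_selfish
rt = 0.6 + 0.8 * np.exp(-0.5 * (offers - 4) ** 2) + rng.normal(0, 0.05, 500)

# Crude indifference estimate: the offer at which choices are ~50/50.
bins = np.linspace(0, 10, 21)
frac = [chose_selfish[(offers >= lo) & (offers < hi)].mean()
        for lo, hi in zip(bins[:-1], bins[1:])]
indiff = bins[:-1][np.argmin(np.abs(np.array(frac) - 0.5))]

# Compare RTs only near the indifference point, where difficulty is matched.
near = np.abs(offers - indiff) < 1
print("selfish RT:", rt[near & chose_selfish].mean(),
      "altruistic RT:", rt[near & ~chose_selfish].mean())
```

Once difficulty is matched this way, any remaining speed difference between the two choices is at least a cleaner candidate for “intuitive versus deliberative” than raw reaction times pooled across subjects.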

[Figure: dictator game, fast and slow]

And what do you find? In some of the classic experiments – nothing! People make each decision equally often and equally quickly! Harder decisions require more time, and that is what is being measured here. People have heterogeneous preferences, and you cannot accurately measure decisions without taking this into account subject by subject. No one cares about the population average: we only care what an individual will do.

[Figure: temporal discounting, fast and slow]

But this is a fairly subtle point. This simple one-dimensional metric – how fast you respond to something – cannot distinguish between someone relying on their ‘lizard brain’ and someone who simply has a greater utility for money (this is where brain imaging would come in to save the day).

No one is arguing that there are not multiple systems of decision-making in the brain – some faster and some slower, some that will come up with one answer and others that will come up with another. But we must be very, very careful when attempting to measure which is fast and which is slow.

(1) This is still ridiculously reductive, but miles better than the ‘we compute utility this one way’ style of thinking.

Reference

Krajbich, I., Bartling, B., Hare, T., & Fehr, E. (2015). Rethinking fast and slow based on a critique of reaction-time reverse inference. Nature Communications, 6. DOI: 10.1038/ncomms8455

Rationality and the machina economicus

Science magazine had an interesting series of review articles on Machine Learning last week. Two of them were different perspectives on the exact same question: how does traditional economic rationality fit into artificial intelligence?

At the core of much AI work are concepts of optimal ‘rational decision-makers’. That is, the intelligent program is essentially trying to maximize some defined objective function – what economics calls maximizing utility. Where the computer and economic traditions diverge is in their implementation: computers need algorithms, and often need to take into account non-traditional resource constraints such as time, whereas economics leaves these unspecified outside of trivial cases.

[Figure: the economics of thinking]

How can we move from the classical view of a rational agent who maximizes expected utility over an exhaustively enumerable state-action space to a theory of the decisions faced by resource-bounded AI systems deployed in the real world, which place severe demands on real-time computation over complex probabilistic models?

We see the attainment of an optimal stopping time, in which attempts to compute additional precision come at a net loss in the value of action. As portrayed in the figure, increasing the cost of computation would lead to an earlier ideal stopping time. In reality, we rarely have such a simple economics of the cost and benefits of computation. We are often uncertain about the costs and the expected value of continuing to compute and so must solve a more sophisticated analysis of the expected value of computation.
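To make that stopping-time idea concrete, here is a toy numerical version (my own construction, not from the paper): the value of acting improves with more computation but with diminishing returns, computation has a linear cost, and the optimal stopping time is wherever net value peaks. Raising the cost per unit of computation pulls that peak earlier.

```python
import numpy as np

t = np.linspace(0, 10, 1001)        # computation time
value = 1 - np.exp(-0.5 * t)        # diminishing returns of more thinking

for cost_per_unit in (0.02, 0.08):
    net = value - cost_per_unit * t # value of action minus cost of computing
    t_star = t[np.argmax(net)]
    print(f"cost {cost_per_unit}: stop computing at t = {t_star:.2f}")
# Higher cost of computation -> earlier optimal stopping time.
```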

Humans and other animals appear to make use of different kinds of systems for sequential decision-making: “model-based” systems that use a rich model of the environment to form plans, and a less complex “model-free” system that uses cached values to make decisions. Although both converge to the same behavior with enough experience, the two kinds of systems exhibit different tradeoffs in computational complexity and flexibility. Whereas model-based systems tend to be more flexible than the lighter-weight model-free systems (because they can quickly adapt to changes in environment structure), they rely on more expensive analyses (for example, tree-search or dynamic programming algorithms for computing values). In contrast, the model-free systems use inexpensive, but less flexible, look-up tables or function approximators.
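A minimal sketch of that contrast, using an invented two-action toy problem (not from the article): the model-free agent caches values in a look-up table and updates them incrementally, while the model-based agent recomputes its choice from an explicit model of the world, so it adapts the moment the model changes.

```python
# Toy environment: two actions, each paying a fixed reward.
rewards = {"left": 1.0, "right": 0.0}

# Model-free: cached values, nudged toward experienced rewards (cheap).
q = {"left": 0.0, "right": 0.0}
alpha = 0.1
def model_free_update(action):
    q[action] += alpha * (rewards[action] - q[action])  # TD-style update

# Model-based: recompute the best action from an explicit model (costlier).
model = dict(rewards)
def model_based_choice():
    return max(model, key=model.get)  # 'planning' = search over the model

for _ in range(50):
    model_free_update("left"); model_free_update("right")

# The environment changes: 'right' now pays off. The model-based agent
# adapts as soon as its model is updated; the cached values lag behind.
rewards["right"], rewards["left"] = 2.0, 0.0
model.update(rewards)
print("model-based picks:", model_based_choice())        # 'right' immediately
print("cached values still prefer:", max(q, key=q.get))  # 'left' until retrained
```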

That being said, what does economics have to offer machine learning? Parkes and Wellman try to offer an answer and basically say – game theory. Which is not something that economics can ‘offer’ so much as ‘offered a long, long time ago’. A recent interview with Parkes puts this in perspective:

Where does current economic theory fall short in describing rational AI?

Machina economicus might better fit the typical economic theories of rational behavior, but we don’t believe that the AI will be fully rational or have unbounded abilities to solve problems. At some point you hit the intractability limit—things we know cannot be solved optimally—and at that point, there will be questions about the right way to model deviations from truly rational behavior…But perfect rationality is not achievable in many complex real-world settings, and will almost surely remain so. In this light, machina economicus may need its own economic theories to usefully describe behavior and to use for the purpose of designing rules by which these agents interact.

Let us admit that economics is not fantastic at describing trial-to-trial individual behavior. What can economics offer the field of AI, then? Systems for multi-agent interaction. After all, markets are at the heart of economics:

At the multi-agent level, a designer cannot directly program behavior of the AIs but instead defines the rules and incentives that govern interactions among AIs. The idea is to change the “rules of the game”…The power to change the interaction environment is special and distinguishes this level of design from the standard AI design problem of performing well in the world as given.

For artificial systems, in comparison, we might expect AIs to be truthful where this is optimal and to avoid spending computation reasoning about the behavior of others where this is not useful…. The important role of mechanism design in an economy of AIs can be observed in practice. Search engines run auctions to allocate ads to positions alongside search queries. Advertisers bid for their ads to appear in response to specific queries (e.g., “personal injury lawyer”). Ads are ranked according to bid amount (as well as other factors, such as ad quality), with higher-ranked ads receiving a higher position on the search results page.

Early auction mechanisms employed first-price rules, charging an advertiser its bid amount when its ad receives a click. Recognizing this, advertisers employed AIs to monitor queries of interest, ordered to bid as little as possible to hold onto the current position. This practice led to cascades of responses in the form of bidding wars, amounting to a waste of computation and market inefficiency. To combat this, search engines introduced second-price auction mechanisms, which charge advertisers based on the next-highest bid price rather than their own price. This approach (a standard idea of mechanism design) removed the need to continually monitor the bidding to get the best price for position, thereby ending bidding wars.
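A small sketch of the two pricing rules being contrasted, with made-up bids and the quality-weighting omitted: under a first-price rule the winner pays its own bid, so it pays to shave bids and monitor rivals; under a second-price (Vickrey-style) rule the winner pays the runner-up’s bid, so bidding your true value is fine.

```python
def first_price(bids):
    """Winner pays their own bid -> incentive to shade bids and monitor rivals."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

def second_price(bids):
    """Winner pays the next-highest bid -> bidding your true value is optimal."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

bids = {"adv_A": 3.00, "adv_B": 2.50, "adv_C": 1.00}  # hypothetical per-click bids
print(first_price(bids))    # ('adv_A', 3.0)
print(second_price(bids))   # ('adv_A', 2.5): no need to shave the bid to 2.51
```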

But what comes across most in the article is how much economics needs to seriously consider AI (and ML more generally):

The prospect of an economy of AIs has also inspired expansions to new mechanism design settings. Researchers have developed incentive-compatible multiperiod mechanisms, considering such factors as uncertainty about the future and changes to agent preferences because of changes in local context. Another direction considers new kinds of private inputs beyond preference information.

I would have loved to see an article on “what machine learning can teach economics” or how tools in ML are transforming the study of markets.

Science also had one article on “trends and prospects” in ML and one on natural language processing.

References

Parkes, D., & Wellman, M. (2015). Economic reasoning and artificial intelligence. Science, 349(6245), 267-272. DOI: 10.1126/science.aaa8403

Gershman, S., Horvitz, E., & Tenenbaum, J. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245), 273-278. DOI: 10.1126/science.aac6076

John Nash, 1928 – 2015

Sad news that John Nash was killed yesterday when his taxi crashed on its way back from the airport. He and his wife were ejected from the taxi when it ran into the lane divider.

Nash is most famous for the biopic A Beautiful Mind, though obviously it is his intellectual contributions that you should know about.

His 30-page PhD thesis was what won him the Nobel Prize. His work on game theory was influential not just in economics, but also in psychology and ecology, among other fields.

Recently declassified letters to the NSA show how Nash anticipated modern cryptography and its reliance on computational complexity. This is the description he included in his letter:

[Figure: Nash’s transmitting arrangement]

When he was killed, he was returning from Norway where he received the Abel prize for work on nonlinear partial differential equations.

He continued to publish; his final paper (afaik) was “The agencies method for coalition formation in experimental games”.

He also maintained (?) a delightfully minimalist personal web page.

The future ecology of stock traders

I am beyond fascinated by the interactions between competing intelligences that exist in the stock market. It is a bizarre mishmash of humans, AIs, and both (cyborgpeople?).

One recent strategy that exploits this interaction is ‘spoofing’. The description from the link:

  • You place an order to sell a million widgets at $104.
  • You immediately place an order to buy 10 widgets at $101.
  • Everyone sees the million-widget order and is like, “Wow, lotta supply, the market is going down, better dump my widgets!”
  • So someone is happy to sell you 10 widgets for $101 each.
  • Then you immediately cancel your million-widget order, leaving you with 10 widgets for which you paid $1,010.
  • Then you place an order to buy a million widgets for $101, and another order to sell 10 widgets at $104.
  • Everyone sees the new million-widget order, and since no one has any attention span at all, they are like, “Wow, lotta demand, the market is going up, better buy some widgets!”
  • So someone is happy to buy 10 widgets from you for $104 each.
  • Then you immediately cancel your million-widget order, leaving you with no widgets, no orders and $30 in sweet sweet profits.
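Tracing the arithmetic in the example above:

```python
# Following the widget example:
bought = 10 * 101     # buy 10 widgets at $101 after the fake sell order spooks the market
sold   = 10 * 104     # sell those 10 widgets at $104 after the fake buy order spooks it back
print(sold - bought)  # 30 -> the "$30 in sweet sweet profits"
```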

Amusingly enough, you don’t even need a fancy computer program for it – you can just hire a bunch of people who are really good at fast video games and they can click click click those keys fast enough for you.

Now some day trader living in his parents’ basement is accused of using this technique and causing the flash crash of 2010 (it possibly wasn’t him directly, but he could have caused some cascade that led to it).

I’m sitting here with popcorn, waiting to see how the ecosystem of varied intelligences evolves in competition with each other. Sounds like Wall Street needs to take some crash courses in ecology.

Whither experimental economics?

When I was applying to graduate school, I looked at three options: neuroscience, computer science, and economics. I had, effectively, done an economics major as an undergrad and had worked at an economic consulting firm. But the lack of controlled experimentation in economics kept me from applying and I ended up as a neuroscientist. (There is, of course, a very experimental non-human economics which goes by the name of ecology, though I did not recognize it at the time.)

I profess to being confused as to the lack of experimentation in economics, especially for a field that constantly tries to defend its status as a science. (Well, I understand: incentives, existing capabilities, and all that.)

A recent dissertation on the history of experimental economics – as opposed to behavioral economics – is enlightening:

“We were describing this mechanism and Vernon says, ‘You know, I can test this whether it works or not.’ I said, ‘What do you mean?’ And he says, ‘I’ll run an experiment.’ I said, ‘What the heck are you talking about? What do you do?’ And so he hauls seven or eight graduate students into a classroom. He ran the mechanism and it didn’t work. It didn’t converge to the equilibrium. It didn’t produce the outcomes the theory said it would produce. And I thought, okay. So back to [doing] theory. I don’t care; this doesn’t bother me.

It bothered Vernon a lot because we sat around that evening talking and he says, ‘Oh, I know what I did wrong.’ And he went back the next day and he hauled the students back in the room, changed the rules just a little bit in ways that the theory wouldn’t notice the difference. From our theory point of view, it wouldn’t have mattered. But he changed the rules a little bit and bang, that thing zapped in and converged.”

The difference between the two experiments was the information shared with the test subjects. The first time around, the subjects wrote down their number on a piece of paper and then Smith wrote them up on the board. Then he asked the subjects to send another message and if the messages were the same twice in a row he would stop, since that stability would be interpreted as having reached equilibrium. But the messages did not stop the first time Smith had run the experiment a day earlier…

The fact that the experiment did not converge on the first attempt, but did on the second after a change to only one rule (the information structure available to the participants) – a rule the theory did not require in order to make its prediction – made a lasting impact on Ledyard.

And this is exactly why we do experiments:

[T]he theory didn’t distinguish between those two rules, but Vernon knew how to find a rule that would lead to an equilibrium. It meant he knew something that I didn’t know and he had a way of demonstrating it that was really neat.

In psychology and neuroscience, there are many laboratories doing animal experiments to test some sort of economic decision-making hypothesis, though it is debatable how much of that work has filtered into the economics profession. What the two fields could really use, though, are economic ideas about more than just basic decision-making. Much of economics is about markets and mass decisions; there is very little animal experimentation on these questions.

(via Marginal Revolution)

How well do we understand how people make choices? Place a bet on your favorite theory

Put your money where your mouth is, as they say:

The goal of this competition is to facilitate the derivation of models that can capture the classical choice anomalies (including Allais, St. Petersburg, and Ellsberg paradoxes, and loss aversion) and provide useful forecasts of decisions under risk and ambiguity (with and without feedback).

The rules of the competition are described in http://departments.agri.huji.ac.il/cpc2015. The submission deadline is May 17, 2015. The prize for the winners is an invitation to be a co-author of the paper that summarizes the competition (the first part can be downloaded from http://departments.agri.huji.ac.il/economics/teachers/ert_eyal/CPC2015.pdf)…

Our analysis of these 90 problems (see http://departments.agri.huji.ac.il/cpc2015) shows that the classical anomalies are robust, and that the popular descriptive models (e.g., prospect theory) cannot capture all the phenomena with one set of parameters. We present one model (a baseline model) that can capture all the results, and challenge you to propose a better model.

There was a competition recently that asked people to predict seizures from EEG activity; the public blew the neuroscientists out of the water. How will the economists do?

Register by April 1. The submission deadline is May 17!

You can still change your mind when there is no news

The Beautiful Data Set – an economist uses soccer data:

Data from soccer can also illuminate one of the most prominent theories of the stock market: the efficient-market hypothesis. This theory posits that the market incorporates information so completely and so quickly that any relevant news is integrated into a stock’s price before anyone has a chance to act on it. This means that unless you have insider information, no stock is a better buy (i.e., undervalued) when compared with any other.

If this theory is correct, the price of an asset should jump up or down when news breaks and then remain perfectly flat until there is more news. But to test this in the real world is difficult. You would need to somehow stop the flow of news while letting trading continue. That seems impossible, since everything that happens in the real world, however boring or uneventful, counts as news…

The break in play at halftime provided a golden opportunity to study market efficiency because the playing clock stopped but the betting clock continued. Any drift in halftime betting values would have been evidence against market efficiency, since efficient prices should not drift when there is no news (or goals, in this case). It turned out that when goals arrived within seconds of the end of the first half, betting continued heavily throughout halftime — but the betting values remained constant, a necessary condition to prove that those markets were indeed efficient.

There is an extremely strong assumption here about how information about the world is extracted. In essence, it says that there is no thinking. This may be true in soccer, if some linear combination of score, time of possession, etc. is the best predictor of the eventual outcome. Yet not all markets (or decisions) are like this. When you consider whether to take some action, you may have some initial idea of what you want to do that gets firmer over time – but sometimes a new thought, a new possibility, may pop into your head.

We can formalize this easily using concepts from computer science, which has a handy way of determining how long a given algorithm will take. Some things – like linear models – can be computed quickly. Other models take longer: think of chess algorithms, where the longer they run, the more options they can consider.

[Figure: alpha-beta pruning of a game tree]

It is not at all clear to me on what timescale markets should change in response to a pulse of new information. If prediction is easy, it will presumably happen instantly. If prediction is hard, you would expect the market to change as it makes new predictions. But even then, it is not obvious how it will change! If the space of possibilities changes smoothly, you’d expect predictions to get more accurate over time. This means the range of plausible options shrinks and market actors have to deal with less risk. But as in chess, the search space may vary wildly: you sacrifice that queen and the board changes in dramatic ways. Then the markets could fluctuate suddenly up or down and still be efficient integrators of information.
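For the chess analogy, here is a textbook alpha-beta search sketch (generic, runnable, and not tied to any market model): the deeper you let it search, the more of the game tree it considers and the longer it takes, which is the sense in which harder predictions demand more computation. The toy tree and helper functions are invented for illustration.

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Textbook alpha-beta: prunes branches that cannot affect the result."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:
                break              # beta cutoff: the opponent will avoid this line
        return best
    else:
        best = math.inf
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, value))
            beta = min(beta, best)
            if alpha >= beta:
                break              # alpha cutoff
        return best

# Toy game tree as nested tuples; leaves are payoffs for the maximizing player.
tree = ((3, 5), (6, (9, 1)), (1, 2))
children = lambda n: n if isinstance(n, tuple) else []
value    = lambda n: n
print(alphabeta(tree, depth=4, alpha=-math.inf, beta=math.inf,
                maximizing=True, children=children, value=value))  # -> 6
```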

I would be curious to see a psychology experiment along these lines (they’re probably out there, but I don’t know the references): a subject is forced to choose between two options, A and B, but has to work out something cognitively difficult to decide between them. At different points in time they are asked what they guess the answer is and how confident they are in that option. Does that confidence always vary smoothly? Do large individual-level fluctuations in confidence average out?

And yes, this is a call for integrating computational complexity and algorithmic analysis with economics and psychology.