Rationality and the machina economicus

Science magazine had an interesting series of review articles on Machine Learning last week. Two of them offered different perspectives on the same question: how does traditional economic rationality fit into artificial intelligence?

At the core of much AI work are concepts of optimal ‘rational decision-makers’. That is, the intelligent program is essentially trying to maximize some defined objective function, known in economics as maximizing utility. Where the computer and economic traditions diverge is in implementation: computers need algorithms, and often have to account for non-traditional resource constraints such as time, whereas economics typically leaves all of this unspecified outside of trivial cases.
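In code, that classical picture is little more than an argmax over actions weighted by outcome probabilities. A minimal sketch (the actions, probabilities, and utilities below are invented for illustration):

```python
import numpy as np

# Toy example: three actions, each with a distribution over two outcomes, and a
# utility for each outcome. All numbers here are invented for illustration.
outcome_probs = {
    "a1": np.array([0.7, 0.3]),   # P(outcome | action)
    "a2": np.array([0.5, 0.5]),
    "a3": np.array([0.1, 0.9]),
}
utilities = np.array([10.0, -2.0])  # utility of each outcome

def expected_utility(action):
    return float(outcome_probs[action] @ utilities)

# The "rational" agent simply picks the action with the highest expected utility.
best = max(outcome_probs, key=expected_utility)
print(best, expected_utility(best))   # -> a1 6.4
```

Everything interesting in the two reviews is about what happens when computing that argmax is itself costly.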

[Figure: the economics of thinking]

How can we move from the classical view of a rational agent who maximizes expected utility over an exhaustively enumerable state-action space to a theory of the decisions faced by resource-bounded AI systems deployed in the real world, which place severe demands on real-time computation over complex probabilistic models?

We see the attainment of an optimal stopping time, in which attempts to compute additional precision come at a net loss in the value of action. As portrayed in the figure, increasing the cost of computation would lead to an earlier ideal stopping time. In reality, we rarely have such a simple economics of the cost and benefits of computation. We are often uncertain about the costs and the expected value of continuing to compute and so must solve a more sophisticated analysis of the expected value of computation.
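A toy numerical version of that tradeoff (the benefit curve and per-step costs below are invented, not taken from the paper): decision quality improves with more computation but with diminishing returns, each step of computation has a fixed cost, and the stopping time that maximizes net value moves earlier as that cost rises.

```python
import numpy as np

# "Economics of thinking" sketch: the benefit of deliberation rises with
# diminishing returns, while computation costs a fixed amount per step.
steps = np.arange(200)

def net_value(cost_per_step):
    benefit = 1.0 - np.exp(-steps / 30.0)   # diminishing returns to more computation
    return benefit - cost_per_step * steps

for c in (0.001, 0.005):
    t_star = int(np.argmax(net_value(c)))
    print(f"cost per step = {c}: optimal stopping time ≈ {t_star} steps")
```

In this toy curve, raising the per-step cost from 0.001 to 0.005 pulls the optimal stopping time in from roughly 105 steps to roughly 57. The harder problem the paper points to is when the benefit curve itself is uncertain.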

Humans and other animals appear to make use of different kinds of systems for sequential decision-making: “model-based” systems that use a rich model of the environment to form plans, and a less complex “model-free” system that uses cached values to make decisions. Although both converge to the same behavior with enough experience, the two kinds of systems exhibit different tradeoffs in computational complexity and flexibility. Whereas model-based systems tend to be more flexible than the lighter-weight model-free systems (because they can quickly adapt to changes in environment structure), they rely on more expensive analyses (for example, tree-search or dynamic programming algorithms for computing values). In contrast, the model-free systems use inexpensive, but less flexible, look-up tables or function approximators.
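To make that contrast concrete, here is a toy sketch on a small deterministic chain MDP that I have invented for illustration (nothing here is from the paper): value iteration plans with the full transition model and converges by sweeping over it, while tabular Q-learning just caches values learned from sampled experience in a lookup table.

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..4, actions 0 = left, 1 = right,
# reward 1.0 for entering the terminal state 4. Invented for illustration.
N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

# "Model-based": value iteration sweeps the known transition model repeatedly.
# Expensive per decision, but re-planning instantly reflects any model change.
def value_iteration(n_sweeps=100):
    V = np.zeros(N_STATES)
    for _ in range(n_sweeps):
        for s in range(N_STATES - 1):
            V[s] = max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in range(N_ACTIONS))
    return V

# "Model-free": tabular Q-learning caches action values learned from experience.
# Cheap lookups at decision time, but adapting to change requires new samples.
def q_learning(episodes=300, alpha=0.5, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = int(rng.integers(N_ACTIONS))
            else:  # greedy with random tie-breaking
                a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
            s2, r, done = step(s, a)
            Q[s, a] += alpha * (r + GAMMA * np.max(Q[s2]) * (not done) - Q[s, a])
            s = s2
    return Q

print("model-based V:", np.round(value_iteration(), 2))
print("model-free  V:", np.round(q_learning().max(axis=1), 2))
```

The two end up agreeing on the values; the point of the contrast is where the computation is spent. The planner pays per decision by sweeping its model, while the cached table pays up front in experience and is slow to notice when the world changes.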

That being said, what does economics have to offer machine learning? Parkes and Wellman try to offer an answer and basically say – game theory. Which is not something that economics can ‘offer’ so much as ‘offered a long, long time ago’. A recent interview with Parkes puts this in perspective:

Where does current economic theory fall short in describing rational AI?

Machina economicus might better fit the typical economic theories of rational behavior, but we don’t believe that the AI will be fully rational or have unbounded abilities to solve problems. At some point you hit the intractability limit—things we know cannot be solved optimally—and at that point, there will be questions about the right way to model deviations from truly rational behavior…But perfect rationality is not achievable in many complex real-world settings, and will almost surely remain so. In this light, machina economicus may need its own economic theories to usefully describe behavior and to use for the purpose of designing rules by which these agents interact.

Let us admit that economics is not fantastic at describing trial-to-trial individual behavior. What can economics offer the field of AI, then? Systems for multi-agent interaction. After all, markets are at the heart of economics:

At the multi-agent level, a designer cannot directly program behavior of the AIs but instead defines the rules and incentives that govern interactions among AIs. The idea is to change the “rules of the game”…The power to change the interaction environment is special and distinguishes this level of design from the standard AI design problem of performing well in the world as given.

For artificial systems, in comparison, we might expect AIs to be truthful where this is optimal and to avoid spending computation reasoning about the behavior of others where this is not useful…. The important role of mechanism design in an economy of AIs can be observed in practice. Search engines run auctions to allocate ads to positions alongside search queries. Advertisers bid for their ads to appear in response to specific queries (e.g., “personal injury lawyer”). Ads are ranked according to bid amount (as well as other factors, such as ad quality), with higher-ranked ads receiving a higher position on the search results page.

Early auction mechanisms employed first-price rules, charging an advertiser its bid amount when its ad receives a click. Recognizing this, advertisers employed AIs to monitor queries of interest, ordered to bid as little as possible to hold onto the current position. This practice led to cascades of responses in the form of bidding wars, amounting to a waste of computation and market inefficiency. To combat this, search engines introduced second-price auction mechanisms, which charge advertisers based on the next-highest bid price rather than their own price. This approach (a standard idea of mechanism design) removed the need to continually monitor the bidding to get the best price for position, thereby ending bidding wars.
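A minimal sketch of the pricing difference (the bids and number of positions are invented, and real search auctions also weight bids by ad quality and click-through rate, which is omitted here): under a first-price rule the winner pays its own bid, so its AI has a standing incentive to shave the bid downward and probe for the lowest price that keeps its position; under a second-price rule the payment depends only on the next bid below, so that probing buys nothing.

```python
# Position auction sketch: rank ads by bid, then price per click under either a
# first-price or a second-price (next-highest bid) rule. Invented numbers.
def allocate_and_price(bids, n_positions, rule="second"):
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i, (advertiser, bid) in enumerate(ranked[:n_positions]):
        if rule == "first":
            price = bid                                   # pay your own bid
        else:
            # pay the next-highest bid (0 if nobody bids below you)
            price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((i + 1, advertiser, bid, price))
    return results

bids = {"A": 4.00, "B": 2.50, "C": 1.00}
for rule in ("first", "second"):
    print(rule, allocate_and_price(bids, n_positions=2, rule=rule))
```

Under the first-price rule, A could drop its bid to just above 2.50 and pay less while keeping the top position, which is exactly the monitoring-and-shading behavior that fed the bidding wars; under the second-price rule A pays 2.50 regardless of how much higher it bids.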

But what comes across most in the article is how much economics needs to seriously consider AI (and ML more generally):

The prospect of an economy of AIs has also inspired expansions to new mechanism design settings. Researchers have developed incentive-compatible multiperiod mechanisms, considering such factors as uncertainty about the future and changes to agent preferences because of changes in local context. Another direction considers new kinds of private inputs beyond preference information.

I would have loved to see an article on “what machine learning can teach economics”, or on how tools from ML are transforming the study of markets.

Science also had one article on “trends and prospects” in ML and one on natural language processing.

References

Parkes, D., & Wellman, M. (2015). Economic reasoning and artificial intelligence. Science, 349(6245), 267–272. DOI: 10.1126/science.aaa8403

Gershman, S., Horvitz, E., & Tenenbaum, J. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245), 273–278. DOI: 10.1126/science.aac6076
