In *Beyond Nash Equilibrium: Solution Concepts for the 21st Century*, Joseph Halpern cites three problems with the idea of Nash equilibrium that are inspired by computer science. These (and here I'm roughly quoting) are that equilibria do not deal with "faulty" or "unexpected" behavior, that they do not deal with computational concerns, and that they assume players have common knowledge of the structure of the game. I think the first and third can be roughly summed up as "Nash players should be all-knowing rationalists, but that is not always a useful assumption".

The most immediately interesting to me is the need to take computation into account. The first example he gives is

> You are given an *n*-bit number *x*. You can guess whether it is prime, or play safe and say nothing. If you guess right, you get $10; if you guess wrong, you lose $10; if you play safe, you get $1. There is only one Nash equilibrium in this 1-player game: giving the right answer. But if *n* is large, this is almost certainly not what people will do.

This is what I would call a problem that relies on pseudoperfect agents: ones that know what a prime number is and how to calculate whether a number is prime, but not, immediately, whether a given number is prime. A more typical layperson could instead reason from the probability distribution of prime numbers. And of course, the cost of calculating the primality of a number, in both physical and opportunity terms, needs to be included in the final outcome.

But really: the **computational complexity of a given situation will add implicit costs** to the strategies and that *does* need to be taken into account. But how often does that actually happen?
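To make the "reason from the distribution" strategy concrete, here is a small sketch of my own (not from Halpern's paper): by the prime number theorem, a random *n*-bit number is prime with probability roughly 1/(*n* ln 2), so we can compare the expected payoffs of each zero-computation strategy.

```python
import math

# Sketch (my own illustration): expected value of each strategy in the
# n-bit primality game, assuming x is drawn uniformly at random and using
# the prime number theorem's density estimate p ~ 1 / (n * ln 2).

def expected_payoffs(n):
    p_prime = 1 / (n * math.log(2))                  # approximate chance x is prime
    guess_prime = 10 * p_prime - 10 * (1 - p_prime)  # +$10 if right, -$10 if wrong
    guess_composite = 10 * (1 - p_prime) - 10 * p_prime
    play_safe = 1.0                                  # guaranteed $1
    return guess_prime, guess_composite, play_safe

# For even modest n, blindly guessing "composite" beats playing safe:
# at n = 64, p_prime ~ 0.023, so guessing composite pays ~ $9.55 on average.
for n in (8, 64, 1024):
    gp, gc, safe = expected_payoffs(n)
    print(n, round(gp, 2), round(gc, 2), safe)
```

Note what this leaves out: it ignores the (considerable) cost of actually testing primality, which is exactly the implicit cost the paragraph above says should be charged to each strategy.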


There is an even deeper and less micro dynamical connection available from algorithmic game theory.

Finding a Nash equilibrium is PPAD-complete. This means that even if agents are completely rational, and even if they all know the game and can find the payoff for any pair of pure strategies in constant time (i.e., no trivial hiding of hard problems in the computation of the payoff function, as in your quote), there is no polynomial-time algorithm to find the equilibrium (under the assumption that P ≠ PPAD; not proven, but widely believed, like P ≠ NP). Since the economy is just another algorithm, this means economies cannot find equilibria even when everything is 'obvious'.
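A far weaker but easy-to-run illustration of this point (my own toy example, not the PPAD-completeness result itself): even in a 2x2 zero-sum game, a naive "economic" process like pure best-response dynamics need not settle on the equilibrium. In matching pennies the unique Nash equilibrium is mixed (50/50), and the dynamics just cycle.

```python
# Toy sketch (assumptions mine): simultaneous pure best-response dynamics
# in matching pennies. The row player wants the actions to match; the
# column player wants them to mismatch. The unique Nash equilibrium is
# for both to mix 50/50, which pure dynamics can never reach.

PAYOFF = [[1, -1],   # row player's payoff matrix
          [-1, 1]]

def row_best_response(col_action):
    # Row player's best pure reply to the column player's last action.
    return max((0, 1), key=lambda row: PAYOFF[row][col_action])

row, col = 0, 0
history = []
for _ in range(8):
    history.append((row, col))
    row = row_best_response(col)
    col = 1 - row    # column player best-responds by mismatching
print(history)       # the play cycles; it never converges
```

Cycling in one toy game is of course much weaker than PPAD-hardness, which says that *no* efficient procedure (not just this naive one) finds equilibria in general.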

I use similar ideas, and a cousin of PPAD known as PLS, in my recent paper to show that evolution cannot find local fitness peaks even on static fitness landscapes.
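For intuition about what PLS-type local search looks like (a sketch of my own, not the construction from the paper): the canonical algorithm is a strict adaptive walk, repeatedly moving to a fitter one-bit mutant until no neighbour improves.

```python
import random

# Sketch (assumptions mine): a strict adaptive walk over n-bit genotypes
# on an uncorrelated random fitness landscape, where each genotype gets
# an independent random fitness. This is the kind of local search that
# PLS formalizes.

random.seed(0)
n = 12
_fitness_cache = {}

def fitness(genotype):
    # Assign each genotype an i.i.d. random fitness, memoized for consistency.
    if genotype not in _fitness_cache:
        _fitness_cache[genotype] = random.random()
    return _fitness_cache[genotype]

def adaptive_walk(start):
    g, steps = start, 0
    while True:
        neighbours = [g ^ (1 << i) for i in range(n)]  # all one-bit mutants
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(g):
            return g, steps       # local peak: no fitter neighbour exists
        g, steps = best, steps + 1

peak, steps = adaptive_walk(0)
print(steps, round(fitness(peak), 3))
```

On an uncorrelated landscape like this one the walks happen to be short; the hardness results concern landscapes where locating a local peak takes superpolynomial time, and not just for this walk but for any algorithm (assuming standard complexity conjectures).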

I'll admit that PPAD is a term I had to look up; that one's new to me. That's a very cool result.

A question which I always forget the answer to: does this mean that there is no algorithm that can find the equilibrium in polynomial time in the worst case? And if it's nonpolynomial, what input is the running time a function of? I.e., could it be nonpolynomial but still quite solvable if the input is small enough?

I also wonder if there’s an estimation method that can predict the equilibrium with some decent level of accuracy?

The input size is the size of the payoff matrix (which is quadratic in the number of pure strategies). If you want to be very pedantic, it also includes the bits required to specify the payoff values (otherwise it is trivial to make things intractable simply by giving you payoffs so large that you can't even read them in polytime). So it is a very natural notion of size. I don't know what you mean by "highly solvable if the input is small enough". All theoretical computer science results are asymptotic; the analysis of performance on small inputs or on a specific input is engineering.

Some people argue against this by saying "but in science, we care about structures of a specific size", or in the biology setting, "the genome is not arbitrarily large; it has 30k genes and 3*10^9 nucleotides". This would be a fine argument if any of the theories scientists presented actually used those numbers in a non-general way. However, we don't really know how to think about such large numbers in non-general ways, and so most theories actually look like "for arbitrary n, but for my example set n = 30k". In general, understanding seems to mean "analyzing an infinite number of similar cases" as opposed to just one specific case. Hence, asymptotic results are still the right way to think about scientific theories.

In particular, consider just your laptop. We think that Turing machines and asymptotic analysis are a good model for it. However, your computer has a finite amount of RAM and disk space, so it really is a finite state machine… but modeling it as such would be silly.

Nash equilibria cannot be efficiently approximated (to arbitrary precision, at least), and the counter-examples are not pathological (a small perturbation of the input does not make the problem tractable):

Chen, X., Deng, X., & Teng, S. H. (2006). Computing Nash equilibria: Approximation and smoothed complexity. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06) (pp. 603-612). IEEE.

But see here for more discussion. In particular, some kinds of approximation are possible, and even for the general case a quasi-polynomial time algorithm exists (worse than any polynomial but better than exponential).

Things are a little bit more complicated for PLS (the one I use for evolutionary equilibria). Approximating it in some senses, like getting a fitness value close to that of a local optimum, or being within a polynomial number of adaptive steps of a local optimum, is impossible:

Klauck, H. (1996). On the hardness of global and local approximation. In Algorithm Theory—SWAT’96 (pp. 88-99). Springer Berlin Heidelberg.

Fischer, S. T. (1995). A note on the complexity of local search problems. Information Processing Letters, 53(2), 69-75.

But there is a way to define a notion of 'approximately locally optimal' that is achievable in polytime. I plan to write about what this means for biology in future blog posts/papers, so stay tuned!