In Beyond Nash Equilibrium: Solution Concepts for the 21st Century, Joseph Halpern cites three problems with the idea of Nash equilibrium that are inspired by computer science. These – and here I’m roughly quoting – are that Nash equilibria do not deal with “faulty” or “unexpected” behavior, they do not deal with computational concerns, and they assume that players have common knowledge of the structure of the game. I think the first and third can be roughly summed up as “Nash players should be all-knowing rationalists, but that is not always a useful assumption”.
The most immediately interesting to me is the need to take computation into account. The first example he gives is:
You are given an n-bit number x. You can guess whether it is prime, or play safe and say nothing. If you guess right, you get $10; if you guess wrong, you lose $10; if you play safe, you get $1. There is only one Nash equilibrium in this 1-player game: giving the right answer. But if n is large, this is almost certainly not what people will do.
This is what I would call a problem that relies on pseudoperfect agents – ones that know what a prime number is and how to calculate whether a number is prime, but do not immediately know whether a given number is prime. A more typical layperson could instead fall back on the known probability distribution of prime numbers. And of course, the cost of calculating the primality of a number – both the physical cost and the opportunity cost – needs to be included in the final payoff.
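To make the layperson’s strategy concrete, here is a minimal sketch of the expected payoffs under Halpern’s $10/−$10/$1 stakes. It assumes (this is my addition, not from the example) that the number is drawn uniformly at random and that the density of primes near an n-bit number can be approximated by the prime number theorem as roughly 1/(n ln 2):

```python
import math

def prime_density(n_bits: int) -> float:
    """Approximate probability that a random n-bit number is prime,
    using the prime number theorem: density near x is ~ 1/ln(x),
    and ln(2**n) = n * ln(2). (An approximation, not an exact count.)"""
    return 1.0 / (n_bits * math.log(2))

def expected_payoffs(n_bits: int) -> dict:
    """Expected dollar payoff of each strategy against a uniformly
    random n-bit number, with the $10 / -$10 / $1 stakes."""
    p = prime_density(n_bits)
    return {
        "guess composite": 10 * (1 - p) - 10 * p,  # right with prob 1 - p
        "guess prime": 10 * p - 10 * (1 - p),      # right with prob p
        "play safe": 1.0,                          # guaranteed $1
    }

for n in (8, 64, 1024):
    print(n, expected_payoffs(n))
```

Under these assumptions, always guessing “composite” beats playing safe even for modest n, and the gap grows with n – which is exactly the kind of cheap heuristic a real (computationally bounded) player might adopt instead of the Nash-prescribed perfect answer.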
But really: the computational complexity of a given situation adds implicit costs to the strategies, and that does need to be taken into account. But how often does that actually happen?