# Culture and human evolution

Edge has an excellent interview with Joseph Henrich on cultural and biological evolution.  He argues that the distinction between the two is fuzzy; he says they are inseparable, but I think what he really means is that we don't know how to separate them yet.  Although they are distinct concepts, they feed back on each other, which makes separating them difficult-to-impossible (though that does not mean they are not distinct!).  To give an example of what he's saying:

> Another example here is fire and cooking. Richard Wrangham, for example, has argued that fire and cooking have been important selection pressures, but what often gets overlooked in understanding fire and cooking is that they’re culturally transmitted—we’re terrible at making fires actually. We have no innate fire-making ability. But once you got this idea for cooking and making fires to be culturally transmitted, then it created a whole new selection pressure that made our stomachs smaller, our teeth smaller, our gapes or holdings of our mouth smaller, it altered the length of our intestines. It had a whole bunch of downstream effects.

We did not evolve the ability to make fire.  But once we were able to make fire, biological evolution took hold.  Cultural evolution drove biological evolution.  An important point he makes is that culture and technology can only reach a certain level of richness at a given population size.  More complex societies require larger – or more connected – populations:

> I began this investigation by looking at a case study in Tasmania. Tasmania’s an island off the coast of Southern Victoria in Australia and the archeological record is really interesting in Tasmania. Up until about 10,000 years ago, 12,000 years ago, the archeology of Tasmania looks the same as Australia. It seems to be moving along together. It’s getting a bit more complex over time, and then suddenly after 10,000 years ago, it takes a downturn. It becomes less complex.

> The ability to make fire is probably lost. Bone tools are lost. Fishing is lost. Boats are probably lost. Meanwhile, things move along just fine back on the continent, so there’s this kind of divergence, and one thing nice about this experiment is that there’s good reason to believe that peoples were genetically the same.

> You start out with two genetically well-intermixed peoples. Tasmania’s actually connected to mainland Australia so it’s just a peninsula. Then about 10,000 years ago, the environment changes, it gets warmer and the Bass Strait floods, so this cuts off Tasmania from the rest of Australia, and it’s at that point that they begin to have this technological downturn. You can show that this is the kind of thing you’d expect if societies are like brains in the sense that they store information as a group and that when someone learns, they’re learning from the most successful member, and that information is being passed from different communities, and the larger the population, the more different minds you have working on the problem.

> If your number of minds working on the problem gets small enough, you can actually begin to lose information. There’s a steady state level of information that depends on the size of your population and the interconnectedness. It also depends on the innovativeness of your individuals, but that has a relatively small effect compared to the effect of being well interconnected and having a large population.

The analogy between brains and populations is a good one: in the brain, it is not the individual neurons that give rise to complex behavior, but the interactions between them.  The number of neurons determines the complexity of the patterns that can be extracted from the environment.  A simple example from computer science is the perceptron: with one neuron, you can make a linear decision between two choices.  As you connect more and more neurons, you can increase the complexity of the decision by adding more linear filters; eventually the decisions can be arbitrarily complex, but with only a few neurons you will be severely limited in the number of patterns you can decode.
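To make the perceptron point concrete, here's a minimal sketch of a single linear-threshold "neuron" (the function and variable names are mine) learning a linearly separable problem; a non-separable problem like XOR would need more neurons:

```python
def perceptron_train(data, epochs=50, lr=0.1):
    """Train a single 'neuron': a linear threshold unit on 2-D points."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:               # update weights only on mistakes
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

# A linearly separable problem (label = sign of x1 - x2): one neuron suffices.
data = [((x1, x2), 1 if x1 > x2 else -1)
        for x1 in range(-3, 4) for x2 in range(-3, 4) if x1 != x2]
w, b = perceptron_train(data)
correct = sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else -1) == label
              for (x1, x2), label in data)
print(correct, len(data))  # prints: 42 42 -- every point classified correctly
```

The perceptron convergence theorem guarantees this works here; replace the labels with XOR and no single neuron can ever get them all right.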

But the level of complexity also has an impact on how we interact with each other:

> In the Ultimatum Game, two players are allotted a sum of money, say $100, and the first player can offer a portion of this $100 to the second player, who can either accept or reject. If the second player accepts, they get that amount of money, and the first player gets the remainder. If they reject, both players get zero. Just to give you an example, suppose the money is $100, and the first player offers $10 out of the $100 to the second player. If the second player accepts, he gets the $10 and the first player gets $90. If he rejects, both players go home with zero. If you place yourself in the shoes of the second player, then you should be inclined to accept any amount of money if you just care about making money. Now, if he offers you zero, you have the choice between zero and zero, so it’s ambiguous what you should do. But assuming it’s a positive amount, say $10, you should accept the $10, go home with $10 and let the other guy go home with $90. But in experiments with undergraduates, Western undergraduates, going back to 1982, behavioral economists find that students give about half, sometimes a little bit less than half, and people are inclined to reject offers below about 30 percent.

> …I was thinking that the Machiguenga would be a good test of this, because if they also showed this willingness to reject and to make equal offers, it would really demonstrate the innateness of this finding, because they don’t have any higher level institutions, and it would be hard to make a kind of cultural argument that they were bringing something into the experiment that was causing this behavior.  I went and I did it in 1995 and 1996 there, and what I found amongst the Machiguenga was that they were completely unwilling to reject, and they thought it was silly. Why would anyone ever reject? They would almost explain the subgame perfect equilibrium, the solution that the economists use, back to me by saying, “Well, why would anybody ever reject? You lose money then.” And they made low offers, the modal offer was 15 percent instead of 50, and the mean comes out to be about 25 percent.

> We found we were able to explain a lot of the variation in these offers with two variables. One was the degree of market integration. More market-integrated societies offered more, and less market integrated societies offered less. But also, there seemed to be other institutions, institutions of cooperative hunting seemed to influence offers. Societies with more cooperative institutions offered more, and these were independent effects.

> This creates a puzzle because typically people think of small-scale kinds of societies, where you study hunter-gatherers and horticulturalists scattered across the globe (ranging from New Guinea to Siberia to Africa), as being very pro-social and cooperative. This is true, but the thing is those are based on local norms for cooperation with kin and local interactions in certain kinds of circumstances. Hunter-gatherers are famous for being great at food sharing, but these norms don’t extend beyond food sharing. They certainly don’t extend to ephemeral interactions or strangers, and to make a large-scale society run you have to shift from investing in your local kin groups and your enduring relationships to being willing to pay to be fair to a stranger.

> This is something that is subtle, and what people have trouble grasping is that if you’re going to be fair to a stranger, then you’re taking money away from your family. In the case of these dictator games, in order to give 50 percent to this other unknown person, it meant you were going home with less money, and that meant your family was going to have less money, and your kids would have less money. To observe modern institutions, to not hire your brother-in-law when you get a fancy job or you get elected to an office is to hurt your family. Your brother-in-law doesn’t have a job now. He has to have whatever other job he has, a less good job.

# The best way to extort an extortionist is to be fair

One of the most popular games in the study of cooperation is the iterated prisoner’s dilemma.  In each round, players can cooperate or defect: the best overall outcome comes from mutual cooperation, but the best outcome for a single player is to defect while the other player cooperates.  The most famously successful strategy is tit-for-tat: cooperate if your partner cooperated last turn, and defect otherwise.  Two tit-for-tat players will converge on harmonious cooperation and maximize their reward, while a single tit-for-tat player will avoid being conned into cooperating with a persistent defector.
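The dynamics described above are easy to see in a minimal simulation (function names are mine; payoffs are the conventional R=3, S=0, T=5, P=1):

```python
def payoff(a, b):
    """Standard prisoner's dilemma payoffs: R=3, S=0, T=5, P=1."""
    table = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
             ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    return table[(a, b)]

def tit_for_tat(history):
    # Cooperate on the first move, then copy the opponent's last move.
    return 'C' if not history else history[-1]

def always_defect(history):
    return 'D'

def play(p1, p2, rounds=100):
    s1 = s2 = 0
    h1, h2 = [], []  # each player's record of the opponent's moves
    for _ in range(rounds):
        m1, m2 = p1(h1), p2(h2)
        r1, r2 = payoff(m1, m2)
        s1, s2 = s1 + r1, s2 + r2
        h1.append(m2)
        h2.append(m1)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): exploited only in round one
```

Against a persistent defector, tit-for-tat loses only the opening round and then matches defection with defection, which is exactly the "avoid being conned" property.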

William H. Press and colleague Freeman Dyson (!) have found a new class of solutions to the iterated prisoner’s dilemma.  Their paper has been covered well elsewhere and has some very good commentary by the authors, so I won’t spend a ton of time explaining it.  Basically, they show that there are strategies that let you unilaterally set your opponent’s average score, and, more generally, enforce a linear relationship between your score and your opponent’s.  Out of this family comes what they call the ‘extortionate’ strategy, where you try to extort as much as possible from your opponent.

This set of strategies has two parameters; one of them (“$\chi$“) measures how much you want to extort from your opponent.  The other (“$\phi$“) is a bit unclear, but I think we might be able to (kind of) give it an intuition a bit later on.  An interesting point to note is that tit-for-tat is one case of this class of extortionate strategies: when the $\chi$ parameter is set to its minimum, indicating fairness instead of total extortion, and the mysterious $\phi$ parameter is set to its maximum, you get the tit-for-tat strategy.

I was curious, what happens when a bunch of extortionate strategies get together and duke it out?  What’s the best way to extort an extortionist?

Let’s look at the math of the strategy; feel free to skip this next paragraph if you wish.  Here, $p_{cd}$, for example, is the probability that a player cooperates given that on the previous turn they cooperated (“c”) and their opponent defected (“d”); in the formulas, $cc$, $cd$, $dc$, and $dd$ denote the payoffs for those same outcomes.

$p_{cc}(\chi,\phi) = 1 - \phi (\chi - 1) \frac{cc-dd}{dd - cd}$

$p_{cd}(\chi,\phi) = 1 - \phi(1 + \chi \frac{dc-dd}{dd-cd})$

$p_{dc}(\chi,\phi) = \phi (\chi + \frac{dc-dd}{dd-cd})$

$p_{dd}(\chi, \phi) = 0$

The traditional payoffs are $(cc, cd, dc, dd) = (3, 0, 5, 1)$, so we can simplify to:

$p_{cc}(\chi,\phi) = 1 - 2\phi (\chi - 1)$

$p_{cd}(\chi,\phi) = 1 - \phi(1 + 4\chi)$

$p_{dc}(\chi,\phi) = \phi (\chi + 4)$

And if we want fairness, we set $\chi$ to 1 (we’ll come back to this later):

$p_{cc}(\chi,\phi) = 1$

$p_{cd}(\chi,\phi) = 1 - 5\phi$

$p_{dc}(\chi,\phi) = 5\phi$

$p_{dd}(\chi, \phi) = 0$

With these payoffs, the allowed values of $\phi$ lie between 0 and 1/5.
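These simplified probabilities are easy to compute directly; here is a minimal sketch (the function name is mine). Setting $\chi = 1$ with $\phi$ at its maximum of 1/5 recovers tit-for-tat:

```python
def extortion_probs(chi, phi):
    """Cooperation probabilities of the extortionate strategy for the
    standard payoffs (cc, cd, dc, dd) = (3, 0, 5, 1), indexed by
    (my previous move, opponent's previous move)."""
    return {
        ('C', 'C'): 1 - 2 * phi * (chi - 1),
        ('C', 'D'): 1 - phi * (1 + 4 * chi),
        ('D', 'C'): phi * (chi + 4),
        ('D', 'D'): 0.0,
    }

# Fairness (chi = 1) at the maximum phi (1/5): cooperate with probability 1
# after the opponent cooperated, 0 after they defected -- i.e. tit-for-tat.
print(extortion_probs(1, 0.2))
```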

Okay, back from the math!  We can figure out what the best strategies are by using a genetic algorithm.  Every generation, extorters compete against each other many times, and the top 20% are selected to breed the next generation.  Breeding isn’t always a perfect copy: there is a 3% mutation rate that allows novel strategies to be introduced.  Let’s see what the average reward across time is (and, err, divide by 1000):

Aha!  Something happened there at generation ~350!  If you run the genetic algorithm for longer, it stays at this value.  It seems pretty clear that there’s one strategy that’s superior to the others.  And in fact, it’s the fairness strategy: $\chi=1$ dominates all the others in this model!  In other words, even if you are trying to extort as much as possible from other players, the best extortionate strategy against other extorters is to be perfectly fair!
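The post doesn't include the simulation code, so here is a rough sketch of the kind of genetic algorithm described; the population size, round-robin tournament, opening moves, and mutation scheme are my guesses, and `match` and `evolve` are hypothetical names:

```python
import random

def extortion_probs(chi, phi):
    # Cooperation probabilities for the simplified payoffs (3, 0, 5, 1).
    return {('C', 'C'): 1 - 2 * phi * (chi - 1),
            ('C', 'D'): 1 - phi * (1 + 4 * chi),
            ('D', 'C'): phi * (chi + 4),
            ('D', 'D'): 0.0}

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def match(a, b, rounds=100, rng=random):
    """Play two (chi, phi) strategies against each other; return total scores."""
    pa, pb = extortion_probs(*a), extortion_probs(*b)
    ma = mb = 'C'                       # assume both open by cooperating
    sa = sb = 0
    for _ in range(rounds):
        ra, rb = PAYOFF[(ma, mb)]
        sa, sb = sa + ra, sb + rb
        # Clamp in case mutation pushed a probability slightly out of [0, 1].
        qa = min(max(pa[(ma, mb)], 0.0), 1.0)
        qb = min(max(pb[(mb, ma)], 0.0), 1.0)
        ma, mb = ('C' if rng.random() < qa else 'D',
                  'C' if rng.random() < qb else 'D')
    return sa, sb

def evolve(pop_size=50, generations=100, mutation=0.03, rng=random):
    # Each individual is (chi, phi), with chi >= 1 and 0 <= phi <= 1/5.
    pop = [(rng.uniform(1, 5), rng.uniform(0, 0.2)) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [0.0] * pop_size
        for i in range(pop_size):               # round-robin tournament
            for j in range(i + 1, pop_size):
                si, sj = match(pop[i], pop[j], rng=rng)
                scores[i] += si
                scores[j] += sj
        ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
        elite = ranked[:max(1, pop_size // 5)]  # top 20% breed
        pop = []
        for _ in range(pop_size):
            chi, phi = rng.choice(elite)
            if rng.random() < mutation:         # 3% chance of mutating
                chi = max(1.0, chi + rng.gauss(0, 0.5))
                phi = min(0.2, max(0.0, phi + rng.gauss(0, 0.02)))
            pop.append((chi, phi))
    return pop
```

Run for enough generations, a loop like this tends to concentrate the population near $\chi = 1$, which is the result described above.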

But we don’t get back pure tit-for-tat; there’s that messy $\phi$ parameter to worry about.  At the end of 20,000 generations, let’s see what the distribution of this value is between its minimum (=0) and its maximum (=1 after normalizing by the upper bound of 1/5; this is tit-for-tat):

It’s all over the place!  If you rerun the simulation again and again, you get a different distribution of these values, but they always seem to be above 0.3 and never settle at 1 (tit-for-tat)!  We can measure how diverse the distribution is by the entropy of the possible states:

What this is basically showing is that, after the initial random set of values, the distribution of strategies oscillates up and down around ~2-3 bits, or something like 4-8 strategies of comparable importance.  Sometimes one will start to be more successful against the others, sending diversity slowly down, until other strategies evolve against it, sending diversity back up.  But fairness always wins.
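The entropy measure used above can be sketched like this: bin the evolved $\phi$ values and compute the Shannon entropy of the bin frequencies (the bin count and function name are my choices; ~2-3 bits corresponds to roughly 4-8 bins carrying most of the weight):

```python
from collections import Counter
from math import log2

def entropy(values, bins=20, lo=0.0, hi=1.0):
    """Shannon entropy (in bits) of a sample after binning into
    equal-width bins over [lo, hi)."""
    counts = Counter(min(int((v - lo) / (hi - lo) * bins), bins - 1)
                     for v in values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(entropy([0.5] * 100))                    # one bin only: zero bits
print(entropy([i / 100 for i in range(100)]))  # uniform: log2(20) ≈ 4.32 bits
```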

(As an interesting side note: the paper provides a formula for estimating the expected reward when two strategies compete; when I pit two strategies with $\chi=1$ against each other, I get a singularity (“infinity”?)…am I doing something wrong, or what’s going on…?)

So what is this mystery $\phi$ parameter?  With pure tit-for-tat, you can get locked into alternating defect-cooperate cycles, which is less beneficial than everyone cooperating all the time.  By adding this new parameter, maybe players can push each other back into the beneficial cycle of cooperation.  That would suggest that $\phi$ represents a search or exploration parameter.  My intuition for this class of strategies, then, is that one parameter represents fairness and the other represents sociality.  Although fairness is best, exploring your options and understanding your opponent is also critical…to being an extortionist.

Update: See this post, which is much more informative than mine!  It explains all…

# Reference

Press WH & Dyson FJ (2012). Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent. Proceedings of the National Academy of Sciences, 109(26), 10409-13. PMID: 22615375