MeatballWiki

IteratedPrisonersDilemma

The PrisonersDilemma is a mathematical game in which locally optimal choices lead the players to betray each other even though all would do better if all cooperated. It predicts the collapse of society into a puddle of selfishness.
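The one-shot logic can be sketched with the conventional payoff numbers (T=5, R=3, P=1, S=0, with T > R > P > S); these specific values are an illustrative convention, not something stated on this page:

```python
# One-shot Prisoner's Dilemma with conventional payoffs (an assumption
# for illustration): (my move, their move) -> my payoff.
# 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def best_response(their_move):
    """Return the move that maximises my payoff against a fixed opponent move."""
    return max('CD', key=lambda my_move: PAYOFF[(my_move, their_move)])

# Defection is dominant: it is the best reply whatever the opponent does,
# yet mutual defection (1, 1) leaves both worse off than mutual cooperation (3, 3).
assert best_response('C') == 'D'
assert best_response('D') == 'D'
assert PAYOFF[('D', 'D')] < PAYOFF[('C', 'C')]
```

Both players reasoning this way land on mutual defection, which is exactly the "puddle of selfishness" outcome.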

When the dilemma is iterated - that is, repeated over and over again - the prediction changes. An Iterated Prisoner's Dilemma gives the players something to gain from cooperation and trust, and a way to punish betrayers. The difference relies on players being able to identify each other, so they know when they are playing against the same opponent and can reward or punish them according to their behaviour in the previous games.

In small communities, all dilemmas are iterated. Players necessarily have the same opponents because there is simply no-one else to play against. Also, everyone gets to know everyone else. The upshot is that nobody can get away with cheating.

As the community size scales upward, players come to know each other by reputation rather than directly. Although a given player may not have played against a given opponent personally, they can have seen how the opponent behaves in other games and respond appropriately. Nobody wants the reputation of a cheat.

This again relies on players having persistent identity across games. In a large online community, most games are played with strangers. Identity is usually left unclear or totally absent. Cheats can then go unpunished. This gives rational players a local incentive to cheat, even though it leads to a worse result (community collapse) in the long run. The logic becomes, quite correctly, "If I don't, somebody else will". Hence TheTragedyOfTheCommons. Even if all players are acting in GoodFaith, trust is fragile in these circumstances.

In order to build large communities, we should strive to turn PrisonersDilemmas into IteratedPrisonersDilemmas.

Some match results: http://www.georgetown.edu/faculty/baynards/

I wonder how the game would change if you got to see how other players played others, before you played them; after all, InformationWantsToBeFree. -- ErikMeade.

See also: CommunityMayNotScale


This actually is a big issue in RealLife. The military used the prisoner's dilemma as a model for nuclear disarmament. It'd be great if everyone threw away their nuclear weapons, but it'd be catastrophic if only one side did, so both sides made more and more. Critics modeled the IteratedPrisonersDilemma and argued that it was a better model for human interaction, and thus a better model for international interaction. There's more about it in a book called War in the Age of Intelligent Machines, which is about the hardest read I've ever experienced. Some of the issues about this show up in TheArtOfWar.


A great reference for more information on the prisoners dilemma: http://www.aridolan.com/ad/adb/PD.html

It extends the basic concept with discussions of Gandhi, Nazis, and the Scorpion Game, and covers the dilemma as it is played out in business, love, the movies, politics, and even software development.


Not explicitly mentioned above -- in the IteratedPrisonersDilemma, tit-for-tat is a very robust and successful strategy (although whether it is optimal depends on the strategies played against it). Start with a positive opening. If your opponent does something bad, punish them on your move. If however they do something good, be generous. If the opponent cooperates, this leads to immediate benefits. If the opponent tries to cheat, they succeed only once, and must take a loss before they can cheat again. See Robert Axelrod's The Evolution of Cooperation for a more extensive analysis of tit-for-tat's fitness.
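The strategy is short enough to write down directly. A minimal sketch, again assuming the conventional payoffs (3,3 / 0,5 / 5,0 / 1,1); the strategy and function names are illustrative:

```python
# (my move, their move) -> (my payoff, their payoff), conventional values.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Open with cooperation, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    """A persistent cheat."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated match and return the two players' total scores."""
    hist_a, hist_b = [], []  # each list holds the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b
```

Over ten rounds, two tit-for-tat players cooperate throughout and score 30 each, while a persistent cheat gains only in the first round (5 vs 0) before being punished with mutual defection for the rest of the match.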
