11 Comments
Jun 23 · Liked by David Hugh-Jones

Great sense of humor in this piece - enjoyable read.

“Get into a fight with that idiot” encompasses so much of human nature.


Very interesting article, liked it a lot. I do have one small bone of contention though. It’s been shown over and over in iterated prisoner’s dilemma-type games that the most successful strategy is Tit for Tat; it is undefeated. Tit for Tat involves initial cooperation followed by mirroring your opponent’s actions after that. I think iterated games are a better representation of real-life strategies than single prisoner’s dilemma choices. One thing of note about the prisoner’s dilemma is that they are “prisoners”! The authors chose that specific scenario to make their point, but most of us don’t get ourselves into the position of the prisoner. The binary choices of people in situations where there is no sympathy for them in their immediate vicinity (prison guards won’t care about your priors) are quite different from most human choices in everyday life.
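
For concreteness, here is a minimal sketch of the iterated setup the comment describes (the payoff numbers are illustrative, not taken from the article or the comment):

```python
# Minimal sketch: Tit for Tat in an iterated prisoner's dilemma.
# Payoff numbers are illustrative only.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_moves):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees only the opponent's history
        b = strategy_b(moves_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): TFT concedes only round one
print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation throughout
```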

One can always find special situations where seemingly counterintuitive strategies are called for, like smothering your crying baby because you are in hiding with a large group and risk being found if you don’t stay quiet.

Author · Jun 27 (edited)

Ordinary life has repeated games, but also one-shot ones. One prediction of the article's theory: social scientists and/or public discourse will talk more about repeated games than one-shot games, and will try to persuade people that one-shot games are "really" repeated.

Small point: tit-for-tat is not necessarily the best; see https://www.sciencedirect.com/science/article/abs/pii/S0022519307001439


There are plenty of lies afoot concerning ethics. But the lie that community good is contrary to individual good is no less pernicious than the lie that community good entails individual good.

Author

Really? I think it's common sense that sometimes what is good for me is not good for other people.


"Sometimes" is the key word. So why not spend our energies attempting to identify when that "sometimes" is in play rather than posing as if the "not sometimes" is a "lie."

Author

I certainly would never claim that community and individual interests are always opposed. And sure, it’s more advantageous to the community if I spend lots of time thinking about how my community’s interests and mine are aligned, and very little time thinking about the opposite. That is the point of what I wrote!


But you seem committed to the bizarre idea that the personal gains you could receive from your integration with community interests are significantly less than those you could receive from your abandonment of community interests. This is the thing that you need to demonstrate rather than assume. By simply presenting one side of the coin without giving an opportunity for the other side, you aren't doing anyone (least of all yourself) any favors.

Author

I don’t know why you think I’m committed to that idea, or why I need to demonstrate it!

Jun 23 (edited)

> One argument against Defect being rational is still current and gets seen among online Rationalists sometimes. [...]

Even if you accept this argument's assumption about decision theory, it still has the limitation of requiring that each player be able to predict the other's choice well. In the paper you link to, that is because the players know that both of them are perfectly rational; on LessWrong (IIRC), one context in which it was proposed was that both players are hypothetical AIs with interpretable source code, which each can analyze to determine how the other will choose. But perfect knowledge of how the other player will react is impossible in the real world, so for a real person playing against another real person, these cases do not precisely apply.

It is nonetheless possible that such an argument would work if both players are able to predict each other's choices well, though not perfectly; then what is rational depends on a calculation of the expected value. Let the payoffs & probabilities be defined as follows:

a is the payoff to both parties from mutual cooperation.

b is the payoff to the defector, & -c to the cooperator, when only one player defects.

-d is the payoff to both players from mutual defection.

P(C|D) is the probability that the other player cooperates if you defect, & the other probabilities are represented analogously.

Note that as defined, a, b, c, d are all >0; that P(C|D)+P(D|D) = 1 = P(C|C)+P(D|C); & that, for this to be a prisoner's dilemma, it is necessary that b>a and c>d. Now, the expected values of cooperating & defecting are, respectively, E(C) = aP(C|C) - c(1-P(C|C)) = (a+c)P(C|C) - c & E(D) = bP(C|D) - d(1-P(C|D)) = (b+d)P(C|D) - d. Then E(C) ≥ E(D) whenever (a+c)P(C|C) - c ≥ (b+d)P(C|D) - d.

For this particular argument, we assume that the other player is trying to predict your action & that their choice depends on their prediction, so that P(C|C) = P(they predict you cooperate|you cooperate)P(they cooperate|they predict you cooperate) + P(they predict you defect|you cooperate)P(they cooperate|they predict you defect), & P(C|D) can be found similarly. Then whether cooperating in a prisoner's dilemma is rational depends on what each of those probabilities is; it may turn out that cooperating is rational when playing against someone you know well enough that you can predict each other's choices well, but it probably isn't rational when you're playing against someone you don't know.
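
As a rough illustration of that expected-value comparison (the payoff and probability numbers below are made up for the example, not taken from the comment):

```python
# Expected-value comparison from the comment above.
# a: mutual-cooperation payoff; b: lone defector's payoff; -c: lone
# cooperator's payoff; -d: mutual-defection payoff. All of a, b, c, d > 0,
# with b > a and c > d as required. Numbers are illustrative only.
a, b, c, d = 3.0, 5.0, 2.0, 1.0

def expected_values(p_c_given_c, p_c_given_d):
    """E(C) and E(D), given the probabilities that the other player
    cooperates conditional on your own choice."""
    e_c = (a + c) * p_c_given_c - c  # E(C) = (a+c)P(C|C) - c
    e_d = (b + d) * p_c_given_d - d  # E(D) = (b+d)P(C|D) - d
    return e_c, e_d

# Someone who predicts you well: your choice strongly predicts theirs.
print(expected_values(0.9, 0.1))  # (2.5, -0.4) -> cooperating has higher EV
# A stranger: your choice carries no information about theirs.
print(expected_values(0.5, 0.5))  # (0.5, 2.0)  -> defecting has higher EV
```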

Author

This all depends on the basic argument that correlation is causation. Somehow by cooperating I’m supposed to make it more likely that the other player cooperates. Yet the game is simultaneous…
