Again, from Reddit:

To your credit, you do appear to have actually read and understood the math of the prisoner's dilemma, which is far more than most people claiming to refute a well-established result in a blog post.

Anyway, there's a lot to unpack in your argument, but superficially it looks like you're basically substituting the iterated variant of the prisoner's dilemma for the normal case, on the basis that assuming you might be in an iterated situation is the rational thing to do in general? I don't know if that's true, but even if it is, I don't know that it's useful to "refute" the prisoner's dilemma, since its main use is as a pedagogical example rather than an important result in its own right.

I appreciate the kind words. I can go back and clarify the language of the post to convey that I'm not actually smuggling in a form of iterated prisoner's dilemma; rather, the post specifically describes a one-shot play of the prisoner's dilemma in which Alice and Bob need never have met before and need never meet again.

I'm sharing a comment from Reddit here because I think this will be a common response.

You lost me at this sentence:

The core issue is the assumption of independence between the players.

That assumption is built into what the prisoner's dilemma is. If the independence of the players is not there, it is a genuinely different game. This is either because the utilities are shifted in some way by external forces (like a threat by the mob, or a contract), or by one's own ethical sense (violation of one's ethics can be folded into the player's utility function as a loss of utility, i.e. the guilt/shame of behaving contrary to one's ethics).

One of the important takeaways of the prisoner's dilemma (and its variants) is that, when we find natural occurrences of it in society, it is often in our collective interest to modify the utilities involved to create better overall outcomes.
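
To make the commenter's point about modifying utilities concrete, here is a minimal sketch; the textbook payoffs T = 5, R = 3, P = 1, S = 0 and the guilt parameter are illustrative choices of mine, not values from the original post:

```python
# One-shot prisoner's dilemma with textbook payoffs T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def payoff(me: str, other: str, guilt: float = 0.0) -> float:
    """Row player's utility; a guilt penalty is subtracted whenever
    the player defects, folding ethics into the utility function."""
    base = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(me, other)]
    return base - (guilt if me == "D" else 0.0)

for guilt in (0.0, 2.5):
    d_dominant = all(payoff("D", o, guilt) > payoff("C", o, guilt) for o in "CD")
    c_dominant = all(payoff("C", o, guilt) > payoff("D", o, guilt) for o in "CD")
    label = "defect" if d_dominant else ("cooperate" if c_dominant else "neither")
    print(f"guilt={guilt}: dominant strategy = {label}")
```

Once the guilt penalty exceeds T - R = 2, cooperating strictly dominates, and the game is no longer a prisoner's dilemma at all, which is exactly the commenter's point about shifted utilities.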

Fantastic question; this gets straight to the heart of the analysis.

I wholeheartedly agree that whatever Alice does within her cell will not have any causal effect on Bob's decision. They are separate in the sense that Alice cannot ripple the atoms in the universe in any way to affect Bob differently in either case. There are no downstream causal dependencies between them.

However, there is upstream causal dependence. Here's an analogy. Imagine I have two slips of paper with C(ooperate) written on them and two with D(efect) written on them. I blindly select either the two Cs or the two Ds, put them in envelopes labeled A and B, and shoot the envelopes a light year apart in opposite directions, along with Alice and Bob correspondingly. Before Alice opens her envelope, she has no idea what Bob has: to her, it really is 50% C or D. When she opens her envelope and sees that she has a C, she hasn't causally affected Bob's envelope, but she now has information about it. Namely, she knows that Bob also has a C. Whenever Alice sees that she has a C or a D, she gets new information about Bob's envelope because of the upstream causal dependence I incorporated by placing the same letters in both envelopes. This corresponds to the clone case.

To get the more subtle cases, suppose I choose Alice's envelope contents randomly. Then I flip a weighted coin to determine whether to fill Bob's envelope randomly as well or to put in the same contents Alice has. This corresponds to the rho analysis in the second-to-last section.
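
For anyone who wants to play with the weighted-coin version, here is a minimal simulation of the envelope setup. The function names and the exact way rho parameterizes the coin are my own choices for illustration: rho = 0 gives fully independent envelopes and rho = 1 gives the clone case.

```python
import random

def sample_envelopes(rho: float) -> tuple[str, str]:
    """Fill Alice's envelope at random; with probability rho, copy her
    letter into Bob's envelope, otherwise fill Bob's at random too."""
    alice = random.choice("CD")
    bob = alice if random.random() < rho else random.choice("CD")
    return alice, bob

def p_match(rho: float, trials: int = 100_000) -> float:
    """Estimate P(Bob's letter equals Alice's letter) by simulation."""
    hits = sum(a == b for a, b in (sample_envelopes(rho) for _ in range(trials)))
    return hits / trials

for rho in (0.0, 0.5, 1.0):
    # Analytically, P(match) = rho + (1 - rho) / 2.
    print(f"rho = {rho:.1f}: P(Bob matches Alice) ~= {p_match(rho):.3f}")
```

Opening envelope A never touches envelope B, but for any rho > 0, seeing A's contents changes what Alice should believe about B's.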

Hopefully this analogy helps clarify what exactly I mean when I say that Alice and Bob's decisions share upstream causal dependencies.