I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:
I am having some difficulty imagining that I am 99% sure of something, yet can neither convince a person to outright agree with me nor get him to accept that he is uncertain and therefore should make the choice that would help more if it is right, but could still convince that same person to cooperate in the prisoner's dilemma. However, if I did find myself in that situation, I would cooperate.
To which I said:
Do you think you could convince a young-earth creationist to cooperate in the prisoner's dilemma?
And lo, JGWeissman saved me a lot of writing when he replied thus:
Good point. I probably could. I expect that the young-earth creationist has a huge bias that does not have to interfere with reasoning about the prisoner's dilemma.
So, suppose Omega finds a young-earth creationist and an atheist, and plays the following game with them. They will each be taken to a separate room, where the atheist will choose between each of them receiving $10,000 if the earth is less than 1 million years old or each receiving $5,000 if the earth is more than 1 million years old, and the young-earth creationist will have a similar choice with the payoffs reversed. Now, with the prisoner's dilemma tied to the young-earth creationist's bias, would I, in the role of the atheist, still be able to convince him to cooperate? I don't know. I am not sure how much the need to believe that the earth is around 5,000 years old would interfere with recognizing that it is in his interest to choose the payoff for the earth being over a million years old. But still, if he seemed able to accept it, I would cooperate.
I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all seen enough PDs that I can leave the numbers as an exercise).
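Before the meteor version, it may help to make the dollar version's payoff structure concrete. Here is a minimal sketch (the function name and framing are my own illustration; the dollar figures are from JGWeissman's comment) of the four possible outcomes, given that the earth is in fact more than a million years old:

```python
# Sketch of Omega's game from the comment above (names and structure
# are my own illustration, not part of the original post).
# "Cooperating" means betting on the OTHER player's belief:
#   - the atheist cooperates by picking "$10,000 each if earth < 1M years"
#   - the creationist cooperates by picking "$10,000 each if earth > 1M years"
# Defecting means picking the $5,000 bet conditioned on your OWN belief.

EARTH_IS_OLD = True  # ground truth: the earth is more than a million years old

def payout_per_person(atheist_cooperates: bool, creationist_cooperates: bool) -> int:
    """Total dollars each player receives, given both choices."""
    total = 0
    # The atheist's chosen bet pays out only if its condition holds.
    if atheist_cooperates:
        total += 10_000 if not EARTH_IS_OLD else 0
    else:
        total += 5_000 if EARTH_IS_OLD else 0
    # The creationist's chosen bet, with the conditions reversed.
    if creationist_cooperates:
        total += 10_000 if EARTH_IS_OLD else 0
    else:
        total += 5_000 if not EARTH_IS_OLD else 0
    return total

for a in (True, False):
    for c in (True, False):
        print(f"atheist coop={a}, creationist coop={c}: "
              f"${payout_per_person(a, c):,} each")
# both cooperate -> $10,000 each; atheist defects, creationist cooperates -> $15,000;
# both defect -> $5,000; atheist cooperates, creationist defects -> $0.
```

The point the sketch makes visible: by his own lights, each player sees his defect option as a sure $5,000 against a worthless bet, so the dilemma runs across epistemic states rather than across utility functions.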
It is this, then, that I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to step outside them, see the symmetry, and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.
As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution. Despite every pointless, futile argument you've ever had in an IRC room or a YouTube thread, you would struggle desperately, calling out every half-remembered fragment of Dawkins or Sagan you could muster, in the hope that, just before the door shut, the creationist would hold it open and say "You're right, I was wrong. You defect, I'll cooperate -- let's save the world together."
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.
What does the source code really impart? Certainty in the other process' workings. But why would you need certainty? Is being a co-operator really so extraordinary a claim that to support it you need overwhelming evidence that leaves no other possibilities?
The problem is that there are three salient possibilities for what the other player is:

- a co-operator, who will genuinely cooperate if he expects you to cooperate;
- a defector, who will defect no matter what is said; and
- a deceiver, who argues like a co-operator but intends to defect.
Between a co-operator and a deceiver, all else equal, you should expect the evidence given by the co-operator to be stronger than the evidence given by the deceiver. The deceiver has to support a complex edifice of lies, separate from reality, while the co-operator can rely on the whole of reality to support his claims. As a result, each argument a co-operator makes should on average bring you closer to believing that he really is a co-operator rather than a deceiver. This process may be too slow to shift your expectation from the prior of very strongly disbelieving in the existence of co-operators to the posterior of believing that this one really is a co-operator, and this may be a problem. But this problem is only as dire as the rarity of co-operators and the deceptive eloquence of deceivers.
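To put rough numbers on that last point, here is a small sketch of how a per-argument likelihood ratio compounds against a skeptical prior. All figures here are my own illustrative assumptions, not from the comment:

```python
# How fast does evidence accumulate for "this player is a co-operator"?
# Numbers below are illustrative assumptions, not from the original comment.

def posterior_cooperator(prior: float, likelihood_ratio: float,
                         n_arguments: int) -> float:
    """Posterior P(co-operator) after n arguments, each multiplying the
    odds by the same average likelihood ratio (co-operator vs deceiver)."""
    odds = prior / (1 - prior) * likelihood_ratio ** n_arguments
    return odds / (1 + odds)

prior = 0.01  # co-operators assumed rare
for lr in (1.2, 2.0):  # eloquent deceivers push the ratio toward 1
    for n in (5, 10, 20):
        p = posterior_cooperator(prior, lr, n)
        print(f"LR={lr}, {n} arguments: P(co-operator) = {p:.3f}")
# With LR=1.2, even 20 arguments only reach ~0.28; with LR=2.0,
# 10 arguments already give ~0.91. The updating works, but its speed
# depends entirely on how much better reality argues than lies do.
```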
We clearly disagree strongly on the probabilities here. I agree that, all things being equal, you have a better shot at convincing him than I do, but I think the difference is small. We both do the same thing in the defector case. In the co-operator case, he believes you with probability P+Q and me with probability P. Assuming you know whether he trusts you in this case (we count anything else as a deceiver), you save (P+Q)·2 + (1−P−Q)·1, and I save P·3 + (1−P)·1, both times the fraction of co-operators, R. So you have to be at least twice as successful as I am even if there ...
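For concreteness, a short sketch checking the break-even claim. The payoffs 1, 2, 3 and the probabilities P, Q, R are from the paragraph above; the function names and the sample values of P and Q are mine:

```python
# Expected payoffs from the paragraph above. R, the fraction of
# co-operators, multiplies both sides and so cancels out of the comparison.

def your_expected(p: float, q: float, r: float = 1.0) -> float:
    """You convince him with probability P+Q: payoff 2, else 1."""
    return r * ((p + q) * 2 + (1 - p - q) * 1)

def my_expected(p: float, r: float = 1.0) -> float:
    """I convince him with probability P: payoff 3, else 1."""
    return r * (p * 3 + (1 - p) * 1)

# Break-even: 2(P+Q) + (1-P-Q) = 3P + (1-P)  =>  Q = P,
# i.e. your success rate P+Q must be twice mine (P).
p = 0.10
for q in (0.05, 0.10, 0.20):
    print(f"P={p}, Q={q}: yours={your_expected(p, q):.2f}, "
          f"mine={my_expected(p):.2f}")
# Q=0.05: yours=1.15 < mine=1.20; Q=0.10: tie at 1.20; Q=0.20: yours=1.30.
```

Solving the break-even equation gives Q = P, which is where the "twice as successful" figure comes from: your success probability P+Q must equal 2P before your approach pulls ahead.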