I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:
I am having some difficulty imagining a situation in which I am 99% sure of something, yet cannot convince a person either to outright agree with me or to accept that he is uncertain and therefore should make the choice that would help more if it is right, but could still convince that same person to cooperate in the prisoner's dilemma. However, if I did find myself in that situation, I would cooperate.
To which I said:
Do you think you could convince a young-earth creationist to cooperate in the prisoner's dilemma?
And lo, JGWeissman saved me a lot of writing when he replied thus:
Good point. I probably could. I expect that the young-earth creationist has a huge bias that does not have to interfere with reasoning about the prisoner's dilemma.
So, suppose Omega finds a young-earth creationist and an atheist, and plays the following game with them. They will each be taken to a separate room, where the atheist will choose between each of them receiving $10,000 if the earth is less than 1 million years old or each receiving $5,000 if the earth is more than 1 million years old, and the young-earth creationist will have a similar choice with the payoffs reversed. Now, with the prisoner's dilemma tied to the young-earth creationist's bias, would I, in the role of the atheist, still be able to convince him to cooperate? I don't know. I am not sure how much his need to believe that the earth is around 5,000 years old would interfere with recognizing that it is in his interest to choose the payoff for the earth being over a million years old. But still, if he seemed able to accept it, I would cooperate.
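The quoted game really does have the prisoner's dilemma's payoff structure, and it is easy to check. Here is a small sketch of my own (the function names and the framing of each choice as a "ticket" are illustrative, not from the post): "defect" means picking the ticket that pays out if your own belief about the earth's age is true, "cooperate" means picking the ticket that pays out if the other side's belief is true, and each player receives the sum of both tickets' payouts.

```python
def ticket_value(move, holder_is_right):
    """Payout of one player's chosen ticket, in dollars per person.
    'D' bets $5,000 on the holder's own belief about the earth's age;
    'C' bets $10,000 on the opponent's belief. A ticket pays nothing
    when the condition it bets on turns out false."""
    if move == "D":
        return 5_000 if holder_is_right else 0
    return 10_000 if not holder_is_right else 0

def total(my_move, their_move, i_am_right):
    """What each player receives: my ticket's payout plus theirs.
    Exactly one of us is factually right about the earth's age."""
    return ticket_value(my_move, i_am_right) + ticket_value(their_move, not i_am_right)

# Evaluated under my own certainty that I am right (as each player is):
for mine, theirs in [("D", "C"), ("C", "C"), ("D", "D"), ("C", "D")]:
    print(mine, theirs, total(mine, theirs, i_am_right=True))
# 15,000 > 10,000 > 5,000 > 0: the classic temptation > reward >
# punishment > sucker ordering, so each player faces a genuine PD
# by his own lights.
```

Note the symmetry: the creationist, running the same calculation under *his* certainty, sees exactly the same ordering, which is what makes this a dilemma rather than a simple bet.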
I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all seen enough PDs that I can leave the numbers as an exercise).
It is this, then, that I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one and not the other, it is easy for you to step outside them, see the symmetry, and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.
As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution. Despite every pointless, futile argument you've ever had in an IRC room or a YouTube thread, you would struggle desperately, calling out every half-remembered fragment of Dawkins or Sagan you could muster, in the hope that just before the door shut, the creationist would hold it open and say "You're right, I was wrong. You defect, I'll cooperate -- let's save the world together."
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.
What makes us assume this? I see why this can hold in examples where you can see each other's source code, and I do one-box on Newcomb's Problem, where a similar situation is given, but I don't see how we can presume this kind of instrumental value here. All we know about this person is that he is a young-earth creationist, and I don't see how that corresponds to such efficient lie detection in both directions for both of us.
Obviously, if we had a tangible precommitment option that was sufficient when a billion lives were at stake, I would take it. And I agree that if the payoffs were 1 person vs. 2 billion people on both sides, this would be a risk I'd be willing to take. But I don't see how we can suppose that the correspondence between "he thinks I will choose C if he agrees to choose C, and in fact then chooses C" and "I actually intend to choose C if he agrees to choose C" is all that high. Even if the creationist in question were the person on whom they based Dr. Cal Lightman, I still wouldn't choose C, because I'd feel that even if he believed me he'd probably choose D anyway. Do you think most humans are this good at lie detection (I know that I am not), and if so, do you have evidence for it?
What does the source code really impart? Certainty in the other process's workings. But why would you need certainty?