As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution.
I could do that, but it seems simpler to make a convulsive effort to convince him that Omega, who clearly is no good Christian, almost certainly believes in the truth of evolution.
(Of course this is not relevant, but seemed worth pointing out. Cleverness is usually a dangerous thing, but in this case it seems worth dusting off.)
It is this then which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to take yourself outside them, see the symmetry and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.
As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution.
It seems like it would be wiser to forgo the ar...
And then -- I hope -- you would cooperate.
Why do you hope I'd let a billion people die (going by the quantification proposed in another comment)?
This is actually rather different from a classic PD, in that C(C) is not the collectively desirable outcome.
Payoffs, You(Creationist):
D(D): 1 billion live
D(C): 3 billion live
C(D): 0 live
C(C): 2 billion live
Under the traditional PD, D(C) is best for you, but worst for him. Under this PD, D(C) is best for both of you. He wants you to defect and he wants to cooperate; he just doesn't...
I think you've all seen enough PDs that I can leave the numbers as an exercise
Actually, since this is an unusual setup, I think it's worth spelling out:
To the atheist, Omega gives two choices, and forces him to choose between D and C:
D. Omega saves 1 billion people if the Earth is old.
C. Omega saves 2 billion people if the Earth is young.
To the creationist, Omega gives two choices, and forces him to choose between D and C:
D. Omega saves an extra 1 billion people if the Earth is young.
C. Omega saves an extra 2 billion people if the Earth is old.
...And the
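To make the arithmetic explicit, here is a minimal sketch in Python -- my own tabulation of the rules just spelled out, not anything from the original post -- computing how many billions are saved for each pair of choices, under each hypothesis about the Earth's age:

```python
# Illustrative only: the conditional rules follow the breakdown in this thread.

def billions_saved(atheist_choice, creationist_choice, earth_is_old):
    """Total billions saved, given both choices and the true age of the Earth."""
    saved = 0
    # Atheist's options: D saves 1 billion if the Earth is old,
    #                    C saves 2 billion if the Earth is young.
    if atheist_choice == "D" and earth_is_old:
        saved += 1
    if atheist_choice == "C" and not earth_is_old:
        saved += 2
    # Creationist's options: D saves an extra 1 billion if the Earth is young,
    #                        C saves an extra 2 billion if the Earth is old.
    if creationist_choice == "D" and not earth_is_old:
        saved += 1
    if creationist_choice == "C" and earth_is_old:
        saved += 2
    return saved

for earth_is_old in (True, False):
    label = "old Earth" if earth_is_old else "young Earth"
    print(f"--- assuming {label} ---")
    for a in ("D", "C"):
        for c in ("D", "C"):
            print(f"You {a}, creationist {c}: "
                  f"{billions_saved(a, c, earth_is_old)} billion saved")
```

Assuming an old Earth, this reproduces the matrix quoted above (D(D) = 1, D(C) = 3, C(D) = 0, C(C) = 2 billion); assuming a young Earth, the matrix is mirrored -- which is exactly the symmetry the dilemma turns on.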
you would cooperate
As I understand it, to the extent that it makes sense to cooperate, the thing that cooperates is not you, but some sub-algorithm implemented in both you and your opponent. Is that right? If so, then maybe by phrasing it in this way we can avoid philosophers balking.
I will point out to the defectors that the scenario described is no more plausible than creationism (after all, it involves a deity behaving even more capriciously than the creationist one). If we postulate that your fictional self believes the scenario, surely your fictional self should no longer be quite so certain of the falsehood of creationism?
Given the stakes, it seems to me the most rational thing to do here is to try to convince the other person that you should both cooperate, and then defect.
The difference between this dilemma and Newcomb's problem is that Newcomb's Omega predicts perfectly which box you'll take, whereas the creationist cannot predict whether you'll defect.
The only way you can lose is if you screw up so badly at trying to convince him to cooperate (i.e., you're a terrible liar, or so bad at communicating that you confuse him) that he instead becomes convinced he should defect. So the biggest factor in deciding whether to cooperate or defect should be your ability to convince.
The Standard PD is set up so there are only two agents and only their choices and values matter. I tend to think of rationality in these dilemmas as being largely a matter of reputation, even when the situation is circumscribed and described as one-shot. Hofstadter's concept of super-rationality is part of how I think about this. If I have a reputation as someone who cooperates when that's the game-theoretically optimal thing to do, then it's more likely that whoever I've been partnered with will expect that from me, and cooperate if he understands why...
The young Earth creationist is right, because the whole earth was created in a simulation by Omega that took about 5000 years to run.
You can't win with someone that much smarter than you. I don't see how this means anything but 'it's good to have infinite power, computational and otherwise.'
the atheist will choose between each of them receiving $5000 if the earth is less than 1 million years old or each receiving $10000 if the earth is more than 1 million years old
Isn't this backwards? The dilemma occurs if payoff(unbelieved statement) > payoff(believed statement).
And then -- I hope -- you would cooperate.
This is to value your own "rationality" over that which is to be protected: the billion lives at stake. (We may add: such a "rationality" fetish isn't really rational at all.) Why give us even more to weep about?
My thinking is, if you are stupid (or ignorant, or irrational, or whatever) enough to be a creationist, you are probably also stupid enough not to know the high-order strategy for the prisoner's dilemma, and therefore cooperating with you is useless. You'll make your decision about whether or not to cooperate based on whatever stupid criteria you have, but they probably won't involve an accurate prediction of my decision algorithm, because you are stupid. I can't influence you by cooperating, so I defect and save some lives.
I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:
To which I said:
And lo, JGWeissman saved me a lot of writing when he replied thus:
I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all seen enough PDs that I can leave the numbers as an exercise).
It is this then which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to take yourself outside them, see the symmetry and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.
As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution. Despite every pointless, futile argument you've ever had in an IRC room or a YouTube thread, you would struggle desperately, calling out every half-remembered fragment of Dawkins or Sagan you could muster, in hope that just before the door shut, the creationist would hold it open and say "You're right, I was wrong. You defect, I'll cooperate -- let's save the world together."
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.