I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:
I am having some difficulty imagining that I am 99% sure of something, but I cannot either convince a person to outright agree with me or accept that he is uncertain and therefore should make the choice that would help more if it is right, but I could convince that same person to cooperate in the prisoner's dilemma. However, if I did find myself in that situation, I would cooperate.
To which I said:
Do you think you could convince a young-earth creationist to cooperate in the prisoner's dilemma?
And lo, JGWeissman saved me a lot of writing when he replied thus:
Good point. I probably could. I expect that the young-earth creationist has a huge bias that does not have to interfere with reasoning about the prisoner's dilemma.
So, suppose Omega finds a young-earth creationist and an atheist, and plays the following game with them. They will each be taken to a separate room, where the atheist will choose between each of them receiving $10000 if the earth is less than 1 million years old or each receiving $5000 if the earth is more than 1 million years old, and the young-earth creationist will have a similar choice with the payoffs reversed. Now, with the prisoner's dilemma tied to the young-earth creationist's bias, would I, in the role of the atheist, still be able to convince him to cooperate? I don't know. I am not sure how much the need to believe that the earth is around 5000 years old would interfere with recognizing that it is in his interest to choose the payoff for the earth being over a million years old. But still, if he seemed able to accept it, I would cooperate.
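To make the structure explicit, here is a quick sketch of the game above as each player evaluates it under his own beliefs, using JGWeissman's dollar figures. The cooperate/defect labels are my own gloss: "cooperate" means you pick the payout conditioned on the other fellow's belief, "defect" means you bet on your own.

```python
# A quick sketch of the game JGWeissman describes, using his dollar figures.
# "Cooperate" = choose the payout conditioned on the OTHER player's belief;
# "defect"    = choose the payout conditioned on your own.
# Payoffs are subjective: each player assumes the world is the way he is
# (nearly) certain it is.

def my_expected_payoff(i_play, they_play):
    """Dollars I expect to receive, evaluated under my own beliefs.

    My own pick pays out (by my lights) only if I defect, since only then is
    it conditioned on the world-state I believe in; the other player's pick
    pays out (by my lights) only if they cooperate, for the same reason.
    """
    from_my_choice = 5_000 if i_play == "defect" else 0
    from_their_choice = 10_000 if they_play == "cooperate" else 0
    return from_my_choice + from_their_choice

for i_play in ("cooperate", "defect"):
    for they_play in ("cooperate", "defect"):
        print(f"I {i_play:9} / they {they_play:9} -> I expect "
              f"${my_expected_payoff(i_play, they_play):,}")

# Ordering: defect/cooperate ($15,000) > cooperate/cooperate ($10,000)
# > defect/defect ($5,000) > cooperate/defect ($0).
# That is T > R > P > S: a prisoner's dilemma as judged by each player's
# own beliefs, which is what makes it an *epistemic* prisoner's dilemma.
```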
I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all seen enough PDs that I can leave the numbers as an exercise).
It is this, then, which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one and not the other, it is easy for you to take yourself outside them, see the symmetry, and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.
As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution. Despite every pointless, futile argument you've ever had in an IRC room or a YouTube thread, you would struggle desperately, calling out every half-remembered fragment of Dawkins or Sagan you could muster, in the hope that just before the door shut, the creationist would hold it open and say "You're right, I was wrong. You defect, I'll cooperate -- let's save the world together."
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.