I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:
I am having some difficulty imagining that I am 99% sure of something, yet I can neither convince a person to outright agree with me nor get him to accept that he is uncertain and therefore should make the choice that would help more if it is right, but I could convince that same person to cooperate in the prisoner's dilemma. However, if I did find myself in that situation, I would cooperate.
To which I said:
Do you think you could convince a young-earth creationist to cooperate in the prisoner's dilemma?
And lo, JGWeissman saved me a lot of writing when he replied thus:
Good point. I probably could. I expect that the young-earth creationist has a huge bias that does not have to interfere with reasoning about the prisoner's dilemma.
So, suppose Omega finds a young-earth creationist and an atheist, and plays the following game with them. They will each be taken to a separate room, where the atheist will choose between each of them receiving $10000 if the earth is less than 1 million years old or each receiving $5000 if the earth is more than 1 million years old, and the young-earth creationist will have a similar choice with the payoffs reversed. Now, with the prisoner's dilemma tied to the young-earth creationist's bias, would I, in the role of the atheist, still be able to convince him to cooperate? I don't know. I am not sure how much the need to believe that the earth is around 5000 years old would interfere with recognizing that it is in his interest to choose the payoff for the earth being over a million years old. But still, if he seemed able to accept it, I would cooperate.
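For concreteness, the payoff structure JGWeissman describes can be sketched as follows. This is my own illustration, not anything from his comment: I take "cooperate" to mean betting on the *other* player's belief (the $10000 option you expect to pay nothing), "defect" to mean betting on your own, and assume for the sake of the example that the earth really is old.

```python
# Sketch of Omega's game, assuming (for illustration) the earth is in fact old.
# Cooperate = bet on the OTHER player's belief; defect = bet on your own.

EARTH_IS_OLD = True  # ground truth in this scenario


def payoff(atheist_cooperates: bool, creationist_cooperates: bool) -> int:
    """Dollars each player receives (Omega always pays both players equally)."""
    total = 0
    # Atheist's choice: $10000 each if the earth is young, or $5000 each if old.
    if atheist_cooperates:            # bets "young earth"
        total += 10000 if not EARTH_IS_OLD else 0
    else:                             # bets "old earth"
        total += 5000 if EARTH_IS_OLD else 0
    # Creationist's choice: same options with the payoffs reversed.
    if creationist_cooperates:        # bets "old earth"
        total += 10000 if EARTH_IS_OLD else 0
    else:                             # bets "young earth"
        total += 5000 if not EARTH_IS_OLD else 0
    return total


for a in (True, False):
    for c in (True, False):
        moves = f"atheist {'C' if a else 'D'}, creationist {'C' if c else 'D'}"
        print(f"{moves}: ${payoff(a, c)} each")
```

Subjectively, each player expects his own cooperation to earn nothing and his own defection to earn $5000, so defection looks dominant; but since exactly one of them is right, mutual cooperation actually pays $10000 each against $5000 each for mutual defection. (The atheist's dream outcome in the story, "you defect, I'll cooperate," pays $15000 each.)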
I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all seen enough PDs that I can leave the numbers as an exercise).
It is this, then, which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one and not the other, it is easy for you to step outside them, see the symmetry, and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.
As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution. Despite every pointless, futile argument you've ever had in an IRC room or a YouTube thread, you would struggle desperately, calling out every half-remembered fragment of Dawkins or Sagan you could muster, in the hope that just before the door shut, the creationist would hold it open and say "You're right, I was wrong. You defect, I'll cooperate -- let's save the world together."
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.
I feel that this is related to the intuitions on free will. When a stone is thrown your way, you can't change what you'll do: you'll either duck, or you won't. If you duck, it means that you are a stone-avoider, a system that has the property of avoiding stones, that processes data indicating that a stone is flying your way and transforms it into impact-avoiding action.
The precommitment is only useful because [you+precommitment] is a system with the known characteristic of being a co-operator, one that cooperates in return for cooperation from other co-operators. What you need in order to arrange mutual cooperation is to signal to the other player that you are a co-operator, and to make sure that the other player is also a co-operator. Signaling that you are a co-operator is easy if you attach a precommitment crutch to your natural decision-making algorithm.
Since mutual co-operators win more than mutual defectors, being a co-operator is rational, and so it is often simply said that if you and your opponent are rational, you'll cooperate.
There is a stigma to being just human, but I suppose some kind of co-operator certification, or a global meta-commitment to reflective consistency, could be arranged both to signal that you are now a co-operator and to enforce actually making cooperative decisions.
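The comment's idea, that a credible co-operator signal plus verification of the other player's signal yields mutual cooperation without the risk of being suckered, can be sketched as a toy model. The payoff numbers and the assumption that the signal is honest (e.g. enforced by the certification the comment imagines) are mine, not the commenter's:

```python
# Toy model: agents cooperate exactly when the other carries a credible
# co-operator signal (the "precommitment crutch"). Payoffs are the usual
# PD ordering: mutual cooperation 3, mutual defection 1, sucker 0, exploiter 5.

PAYOFFS = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
           ("C", "D"): (0, 5), ("D", "C"): (5, 0)}


class Agent:
    def __init__(self, is_cooperator: bool):
        # Assumed honest signal -- certification/enforcement makes it credible.
        self.signals_cooperator = is_cooperator

    def move(self, other: "Agent") -> str:
        # Cooperate only with another verified co-operator.
        return "C" if self.signals_cooperator and other.signals_cooperator else "D"


def play(a: Agent, b: Agent) -> tuple:
    return PAYOFFS[(a.move(b), b.move(a))]


coop, defector = Agent(True), Agent(False)
print(play(coop, coop))          # two co-operators: (3, 3)
print(play(defector, defector))  # mutual defection: (1, 1)
print(play(coop, defector))      # co-operator defects against a non-co-operator: (1, 1)
```

A verified co-operator is never suckered and scores 3 against its own kind, versus 1 for a defector in any pairing, which is the sense in which "co-operators win more than mutual defectors."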
Instead of answering AllanCrossman's question, you have provided a stellar example of how scholasticism turns brains to mush. Read this.
Update 2: maybe, to demonstrate my point, I should quote some hilarious examples of faulty thinking from the article I linked to. Here we go:
19 Three is not an object at all, but an essence; not a thing, but a thought; not a particular, but a universal.
28 The number three is neither an idle Platonic universal, nor a blank Lockean substratum; it is a concrete and specific energy in things, and can be detected at work in...