I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:

I am having some difficulty imagining that I am 99% sure of something, but I cannot either convince a person to outright agree with me or accept that he is uncertain and therefore should make the choice that would help more if it is right, but I could convince that same person to cooperate in the prisoner's dilemma. However, if I did find myself in that situation, I would cooperate.

To which I said:

Do you think you could convince a young-earth creationist to cooperate in the prisoner's dilemma?

And lo, JGWeissman saved me a lot of writing when he replied thus:

Good point. I probably could. I expect that the young-earth creationist has a huge bias that does not have to interfere with reasoning about the prisoner's dilemma.

So, suppose Omega finds a young-earth creationist and an atheist, and plays the following game with them. They will each be taken to a separate room, where the atheist will choose between each of them receiving $10000 if the earth is less than 1 million years old or each receiving $5000 if the earth is more than 1 million years old, and the young earth creationist will have a similar choice with the payoffs reversed. Now, with prisoner's dilemma tied to the young earth creationist's bias, would I, in the role of the atheist still be able to convince him to cooperate? I don't know. I am not sure how much the need to believe that the earth is around 5000 years would interfere with recognizing that it is in his interest to choose the payoff for earth being over a million years old. But still, if he seemed able to accept it, I would cooperate.

I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all seen enough PDs that I can leave the numbers as an exercise).

It is this, then, which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to take yourself outside them, see the symmetry, and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.

As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution. Despite every pointless, futile argument you've ever had in an IRC room or a YouTube thread, you would struggle desperately, calling out every half-remembered fragment of Dawkins or Sagan you could muster, in the hope that just before the door shut, the creationist would hold it open and say "You're right, I was wrong. You defect, I'll cooperate -- let's save the world together."

But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.

Comments (72)

As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution.

I could do that, but it seems simpler to make a convulsive effort to convince him that Omega, who clearly is no good Christian, almost certainly believes in the truth of evolution.

(Of course this is not relevant, but seemed worth pointing out. Cleverness is usually a dangerous thing, but in this case it seems worth dusting off.)

JGWeissman (score 5, 15y):
For a less convenient world, suppose that the creationist perceives Omega as God, offering a miracle. Miracles can apparently include one person being saved from disaster that kills hundreds, so the fact that Omega doesn't just save everybody would not be compelling to the creationist.
Alicorn (score 2, 15y):
Then I guess you'd have to try the "God is testing whether your compassion exceeds your arrogance" angle, and hope they didn't counter with "God is testing whether my faith is strong enough to avoid being swayed by your lies".
MBlume (score 0, 15y):
The assumption that the creationist actually buys "creationism is true iff Omega believes it's true" is by far the weakest aspect of this scenario. As always, I just assume that Omega has some off-screen demonstration of his own trustworthiness that is Too Awesome To Show (insert standard 'TV Tropes is horribly addictive' disclaimer here). For the same reason, I've often wondered what a worldwide prediction market on theism would look like, if there were any possible way of providing payouts. Sadly, this is the closest I've seen.

It is this, then, which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to take yourself outside them, see the symmetry, and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.

As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution.

It seems like it would be wiser to forgo the ar... (read more)

[anonymous] (score -1, 15y):
I do not concur. Yes, he can be blamed, and I do. Humans have learned to harness false belief in the face of overwhelming evidence and wield it as a weapon far more effectively than teeth, claws, and even clubs. While I once excused destructive behavior based on 'sincere belief that they were doing the right thing', I no longer do so.

I have no particular inclination to do that. I've been given no information about either Omega's incentives or his abilities in this situation. All I know is that he has arrived and offered to save millions of people in a somewhat bizarre manner. I'd prefer he saved everyone, but better some be saved than the entire planet be obliterated.
MrHen (score 1, 15y):
Anything that has the ability to save untold billions and will only do so if two particular individuals figure out how old the earth is, is evil. Or, at the very least, does not have the best interests of humanity in mind.

To belabor the point: if Omega held his hands behind his back, asked you and me to guess whether the number of fingers he is holding up is odd or even, and saved lives if and only if we were correct, it would be the OP's example with certainty dropped to 0. Would we be held to blame if we failed? Increasing our certainty does not increase our moral responsibility.

(Note: I think the formatting in your post may be off. The third quote looks like it may have too much included.)
randallsquared (score 0, 15y):
Since I'd say that evil is just having goals which are fundamentally incompatible with mine (or whoever is considering this), I don't think there's necessarily a difference between those two statements.

And then -- I hope -- you would cooperate.

Why do you hope I'd let a billion people die (from a proposed quantification in another comment)?

This is actually rather different from a classic PD, to the extent that C(C) (both cooperate) is not the collectively desirable outcome.

Payoffs, You(Creationist):

D(D): 1 billion live
D(C): 3 billion live
C(D): 0 live
C(C): 2 billion live

Under the traditional PD, D(C) is best for you, but worst for him. Under this PD, D(C) is best for both of you. He wants you to defect and he wants to cooperate; he just doesn't... (read more)
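A minimal sketch (the payoff numbers are the ones in the matrix above; the code itself, including the variable names, is only illustrative) confirming that D(C) maximizes the shared payoff:

```python
# Payoff table from the comment above: billions of lives saved for each
# (your move, creationist's move) pair, assuming the Earth really is old.
payoffs = {
    ("D", "D"): 1,
    ("D", "C"): 3,
    ("C", "D"): 0,
    ("C", "C"): 2,
}

# Unlike a classic PD, both players receive the *same* payoff (lives saved),
# so the outcome that is best for you is also best for him.
best_outcome = max(payoffs, key=payoffs.get)
print(best_outcome, payoffs[best_outcome])  # ('D', 'C') 3
```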

I think you've all seen enough PDs that I can leave the numbers as an exercise

Actually, since this is an unusual setup, I think it's worth spelling out:

To the atheist, Omega gives two choices, and forces him to choose between D and C:

D. Omega saves 1 billion people if the Earth is old.
C. Omega saves 2 billion people if the Earth is young.

To the creationist, Omega gives two choices, and forces him to choose between D and C:

D. Omega saves an extra 1 billion people if the Earth is young.
C. Omega saves an extra 2 billion people if the Earth is old.

And the... (read more)
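For completeness, a short sketch (the function names are hypothetical; the numbers come from the choices spelled out above) that turns those choices into totals under each hypothesis about the Earth's age:

```python
# Lives saved (in billions) for each choice, under each hypothesis about the
# Earth's age, using the numbers spelled out above.

def atheist_saves(choice, earth_is_old):
    # D: 1 billion if the Earth is old; C: 2 billion if the Earth is young.
    if choice == "D":
        return 1 if earth_is_old else 0
    return 0 if earth_is_old else 2

def creationist_saves(choice, earth_is_old):
    # D: an extra 1 billion if the Earth is young; C: an extra 2 billion if old.
    if choice == "D":
        return 0 if earth_is_old else 1
    return 2 if earth_is_old else 0

for earth_is_old in (True, False):
    for a in "DC":
        for c in "DC":
            total = atheist_saves(a, earth_is_old) + creationist_saves(c, earth_is_old)
            print(f"earth_old={earth_is_old} atheist={a} creationist={c}: {total} billion saved")

# With the Earth old (which the atheist is ~certain of), this reproduces the
# 1 / 3 / 0 / 2 billion matrix quoted elsewhere in the thread.
```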
prase (score 3, 15y):
Agreed. In this situation, you can be very sure that the creationist runs a very different algorithm. Otherwise, he wouldn't be a creationist.
Zvi (score 1, 15y):
Seems simple enough to me, too, as my answer yesterday implied. The probability the Earth is that young is close enough to 0 that it doesn't factor into my utility calculations, so Omega is asking me if I want to save a billion people. Do whatever you have to do to convince him, then save a billion people.
Vladimir_Nesov (score 2, 15y):
With this attitude, you won't be able to convince him. He'll expect you to defect, no matter what you say. It's obvious to you what you'll do, and it's obvious to him.

By refusing to save a billion people, and instead choosing the meaningless alternative option, you perform an instrumental action that results in your opponent saving 2 billion people. You control the other player indirectly. Choosing the option other than saving 1 billion people doesn't have any terminal value, but it does have instrumental value, more of it than there is in directly saving 1 billion people.

This is not to say that you can place this kind of trust easily; for humans you may indeed require making a tangible precommitment. Humans are by default broken: in some situations you don't expect the right actions from them, the way you don't expect the right actions from rocks. An external precommitment is a crutch that compensates for the inborn ailments.
Zvi (score 2, 15y):
What makes us assume this? I get why in examples where you can see each other's source code this can be the case, and I do one-box on Newcomb where a similar situation is given, but I don't see how we can presume that there is this kind of instrumental value. All we know about this person is he is a flat earther, and I don't see how this corresponds to such efficient lie detection in both directions for both of us.

Obviously if we had a tangible precommitment option that was sufficient when a billion lives were at stake, I would take it. And I agree that if the payoffs were 1 person vs. 2 billion people on both sides, this would be a risk I'd be willing to take. But I don't see how we can suppose that the correspondence between "he thinks I will choose C if he agrees to choose C, and in fact then chooses C" and "I actually intend to choose C if he agrees to choose C" is all that high.

If the flat Earther in question is the person on whom they based Dr. Cal Lightman, I still don't choose C, because I'd feel that even if he believed me he'd probably choose D anyway. Do you think most humans are this good at lie detection (I know that I am not), and if so do you have evidence for it?
Vladimir_Nesov (score 0, 15y):
What does the source code really impart? Certainty in the other process' workings. But why would you need certainty? Is being a co-operator really so extraordinary a claim that to support it you need overwhelming evidence that leaves no other possibilities?

The problem is that there are three salient possibilities for what the other player is:

* Defector, who really will defect, and will give you evidence of being a defector
* Co-operator, who will really cooperate (with another who he believes to be a co-operator), and will give you evidence of being a co-operator
* Deceiver, who will really defect, but will contrive evidence that he is a co-operator

Between co-operator and deceiver, all else equal, you should expect the evidence given by co-operator to be stronger than evidence given by deceiver. Deceiver has to support a complex edifice of his lies, separate from reality, while co-operator can rely on the whole of reality for support of his claims. As a result, each argument a co-operator makes should on average bring you closer to believing that he really is a co-operator, as opposed to being a deceiver.

This process may be too slow to shift your expectation from the prior of very strongly disbelieving in existence of co-operators to posterior of believing that this one is really a co-operator, and this may be a problem. But this problem is only as dire as the rarity of co-operators and the deceptive eloquence of deceivers.
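One way to read the claim that each argument should, on average, move you toward the truth is as repeated Bayesian updating with a modest per-argument likelihood ratio. A toy sketch, with purely illustrative numbers:

```python
# Toy illustration (all numbers are hypothetical): if each argument a genuine
# co-operator makes is, say, 1.5 times more likely than a deceiver's, the
# posterior odds that you face a co-operator grow geometrically per argument.
prior_odds = 0.1        # assumed: you start out thinking co-operators are rare
likelihood_ratio = 1.5  # assumed: per-argument evidence favoring "co-operator"

odds = prior_odds
for argument in range(1, 11):
    odds *= likelihood_ratio
    probability = odds / (1 + odds)
    print(f"after argument {argument}: P(co-operator) = {probability:.2f}")

# The point above: each step is small, so whether you reach a usable posterior
# depends on how much conversation the setup allows and how rare co-operators are.
```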
Zvi (score 4, 15y):
We clearly disagree strongly on the probabilities here. I agree that all things being equal you have a better shot at convincing him than I do, but I think it is small. We both do the same thing in the Defector case. In the co-operator case, he believes you with probability P+Q and me with probability P. Assuming you know if he trusts you in this case (we count anything else as deceivers), you save (P+Q)×2 + (1-P-Q)×1, and I save P×3 + (1-P)×1, both times the percentage of co-operators R. So you have to be at least twice as successful as I am even if there are no deceivers on the other side.

Meanwhile, there's some percentage A who are deceivers and some probability B that you'll believe a deceiver, or just A and 1 if you count anyone you don't believe as a simple Defector. You think that R×(P+Q)×2 + R×(1-P-Q)×1 > R×P×3 + R×(1-P)×1 + A×B×1. I strongly disagree. But if you convinced me otherwise, I would change my opinion.
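The inequality above can be checked directly once values are plugged in. In the sketch below every probability is a placeholder assumption, and the conclusion flips with the parameters, which is exactly what the disagreement is about:

```python
# Placeholder probabilities (assumptions for illustration only):
R = 0.3   # fraction of counterparts who are genuine co-operators
P = 0.2   # chance such a counterpart trusts you if you secretly plan to defect
Q = 0.1   # extra trust earned by genuinely intending to cooperate
A = 0.2   # fraction of counterparts who are deceivers
B = 0.5   # chance a co-operator is taken in by a deceiver

# Left side: what the genuine co-operator expects to save against co-operator
# counterparts (2 billion if trusted, 1 billion otherwise).
cooperate_side = R * ((P + Q) * 2 + (1 - P - Q) * 1)

# Right side: what the persuade-then-defect player expects against the same
# counterparts (3 billion if trusted, 1 billion otherwise), plus the extra
# billion kept against deceivers who would have fooled a co-operator.
defect_side = R * (P * 3 + (1 - P) * 1) + A * B * 1

print(cooperate_side, defect_side, cooperate_side > defect_side)
# With these placeholders the inequality fails (Zvi's position); a larger Q or
# smaller A and B reverse it (Vladimir_Nesov's).
```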
saturn (score 0, 15y):
Here's an older thread about this
Vladimir_Nesov (score 0, 15y):
That may be so for one step, but my point is that the truth ultimately should win over lies. If you proceed to the next point of argument, you expect to distinguish Cooperator from Defector a little bit better, and as the argument continues, your ability to distinguish the possibilities should improve more and more. The problem may be that it's not a fast enough process, but not that there is some fundamental limitation on how good the evidence may get. If you study the question thoroughly, you should be able to move a long way away from uncertainty in the direction of truth.
AllanCrossman (score 1, 15y):
How does it do that, please? How does my action affect his?
Vladimir_Nesov (score 0, 15y):
Maybe it's not enough; maybe you need to do more than just doing the right thing. But if you actually plan to defect, you have no hope of convincing the other player that you won't. (See the revised last paragraph of the above comment.)
AllanCrossman (score 1, 15y):
Why? My opponent is not a mind-reader. Yes, if we can both pre-commit in a binding way, that's great. But what if we can't?
Vladimir_Nesov (score 1, 15y):
I feel that this is related to the intuitions on free will. When a stone is thrown your way, you can't change what you'll do, you'll either duck, or you won't. If you duck, it means that you are a stone-avoider, a system that has a property of avoiding stones, that processes data indicating the fact that a stone is flying your way, and transforms it into the actions of impact-avoiding. The precommitment is only useful because [you+precommitment] is a system with a known characteristic of co-operator, that performs cooperation in return to the other co-operators. What you need in order to arrange mutual cooperation is to signal the other player that you are a co-operator, and to make sure that the other player is also a co-operator. Signaling the fact that you are a co-operator is easy if you attach a precommitment crutch to your natural decision-making algorithm. Since co-operators win more than mutual defectors, being a co-operator is rational, and so it's often just said that if you and your opponent are rational, you'll cooperate. There is a stigma of being just human, but I guess some kind of co-operator certification or a global meta-commitment of reflective consistency could be arranged to both signal that you are now a co-operator and enforce actually making co-operative decisions.
cousin_it (score -2, 15y):
Instead of answering AllanCrossman's question, you have provided a stellar example of how scholastics turns brains to mush. Read this.

Update 2: maybe, to demonstrate my point, I should quote some hilarious examples of faulty thinking from the article I linked to. Here we go:

19. Three is not an object at all, but an essence; not a thing, but a thought; not a particular, but a universal.

28. The number three is neither an idle Platonic universal, nor a blank Lockean substratum; it is a concrete and specific energy in things, and can be detected at work in such observable processes as combustion.

32. Since the properties of three are intelligible, and intelligibles can exist only in the intellect, the properties of three exist only in the intellect.

35. We get the concept of three only through the transcendental unity of our intuitions as being successive in time.

Ring any bells?
orthonormal (score 2, 15y):
If you think Vladimir is being opaque with his writing, and you disagree with his conclusion, that is not the same as asserting that he's writing nonsense. Charity (and the evidence of his usual clarity) demand that you ask for clarification before accusing him of such.
Vladimir_Nesov (score 0, 15y):
Actually, I thought that I made a relatively clear argument, and I'm surprised that it's not upvoted (the same goes for the follow-up here). Maybe someone could constructively comment on why that is. I expect that the argument is not easy to understand, and maybe I failed at seeing the inferential distance between my argument and intended audience, so that people who understood the argument already consider it too obvious to be of notice, and people who disagree with the conclusion didn't understand the argument... Anyway, any constructive feedback on meta level would be appreciated. On the concept of avoiders, see Dennett's lecture here. Maybe someone can give a reference in textual form.
cousin_it (score 0, 15y):
Uh... AllanCrossman asked: what if we can't precommit? You answered: it's good to be able to precommit, maybe we can still arrange it somehow. Thus simplified, it doesn't look like an answer. But you didn't say it in simple words. You added philosophical fog that, when parsed and executed, completely cancels out, giving us no indication how to actually precommit. Disagree?
Vladimir_Nesov (score 0, 15y):
My reply can be summarized as explaining why "precommitting in a binding way" is not a clear-cut necessity for this problem. If you are a cooperator, there is no need to precommit.
cousin_it (score 0, 15y):
In your terms, being a cooperator for this specific problem is synonymous to precommitting. You're just shunting words around. All right, how do I actually be a cooperator?
Vladimir_Nesov (score 0, 15y):
No, it's not synonymous. If you precommit, you become a cooperator, but you can also be one without precommitting. If you are an AI that is written to be a cooperator, you'll be one. If you decide to act as a cooperator, you may be one. Being a cooperator is relatively easy. Being a cooperator and successfully signaling that you are one, without precommitment, is in practice much harder. And a related problem: if you are a cooperator, you have to recognize a signal that the other person is a cooperator also, which may be too hard if he hasn't precommitted.
cousin_it (score 0, 15y):
What? The implication goes both ways. If you're a cooperator (in your terms), then you're precommitted to cooperating (in classical terms). Maybe you misunderstand the word "precommitment"? It doesn't necessarily imply that some natural power forces the other guy to believe you.
Vladimir_Nesov (score 0, 15y):
If you define precommitment this way, then every property becomes a precommitment to having that property, and the concept of precommitment becomes tautological. For example, is it a precommitment to always prefer good over evil (defined however you like)?
cousin_it (score 0, 15y):
Not every property. Every immutable property. They're very rare. Your example isn't a precommitment because it's not immutable.
Vladimir_Nesov (score 1, 15y):
What's "mutable"? Changing in time? Cooperation may be a one-off encounter, with no multiple occasions to change over. You may be a cooperator for the duration of one encounter, and a rock elsewhere. Every fact is immutable, so I don't know what you imply here.
cousin_it (score 0, 15y):
Yes, mutable means changing in time. Precommitment is an interaction between two different times: the time when you're doing cheap talk with the opponent, and the time when you're actually deciding in the closed room. The time you burn your ships, and the time your troops go to battle. Signaling time and play time. If a property is immutable (preferably physically immutable) between those two times, that's precommitment. Sounds synonymous to your "being a cooperator" concept.
Vladimir_Nesov (score 0, 15y):
In other words, my point is that if the signaling is about your future property, then at the moment when you have to perform the promised behavior there is no need for any kind of persistence; thus, according to your definition, precommitment is unnecessary. Likewise, signaling doesn't need to consist in you presenting any kind of argument; it may already be known that you are (or will be) a cooperator. For example, the agent in question may be selected from a register of cooperators, where 99% of them are known to be cooperators. And cooperators themselves might as well be humans who decided to follow this counterintuitive algorithm, and benefit from doing so when interacting with other known cooperators, without any tangible precommitment system in place, no punishment for not being cooperators. This example may be implemented through a reputation system.
cousin_it (score 0, 15y):
No such thing as future property. This isn't a factual disagreement on my part, just a quibble over terms; disregard it. Your example isn't about signaling or precommitment, it's changing the game into multiple-shot, modifying the agent's utility function in an isolated play to take into account their reputation for future plays. Yes, it works. But doesn't help much in true one-shot (or last-play) situations. On the other hand, the ideal platonic PD is also quite rare in reality - not as rare as Newcomb's, but still. You may remember us having an isomorphic argument about Newcomb's some time ago, with roles reversed - you defending the ideal platonic Newcomb's Problem, and me questioning its assumptions :-) Me, I don't feel moral problems defecting in the pure one-shot PD. Some situations are just bad to be in, and the best way out is bad too. Especially situations where something terribly important to you is controlled by a cold uncaring alien entity, and the problem has been carefully constructed to prohibit you from manipulating it (Eliezer's "true PD").
Vladimir_Nesov (score 0, 15y):
In what sense do you mean no such thing? Clearly, there are future properties. My cat has a property of being dead in the future. Yes, it was just an example of how to set up cooperation without precommitment. It's clear that signaling being a one-off cooperator is a very hard problem, if you are only human and there are no Omegas flying around.
[anonymous] (score 0, 15y):
"My cat has a property of being dead in the future." Not with probability one, it doesn't.
Vladimir_Nesov (score 0, 15y):
This doesn't place the future in a privileged position. Even though I'm certain I saw my cat 10 minutes ago, it wasn't alive a week ago with probability one, either.
cousin_it (score 0, 15y):
Sorry. I deleted my comment to acknowledge my stupidity in making it. By now it's clear that we don't disagree substantively.
thomblake (score 0, 15y):
My answer to this would be that people have dispositions to behavior, and these dispositions color everything we do. If one might profit by showing courage, a coward will not do as well as a courageous man. Of course, the relative success of such people at faking in appropriate situations is perhaps an empirical question. ETA: this makes less sense as a direct response since you edited your comment. However, I think the difference is that "being a cooperator" regards a disposition that is part of the sort of person you are (though I think the above comment uses it more narrowly as a disposition that might only affect this one action), while a precommitment... well, I'm not sure actual people really do have those, if they're immutable.
Vladimir_Nesov (score 0, 15y):
He is no fool either.
AllanCrossman (score 4, 15y):
I don't understand. You need to make it clear how my intention to defect or my intention to cooperate influences the other guy's actions, even if what I say to him is identical in both cases. Assume I'm a good liar.
Nick_Tarleton (score 0, 15y):
Um... are you asserting that deception between humans is impossible?

you would cooperate

As I understand it, to the extent that it makes sense to cooperate, the thing that cooperates is not you, but some sub-algorithm implemented in both you and your opponent. Is that right? If so, then maybe by phrasing it in this way we can avoid philosophers balking.

Vladimir_Nesov (score 0, 15y):
It has to add up to normality; there should be a you somewhere. If each time you act on your better judgment over gut instinct it is "not you" that does the acting, why is it invited in your mind? Is the whole of deliberate reasoning not you? In my book, when future-you fights a previously made informed commitment, then it is a case where future-you is not you anymore, where it stops caring about your counterfactuals. Not when the future-you remains reflectively consistent. But possibly, this reflectively consistent creature can't be a person anymore, and is not what we'd like to be, with our cognitive ritual morally significant after all, a thing to protect in itself.

I will point out to the defectors that the scenario described is no more plausible than creationism (after all, it involves a deity behaving even more capriciously than the creationist one). If we postulate that your fictional self believes in the scenario, surely your fictional self should no longer be quite so certain of the falsehood of creationism?

jimmy (score 0, 15y):
This doesn't sound like the most inconvenient world to me. Not all unlikely things are correlated, so choose a world where they're not.
Lightwave (score 0, 15y):
In this scenario you can actually replace Omega with a person (e.g. a mad scientist or something), who just happens to be the only one who has, say, a cure for the disease which is about to kill a couple of billion people.
rwallace (score 0, 15y):
Then you may well be 99% sure of the truth of evolution, but can you be 99% sure of the judgement an admitted madman will make? If not, you should give more thought to cooperating.

Given the stakes, it seems to me the most rational thing to do here is to try to convince the other person that you should both cooperate, and then defect.

The difference between this dilemma and Newcomb is that Newcomb's Omega predicts perfectly which box you'll take, whereas the Creationist cannot predict whether you'll defect or not.

The only way you can lose is if you screw up so badly at trying to convince him to cooperate (i.e., you're a terrible liar or bad at communicating in general and confuse him) that instead he's convinced he should defect now. So the biggest factor when deciding whether to cooperate or defect should be your ability to convince.

Simulacra (score 1, 15y):
If you don't think you could convince him to cooperate, then you still defect, because he will, and if you cooperate 0 people are saved. Cooperating generates either 0 or 2 billion saved; defecting generates either 1 or 3 billion saved. Defect is clearly the better option.

If you were going to play 100 rounds for 10 or 20 million lives each, cooperate by all means. But in a single-round PD, defect is the winning choice (assuming the payout is all that matters to you; if your utility function cares about the other person's feelings towards you after the choice, cooperate can become the highest utility).
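The comparison above can be written out as an expected-value check. In the sketch below, q (a name introduced here for illustration) is the probability the creationist cooperates, assumed independent of your own choice:

```python
# Expected billions saved as a function of q, the (assumed fixed) probability
# that the creationist cooperates, using the 0/1/2/3 billion payoffs above.
def expected_saved(my_move, q):
    if my_move == "D":
        return q * 3 + (1 - q) * 1  # defect: 3 billion or 1 billion
    return q * 2 + (1 - q) * 0      # cooperate: 2 billion or 0

for q in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(q, expected_saved("D", q), expected_saved("C", q))

# If your choice cannot move q, defecting comes out ahead for every q -- which
# is exactly the premise the cooperate side of the thread disputes.
```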

The Standard PD is set up so there are only two agents and only their choices and values matter. I tend to think of rationality in these dilemmas as being largely a matter of reputation, even when the situation is circumscribed and described as one-shot. Hofstadter's concept of super-rationality is part of how I think about this. If I have a reputation as someone who cooperates when that's the game-theoretically optimal thing to do, then it's more likely that whoever I've been partnered with will expect that from me, and cooperate if he understands why... (read more)

The young Earth creationist is right, because the whole earth was created in a simulation by Omega that took about 5000 years to run.

You can't win with someone that much smarter than you. I don't see how this means anything but 'it's good to have infinite power, computational and otherwise.'

the atheist will choose between each of them receiving $5000 if the earth is less than 1 million years old or each receiving $10000 if the earth is more than 1 million years old

Isn't this backwards? The dilemma occurs if payoff(unbelieved statement) > payoff(believed statement).

orthonormal (score 0, 15y):
It's most definitely a typo, but we all know what the payoff matrix is supposed to be.
Nick_Tarleton (score 0, 15y):
I actually wasn't sure until I saw Allan Crossman's comment, though if that hadn't been there I probably would've been able to figure it out with a bit more effort.
JGWeissman (score 0, 15y):
Yes, it was a typo. I have fixed the original comment.

And then -- I hope -- you would cooperate.

This is to value your own "rationality" over that which is to be protected: the billion lives at stake. (We may add: such a "rationality" fetish isn't really rational at all.) Why give us even more to weep about?

orthonormal (score 1, 15y):
I can see how it looks to you as if MBlume's strategy prizes his ritual of cognition over that which he should protect -- but be careful and charitable before you sling that accusation around here. This is a debate with a bit of a history on LW.

If you can't convince the creationist of evolution in the time available, but there is a way for both of you to bindingly precommit, it's uncontroversial that (C,C) is the lifesaving choice, because you save 2 billion rather than 1. The question is whether there is a general way for quasi-rational agents to act as if they had precommitted to the Pareto equilibrium when dealing with an agent of the same sort. If they could do so and publicly (unfakeably) signal as much, then such agents would have an advantage in general PDs. A ritual of cognition such as this is an attempt to do just that.

EDIT: In case it's this ambiguity, MBlume's strategy isn't "cooperate in any scenario", but "visibly be the sort of person who can cooperate in a one-shot PD with someone else who also accepts this strategy, and try to convince the creationist to think the same way". If it looks like the creationist will try to defect, MBlume will defect as well.
RichardChappell (score 0, 15y):
Ah. It did look to me as though he was suggesting that. For, after describing how we would try to convince the creationist to cooperate (by trying to convince them of their epistemic error), he writes: I read this as suggesting that we would fail to convince the creationist to cooperate. So we would weep for all the people that would die due to their defection. In that case, to suggest that we ought to co-operate nonetheless would seem futile in the extreme -- hence my comment about merely adding to the reasons to weep. But I take it your proposal is that MBlume meant something else: not that we would fail to convince the creationist to co-operate, but rather that we would fail to convince them to let us defect. That would make more sense. (But it is not at all clear from what he wrote.)
orthonormal (score 3, 15y):
I read it as saying that if the creationist could have been convinced of evolution, then 3 billion rather than 2 billion could have been saved; after the door shuts, MBlume then follows the policy of "both cooperate if we still disagree" that he and the creationist both signaled they were genuinely capable of.

I have to agree -- MBlume, you should have written this post so that someone reading it on its own doesn't get a false impression. It makes sense within the debate, and especially in context of your previous post, but is very ambiguous if it's the first thing one reads.

There's perhaps one more source of ambiguity: the distinction between

* the assertion that "cooperate without communication, given only mutual knowledge of complete rationality in decision theory" is part of the completely rational decision theory, and
* the discussion of "agree to mutually cooperate in such a fashion that you each unfakeably signal your sincerity" as a feasible PD strategy for quasi-rational human beings.

If all goes well, I'd like to post on this myself soon.
RichardChappell (score 0, 15y):
(Negative points? Anyone care to explain?)
MrHen (score 0, 15y):
I did not vote one way or the other, but if I had to vote I would vote down. Reasonings below.

"Rationality", as best as I can tell, is pointing toward the belief that cooperating is the rationalistic approach to the example. Instead of giving a reason that it is not rational, you dismiss it out of hand. This is not terribly useful to the discussion. If it is actually pointing to the player's beliefs about the age of the universe, then the statement also has ambiguity against it.

This is somewhat interesting but not really presented in a manner that makes it discussible. It basically says the same thing as the sentence before it but adds loaded words.

"Why give us even more to weep about?" implies that you may have missed the entire point of the original article. The point was that it is rational to cooperate even though you are weeping. The explanation is given in the previous post. Your comment simply states that you disagree but does not address the author's reasonings and does not give reasonings of your own.

If I had to rewrite your post I would probably do something like this: You would also have to give a good excuse for the other player not cooperating. I think a few others around here have presented some.
Nick_Tarleton (score 1, 15y):
I am far from convinced that superrationality applies between you or me and humans in general, let alone humans with epistemic processes that permit them to be creationists. At least, it's obvious that my decision process is not correlated in any relevant sense with that of someone who hasn't heard/thought of or doesn't subscribe to superrationality.
JGWeissman (score 2, 15y):
Keep in mind, this creationist, despite his epistemic problems, has managed so far not to die from believing that "when they drink deadly poison, it will not hurt them at all". Maybe he has some rationality you can work with, maybe even enough that he thinks that saving an extra billion lives is worth cooperating with an atheist (so long as the atheist is likewise rational enough to save an extra billion lives by cooperating with a creationist).
Nick_Tarleton (score 2, 15y):
Not killing yourself in such grossly obvious ways is pretty easy (very few people, even creationists, let their abstract beliefs control their actions that much), and doesn't seem at all relevant to me. I'm sure he already thinks that, not being an alien monster and all – his problem is epistemic, not moral. So long as the creationist thinks that, you mean. Again, he's almost certainly not aware of superrationality, so I should persuade him to cooperate however I can, then defect. (Modulo the possibility that privately precommitting to cooperate could make me more persuasive, but on casual introspection I doubt I could actually do that.) In the unlikely event the creationist is superrational, I expect we'd both start out trying to persuade each other, so we could notice the symmetry, mutually determine that we're superrational (since causal decision theorists could also start out persuading), and both cooperate (resulting in a worse outcome than if he hadn't been superrational).
JGWeissman (score 0, 15y):
You seriously think that the fact that the creationist doesn't let his abstract belief control his actions is not relevant to the question of whether he will let his abstract belief control his actions? The point is, he has ways of overcoming the foolishness of his beliefs when faced with an important problem. So, if you agree he would be willing to cooperate with an atheist, why would he not cooperate by exchanging his choice for the higher payoff in the event that the atheist is right for the atheist's choice for the higher payoff in the event the creationist is right? Recognizing a Pareto improvement is not hard even if one has never heard of Pareto. It seems you are prepared to recognize this. Are you also prepared to recognize that he did not start out superrational, but is persuaded by your arguments?
Nick_Tarleton (score 0, 15y):
I think that the fact that he doesn't let his abstract belief cause him to drink poison, when everyone around him with the same abstract belief obviously doesn't drink poison, when common sense (poison is bad for you) opposes the abstract belief, and when the relevant abstract belief probably occupies very little space in his mind* is of little relevance to whether he will let an abstract belief that is highly salient and part of his identity make him act in a way that isn't nonconforming and doesn't conflict with common sense. *If any; plenty of polls show Christians to be shockingly ignorant of the Bible, something many atheists seem to be unaware of. No doubt he would, which is why I would try to persuade him, but he is not capable of discerning what action I'll take (modulo imperfect deception on my part, but again I seriously doubt I could do better by internally committing), nor is his decision process correlated with mine. I would rather persuade him to cooperate but not to be superrational (allowing the outcome to be D/C) than persuade him to be superrational (forcing C/C), and I doubt the latter would be easier. (Caveat: I'm not entirely sure about the case where the creationist is not superrational, but knows me very well.)
JGWeissman (score 0, 15y):
The creationist does not have to contradict his belief about the age of the earth to cooperate. He only needs to recognize that the way to get the best result given his belief is to exchange cooperation for cooperation, using common sense (saving 2 billion people given that the earth is young is better than saving 1 billion people given that the earth is young). Yes, understanding the prisoner's dilemma is harder than understanding poison is bad, but it is still a case where common sense should overcome a small bias, if there is one at all. You might have some work to convince the creationist that his choice does not need to reflect his belief, just as your choice to cooperate would not indicate that you actually believe the earth is young. Why is he going to cooperate unless you offer to cooperate in return? Unless you actually convinced him to reject young earth creationism, he would see that as saving 0 people instead of 1 billion. Or do you intend to trick him into believing that you would cooperate? I don't think I could do that; I would have to be honest to be convincing.
gwern (score 0, 12y):
For those not familiar with superrationality, see http://www.gwern.net/docs/1985-hofstadter

My thinking is, if you are stupid (or ignorant, or irrational, or whatever) enough to be a creationist, you are probably also stupid enough not to know the high-order strategy for the prisoner's dilemma, and therefore cooperating with you is useless. You'll make your decision about whether or not to cooperate based on whatever stupid criteria you have, but they probably won't involve an accurate prediction of my decision algorithm, because you are stupid. I can't influence you by cooperating, so I defect and save some lives.