
The True Epistemic Prisoner's Dilemma

Post author: MBlume 19 April 2009 08:57AM 9 points

I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:

I am having some difficulty imagining a situation in which I am 99% sure of something, and cannot convince a person either to outright agree with me or to accept that he is uncertain and therefore should make the choice that would help more if it is right, but could still convince that same person to cooperate in the prisoner's dilemma. However, if I did find myself in that situation, I would cooperate.

To which I said:

Do you think you could convince a young-earth creationist to cooperate in the prisoner's dilemma?

And lo, JGWeissman saved me a lot of writing when he replied thus:

Good point. I probably could. I expect that the young-earth creationist has a huge bias that does not have to interfere with reasoning about the prisoner's dilemma.

So, suppose Omega finds a young-earth creationist and an atheist, and plays the following game with them. They will each be taken to a separate room, where the atheist will choose between each of them receiving $10000 if the earth is less than 1 million years old or each receiving $5000 if the earth is more than 1 million years old, and the young-earth creationist will have a similar choice with the payoffs reversed. Now, with the prisoner's dilemma tied to the young-earth creationist's bias, would I, in the role of the atheist, still be able to convince him to cooperate? I don't know. I am not sure how much the need to believe that the earth is around 5,000 years old would interfere with recognizing that it is in his interest to choose the payoff for the earth being over a million years old. But still, if he seemed able to accept it, I would cooperate.

I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all seen enough PDs that I can leave the numbers as an exercise).

It is this then which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to take yourself outside them, see the symmetry and say "the doctors should cooperate".  I hope I have now broken some of that emotional symmetry.

As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution. Despite every pointless, futile argument you've ever had in an IRC room or a YouTube thread, you would struggle desperately, calling out every half-remembered fragment of Dawkins or Sagan you could muster, in the hope that just before the door shut, the creationist would hold it open and say "You're right, I was wrong. You defect, I'll cooperate -- let's save the world together."

But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.

Comments (70)

Comment author: MrHen 19 April 2009 12:58:01PM 5 points [-]

It is this then which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to take yourself outside them, see the symmetry and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.

As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution.

It seems like it would be wiser to forgo the arguments for evolution and spend your time talking about cooperating.

But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.

By the way, while we are adding direct emotional weight to this example, the real villain here is Omega. In all honesty, the Young Earth Creationist cannot be blamed for sending untold numbers to their death because of a bad belief. The bad belief has nothing to do with the asteroid and any moral link between the two should be placed on Omega.

Comment deleted 19 April 2009 02:37:59PM [-]
Comment author: MrHen 19 April 2009 05:59:54PM *  1 point [-]

Anything that has the ability to save untold billions and will only do so if two particular individuals figure out how old the earth is, is evil. Or, at the very least, does not have the best interests of humanity in mind.

To belabor the point: if Omega held his hands behind his back, asked you and me to guess whether the number of fingers he is holding up is odd or even, and would save lives if and only if we were correct, it would be the OP's example with certainty dropped to 0. Would we be held to blame if we failed? Increasing our certainty does not increase our moral responsibility.

(Note) I think the formatting in your post may be off. The third quote looks like it may have too much included.

Comment author: randallsquared 19 April 2009 10:23:19PM 0 points [-]

Anything that has the ability to save untold billions and will only do so if two particular individuals figure out how old the earth is, is evil. Or, at the very least, does not have the best interests of humanity in mind.

Since I'd say that evil is just having goals which are fundamentally incompatible with mine (or whoever is considering this), I don't think there's necessarily a difference between those two statements.

Comment author: Psychohistorian 19 April 2009 10:16:55PM *  4 points [-]

And then -- I hope -- you would cooperate.

Why do you hope I'd let a billion people die (from a proposed quantification in another comment)?

This is actually rather different from a classic PD, in that C(C) is not the collectively desirable outcome.

Payoffs, written as You(Creationist):

  • D(D): 1 billion live
  • D(C): 3 billion live
  • C(D): 0 live
  • C(C): 2 billion live

Under the traditional PD, D(C) is best for you, but worst for him. Under this PD, D(C) is best for both of you. He wants you to defect and he wants to cooperate; he just doesn't know it. Valuing his utility does not save this the way it does in the traditional PD. Assuming he's vaguely rational, he will end up happier if you choose to defect, regardless of his choice. Furthermore, he thinks you will be happier if he defects, so he has absolutely no reason to cooperate.
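
For concreteness, a minimal sketch in Python of that dominance claim, using the payoffs listed above; the old earth is an assumption of fact, not something either player can verify inside the game:

    # A minimal sketch checking the dominance claim, using the lives-saved
    # payoffs above (in billions), indexed as (my choice, creationist's choice).
    # The numbers assume the earth is in fact old.
    payoffs = {
        ("D", "D"): 1,
        ("D", "C"): 3,
        ("C", "D"): 0,
        ("C", "C"): 2,
    }

    # Whatever the creationist picks, my defecting saves more lives than my
    # cooperating, so "defect" strictly dominates "cooperate" here.
    for his_choice in ("D", "C"):
        assert payoffs[("D", his_choice)] > payoffs[("C", his_choice)]
    print("Defecting saves more lives regardless of the creationist's choice.")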

If only by cooperating can you guarantee his cooperation, you should do so. However, the PD generally assumes such prior commitments are not possible. And unlike the traditional PD, C(C) does not lead to the best possible collective outcome. Thus, you should try your hardest to convince him to cooperate, and then you should defect. He'll thank you for it when another billion people don't die.

The medical situation is more confusing because I don't think it's realistic. I sincerely doubt you would have two vaguely rational doctors who would both put 99% confidence on a diagnosis knowing that another doctor was at least 99% confident that that diagnosis was incorrect. Thus, you should both amend your estimates substantially downwards, and thus should probably cooperate. If you take the hypothetical at face value, it seems like you both should defect, even though again D(C) would be the optimal solution from your perspective.

The real problem I'm having with some of these comments is that they assume my decision to defect or cooperate affects his decision, which does not seem to be a part of the hypothetical. Frankly I don't see how people can come to this conclusion in this context, given that it's a 1-shot game with a different collective payoff matrix than the traditional PD.

Comment author: AllanCrossman 19 April 2009 10:53:21AM *  3 points [-]

I think you've all seen enough PDs that I can leave the numbers as an exercise

Actually, since this is an unusual setup, I think it's worth spelling out:

To the atheist, Omega gives two choices, and forces him to choose between D and C:

D. Omega saves 1 billion people if the Earth is old.
C. Omega saves 2 billion people if the Earth is young.

To the creationist, Omega gives two choices, and forces him to choose between D and C:

D. Omega saves an extra 1 billion people if the Earth is young.
C. Omega saves an extra 2 billion people if the Earth is old.
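
Spelled out as arithmetic, a minimal sketch; the age of the earth is treated as an assumption of fact here (it is, in fact, old), and the function names are purely illustrative:

    # A minimal sketch of the setup above: each player's choice only pays off
    # under one hypothesis about the earth's age. EARTH_IS_OLD = True is an
    # assumption of fact, not part of the game itself.
    EARTH_IS_OLD = True

    def atheist_saves(choice):
        # D: 1 billion saved if the earth is old; C: 2 billion if it is young.
        if choice == "D":
            return 1 if EARTH_IS_OLD else 0
        return 0 if EARTH_IS_OLD else 2

    def creationist_saves(choice):
        # D: an extra 1 billion if the earth is young; C: an extra 2 billion if old.
        if choice == "D":
            return 0 if EARTH_IS_OLD else 1
        return 2 if EARTH_IS_OLD else 0

    for a in ("D", "C"):
        for c in ("D", "C"):
            print(f"atheist {a}, creationist {c}: "
                  f"{atheist_saves(a) + creationist_saves(c)} billion saved")
    # Reproduces the matrix used elsewhere in the thread:
    # D/D = 1, D/C = 3, C/D = 0, C/C = 2 (in billions).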

And then -- I hope -- you would cooperate.

No, I certainly wouldn't. I would however lie to the creationist and suggest that we both cooperate. I'd then defect, which, regardless of what he does, is still the best move. If I choose C then my action saves no lives at all, since the Earth isn't young.

My position on one-shot PDs remains that cooperation is only worthwhile in odd situations where the players' actions are linked somehow, such that my cooperating makes it more likely that he will cooperate; e.g. if we're artificial agents running the same algorithm.

Comment author: prase 19 April 2009 06:34:45PM 3 points [-]

My position on one-shot PDs remains that cooperation is only worthwhile in odd situations where the players' actions are linked somehow, such that my cooperating makes it more likely that he will cooperate; e.g. if we're artificial agents running the same algorithm.

Agreed. In this situation, you can be very sure that the creationist runs a very different algorithm. Otherwise, he wouldn't be a creationist.

Comment author: Zvi 19 April 2009 11:43:29AM 1 point [-]

Seems simple enough to me, too, as my answer yesterday implied. The probability the Earth is that young is close enough to 0 that it doesn't factor into my utility calculations, so Omega is asking me if I want to save a billion people. Do whatever you have to do to convince him, then save a billion people.

Comment author: Vladimir_Nesov 19 April 2009 12:30:49PM *  2 points [-]

With this attitude, you won't be able to convince him. He'll expect you to defect, no matter what you say. It's obvious to you what you'll do, and it's obvious to him. By refusing to save a billion people, and instead choosing the meaningless alternative option, you perform an instrumental action that results in your opponent saving 2 billion people. You control the other player indirectly.

Choosing the option other than saving 1 billion people doesn't have any terminal value, but it does have instrumental value, more of it than there is in directly saving 1 billion people.

This is not to say that you can place this kind of trust easily; for humans you may indeed require making a tangible precommitment. Humans are by default broken: in some situations you don't expect the right actions from them, the way you don't expect the right actions from rocks. An external precommitment is a crutch that compensates for the inborn ailments.

Comment author: Zvi 19 April 2009 01:02:18PM *  2 points [-]

What makes us assume this? I get why in examples where you can see each others' source code this can be the case, and I do one-box on Newcomb where a similar situation is given, but I don't see how we can presume that there is this kind of instrumental value. All we know about this person is he is a flat earther, and I don't see how this corresponds to such efficient lie detection in both directions for both of us.

Obviously if we had a tangible precommitment option that was sufficient when a billion lives were at stake, I would take it. And I agree that if the payoffs were 1 person vs. 2 billion people on both sides, this would be a risk I'd be willing to take. But I don't see how we can suppose that the correspondence between "he thinks I will choose C if he agrees to choose C, and in fact then chooses C" and "I actually intend to choose C if he agrees to choose C" is all that high. If the flat Earther in question is the person on whom they based Dr. Cal Lightman, I still don't choose C, because I'd feel that even if he believed me he'd probably choose D anyway. Do you think most humans are this good at lie detection (I know that I am not), and if so do you have evidence for it?

Comment author: Vladimir_Nesov 19 April 2009 01:39:31PM *  0 points [-]

I get why in examples where you can see each others' source code this can be the case, and I do one-box on Newcomb where a similar situation is given, but I don't see how we can presume that there is this kind of instrumental value. All we know about this person is he is a flat earther, and I don't see how this corresponds to such efficient lie detection in both directions for both of us.

What does the source code really impart? Certainty in the other process' workings. But why would you need certainty? Is being a co-operator really so extraordinary a claim that to support it you need overwhelming evidence that leaves no other possibilities?

The problem is that there are three salient possibilities for what the other player is:

  • Defector, who really will defect, and will give you evidence of being a defector
  • Co-operator, who will really cooperate (with another who he believes to be a co-operator), and will give you evidence of being a co-operator
  • Deceiver, who will really defect, but will contrive evidence that he is a co-operator

Between a co-operator and a deceiver, all else equal, you should expect the evidence given by the co-operator to be stronger than the evidence given by the deceiver. The deceiver has to support a complex edifice of lies, separate from reality, while the co-operator can rely on the whole of reality for support of his claims. As a result, each argument a co-operator makes should on average bring you closer to believing that he really is a co-operator, as opposed to a deceiver. This process may be too slow to shift your expectation from the prior of very strongly disbelieving in the existence of co-operators to the posterior of believing that this one really is a co-operator, and this may be a problem. But this problem is only as dire as the rarity of co-operators and the deceptive eloquence of deceivers.

Comment author: Zvi 19 April 2009 03:52:45PM *  4 points [-]

We clearly disagree strongly on the probabilities here. I agree that all things being equal you have a better shot at convincing him than I do, but I think it is small. We both do the same thing in the Defector case. In the co-operator case, he believes you with probability P+Q and me with probability P. Assuming you know whether he trusts you in this case (we count anything else as deceivers), you save (P+Q)*2 + (1-P-Q)*1, I save P*3 + (1-P)*1, both times the percentage of co-operators R. So you have to be at least twice as successful as I am, even if there are no deceivers on the other side. Meanwhile, there's some percentage A who are deceivers and some probability B that you'll believe a deceiver, or just A and 1 if you count anyone you don't believe as a simple Defector.

You think that R*(P+Q)*2 + R*(1-P-Q)*1 > R*P*3 + R*(1-P)*1 + A*B*1. I strongly disagree. But if you convinced me otherwise, I would change my opinion.
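
A minimal sketch that simply evaluates both sides of that inequality; every probability below is an illustrative placeholder, not a value anyone in the thread has endorsed:

    # A minimal sketch evaluating the inequality above (expected billions saved).
    # All probabilities here are assumed placeholders for illustration.
    R = 0.5  # fraction of would-be co-operators on the other side (assumed)
    P = 0.3  # chance he trusts me, the would-be defector (assumed)
    Q = 0.1  # extra trust a sincere co-operator earns (assumed)
    A = 0.5  # fraction of deceivers on the other side (assumed)
    B = 0.3  # chance of believing a deceiver (assumed)

    cooperate_value = R * (P + Q) * 2 + R * (1 - P - Q) * 1
    defect_value = R * P * 3 + R * (1 - P) * 1 + A * B * 1

    print(f"co-operate strategy: {cooperate_value:.2f} billion expected")
    print(f"defect strategy:     {defect_value:.2f} billion expected")
    # With these placeholder numbers the defect side comes out ahead; a much
    # larger Q, or a much smaller A*B, would reverse the comparison.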

Comment author: saturn 21 April 2009 05:23:08PM 0 points [-]
Comment author: Vladimir_Nesov 21 April 2009 12:00:44AM 0 points [-]

In the co-operator case, he believes you with probability P+Q and me with probability P.

That may be for one step, but my point is that the truth ultimately should win over lies. If you proceed to the next point of argument, you expect to distinguish Cooperator from Defector a little bit better, and as the argument continues, your ability to distinguish the possibilities should improve more and more.

The problem may be that it's not a fast enough process, but not that there is some fundamental limitation on how good the evidence may get. If you study the question thoroughly, you should be able to move a long way away from uncertainty in the direction of truth.

Comment author: AllanCrossman 19 April 2009 12:48:21PM *  1 point [-]

By refusing to save a billion people, and instead choosing the meaningless alternative option, you perform an instrumental action that results in your opponent saving 2 billion people.

How does it do that, please? How does my action affect his?

Comment author: Vladimir_Nesov 19 April 2009 12:52:53PM *  0 points [-]

How does it do that, please?

Maybe it's not enough, maybe you need to do more than just doing the right thing. But if you actually plan to defect, you have no hope of convincing the other player that you won't. (See the revised last paragraph of the above comment.)

Comment author: AllanCrossman 19 April 2009 12:53:59PM *  1 point [-]

if you actually plan to defect, you have no hope of convincing the other player that you won't

Why? My opponent is not a mind-reader.

An external precommitment is a crutch that compensates for the inborn ailments.

Yes, if we can both pre-commit in a binding way, that's great. But what if we can't?

Comment author: Vladimir_Nesov 19 April 2009 01:09:22PM 1 point [-]

Yes, if we can both pre-commit in a binding way, that's great. But what if we can't?

I feel that this is related to the intuitions on free will. When a stone is thrown your way, you can't change what you'll do: you'll either duck, or you won't. If you duck, it means that you are a stone-avoider, a system that has the property of avoiding stones, that processes data indicating that a stone is flying your way and transforms it into impact-avoiding action.

The precommitment is only useful because [you+precommitment] is a system with the known characteristic of a co-operator, one that performs cooperation in return to other co-operators. What you need in order to arrange mutual cooperation is to signal to the other player that you are a co-operator, and to make sure that the other player is also a co-operator. Signaling the fact that you are a co-operator is easy if you attach a precommitment crutch to your natural decision-making algorithm.

Since co-operators win more than mutual defectors, being a co-operator is rational, and so it's often just said that if you and your opponent are rational, you'll cooperate.

There is a stigma of being just human, but I guess some kind of co-operator certification or a global meta-commitment of reflective consistency could be arranged to both signal that you are now a co-operator and enforce actually making co-operative decisions.

Comment author: Vladimir_Nesov 19 April 2009 12:55:28PM 0 points [-]

My opponent is not a mind-reader.

He is no fool either.

Comment author: AllanCrossman 19 April 2009 12:58:21PM 3 points [-]

He is no fool either.

I don't understand.

You need to make it clear how my intention to defect or my intention to cooperate influences the other guy's actions, even if what I say to him is identical in both cases. Assume I'm a good liar.

Comment author: Nick_Tarleton 20 April 2009 01:20:45AM 0 points [-]

With this attitude, you won't be able to convince him. He'll expect you to defect, no matter what you say.

Um... are you asserting that deception between humans is impossible?

Comment author: Eliezer_Yudkowsky 19 April 2009 01:52:20PM 8 points [-]

As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution.

I could do that, but it seems simpler to make a convulsive effort to convince him that Omega, who clearly is no good Christian, almost certainly believes in the truth of evolution.

(Of course this is not relevant, but seemed worth pointing out. Cleverness is usually a dangerous thing, but in this case it seems worth dusting off.)

Comment author: JGWeissman 19 April 2009 04:18:27PM 5 points [-]

For a less convenient world, suppose that the creationist perceives Omega as God, offering a miracle. Miracles can apparently include one person being saved from a disaster that kills hundreds, so the fact that Omega doesn't just save everybody would not be compelling to the creationist.

Comment author: Alicorn 19 April 2009 04:33:25PM 3 points [-]

Then I guess you'd have to try the "God is testing whether your compassion exceeds your arrogance" angle, and hope they didn't counter with "God is testing whether my faith is strong enough to avoid being swayed by your lies".

Comment author: MBlume 20 April 2009 04:12:35AM 0 points [-]

The assumption that the creationist actually buys "creationism is true iff Omega believes it's true" is by far the weakest aspect of this scenario. As always, I just assume that Omega has some off-screen demonstration of his own trustworthiness that is Too Awesome To Show.

(insert standard 'TV Tropes is horribly addictive' disclaimer here)

For the same reason, I've often wondered what a worldwide prediction market on theism would look like, if there was any possible way of providing payouts. Sadly, this is the closest I've seen.

Comment author: RichardChappell 20 April 2009 12:45:00AM 0 points [-]

And then -- I hope -- you would cooperate.

This is to value your own "rationality" over that which is to be protected: the billion lives at stake. (We may add: such a "rationality" fetish isn't really rational at all.) Why give us even more to weep about?

Comment author: orthonormal 20 April 2009 11:18:41PM *  1 point [-]

I can see how it looks to you as if MBlume's strategy prizes his ritual of cognition over that which he should protect— but be careful and charitable before you sling that accusation around here. This is a debate with a bit of a history on LW.

If you can't convince the creationist of evolution in the time available, but there is a way for both of you to bindingly precommit, it's uncontroversial that (C,C) is the lifesaving choice, because you save 2 billion rather than 1.

The question is whether there is a general way for quasi-rational agents to act as if they had precommitted to the Pareto equilibrium when dealing with an agent of the same sort. If they could do so and publicly (unfakeably) signal as much, then such agents would have an advantage in general PDs. A ritual of cognition such as this is an attempt to do just that.

EDIT: In case it's this ambiguity, MBlume's strategy isn't "cooperate in any scenario", but "visibly be the sort of person who can cooperate in a one-shot PD with someone else who also accepts this strategy, and try to convince the creationist to think the same way". If it looks like the creationist will try to defect, MBlume will defect as well.

Comment author: RichardChappell 21 April 2009 03:59:56AM *  0 points [-]

In case it's this ambiguity, MBlume's strategy isn't "cooperate in any scenario"

Ah. It did look to me as though he was suggesting that. For, after describing how we would try to convince the creationist to cooperate (by trying to convince them of their epistemic error), he writes:

But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance.

I read this as suggesting that we would fail to convince the creationist to cooperate. So we would weep for all the people that would die due to their defection. In that case, to suggest that we ought to co-operate nonetheless would seem futile in the extreme -- hence my comment about merely adding to the reasons to weep.

But I take it your proposal is that MBlume meant something else: not that we would fail to convince the creationist to co-operate, but rather that we would fail to convince them to let us defect. That would make more sense. (But it is not at all clear from what he wrote.)

Comment author: orthonormal 21 April 2009 04:20:44PM *  2 points [-]

I read this as suggesting that we would fail to convince the creationist to cooperate. So we would weep for all the people that would die due to their defection.

I read it as saying that if the creationist could have been convinced of evolution, then 3 billion rather than 2 billion could have been saved; after the door shuts, MBlume then follows the policy of "both cooperate if we still disagree" that he and the creationist both signaled they were genuinely capable of.

(But it is not at all clear from what he wrote.)

I have to agree— MBlume, you should have written this post so that someone reading it on its own doesn't get a false impression. It makes sense within the debate, and especially in context of your previous post, but is very ambiguous if it's the first thing one reads.

There's perhaps one more source of ambiguity: the distinction between

  • the assertion that "cooperate without communication, given only mutual knowledge of complete rationality in decision theory" is part of the completely rational decision theory, and
  • the discussion of "agree to mutually cooperate in such a fashion that you each unfakeably signal your sincerity" as a feasible PD strategy for quasi-rational human beings.

If all goes well, I'd like to post on this myself soon.

Comment author: RichardChappell 20 April 2009 11:09:24PM 0 points [-]

(Negative points? Anyone care to explain?)

Comment author: MrHen 20 April 2009 11:25:16PM *  0 points [-]

(Negative points? Anyone care to explain?)

I did not vote one way or the other, but if I had to vote I would vote down. Reasonings below.

This is to value your own "rationality" over that which is to be protected: the billion lives at stake.

"Rationality", as best as I can tell, is pointing toward the belief that cooperating is the rationalistic approach to the example. Instead of giving a reason that it is not rational you dismiss it out of hand. This is not terribly useful to the discussion.

If it is actually pointing to the player's beliefs about the age of the universe, then the statement suffers from ambiguity as well.

(We may add: such a "rationality" fetish isn't really rational at all.)

This is somewhat interesting but not really presented in a manner that makes it discussible. It basically says the same thing as the sentence before it but adds loaded words.

"Why give us even more to weep about?" implies that you may have missed the entire point of the original article. The point was that it is rational to cooperate even though you are weeping. The explanation is given in the previous post. Your comment simply states that you disagree but do not address the author's reasonings and do not give reasonings of your own.

If I had to rewrite your post I would probably do something like this:

Choosing to cooperate because it could result in a larger outcome is not rationality since the other player is not likely to do the same. Doing it anyway because you are "supposed" to cooperate in a prisoner's dilemma just sends billions of people to their death.

You would also have to give a good excuse for the other player not cooperating. I think a few others around here have presented some.

Comment author: Nick_Tarleton 20 April 2009 11:43:21PM *  1 point [-]

The point was that it is rational to cooperate even though you are weeping. The explanation is given in the previous post.

I am far from convinced that superrationality applies between you or me and humans in general, let alone humans with epistemic processes that permit them to be creationists. At least, it's obvious that my decision process is not correlated in any relevant sense with that of someone who hasn't heard/thought of or doesn't subscribe to superrationality.

Comment author: JGWeissman 21 April 2009 01:36:19AM 1 point [-]

Keep in mind, this creationist, despite his epistemic problems, has managed so far not to die from believing that "when they drink deadly poison, it will not hurt them at all". Maybe he has some rationality you can work with, maybe even enough that he thinks that saving an extra billion lives is worth cooperating with an atheist (so long as the atheist is likewise rational enough to save an extra billion lives by cooperating with a creationist).

Comment author: Nick_Tarleton 21 April 2009 02:37:39AM 1 point [-]

Keep in mind, this creationist, despite his epistemic problems, has managed so far not to die from believing that "when they drink deadly poison, it will not hurt them at all".

Not killing yourself in such grossly obvious ways is pretty easy (very few people, even creationists, let their abstract beliefs control their actions that much), and doesn't seem at all relevant to me.

maybe even enough that he thinks that saving an extra billion lives is worth cooperating with an atheist

I'm sure he already thinks that, not being an alien monster and all – his problem is epistemic, not moral.

(so long as the atheist is likewise rational enough to save an extra billion lives by cooperating with a creationist)

So long as the creationist thinks that, you mean. Again, he's almost certainly not aware of superrationality, so I should persuade him to cooperate however I can, then defect. (Modulo the possibility that privately precommitting to cooperate could make me more persuasive, but on casual introspection I doubt I could actually do that.)

In the unlikely event the creationist is superrational, I expect we'd both start out trying to persuade each other, so we could notice the symmetry, mutually determine that we're superrational (since causal decision theorists could also start out persuading), and both cooperate (resulting in a worse outcome than if he hadn't been superrational).

Comment author: JGWeissman 21 April 2009 03:35:49AM 0 points [-]

Not killing yourself in such grossly obvious ways is pretty easy (very few people, even creationists, let their abstract beliefs control their actions that much), and doesn't seem at all relevant to me.

You seriously think that the fact that the creationist doesn't let his abstract belief control his actions is not relevant to the question of whether he will let his abstract belief control his actions? The point is, he has ways of overcoming the foolishness of his beliefs when faced with an important problem.

I'm sure he already thinks that, not being an alien monster and all

So, if you agree he would be willing to cooperate with an atheist, why would he not cooperate by exchanging his choice for the higher payoff in the event that the atheist is right for the atheist's choice for the higher payoff in the event the creationist is right? Recognizing a Pareto improvement is not hard even if one has never heard of Pareto.

In the unlikely event the creationist is superrational ...

It seems you are prepared to recognize this. Are you also prepared to recognize that he did not start out superrational, but is persuaded by your arguments?

Comment author: Nick_Tarleton 21 April 2009 04:10:55AM *  0 points [-]

You seriously think that the fact that the creationist doesn't let his abstract belief control his actions is not relevant to the question of whether he will let his abstract belief control his actions?

I think that the fact that he doesn't let his abstract belief cause him to drink poison, when everyone around him with the same abstract belief obviously doesn't drink poison, when common sense (poison is bad for you) opposes the abstract belief, and when the relevant abstract belief probably occupies very little space in his mind* is of little relevance to whether he will let an abstract belief that is highly salient and part of his identity make him act in a way that isn't nonconforming and doesn't conflict with common sense.

*If any; plenty of polls show Christians to be shockingly ignorant of the Bible, something many atheists seem to be unaware of.

So, if you agree he would be willing to cooperate with an atheist, why would he not cooperate by exchanging his choice for the higher payoff in the event that the atheist is right for the atheist's choice for the higher payoff in the event the creationist is right? Recognizing a Pareto improvement is not hard even if one has never heard of Pareto.

No doubt he would, which is why I would try to persuade him, but he is not capable of discerning what action I'll take (modulo imperfect deception on my part, but again I seriously doubt I could do better by internally committing), nor is his decision process correlated with mine.

It seems you are prepared to recognize this. Are you also prepared to recognize that he did not start out superrational, but is persuaded by your arguments?

I would rather persuade him to cooperate but not to be superrational (allowing the outcome to be D/C) than persuade him to be superrational (forcing C/C), and I doubt the latter would be easier.

(Caveat: I'm not entirely sure about the case where the creationist is not superrational, but knows me very well.)

Comment author: JGWeissman 21 April 2009 05:02:25AM 0 points [-]

The creationist does not have to contradict his belief about the age of the earth to cooperate. He only needs to recognize that the way to get the best result given his belief is to exchange cooperation for cooperation, using common sense (saving 2 billion people given that the earth is young is better than saving 1 billion people given that the earth is young). Yes, understanding the prisoner's dilemma is harder than understanding that poison is bad, but it is still a case where common sense should overcome a small bias, if there is one at all. You might have some work to do convincing the creationist that his choice does not need to reflect his belief, just as your choice to cooperate would not indicate that you actually believe the earth is young.

I would rather persuade him to cooperate but not to be superrational (allowing the outcome to be D/C) than persuade him to be superrational (forcing C/C), and I doubt the latter would be easier.

Why is he going to cooperate unless you offer to cooperate in return? Unless you actually convinced him to reject young earth creationism, he would see that as saving 0 people instead of 1 billion. Or do you intend to trick him into believing that you would cooperate? I don't think I could do that; I would have to be honest to be convincing.

Comment author: gwern 16 April 2012 08:15:06PM 0 points [-]

For those not familiar with superrationality, see http://www.gwern.net/docs/1985-hofstadter

Comment author: Nominull 19 April 2009 10:28:16PM -1 points [-]

My thinking is, if you are stupid (or ignorant, or irrational, or whatever) enough to be a creationist, you are probably also stupid enough not to know the high-order strategy for the prisoner's dilemma, and therefore cooperating with you is useless. You'll make your decision about whether or not to cooperate based on whatever stupid criteria you have, but they probably won't involve an accurate prediction of my decision algorithm, because you are stupid. I can't influence you by cooperating, so I defect and save some lives.

Comment author: steven0461 21 April 2009 03:25:33PM 1 point [-]

you would cooperate

As I understand it, to the extent that it makes sense to cooperate, the thing that cooperates is not you, but some sub-algorithm implemented in both you and your opponent. Is that right? If so, then maybe by phrasing it in this way we can avoid philosophers balking.

Comment author: Vladimir_Nesov 21 April 2009 04:57:22PM *  0 points [-]

As I understand it, to the extent that it makes sense to cooperate, the thing that cooperates is not you, but some sub-algorithm implemented in both you and your opponent.

It has to add up to normality; there should be a you somewhere. If each time you act on your better judgment over gut instinct it is "not you" that does the acting, why is it invited into your mind? Is the whole of deliberate reasoning not you?

In my book, when future-you fights a previously made informed commitment, then it is a case where future-you is not you anymore, where it stops caring about your counterfactuals. Not when the future-you remains reflectively consistent.

But possibly, this reflectively consistent creature can't be a person anymore, and is not what we'd like to be, with our cognitive ritual morally significant after all, a thing to protect in itself.

Comment author: rwallace 19 April 2009 10:28:00PM 1 point [-]

I will point out to the defectors that the scenario described is no more plausible than creationism (after all it involves a deity behaving even more capriciously than the creationist one). If we postulate that your fictional self is believing in the scenario, surely your fictional self should no longer be quite so certain of the falsehood of creationism?

Comment author: jimmy 21 April 2009 05:58:40AM 0 points [-]

This doesn't sound like the most inconvenient world to me. Not all unlikely things are correlated, so choose a world where they're not.

Comment author: Lightwave 20 April 2009 01:28:41AM *  0 points [-]

In this scenario you can actually replace Omega with a person (e.g. a mad scientist or something), who just happens to be the only one who has, say, a cure for the disease which is about to kill a couple of billion people.

Comment author: rwallace 20 April 2009 04:47:37AM 0 points [-]

Then you may well be 99% sure of the truth of evolution, but can you be 99% sure of the judgement an admitted madman will make? If not, you should give more thought to cooperating.

Comment author: Lightwave 19 April 2009 05:40:32PM *  1 point [-]

Given the stakes, it seems to me the most rational thing to do here is to try to convince the other person that you should both cooperate, and then defect.

The difference between this dilemma and Newcomb is that Newcomb's Omega predicts perfectly which box you'll take, whereas the Creationist cannot predict whether you'll defect or not.

The only way you can lose is if you screw up so badly at trying to convince him to cooperate (i.e. you're a terrible liar or bad at communicating in general and confuse him) that instead he's convinced he should defect now. So the biggest factor when deciding whether to cooperate or defect should be your ability to convince.

Comment author: Simulacra 19 April 2009 08:01:50PM 1 point [-]

If you don't think you could convince him to cooperate then you still defect because he will, and if you cooperate 0 people are saved. Cooperating generates either 0 or 2 billion saved, defecting generates either 1 or 3 billion saved. Defect is clearly the better option.

If you were going to play 100 rounds for 10 or 20 million lives each, cooperate by all means. But in a single-round PD, defect is the winning choice (assuming the payout is all that matters to you; if your utility function cares about the other person's feelings towards you after the choice, cooperating can become the higher-utility choice).

Comment author: ChrisHibbert 20 April 2009 08:07:55PM 0 points [-]

The Standard PD is set up so there are only two agents and only their choices and values matter. I tend to think of rationality in these dilemmas as being largely a matter of reputation, even when the situation is circumscribed and described as one-shot. Hofstadter's concept of super-rationality is part of how I think about this. If I have a reputation as someone who cooperates when that's the game-theoretically optimal thing to do, then it's more likely that whoever I've been partnered with will expect that from me, and cooperate if he understands why that strategy works.

Since it would buttress that reputation, I keep hoping that rationalists, generally, would come to embrace some interpretation of super-rationality, but I keep seeing self-professed rationalists whose choices seem short-sightedly instrumentalist to me.

But this seems to be a completely different situation. Rather than attempting to cooperate with someone who I should assume to be my partner, and who has my interests at heart, I'm asked to play a game with someone who doesn't reason the way I do, and who explicitly mistrusts my reasoning. In addition, the payoff isn't to me and the other player, the payoff is to a huge number of uninvolved other people. MBlume seems to want me to think of it in terms of something valuable in my preference ranking, but he's actually set it up so that it's not a prisoner's dilemma, it's a hostage situation in which I have a clearly superior choice, and an opportunity to try to convince someone whose reasoning is alien to my own.

I defect. I do my best to convince my friend that the stakes are too high to justify declaring his belief in god. So you can get me to defect, but only by setting up a situation in which my allies aren't sitting on the other side of the bargaining table.

Comment author: spriteless 20 April 2009 04:30:22AM 0 points [-]

The young Earth creationist is right, because the whole earth was created in a simulation by Omega that took about 5000 years to run.

You can't win with someone that much smarter than you. I don't see how this means anything but 'it's good to have infinite power, computational and otherwise.'

Comment author: Nick_Tarleton 20 April 2009 01:02:07AM 0 points [-]

the atheist will choose between each of them receiving $5000 if the earth is less than 1 million years old or each receiving $10000 if the earth is more than 1 million years old

Isn't this backwards? The dilemma occurs if payoff(unbelieved statement) > payoff(believed statement).

Comment author: orthonormal 20 April 2009 10:57:14PM 0 points [-]

It's most definitely a typo, but we all know what the payoff matrix is supposed to be.

Comment author: Nick_Tarleton 20 April 2009 11:04:50PM *  0 points [-]

I actually wasn't sure until I saw Allan Crossman's comment, though if that hadn't been there I probably would've been able to figure it out with a bit more effort.

Comment author: JGWeissman 21 April 2009 01:06:49AM 0 points [-]

Yes, it was a typo. I have fixed the original comment.