# Sleeping Beauty gets counterfactually mugged

**Related to:** Counterfactual Mugging, Newcomb's Problem and Regret of Rationality

Omega is continuing his eternal mission: To explore strange new philosophical systems... To seek out new paradoxes and new counterfactuals... To boldly go where no decision theory has gone before.

In his usual totally honest, quasi-omniscient, slightly sadistic incarnation, Omega has a new puzzle for you, and it involves the Sleeping Beauty problem as a bonus.

He will offer a similar deal to that in the counterfactual mugging: he will flip a coin, and if it comes up tails, he will come round and ask you to give him £100.

If it comes up heads, instead he will simulate you, and check whether you would give him the £100 if asked (as usual, the use of a randomising device in the decision is interpreted as a refusal). From this counterfactual, if you would give him the cash, he'll send you £260; if you wouldn't, he'll give you nothing.

Two things are different from the original setup, both triggered if the coin toss comes up tails: first of all, if you refuse to hand over any cash, he will give you an extra £50 compensation. Second of all, if you do give him the £100, he will force you to take a sedative and an amnesia drug, so that when you wake up the next day, you will have forgotten about the current day. He will then ask you to give him the £100 again.

To keep everything fair and balanced, he will feed you the sedative and the amnesia drug whatever happens (but will only ask you for the £100 a second time if you agreed to give it to him the first time).

Would you want to precommit to giving Omega the cash, if he explained everything to you? The odds say yes: precommitting to handing over the £100 gives an expected return of 0.5 x £260 + 0.5 x (-£200) = £30, while precommitting to a refusal gives an expected return of 0.5 x £0 + 0.5 x £50 = £25.
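For concreteness, the precommitment arithmetic can be checked with a few lines of Python (a sketch; the function name is mine, the amounts come from the setup above):

```python
# Expected return of each precommitment, evaluated before the coin flip.
# Heads (p = 0.5): Omega simulates you; a payer receives £260, a refuser £0.
# Tails (p = 0.5): a payer hands over £100 on each of two days (-£200 total);
# a refuser gets £50 compensation.

def expected_return(pay: bool) -> float:
    heads = 260 if pay else 0
    tails = -200 if pay else 50
    return 0.5 * heads + 0.5 * tails

print(expected_return(pay=True))   # 30.0 -> precommit to paying
print(expected_return(pay=False))  # 25.0
```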

But now consider what happens at the moment when he actually asks you for the cash.

A standard way to approach these types of problems is to act as if you didn't know whether you were the real you or the simulated you. This avoids a lot of complications and gets you to the heart of the problem. Here, if you decide to give Omega the cash, there are three situations you can be in: the simulation, reality on the first day, or reality on the second day. The Dutch book odds of being in any of these three situations are the same, 1/3. So the expected return is 1/3(£260-£100-£100) = £20, twenty of Her Majesty's finest English pounds.

However, if you decide to refuse the hand-over, then you are in one of two situations: the simulation, or reality on the first day (as you will not get asked on the second day). The Dutch book odds are even, so the expected return is 1/2(£0+£50) = £25, a net profit of £5 over accepting.
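Under this "I might be any of the indistinguishable decision points" view, the two expectations above can be tabulated directly (a sketch of the same arithmetic, using exact fractions):

```python
from fractions import Fraction

# Paying: three equally likely situations
# (simulation, reality day 1, reality day 2).
ev_pay = Fraction(1, 3) * (260 - 100 - 100)

# Refusing: only two situations (simulation, reality day 1),
# since a refuser is never asked on day 2.
ev_refuse = Fraction(1, 2) * (0 + 50)

print(ev_pay)     # 20
print(ev_refuse)  # 25
```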

So even adding ‘simulated you’ as an extra option, a hack that solves most Omega type problems, does not solve this paradox: the option you precommit to has the lower expected returns when you actually have to decide.

Note that if you depart from the Dutch book odds (what did the Dutch do to deserve to be immortalised in that way, incidentally?), then Omega can put you in situations where you lose money with certainty.

So, what do you do?

## Comments (23)

"History of the Term Dutch Book"

Thanks.

This is a case where a modern (or even science fictional) problem can be solved with a piece of technology that was known to the builders of the pyramids.

The technology in question is the promise. If the overall deal is worthwhile, then the solution is for me to agree to it upfront. After that I don't have to do any more utility calculations; I simply follow through on my agreement.

The game theorists don't believe in promises, if there are no consequences for breaking them. That's what all the "Omega subsequently leaves for a distant galaxy" is about.

If you're using game theory as a normative guide to making decisions, then promises become problematic.

Personally, I think keeping promises is excellent, and I think I could and would, even in the absence of consequences. However, everyone would agree that I am only of bounded rationality, and the game theorists have a very good explanation for why I would loudly support keeping promises - pro-social signaling - so my claim might not mean that much.

Recall, however, that the objective is not to be someone who would do well in fictional game theory scenarios, but someone who does well in real life.

So one answer is that real life people don't suddenly emigrate to a distant galaxy after one transaction.

But the deeper answer is that it's not just the negative consequences of breaking one promise, but of being someone who has a policy of breaking promises whenever it superficially appears useful.

This is where the mistake happens. You forgot that the expected number of decisions you will have to make is 3/2, so the expected return is 1/3(£260-£100-£100) x 3/2 = £30, not £20. This agrees with the earlier calculation, as it should, and there's no paradox.

This is true, of course; but the paradox comes at the moment when you are asked by Omega. There, you are facing a single decision, not a fraction of a decision, and you don't get to multiply by 3/2.

The subjective probability is defined for any of the agent's 3 possible states. Yet you count the decisions in a different state space under a different probability measure. You are basically using the fact that the subjective probabilities are equal. The number of decisions in each of the branches corresponds to the total subjective measure of the associated events, so it can be used to translate to the "objective" measure.

This is not the standard (state space+probability measure+utility function) model. When you convert your argument to the standard form, you get the 1/2-position on the Sleeping Beauty.

I'm not sure where you get 3/2 expected decisions. Care to elaborate?

Here's how I worked through it (ignoring expected decisions, because I don't think I understand that yet):

- If you're in the simulation, you get 260.
- If you're in reality day 1 (rd1), you lose 100, and expect to lose 100 on the next day.
- If you're in reality day 2 (rd2), you lose 100.

so 1/3(260-200-100) = -40/3

For rd1, if you give Omega the 100, then you know that when you wake up on rd2, you won't recall giving Omega the 100. So you'll be in exactly the same situation as you are right now, as far as you can tell. So you'll give Omega the 100 again.

What's wrong with the above reasoning? I'm not too experienced with game theoretic paradoxes, so my different line of reasoning probably means I'm wrong.

btw, If I attempt to calculate the expected decisions, I get 4/3

If Omega's coin flip comes up heads, then you make one decision, to pay or not pay, as a simulation. If it comes up tails, then you make two decisions, to pay or not to pay, as a real person. These each have probability 0.5, so you expect to make 0.5(1+2)=1.5 decisions total.

The expected total value is the sum of the outcome for each circumstance times the expected number of times that circumstance is encountered. You can figure out the expected number of times each circumstance is encountered directly from the problem statement (0.5 for simulation, 0.5 for reality day 1, 0.5 for reality day 2). Alternatively, you can compute the expected number of times a circumstance C is encountered as P(individual decision is in C) * E(decisions made), which is (1/3)*(3/2), or 0.5, for simulation, reality day 1, and reality day 2. The mistake that Stuart_Armstrong made is in confusing E(times circumstance C is encountered) for P(individual decision is in C); these are not the same.

(Also, you double-counted the 100 you lose in reality day 2, messing up your expected value computation again.)
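The distinction this comment draws - E(times circumstance C is encountered) versus P(an individual decision is in C) - can be made concrete (a sketch; the dictionary labels are my own):

```python
from fractions import Fraction

half = Fraction(1, 2)

# Expected number of times each circumstance is encountered if you pay:
# heads (p = 0.5) -> one simulated decision; tails (p = 0.5) -> two real ones.
e_counts = {"sim": half, "rd1": half, "rd2": half}
payoffs  = {"sim": 260, "rd1": -100, "rd2": -100}

# Correct expectation: payoffs weighted by expected encounter counts.
ev = sum(payoffs[c] * e_counts[c] for c in payoffs)
print(ev)  # 30

# Per-decision probabilities: counts normalised by E(total decisions) = 3/2.
e_decisions = sum(e_counts.values())
per_decision = {c: n / e_decisions for c, n in e_counts.items()}  # 1/3 each
# Confusing the two weightings scales the answer by 2/3, giving £20.
print(sum(payoffs[c] * per_decision[c] for c in payoffs))  # 20
```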

Apparently I had gestalt switched out of considering the coin. Thanks.

The double counting was intentional. My intuition was that if you're on reality day 1, you expect to lose 100 today and 100 again tomorrow, since you know you will give Omega the cash when he asks you. However, you don't really know that in this thought experiment. He may give you amnesia, but he doesn't get your brain in precisely the same physical state when he asks you the second time. So the problem seems resolved to me. This does suggest another thought experiment, though.

Stuart, I think this paradox is not related to Counterfactual Mugging, but is purely Sleeping Beauty. Consider the following modification to your setup, which produces a decision problem with the same structure:

If the coin comes up heads, Omega does not simulate you, but simply asks you to give him £100. If you agree, he gives you back £360. If you don't, no consequences ensue.

It's a combination of Sleeping Beauty and Counterfactual Mugging, with the decision depending on the resolution of both problems. It doesn't look like the problems interact, but if you are a 1/3-er, you don't give away the money, and if you don't care about the counterfactual, you don't give it away either. You factored out the Sleeping Beauty in your example, and equivalently the Counterfactual Mugging can be factored out by asking the question before the coin toss.

I think it's not quite the Sleeping Beauty problem. That's about the semantics of belief; this is about the semantics of what a "decision" is.

Making a decision to give or not to give means making the decision for both days, and you're aware of that in the scenario. Since the problem requires that Omega can simulate you and predict your answer, you can't be a being that can say yes on one day and no on another day. It would be the same problem if there were no amnesia and he asked you to give him 200 pounds once.

In other words, you don't get to make 2 independent decisions on the two days, so it is incorrect to say you are making decisions on those days. The scenario is incoherent.

*Comment deleted 27 March 2009 04:25:22PM*

<i>There's no way I'm going to go around merrily adding simulated realities motivated by coin tosses with chemically induced repetitive decision making. That's crazy. If I did that I'd end up making silly mistakes such as weighing the decisions based on 'tails' coming up as twice as important as those that come after 'heads'. Why on earth would I expect that to work?</i>

Because it generally does. Adding simulated realities motivated by coin tosses with chemically induced repetitive decision making gives you the right answer nearly always - and any other method gives you the wrong answer (give me your method and I'll show you).

The key to the paradox here is not the simulated realities, or even the Sleeping Beauty part - it's the fact that the number of times you are awoken depends upon your decision! That's what breaks it; if that were not the case, the method wouldn't fall apart. If, say, Omega were to ask you on the second day whatever happens (but not give you the extra £50 on the second day, to keep the same setup), then your expectations are: accept £20, refuse £50/3, which is what you'd expect.
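A quick check of that modified setup (a sketch; each strategy now faces three equally weighted situations, since the second-day question no longer depends on your answer):

```python
from fractions import Fraction

third = Fraction(1, 3)

# Omega asks on both real days regardless of your first answer,
# and no longer pays the extra £50 on day 2.
accept = third * (260 - 100 - 100)  # £20
refuse = third * (0 + 50 + 0)       # £50/3, about £16.67

print(accept, refuse)  # 20 50/3
```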

(Small style note: it'd be better if you quoted that text using a '>', or used real italics, which in Markdown are underscores '_' instead of <i> tags.)

How do I know that Omega simulated only one copy of me?

It doesn't actually matter whether he simulates one or several copies of you - if you divide the expected return by the number of copies that he simulates, the math works out fine.

Standard Sleeping Beauty type problems are also fine. It's only in this peculiar setup that there seems to be a paradox.

Hmm, I'm still a bit confused. You did the math when there is one simulation of you, and found an expected return of 20 from giving away the money and 25 from not.

If there are two simulations, doesn't it go the other way? If your strategy is to give away money, there are now four indistinguishable situations: 1/4(£260+£260-£100-£100) = £80

And if you decide not to give money away, there are three indistinguishable situations: 1/3(£0+£0+£50) = £16.67
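Reproducing that arithmetic with exact fractions (a sketch; whether the per-copy weighting is the right move is exactly what's in dispute here):

```python
from fractions import Fraction

# Two simulated copies on heads: paying means four indistinguishable
# situations (two simulations, reality day 1, reality day 2).
ev_pay = Fraction(1, 4) * (260 + 260 - 100 - 100)  # £80

# Refusing: three situations (two simulations, reality day 1).
ev_refuse = Fraction(1, 3) * (0 + 0 + 50)          # £50/3, about £16.67

print(ev_pay, ev_refuse)  # 80 50/3
```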

This may be a simplifying way of formulating the problem, but it's not an accurate description of what happens: there is no "simulated you"; Omega only computes an answer to one question, your decision, about the counterfactual you. Omega's computation refers to the counterfactual truth about you. In the problem statement, the topic of discussion is the *real* you.

If, instead of paying £50 to refusers, Omega doesn't do that but takes away an additional £50 from payers (without giving them any choice), the problem seems to go away (at least, as far as I can tell - I need to check the maths later).

Yet we would expect this case to be identical to your case wouldn't we?

If he did that, it would no longer be rational to pay (expected return 0.5 x £260 + 0.5 x (-£150 - £150) = -£20). The fact that this case is NOT identical to the first one is due to the whole Sleeping Beauty set-up.

No: I meant, if you pay, you pay a total of £250.

i.e. a slightly clearer statement of my imagined setup is this:

Omega flips a coin. On tails, he asks you if you'll pay £125 now, knowing that if this is day 1 he'll wipe your memory and ask you again tomorrow.

On heads, he simulates you and sends you £260 if you pay.

There is never any money paid to non-payers.

(Basically, the only difference between this version and yours is that both paying and not paying have a return that is £25 lower than in your version. That surely shouldn't make a difference, but it makes the problem go away.)

Isn't that correct? I admit I suck at maths.