Related to: Counterfactual Mugging, Newcomb's Problem and Regret of Rationality

Omega is continuing his eternal mission: To explore strange new philosophical systems... To seek out new paradoxes and new counterfactuals... To boldly go where no decision theory has gone before.

In his usual totally honest, quasi-omniscient, slightly sadistic incarnation, Omega has a new puzzle for you, and it involves the Sleeping Beauty problem as a bonus.

He will offer a similar deal to that in the counterfactual mugging: he will flip a coin, and if it comes up tails, he will come round and ask you to give him £100.

If it comes up heads, instead he will simulate you, and check whether you would give him the £100 if asked (as usual, the use of a randomising device in the decision is interpreted as a refusal). From this counterfactual, if you would give him the cash, he’ll send you £260; if you wouldn’t, he’ll give you nothing.

Two things are different from the original setup, both triggered if the coin toss comes up tails: first of all, if you refuse to hand over any cash, he will give you an extra £50 compensation. Second of all, if you do give him the £100, he will force you to take a sedative and an amnesia drug, so that when you wake up the next day, you will have forgotten about the current day. He will then ask you to give him the £100 again.

To keep everything fair and balanced, he will feed you the sedative and the amnesia drug whatever happens (but will only ask you for the £100 a second time if you agreed to give it to him the first time).

Would you want to precommit to giving Omega the cash, if he explained everything to you? The odds say yes: precommitting to handing over the £100 gives you an expected return of 0.5 x £260 + 0.5 x (-£200) = £30, while precommitting to a refusal gives you an expected return of 0.5 x £0 + 0.5 x £50 = £25.
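
For concreteness, here is the same precommitment arithmetic as a minimal Python sketch (mine, not part of the original post; amounts in pounds):

```python
# Minimal sketch of the precommitment calculation above; the coin is fair.
P_HEADS = P_TAILS = 0.5

# Precommit to paying: heads -> the simulation pays and the real you gets 260;
# tails -> the real you pays 100 on each of the two amnesia-separated days.
ev_precommit_pay = P_HEADS * 260 + P_TAILS * (-100 - 100)    # 30.0

# Precommit to refusing: heads -> nothing; tails -> 50 compensation.
ev_precommit_refuse = P_HEADS * 0 + P_TAILS * 50             # 25.0

print(ev_precommit_pay, ev_precommit_refuse)
```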

But now consider what happens at the moment when he actually asks you for the cash.

A standard way to approach these types of problems is to act as if you didn’t know whether you were the real you or the simulated you. This avoids a lot of complications and gets you to the heart of the problem. Here, if you decide to give Omega the cash, there are three situations you can be in: the simulation, reality on the first day, or reality on the second day. The Dutch book odds of being in any of these three situations are the same, 1/3. So the expected return is 1/3(£260-£100-£100) = £20, twenty of Her Majesty’s finest English pounds.

However, if you decide to refuse the hand-over, then you are in one of two situations: the simulation, or reality on the first day (as you will not get asked on the second day). The Dutch book odds are even, so the expected return is 1/2(£0+£50) = £25, a net profit of £5 over accepting.
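
Again as a minimal Python sketch (mine, not part of the original post), the per-decision version of the calculation:

```python
# Per-decision expected return, using the stated Dutch book odds over
# indistinguishable situations.

# Paying: simulation, reality day 1, reality day 2 -- each at odds 1/3.
ev_decide_pay = (260 - 100 - 100) / 3      # 20.0

# Refusing: simulation, reality day 1 only -- each at odds 1/2.
ev_decide_refuse = (0 + 50) / 2            # 25.0

print(ev_decide_pay, ev_decide_refuse)
```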

So even adding ‘simulated you’ as an extra option, a hack that solves most Omega-type problems, does not solve this paradox: the option you precommit to has the lower expected return when you actually have to decide.

Note that if you depart from the Dutch book odds (what did the Dutch do to deserve to be immortalised in that way, incidentally?), then Omega can put you in situations where you lose money with certainty.

So, what do you do?

 

Comments

The Dutch book odds of being in any of these three situations are the same, 1/3. So the expected return is 1/3(£260-£100-£100) = £20

This is where the mistake happens. You forgot that the expected number of decisions you will have to make is 3/2, so the expected return is 1/3(£260-£100-£100) * 3/2 = £30, not £20. This agrees with the earlier calculation, as it should, and there's no paradox.
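
In a quick Python sketch (not part of the original comment), the proposed correction looks like this:

```python
# Multiply the per-decision return by the expected number of decisions.
expected_decisions = 0.5 * 1 + 0.5 * 2          # heads: 1 decision, tails: 2
per_decision_return = (260 - 100 - 100) / 3     # 20.0
print(per_decision_return * expected_decisions) # 30.0, matching the precommitment figure
```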

This is true, of course; but the paradox comes at the moment when you are asked by Omega. There, you are facing a single decision, not a fraction of a decision, and you don't get to multiply by 3/2.

[anonymous]

This is true.

The subjective probability is defined for any of the agent's 3 possible states. Yet you count the decisions in a different state space under a different probability measure. You are basically using the fact that the subjective probabilities are equal. The number of decisions in each of the branches corresponds to the total subjective measure of the associated events, so it can be used to translate to the "objective" measure.

This is not the standard (state space+probability measure+utility function) model. When you convert your argument to the standard form, you get the 1/2-position on the Sleeping Beauty.

I'm not sure where you get 3/2 expected decisions. Care to elaborate?

Here's how I worked through it (ignoring expected decisions because I don't think I understand that yet):

If you're in the simulation, you get 260. If you're in reality day 1 (rd1), you lose 100 and expect to lose 100 again on the next day. If you're in reality day 2 (rd2), you lose 100.

so 1/3(260-200-100) = -40/3

For rd1, if you give Omega the 100, then you know that when you wake up on rd2, you won't recall giving Omega the 100. So you'll be in exactly the same situation as you are right now, as far as you can tell. So you'll give Omega the 100 again.

What's wrong with the above reasoning? I'm not too experienced with game theoretic paradoxes, so my different line of reasoning probably means I'm wrong.

btw, if I attempt to calculate the expected decisions, I get 4/3

If Omega's coin flip comes up heads, then you make one decision, to pay or not pay, as a simulation. If it comes up tails, then you make two decisions, to pay or not to pay, as a real person. These each have probability 0.5, so you expect to make 0.5(1+2)=1.5 decisions total.

The expected total value is the sum of the outcome for each circumstance times the expected number of times that circumstance is encountered. You can figure out the expected number of times each circumstance is encountered directly from the problem statement (0.5 for simulation, 0.5 for reality day 1, 0.5 for reality day 2). Alternatively, you can compute the expected number of times a circumstance C is encountered as P(individual decision is in C) x E(decisions made), which is (1/3)(3/2), or 0.5, for simulation, reality day 1, and reality day 2. The mistake that Stuart_Armstrong made is in confusing E(times circumstance C is encountered) for P(individual decision is in C); these are not the same.
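
A minimal Python sketch (not part of the original comment) of that identity and of the resulting expected total value:

```python
# E(times C is encountered) = P(a given decision is in C) * E(decisions made),
# computed both directly and via the identity, for the "pay" strategy.
p_heads = p_tails = 0.5
e_decisions = p_heads * 1 + p_tails * 2                      # 1.5

outcomes = {"simulation": 260, "reality day 1": -100, "reality day 2": -100}

# Directly from the problem statement: each circumstance is met with prob. 0.5.
direct = {"simulation": p_heads, "reality day 1": p_tails, "reality day 2": p_tails}

# Via the identity: P(decision is in C) = 1/3 for each circumstance.
via_identity = {c: (1 / 3) * e_decisions for c in outcomes}  # each 0.5

expected_total = sum(outcomes[c] * direct[c] for c in outcomes)
print(direct, via_identity, expected_total)                  # ..., 30.0
```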

(Also, you double-counted the 100 you lose in reality day 2, messing up your expected value computation again.)

Apparently I had gestalt switched out of considering the coin. Thanks.

(Also, you double-counted the 100 you lose in reality day 2, messing up your expected value computation again.)

The double counting was intentional. My intuition was that if you're on reality day 1, you expect to lose 100 today and 100 again tomorrow, since you know you will give Omega the cash when he asks you. However, you don't really know that in this thought experiment. He may give you amnesia, but he doesn't get your brain in precisely the same physical state when he asks you the second time. So the problem seems resolved to me. This does suggest another thought experiment though.

This is a case where a modern (or even science fictional) problem can be solved with a piece of technology that was known to the builders of the pyramids.

The technology in question is the promise. If the overall deal is worthwhile, then the solution is for me to agree to it upfront. After that I don't have to do any more utility calculations; I simply follow through on my agreement.

The game theorists don't believe in promises, if there are no consequences for breaking them. That's what all the "Omega subsequently leaves for a distant galaxy" is about.

If you're using game theory as a normative guide to making decisions, then promises become problematic.

Personally, I think keeping promises is excellent, and I think I could and would, even in absence of consequences. However, everyone would agree that I am only of bounded rationality, and the game theorists have a very good explanation for why I would loudly support keeping promises - pro-social signaling - so my claim might not mean that much.

Recall, however, that the objective is not to be someone who would do well in fictional game theory scenarios, but someone who does well in real life.

So one answer is that real life people don't suddenly emigrate to a distant galaxy after one transaction.

But the deeper answer is that it's not just the negative consequences of breaking one promise, but of being someone who has a policy of breaking promises whenever it superficially appears useful.

[anonymous]

We are trying to figure out a formal decision theory of how you

How do I know that Omega simulated only one copy of me?

It doesn't actually matter whether he simulates one or several copies of you - if you divide the expected return by the number of copies that he simulates, the math works out fine.

Standard Sleeping Beauty type problems are also fine. It's only in this peculiar setup that there seems to be a paradox.

Hmm, I'm still a bit confused. You did the math when there is one simulation of you and found that you expect to make 20 by giving away money and 25 by not.

If there are two simulations, doesn't it go the other way? If your strategy is to give away money, there are now four indistinguishable situations. 1/4(£260+£260-£100-£100) = £80

And if you decide not to give money away there are three indistinguishable situations. 1/3(£0+£0+£50) = £16.67
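
A quick Python check of that arithmetic (a sketch, not part of the original comment):

```python
# Two-simulation variant as computed above, still dividing by the number of
# indistinguishable situations.
ev_pay_two_sims = (260 + 260 - 100 - 100) / 4     # 80.0
ev_refuse_two_sims = (0 + 0 + 50) / 3             # ~16.67
print(ev_pay_two_sims, round(ev_refuse_two_sims, 2))
```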

[anonymous]

It doesn't actually matter whether he simulates one or several copies of you - if you divide the expected return by the number of copies that he simulates, the math works out fine.

It doesn't actually matter whether he bugs you for cash then drugs you once or many times - if you divide the expected return by the number of times he bugs 'n drugs you, the math works out fine.

Stuart, I think this paradox is not related to Counterfactual Mugging, but is purely Sleeping Beauty. Consider the following modification to your setup, which produces a decision problem with the same structure:

If the coin comes up heads, Omega does not simulate you, but simply asks you to give him £100. If you agree, he gives you back £360. If you don't, no consequences ensue.

It's a combination of Sleeping Beauty and Counterfactual Mugging, with the decision depending on the resolution of both problems. It doesn't look like the problems interact, but if you are a 1/3-er, you don't give away the money, and if you don't care about the counterfactual, you don't give it away either. You factored out the Sleeping Beauty in your example, and equivalently the Counterfactual Mugging can be factored out by asking the question before the coin toss.

I think it's not quite the Sleeping Beauty problem. That's about the semantics of belief; this is about the semantics of what a "decision" is.

Making a decision to give or not to give means making the decision for both days, and you're aware of that in the scenario. Since the problem requires that Omega can simulate you and predict your answer, you can't be a being that can say yes on one day and no on another day. It would be the same problem if there were no amnesia and he asked you to give him 200 pounds once.

In other words, you don't get to make 2 independent decisions on the two days, so it is incorrect to say you are making decisions on those days. The scenario is incoherent.

[anonymous]

I'd tell him to give me the cash and bugger off. If he wants me to put any effort into his sadistic schemes he can omnisciently win some lottery and get some real cash to offer. I value the extra day that will be gone from my life without me remembering it at well over 25 pounds and to be honest I'm a bit wary of his nasty mind altering drugs.

Considering pounds as a reliable measure of utilons:

A standard way to approach these types of problems is to act as if you didn’t know whether you were the real you or the simulated you. This avoids a lot of complications and gets you to the heart of the problem. Here, if you decide to give Omega the cash, there are three situations you can be in: the simulation, reality on the first day, or reality on the second day. The Dutch book odds of being in any of these three situations are the same, 1/3. So the expected return is 1/3(£260-£100-£100) = £20, twenty of Her Majesty’s finest English pounds.

There's no way I'm going to go around merrily adding simulated realities motivated by coin tosses with chemically induced repetitive decision making. That's crazy. If I did that I'd end up making silly mistakes such as weighing the decisions based on 'tails' coming up as twice as important as those that come after 'heads'. Why on earth would I expect that to work?

Add a Newcombish problem to a sleeping beauty problem if you want, but you cannot just add all decisions each of them implies together, divide by three and expect to come up with sane decisions.

I'm either in the 'heads sim' or I'm in 'tails real'.

  • If Omega got heads and my decision would have been cooperate then I gain £260.
  • If Omega got heads and my decision would have been defect then I gain £0.
  • If Omega got tails and my decision would have been cooperate then I lose £200.
  • If Omega got tails and my decision would have been defect then I gain £50.

Given that heads and tails are equally important, when I make my decision I'll end up with a nice simple 0.5 x £260 + 0.5 x (-£100 - £100) = £30 vs 0.5 x £0 + 0.5 x £50 = £25. I've got no particular inclination to divide by 3.

If Omega got carried away with his amnesiac drug fetish and decided to instead ask for £20 ten days running then my math would be: 0.5 x £0 + 0.5 x £50 = £25 vs 0.5 x £260 + 0.5 x (-£20 - £20 - £20 - £20 - £20 - £20 - £20 - £20 - £20 - £20) = £30. I'm definitely not going to decide to divide by eleven and weigh the 10 inevitable but trivial decisions of the tails-getting cooperator as collectively 10 times more significant than the single choice of the more fortunate cooperative sim.
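
The same ten-day arithmetic as a quick Python sketch (not part of the original comment; n_days and per_day are my names):

```python
# Ten-day variant, weighting by the coin toss rather than by the number of awakenings.
n_days, per_day = 10, 20
ev_refuse = 0.5 * 0 + 0.5 * 50                    # 25.0
ev_pay = 0.5 * 260 + 0.5 * (-per_day * n_days)    # 30.0
print(ev_refuse, ev_pay)
```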

If my decision were to change based on how many times the penalty for unfortunate cooperation is arbitrarily divided then it would suggest my decision making strategy is bogus. No 1/3 or 1/11 for me!

There's no way I'm going to go around merrily adding simulated realities motivated by coin tosses with chemically induced repetitive decision making. That's crazy. If I did that I'd end up making silly mistakes such as weighing the decisions based on 'tails' coming up as twice as important as those that come after 'heads'. Why on earth would I expect that to work?

Because it generally does. Adding simulated realities motivated by coin tosses with chemically induced repetitive decision making gives you the right answer nearly always - and any other method gives you the wrong answer (give me your method and I'll show you).

The key to the paradox here is not the simulated realities, or even the Sleeping Beauty part - it's the fact that the number of times you are awoken depends upon your decision! That's what breaks it; if that were not the case, the method wouldn't fall apart. If, say, Omega were to ask you on the second day whatever happens (but not give you the extra £50 on the second day, to keep the same setup) then your expectations are accept: £20, refuse: £50/3, which is what you'd expect.

(Small style note: it'd be better if you quoted that text using a '>', or used real italics, which in Markdown are underscores '_' instead of tags.)

[anonymous]

Because it generally does. Adding simulated realities motivated by coin tosses with chemically induced repetitive decision making gives you the right answer nearly always

You have identified a shortcut that seems to rely on a certain assumption. It sounds like you have identified a way to violate that assumption and will hopefully not make that mistake again. There's no paradox. Just lazy math.

and any other method gives you the wrong answer (give me your method and I'll show you).

Method? I didn't particularly have a cached algorithm to fall back on. So my method was "Read problem. Calculate outcomes for cooperate and defect in each situation. Multiply by appropriate weights. Try not to do anything stupid and definitely don't consider tails worth more than heads based on a gimmick."

If you have an example where most calculations people make would give the wrong answer then I'd be happy to tackle it.

[anonymous]

The Dutch book odds of being in any of these three situations are the same, 1/3.

I'm probably not sufficiently familiar with the relevant philosophical literature to understand what you mean by that. The probabilities here are on the states of different variables, so they don't even need to sum up to 100%. For example, what are the probabilities for my house standing on its usual place this evening and tomorrow morning? Near-100% in both cases.

A standard way to approach these types of problems is to act as if you didn’t know whether you were the real you or the simulated you.

This may be a simplifying way of formulating the problem, but it's not an accurate description of what happens: there is no "simulated you", Omega only computes an answer to one question, your decision, about the counterfactual you. Omega's computation refers to the counterfactual truth about you. In the problem statement, the topic of discussion is the real you.

Running through this to check that my wetware handles it consistently.

Paying -100 if asked:

When the coin is flipped, one's probability branch splits into a 0.5 of oneself in the 'simulation' branch and 0.5 in the 'real' branch. For the 0.5 in the real branch, upon waking there is a subjective 50% probability of being on either of the two possible days, both of which will be woken on. So, 0.5 of the time waking in simulation, 0.25 waking in real 1, 0.25 waking in real 2.

0.5 x (260) + 0.25 x (-100) + 0.25 x (-100) = 80. However, this is the expected cash-balance change over the course of a single choice, and doesn't take into account that Omega is waking you multiple times for the worse choice.

An equation for relating choice made to expected gain/loss at the end of the experiment doesn't ask 'What is my expected loss according to which day in reality I might be waking up in?', but rather only 'What is my expected loss according to which branch of the coin toss I'm in?' 0.5 x (260) + 0.5 x (-100-100) = 30.

Another way of putting it: 0.5 x (260) + 0.25 x (-100(-100)) + 0.25 x (-100(-100)) = 30 (Given that making one choice in a 0.25 branch guarantees the same choice made, separated by a memory-partition; either you've already made the choice and don't remember it, or you're going to make the choice and won't remember this one, for a given choice that the expected gain/loss is being calculated for. The '-100' is the immediate choice that you will remember (or won't remember), the '(-100)' is the partition-separated choice that you don't remember (or will remember).)

Trying to see what this looks like for an indefinite number of reality wakings: 0.5 x (260) + n x (1/n) x (1/2) x (-100 x n) = 130 - (50 x n), which is of the form that might be expected.
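
As a quick Python sketch (not part of the original comment) of that general formula:

```python
# General formula for n reality wakings at -100 each on the "pay" strategy:
# 0.5 * 260 + n * (1/n) * (1/2) * (-100 * n) = 130 - 50 * n.
def ev_pay(n: int) -> float:
    return 0.5 * 260 + n * (1 / n) * 0.5 * (-100 * n)

for n in (1, 2, 3):
    print(n, ev_pay(n))   # 80.0, 30.0, -20.0
```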

(Edit: As with reddit, frustrating that line breaks behave differently in the commenting field and the posted comment.)

If, instead of paying £50 to refusers, Omega doesn't do that but takes away an additional £50 from payers (without giving them any choice), the problem seems to go away (at least, as far as I can tell - I need to check the maths later).

Yet we would expect this case to be identical to your case, wouldn't we?

If he did that, it would no longer be rational to agree to pay (expected return 0.5x(£260) + 0.5x(-£150 -£150) = -£20). The fact that this case is NOT identical to the first one is due to the whole Sleeping Beauty set-up.

No: I meant, if you pay, you pay a total of £250.

i.e. a slightly clearer statement of my imagined setup is this:

Omega flips a coin. On tails, he asks you if you'll pay £125 now, knowing that if this is day 1 he'll wipe your memory and ask you again tomorrow.

On heads, he simulates you and sends you £260 if you pay.

There is never any money paid to non-payers.

(Basically, the only difference between this version and yours is that both paying and not paying have a return that is £25 lower than in your version. That surely shouldn't make a difference, but it makes the problem go away.)

  • Not paying always gives £0.
  • Precommitting gives a return of 0.5 x £260 + 0.5 x (-£250) = £5
  • By your logic, at the time of the decision, return is 1/3(£260-£125-£125) = £3.33

Isn't that correct? I admit I suck at maths.
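
A quick Python check of those three bullets (a sketch, not part of the original thread):

```python
# Modified setup: pay 125 per asking, and nothing is ever paid to refusers.
ev_not_pay = 0.0
ev_precommit_pay = 0.5 * 260 + 0.5 * (-125 - 125)   # 5.0
ev_per_decision_pay = (260 - 125 - 125) / 3         # ~3.33
print(ev_not_pay, ev_precommit_pay, round(ev_per_decision_pay, 2))
```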