Excuse the horrible terribad pun...

 

An evil Omega has locked you in a box. Inside, there is a bomb and a button. Omega informs you that in an hour the bomb will explode, unless you do the opposite of what Omega predicted you will do. Namely, press the button if it predicted you won't or vice versa. In that case, the bomb won't explode and the box will open, letting you free.

Your actions?

 

PS. You have no chance to survive make your time.

PPS. Quick! Omega predicted that in exactly 5 seconds from now, you will blink. Your actions?

PPPS. Omega vs. Quantum Weather Butterfly. The battle of the Eon!


This isn't a paradox; the bomb will go off no matter what, assuming Omega is a perfect predictor.

Amusingly, this wouldn't seem like a paradox if something good was guaranteed to happen if Omega guessed right. Like if the problem was that you're locked in a box, and you can only avoid getting a million dollars if you do the opposite of what Omega predicts. Answer: "cool, I get a million dollars!" and you stop thinking. In the problem as stated, you're casting about for an answer that doesn't seem possible, and that feels like thinking about paradoxes, so you think the problem is a paradox. It isn't. You're just trapped in a box with a bomb.

[anonymous]

Agreed. This isn't really interesting. It's basically "If you have access to an RNG you have a 50% chance of survival, otherwise death."
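For what it's worth, here's a toy simulation of that claim (my own sketch, not part of the original comment): you survive exactly when your button press differs from Omega's prediction, and a fair random bit differs from any fixed prediction about half the time.

```python
import random

def survives(prediction: int) -> bool:
    """You survive iff your button press differs from Omega's prediction."""
    press = random.randint(0, 1)  # fair RNG: 1 = press, 0 = don't press
    return press != prediction

trials = 100_000
for prediction in (0, 1):
    rate = sum(survives(prediction) for _ in range(trials)) / trials
    print(f"Omega predicts {prediction}: survival rate ~ {rate:.3f}")  # ~0.5 either way
```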

Your actions?

Take off every "zig".

You know what you doing.

For great paperclips!

You forgot about "Move 'ZIG'."

This is just the "Death in Damascus" case. The case is more interesting if there is some asymmetry, e.g. if you press the button you get pleasant background music for the hour before you die.

A TDTer or evidential decision theorist would be indifferent between the two options in the symmetric version, and pick the better choice in the asymmetric version.

For CDT, neither option is "ratifiable," i.e. CDT recommends doing whatever you think you won't do and immediately regretting any action you do take (if you can act suddenly, before you can update against following through with your plan).

This is just the "Death in Damascus" case.

Some unintended humour from the linked essay:

Answer 1: If you take box A, you’ll probably get $100. If you take box B, you’ll probably get $700. You prefer $700 to $100, so you should take box A.

Verdict: WRONG!.

That's true. If B gives the $700 and you want the $700, you clearly pick B, not A!

This is exactly the reasoning that leads to taking one box in Newcomb’s problem, and one boxing is wrong. (If you don’t agree, then you’re not going to be in the target audience for this post I’m afraid.)

Oh! For this to make (limited) sense, it must mean that Answer 1's "so you should take box A" is a typo and he intended to say 'B' as the answer.

It seems that two wrongs can make a right (when both errors happen to entail a binary inversion of the same bit).

The only alternative is to deny that B is even a little irrational. But that seems quite odd, since choosing B involves doing something that you know, when you do it, is less rewarding than something else you could just as easily have done.

So I conclude Answer 2 is correct. Either choice is less than fully rational. There isn’t anything that we can, simply and without qualification, say that you should do. This is a problem for those who think decision theory should aim for completeness, but cases like this suggest that this was an implausible aim.

Poor guy. He did all the work of identifying the problem, setting up scenarios to illustrate and analysing the answers. But he just couldn't manage to bite the bullet that was staring him in the face. That his decision theory of choice was just wrong.

TimS

In context, I think the author is talking about anti-prediction. If you want to be where Death isn't, and Death knows you use CDT, should you choose the opposite of what CDT normally recommends?

I don't think I endorse his reasoning, but I think you misread him.

I don't think I endorse his reasoning, but I think you misread him.

It is not inconceivable that I misread him. Mind reading is a task that is particularly difficult when it comes to working out precisely which mistake someone is making when at least part of their reasoning is visibly broken. My subjectively experienced amusement applies to what seemed to be the least insane of the interpretations. Your explanation requires the explanation to be wrong (i.e. it wouldn't be analogous to one boxing at all) rather than merely the label.

Death knows you use CDT, should you choose the opposite of what CDT normally recommends?

That wouldn't make much sense (for the reasoning in the paper).

I would just flip a coin, I guess. I think it would be hard to get better than fifty-fifty odds by thinking about it for a really long time.

I'm pretty sure predicting the trajectory of a flipped coin is trivial compared to predicting your future thoughts and actions.

Yeah, what you really want is a quantum random number generator. Your only hope in this scenario is to do something as randomly as possible: tap into a true source of randomness that Omega cannot predict.

I'm pretty sure predicting the trajectory of a flipped coin is trivial compared to predicting your future thoughts and actions.

Why? While there are serious biases in how most people flip a coin, it doesn't take much to remove those. In that case, a close-to-fair coin is an extremely hard-to-predict system.

How so? If you know the initial conditions (and Omega supposedly does), it's a straightforward motion dynamics problem.

Sure, but it isn't at all obvious that humans are substantially different either, in terms of predictability. For purposes of this conversation, that's the standard.

I think it depends on your threshold of "substantial." A human brain responds in a complex and (probably) noisy fashion to inputs from the rest of the world. That I might choose to flip coins and choose actions based on the outcome is part of the operation of my future thoughts and actions.

In my case, I would choose random numbers based on complex and noisy physical operations. For example, the 4th decimal place of a voltmeter reading the voltage across a hot resistor. To make it fun, I would take the 4th place at exactly 15 seconds after the beginning of the most recent minute, suppose it is N; then take the 4th place N readings later, call it M; then take the 4th place M seconds later, call it Z. This would be my random number. I would use the 4th place only if I saw it was one or two places to the right of where I saw variation on the voltmeter.

ALL of this would have to be predicted: the operation of my mind in deciding to do this, and the physical details of the voltmeter-hot-resistor system, in enough detail to predict the resistor's Brownian motion AND its interaction with the voltmeter. You'd probably also have to predict how I would pick the resistor and the voltmeter, and as I considered what I would do, I would pick the 17th voltmeter on a Google search page. I would reach into a bin of resistors and pick one from the middle. I would partially smash the resistor with a hammer to make further difficulty for anyone predicting what would happen.

So all of that has to be predicted to come up with Z, the output of my random number generator based on a resistor and voltmeter. Is that "substantially" harder than predicting a single coin toss, or is it somehow "substantially" similar?
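If it helps, here is a rough Python sketch of the chained voltmeter procedure as I read it (my paraphrase; `read_voltage` is a hypothetical stand-in for the hot-resistor-plus-voltmeter setup, not anything from the original comment):

```python
import time

def fourth_decimal(voltage: float) -> int:
    """Approximate 4th decimal digit of a voltage reading (e.g. 1.23456 V -> 5)."""
    return int(voltage * 10_000) % 10

def chained_digit(read_voltage) -> int:
    """Digit N is read 15 s into the current minute, digit M is read N readings
    later, and the final digit Z is read M seconds after that."""
    while int(time.time()) % 60 != 15:   # wait for the 15-second mark
        time.sleep(0.05)
    reading = read_voltage()
    n = fourth_decimal(reading)            # N
    for _ in range(n):                     # skip ahead N readings
        reading = read_voltage()
    m = fourth_decimal(reading)            # M
    time.sleep(m)                          # wait M seconds
    return fourth_decimal(read_voltage())  # Z
```

Even in this simplified form, predicting Z means modelling the thermal noise of a half-smashed resistor, which is the comment's point.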

[anonymous]

Not when the initial conditions for the coin flip are a function of your future thoughts and actions.

[This comment is no longer endorsed by its author]

The original Newcomb's problem is interesting because it leads to UDT, which allows coordination between copies. Your problem seems to require anti-coordination instead. (Note that if your copies have different information, UDT gives you anti-coordination for free, because it optimizes your whole input-output map.) I agree that anti-coordination between perfect copies would be nice if it were possible. Is it fruitful to think about anti-coordination, and if yes, what would the resulting theory look like?

Also, here's a couple ways you can remove the need for Omega:

1) You wake up in a room with two buttons. You press one of them and go back to sleep. While you're asleep, the experimenter gives you an amnesia drug. You wake up again, not knowing if it's the first or second time. You press one of the buttons again, then the experiment ends and you go home. If you pressed different buttons on the first and second time, you win $100, otherwise nothing.

2) You are randomly chosen to take part in an experiment. You are asked to choose which of two buttons to press. Somewhere, another person unknown to you is given the same task. If you pressed different buttons, you both get $100, otherwise nothing.

Our only chance at this point is to try to outsmart Omega. I know this sounds impossible, but we can at least make some partial progress. If Omega is only correct, say, 90% of the time, it is probably the case that his correctness is a function of the complexity of your mental algorithm. The more complex your mental algorithm, the harder you will be to accurately predict. Once you reach a certain threshold of complexity, Omega's accuracy will very quickly approach 50%.

Further, you have an hour. You can try a different method of generating a pseudorandom bit every 5 minutes, and if you combine the results (say, by XORing all the bits together), all it takes is for one of them to be unpredictable to bring Omega's accuracy down to 50%.

This doesn't require actually outsmarting Omega; it just requires playing against Omega in a game so complex that his powers are less useful, and he has LESS of an advantage over you. You will not be able to get past 50% unless you are actually smarter than Omega.
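To make the "one unpredictable bit is enough" step concrete, here is a toy check (my own sketch, assuming you XOR the bits from all the methods together): even if Omega predicts every method except one perfectly, the combined bit looks 50/50 to it.

```python
import random
from functools import reduce
from operator import xor

trials = 100_000
correct = 0
for _ in range(trials):
    known = [random.randint(0, 1) for _ in range(11)]  # 11 methods Omega predicts perfectly
    unknown = random.randint(0, 1)                     # 1 method Omega cannot predict
    omega_guess = reduce(xor, known)                   # Omega's best guess from the known bits
    actual = reduce(xor, known + [unknown])            # the bit you actually act on
    correct += (omega_guess == actual)
print(correct / trials)  # ~0.5: Omega's accuracy drops to chance
```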

[anonymous]

If we're going to share silly Newcomb's-Paradox-like situations, here's the silliest one I've thought of, which rapidly devolves into a crazy roleplaying scenario, as opposed to a decision theory problem (unless you're the kind of person who treats crazy roleplaying scenarios as decision theory problems). Note that this is proffered primarily as humor and not to make a serious point:

Omega appears and makes the two boxes appear, but they're much larger than usual. Inside the transparent one is an Android that appears to be online. Omega gives his standard spiel about the problem he predicted, but in this case he says that the other, opaque box contains 1,000 Androids which are not currently online, and which may have been smashed to useless pieces depending on whether or not he predicted you would attempt to take just the opaque box or both the opaque box and the transparent box. Any attempt to use fundamentally unpredictable quantum randomness, such as that generated by a small device over there, will result in Omega smashing both boxes. (Which you may want to do, if you feel the Androids are a UFAI.)

If you need a rough reference for the Androids, consider the Kara Demo from Quantic Dream.

http://www.wired.com/gamelife/2012/03/kara-quantic-dream/

As the Android playing inside the transparent box, your job could just be to escape from the box, or it might be to save your 1,000 fellow Androids, or it could be to convince the other person that you aren't planning on taking over the world, and therefore not to use quantum randomness on purpose to smash you all to bits, even though you actually are planning to enslave everyone over time. Your call. Much like Kara (from the demo) you know you certainly FEEL alive, but you have no initial information about the status of the 1,000 androids in the opaque box (whether they would also feel alive, or whether they're just obedient drones, or whether some of them are and some of them aren't).

Oh, and other people are playing! By the time you finished absorbing all of this, some of them may have already made their decisions.

What do you, a decider, do in this situation? What do you, an android, do in this situation? If you are playing as Omega, you're a bit like the DM. You get to arbitrate any rules disputes or arguments about what happens if (for instance) someone successfully releases their singleton android from the transparent box and then tries to work together with her to overpower another player before he activates his quantum randomness device to smash all of the Androids because he feels it's too risky.

I think at some point I'm going to try running this as a roleplaying scenario (as Omega) and see what happens, but I would need to get more people over to my house for it.

Shmi

First, note that the setup is incompatible with Omega being a perfect predictor (you cannot possibly do the opposite of what the perfect predictor knows you will). Thus calling your sadistic jailor (SJ) Omega is misleading, so I won't.

Second, given that SJ is not Omega, your problem is underspecified, and I will try to steelman it a bit, though, honestly, it should have been your job.

What other information, not given in the setup, is relevant to making a decision? For example, do you know of any prior events of this kind conducted by SJ? What were the statistical odds of survival? Is there something special about the reference class of survivors and/or the reference class of victims? What happened to the cheaters who tried to escape the box? How trustworthy is SJ?

Suppose, for example, that SJ is very accurate. First, how would you know that? Maybe there is a TV camera in the box and other people get to watch you, after SJ made its prediction known to the outside world but not to you. In this situation, as others suggested, you ought to get something like 50/50 odds by simply flipping a coin.

Now, if you consider the subset of all prior subjects who flipped a coin, or made some other ostensibly unpredictable choice, what is their survival rate? If it's not close to 50%, then SJ can predict the outcome of a random event better than chance (if it were worse than chance, SJ would simply learn after a few tries and flip its prediction, assuming it wants to guess right to begin with).

So the only interesting case that we have to deal with is when the subjects who do not choose at random have a higher survival rate than those who do. How can this happen? First, if the randoms' survival rate is below 50%, and assuming the choice is truly random, SJ likely knows more about the world than our current best physical models (which cannot predict the outcome of a quantum coin flip), in which case it is simply screwing around with you. If the randoms' survival rate is about 50% but the non-randoms fare better, even though they are more predictable, it means that SJ favors non-randoms instead of doing its best predicting. So, again, it is screwing around with you, punishing the process, not the decision.

So this analysis means that, unless randoms get 50% and non-randoms are worse, you are dealing with an adversarial opponent, and your best chance of survival is to study and mimic whatever the best non-randoms do.

First, note that the setup is incompatible with Omega being a perfect predictor (you cannot possibly do the opposite of what the perfect predictor knows you will).

This is false. The setup is not incompatible with Omega being a perfect predictor. The fact that you cannot do the opposite of what the perfect predictor knows does not make the scenario with Omega incoherent because the scenario does not require that this has happened (or even could happen). Examining the scenario:

An evil Omega has locked you in a box. Inside, there is a bomb and a button. Omega informs you that in an hour the bomb will explode, unless you do the opposite of what Omega predicted you will do. Namely, press the button if it predicted you won't or vice versa. In that case, the bomb won't explode and the box will open, letting you free.

We have an assertion "X unless Y". Due to the information we have available about Y (the nature of Omega, etc.) we can reason that Y is false. We then have "X unless false", which represents the same information as the assertion "X". Similar reasoning applies to anything of the form "IF false THEN Z". Z merely becomes irrelevant.

The scenario with Omega is not incoherent. It is merely trivial, inane and pointless. In fact, the first postscript ("PS. You have no chance to survive make your time.") more or less does all the (minimal) work of reasoning out the implications of the scenario for us.

Thus calling your sadistic jailor (SJ) Omega is misleading, so I won't.

I'm still wary of calling the Sadistic Jailor Omega even though the perfect prediction part works fine. Because Omega is supposed to be arbitrarily and limitedly benevolent, not pointlessly sadistic. When people make hypotheticals which require a superintelligence that is a dick they sometimes refer to "Omega's cousin X" or similar, a practice that appeals to me.

I blinked after one second. TAKE THAT, OMEGA!

I blinked after one second. TAKE THAT, OMEGA!

Then you blinked again 4 seconds later. Damn.

No, no I did not! Next was another six seconds later.

unless you do the opposite of what Omega predicted you will do.

Unless nothing, I'm just gonna kill you.

Decius

I try to alternate hyperventilation and deep exhalation in an attempt to die before the hour is up.