Now, I've pre-committed that after Omega offers me The Deal, I'll make two quantum coin flips. If I get two tails in a row, I'll two-box. Otherwise, I'll one-box.

Omega predicted that, and put the large box in a quantum superposition entangled with the states of the coins, such that it will end up containing $1M if I get at least one head, and an equal mass of blank paper otherwise.
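To make the arithmetic concrete, here's a sketch (hypothetical code, not from the exchange) of the pre-committed strategy and the payoffs under the entangled box described above. The $1,000 small box is my own assumption, carried over from the classic Newcomb formulation; the OP only specifies the $1M.

```python
from itertools import product

def precommitted_choice(flips):
    """The pre-committed strategy: two-box only on two tails, one-box otherwise."""
    return 'two-box' if flips == ('T', 'T') else 'one-box'

def large_box_contents(flips):
    """Omega's entangled box: $1M iff at least one head comes up, else blank paper ($0)."""
    return 1_000_000 if 'H' in flips else 0

# SMALL_BOX is an assumption: the classic Newcomb small box holds $1,000.
SMALL_BOX = 1_000

# Enumerate all four equally likely outcomes of the two coin flips.
payoffs = {}
for flips in product('HT', repeat=2):
    choice = precommitted_choice(flips)
    payoff = large_box_contents(flips)
    if choice == 'two-box':
        payoff += SMALL_BOX  # a two-boxer also takes the small box
    payoffs[flips] = (choice, payoff)
```

Three of the four equally likely outcomes one-box and find the $1M; only the double-tails branch two-boxes, and the entangled box duly contains blank paper.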

I'm simplifying, but I don't think it's really strawmanning.

There exists no procedure that the Chooser can perform, after Omega sets down the box and before they open it, that will cause Omega to reward a two-boxer or fail to reward a one-boxer. Not X-raying the boxes, not pulling a TRUE RANDOMIZER out of a portable hole. Omega is defined as part of the problem, and fighting the hypothetical doesn't change anything.

He correctly rewards your actions in exactly the same way that the law in Prisoner's Dilemma hands you your points. Writing long articles about how you could use a spoon to tunnel through and overhear the other prisoner, and insisting that anyone without spoons in their answer is doing something wrong... isn't even wrong; it's solving the wrong problem.

What you are fighting, Omega's defined perfection, doesn't exist. Sinking effort into fighting it is dumb. The idea that people need to 'take seriously' your shadow boxing is even more silly.

Like, say we all agree that Omega can't handle 'quantum coin flips', or, heck, dice. You can just re-pose the problem with Omega2, who alters reality such that nothing that interferes with his experiment can work. Or walls that are unspoonable, to drive the point home.

Another strawman. Strawman arguments may work on some gullible humans, but don't expect them to sway a rationalist.

You're not being very clear, but it sounds like you're assuming a contradiction. You can't assert that Omega2 both does and does not alter the reality of the boxes after the choice. If you allow a contradiction you can derive whatever you want, but it's not math anymore, and we're not talking about anything useful. Making stuff up with numbers and the constraint of logic is math. Making stuff up with numbers and no logic is just numerology.

I think this is the crux of your objection: I think agents based on real-world physics are the default, and an `agent - QRNG` (quantum random number generator) problem is an *additional* constraint. A special case. You think that classical-only agents are the default, and `classical + QRNG` is the special case.

Recall how an algorithm feels from the inside. Once we know all the relevant details about Pluto, you can still ask, "But is it *really* a planet?" But at that point, understand that we're *not* talking about Pluto; we're talking about our own language. Thus which is *really* the default should be irrelevant. We should be able to taboo "planet", use alternate names, and talk intelligently about *either* case. But recall that the OP *specifically* assumes a QRNG. Pretending that I didn't assume that, when I specifically stated that I had, is logically rude.

Why do we care about Newcomblike problems? Because they apply to real-world agents, like AIs. It's *useful* to consider. Omniscience doesn't exist; Omega is only the limiting case. But Newcomblike reasoning applies even in the face of an imperfect predictor, so Newcomblike reasoning still applies in the real world. QRNGs *do* exist in the real world, and *if* your decision theory can't account for them, and use them appropriately, then it's the wrong decision theory for the real world. `classical + QRNG` *is* useful to think about. It isn't silly to ask other rationalists to take it seriously, and I'm starting to suspect you're trolling me here.

But we should be able to talk intelligently about the other case. Are there situations where it's *useful* to consider `agent - QRNG`? Sure, if the rules of the game stipulate that the Chooser promises not to do that. That's clearly a different game than the one in the OP, but perhaps closer to the original formulation in the paper that g_pepper pointed out. In that case, you one-box. We could even say that Omega claims to never offer a deal to those he cannot predict accurately. If you know this, you may be motivated to be more predictable. Again, a different game.

But can it look like the game in the OP to the Chooser? Can the Chooser *think* it's in `classical + QRNG` when, in fact, it is not? Perhaps, but it's contrived. It is *unrealistic* to think a real-world superintelligence can't build a QRNG, given access to real-world actuators. But if you boxed the AI Chooser in a simulated world (denying it real actuators), you could provide it with a "simulated QRNG" that is not, in fact, quantum. If you generate its numbers in advance, you could then create a "simulated Omega" that can predict the "simulated QRNG" due to outside-the-box information, though of course not a real one.

But this isn't The Deal either. It is isomorphic to the case where Omega cheats by loading your dice to match his prediction *after* presenting the choice (or an accomplice does this for him), thus violating The Deal. The Chooser must choose, not Omega, or there's no point. With enough examples, the Chooser may suspect it's in a simulation. (This would probably make it much less useful as an oracle, or more likely to escape the box.)
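The boxed-Chooser case can be sketched as follows; this is a toy illustration under my own assumptions, with a pre-generated tape standing in for the "simulated QRNG" and all names invented for the example:

```python
import random

def pregenerate_tape(seed, length=16):
    """Generate the 'simulated QRNG' bits in advance, outside the box."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(length)]

class SimulatedQRNG:
    """Inside the box, this looks like a quantum source; it's really just the tape."""
    def __init__(self, tape):
        self._bits = iter(tape)

    def flip(self):
        return next(self._bits)

class SimulatedOmega:
    """With outside-the-box access to the tape, 'prediction' is a lookup."""
    def __init__(self, tape):
        self._tape = tape

    def predict(self, n):
        return self._tape[n]

tape = pregenerate_tape(seed=42)
chooser_rng = SimulatedQRNG(tape)
omega = SimulatedOmega(tape)

# Omega "predicts" every flip perfectly, because the randomness was fixed in advance.
matches = [omega.predict(i) == chooser_rng.flip() for i in range(len(tape))]
```

Inside the simulation the stream is indistinguishable from noise, but the prediction is just a lookup, which is exactly why this is isomorphic to Omega loading the dice.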