Let's play a game. Two times, I will give you an amnesia drug and let you enter a room with two boxes inside. Because of the drug, you won't know whether this is the first time you've entered the room. The first time, both boxes will be empty. The second time, box A will contain $1000, and box B will contain $1,000,000 iff you took only box B the first time. You're in the room. Do you take both boxes, or only box B?
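
(A minimal simulation sketch of these rules, in Python; amnesia_game and p_one_box are just illustrative names, and it assumes a mixed strategy is realized as an independent random draw on each entry:)

    import random

    def amnesia_game(p_one_box, trials=100_000):
        """Average winnings over many plays of the two-entry game.
        p_one_box: chance of taking only box B on any given entry; the two
        entries are modeled as independent draws."""
        total = 0
        for _ in range(trials):
            one_boxed_first = random.random() < p_one_box   # first entry: both boxes empty
            box_b = 1_000_000 if one_boxed_first else 0     # second entry: B is filled iff you one-boxed the first time
            if random.random() < p_one_box:                 # second entry's choice
                total += box_b                              # take only box B
            else:
                total += box_b + 1_000                      # take both boxes
        return total / trials

    for p in (0.0, 0.5, 1.0):
        print(p, amnesia_game(p))   # roughly 1,000 / 500,500 / 1,000,000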

This is equivalent to Newcomb's Problem in the sense that any strategy does equally well on both, where by "strategy" I mean a mapping from info to (probability distributions over) actions.

I suspect that any problem with Omega can be transformed into an equivalent problem with amnesia instead of Omega.

Does CDT return the winning answer in such transformed problems?

Discuss.

 


This is actually insightful, given that the most frequently proposed way for Omega to make predictions is to simulate the decision-maker - in which case you run into a Sleeping Beauty problem, where you don't know whether you are the real or the simulated decision-maker.

I like this phrasing. It's less ambiguous.

I agree that this takes us into the world of Sleeping Beauty problems. But those are much harder. This makes things worse.

...in this version of the problem, it is much more obvious to me that you take only box B forever and ever and ever.

The whole point of Newcomb's problem is that CDT two-boxes because the prediction "isn't really you", so we have a conflict between the intuition to one-box and CDT, and need to resolve it somehow, thus gaining new understanding. What is your thought experiment for?

Problems where CDT loses can be (probably mechanically) transformed to "strategy-equivalent" problems where CDT wins. That's at least interesting.

It even suggests a decision theory. Just transform the problem and use the strategy that CDT recommends for this new problem.

This is unsurprising: CDT relies on explicit dependencies given by causal definitions, while what you want is to look for logical (ambient) dependencies for which the particular way the problem was specified (e.g. physical content defined by causality) is irrelevant. After you find the dependencies as a result of such analysis, all that's left is applying expected utility, at which point any CDT-specificity is gone (see Controlling Constant Programs).

How would you port Counterfactual Mugging to amnesia?


Flip a coin. If tails, induce amnesia and ask the player for $100; if they pay, keep it, game over. If heads, induce amnesia and ask the player for $100; if they pay, return their money and award them an additional $10,000, game over.

EDIT: Emile noted you can omit the amnesia.
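
(For what it's worth, a quick expected-value check of this version; a sketch in Python, assuming winnings are proportional to utility and that the player's policy is simply "always pay" or "always refuse":)

    def expected_value(always_pays):
        """Expected winnings in the coin-flip game above."""
        tails = -100 if always_pays else 0     # tails: the $100 is kept by the asker
        heads = 10_000 if always_pays else 0   # heads: the $100 is returned, plus $10,000
        return 0.5 * tails + 0.5 * heads

    print(expected_value(True))   # 4950.0
    print(expected_value(False))  # 0.0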

Amnesia doesn't play any role in that, does it?

Nah, that doesn't look very convincing. The whole point of CM was that once a branch of you gets asked for $100, agreeing to pay cannot benefit that branch. Also, what Emile said.


Convincing schmonvincing. All I promised is that all strategies will do equally well.

If that's really all the information you want to preserve, then I don't understand why you bother with amnesia in Newcomb's Problem. Just offer the player two boxes: the first one contains $1K, the second contains $1M, and taking both boxes triggers a bomb that destroys the second box. I'm not sure what insight into decision theory we're supposed to get from such translations.

offer the player two boxes: the first one contains $1K, the second contains $1M, and taking both boxes triggers a bomb that destroys the second box.

Hmm. This form has the same expected winnings for all strategies, but the $0 and $1,001,000 outcomes are impossible, unlike in the transformed Newcomb and the original Newcomb (given an Omega that doesn't punish mixed strategies). Also, expected winnings aren't the same as expected utility. For some utility functions, your problem has a different expected utility than the normal or amnesiac Newcomb even if you play the same strategy in each. So it's not really equivalent.

Another example: consider (the transformation of) Parfit's Hitchhiker. If you use a coinflipping strategy there, the expected utility is

0.5*U(die) + 0.5*(0.5*U($0) + 0.5*U(-$100)) = 0.5*U(die) + 0.25*U($0) + 0.25*U(-$100)

While the expected utility in the version where you simply plop the player in front of an ATM and drive them to the desert and dump them there if they don't pay $100 is:

0.5*U(die) + 0.5*U(-$100)

Which is clearly different.
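
(A numeric illustration of that gap, as a Python sketch; the utility numbers U_die, U_zero, U_minus_100 are made-up placeholders, not part of the problem:)

    # Placeholder utilities (assumptions chosen only for illustration).
    U_die, U_zero, U_minus_100 = -1000.0, 0.0, -1.0

    # Coinflipping strategy in the amnesia transformation (first formula above):
    eu_amnesia = 0.5 * U_die + 0.25 * U_zero + 0.25 * U_minus_100   # -500.25

    # Coinflipping strategy in the plain "pay or get dumped in the desert" version:
    eu_plain = 0.5 * U_die + 0.5 * U_minus_100                      # -500.5

    print(eu_amnesia, eu_plain)   # they differ whenever U($0) != U(-$100)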

Your transformation seems to require weird Omegas that respond to randomizing players by randomizing too. It's not clear to me why an Omega would want to behave like that (probabilistically reward cheaters). Can you handle other kinds of Omegas, e.g. the original kind specified by Eliezer?

I don't think they're weird. I think Omegas that go out of their way to discriminate against mixed strategies are weird. A strategy that one-boxes with probability 0.999 never gets a million, while one that one-boxes with probability 1 always gets a million. You could call that a discontinuity.

And I thought 1 was not a probability anyway! Any real rational one-boxing agent will expect to one-box with probability ~1, not with "probability" 1. Does that mean that the agent is using a mixed strategy? On the other hand, any agent that isn't using quantum randomness will in fact either one-box or two-box, even if it flips coins and stuff. Does that mean the agent is using a pure strategy? I can't answer this off the top of my head.

I assume the following is the key thing about Eliezer's original Omega:

Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars

I didn't see Eliezer saying that Omega doesn't tolerate mixed strategies. If there were coinflippers among that 100, presumably Omega predicted the results of their coinflips and set up box B accordingly. To the extent that I can't duplicate the conditions perfectly to make sure any coin will land the same way both times, I can't do that. To the extent that I can, I can.


Uh, then my transformation of the problem is better than yours because it "predicts" coinflips perfectly, not just "to the extent that I can" :-)

Amnesia can replace Omega for the prediction part of Newcomb's problem, but that is not the only function Omega serves. Omega is also shorthand for some simplifying assumptions: that you accept the problem statement as definitely true, that no clever third options are available, etc.

This is equivalent to Newcomb's Problem in the sense that any strategy does equally well on both, where by "strategy" I mean a mapping from info to (probability distributions over) actions.

I don't see this. For example, the mixed strategy of one-boxing half the time and two-boxing half the time generates very different results in the transformed problem than in the original Newcomb's Problem.

Though I suppose there may be some ambiguity in the question of what amnesia is supposed to do to the 'seed' in your pseudo-random number generator.

Does CDT return the winning answer in such transformed problems?

It does fine on your transformation of Newcomb. I won't venture a guess on more general problems, because I don't understand how the general transformation is imagined to work. What is the transformation of the Hitchhiker, for example?

I don't see this. For example, the mixed strategy of one-boxing half the time and two-boxing half the time generates very different results in the transformed problem than in the original Newcomb's Problem.

Conventionally, you are not allowed access to a random number generator in Newcomb's Problem - and so can't use a mixed strategy. Any such usage would tarnish Omega's reputation. Omega - being a mind-reading superintelligence - can fairly easily discourage such a tactic by punishing randomising agents economically - and letting the punishment strategy be known.

I don't see this. For example, the mixed strategy of one-boxing half the time and two-boxing half the time generates very different results in the transformed problem than in the original Newcomb's Problem.

Nope? Let's say you flip a coin. Then your expected winnings are

  • 0.5*(0.5*1,000,000 + 0.5*(1,000,000 + 1,000)) + 0.5*(0.5*0 + 0.5*(0 + 1,000)) = $500,500

in both versions if Omega follows the rule:

  • if you one-box with probability p, Omega fills box B with probability p
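
(A sketch of that equal-winnings claim for arbitrary p, in Python; it assumes the amnesia version draws the two choices independently, and that Omega's fill is independent of the flip you actually make:)

    def newcomb_prob_matching_omega(p):
        """One-box with probability p; Omega fills box B with probability p,
        independently of the choice you end up making."""
        return (p * (p * 1_000_000 + (1 - p) * 1_001_000)      # box B filled
                + (1 - p) * (p * 0 + (1 - p) * 1_000))         # box B empty

    def amnesia_newcomb(p):
        """One-box with probability p on each entry; the entries are independent
        draws, so B is filled the second time with probability p."""
        # The body coincides with the one above: in both versions box B holds
        # $1,000,000 with probability p, independently of the current choice.
        return (p * (p * 1_000_000 + (1 - p) * 1_001_000)      # box B filled
                + (1 - p) * (p * 0 + (1 - p) * 1_000))         # box B empty

    for p in (0.0, 0.25, 0.5, 1.0):
        print(p, newcomb_prob_matching_omega(p), amnesia_newcomb(p))   # both equal 999,000*p + 1,000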

What is the transformation of the Hitchhiker, for example?

Put the player in front of an ATM and give them the amnesia drug. If they don't pay you $100, take them to the desert and dump them there. If they do pay, put the money from the first round back into their bank account and give them the amnesia drug again. If they pay you again, keep their money. And the player knows these rules.

I don't have the general transformation down yet.

if you one-box with probability p, Omega fills box B with probability p

Really? I thought Omega would correctly predict the results of the coin flip and whether I called heads or tails. I guess this shows that Omega is better at predicting what I do than I am at predicting what he does.

In any case, thank you for the thought experiment. I agree with Snowyowl that your version is philosophically different from the original, but if we want our philosophical concepts to pay rent, they are going to have to have different consequences than some cheap amnesia drug. Otherwise, why keep them around?

I don't think it's quite the same. The underlying mathematics are the same, but this version side-steps the philosophical and game-theoretical issues with the other (namely, acausal behaviour).

Incidentally: if you take both boxes with probability p each time you enter the room, then your expected gain is p*$1,000 + (1-p)*$1,000,000. For maximum gain, take p=0; i.e. always take only box B.

EDIT: Assuming money is proportional to utility.
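
(For anyone checking: assuming the two entries are independent draws, the full two-round expectation does collapse to that linear form:

    p*(p*$1,000 + (1-p)*$0) + (1-p)*(p*$1,001,000 + (1-p)*$1,000,000)
      = $1,000,000 - $999,000*p
      = p*$1,000 + (1-p)*$1,000,000

where the first factor is what you did the first time, which fixes box B, and the second is what you take the second time.)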

Oblivion to Omega! Long live amnesia!


I'm not convinced it's quite the same. If you owe the mafia $1,001,000 and they're coming to collect the money this afternoon, you're best off if you toss a coin to decide whether to choose two boxes. Omega, if I remember the formulation correctly, doesn't stand for such tricks.

I could change the rules and decide not to stand for such tricks (mixed strategies) either. EDIT: No, I couldn't.

And on the other hand, Omega could deal with mixed strategies perfectly well, and I don't really understand why people make it so that he explicitly doesn't tolerate mixed strategies in their problems. For example, in Newcomb's Problem, if you one-box with probability p, Omega can just fill box B with probability p; with p=0.5, your expected winnings in Newcomb's Problem are then $500,500.

I could change the rules and decide not to stand for such tricks (mixed strategies) either.

That sounds tricky - unless you are a mind-reading superintelligence!

Yeah, you're right. I can't decide to not stand for mixed strategies, only Omega can.

In the traditional formulation of Newcomb's Problem (at least here on Less Wrong), if Omega predicts you'll use a randomizer, it will leave box B empty.


That's weird. Assuming human decision making is caused by neural processes, which aren't perfectly reliable, there'd be no way for a human to not use a randomizer.


We assume that Omega is powerful enough to simulate your brain and the environment precisely, and that quantumness is negligible.

In that case, you could still say that there's no way not to use a randomizer, but Omega would be using the same randomizer with the same seed.

If you use flipping a coin as a randomizer, Omega could simulate that too. But traditionally using coins doesn't fly while using brains is okay.

Omega could deal with mixed strategies perfectly well, and I don't really understand why people make it so that he explicitly doesn't tolerate mixed strategies in their problems.

Use of a mixed strategy might tarnish Omega's reputation.

The first time you enter the room, the boxes are both empty, so you can't ever get more than $1,000,000. But you're otherwise correct.


No, I can get $1,001,000. If I randomly choose to take one box the first time, then both boxes will contain money the second time, when I might randomly choose to take both.

(Unless randomising devices are all somehow forced to come up with the same result both times)

Sorry, my mistake. I misread the OP.

Hang on a minute though

  • 1-box then 2-box = $1,001,000
  • 1-box then 1-box = $1,000,000
  • 2-box then 2-box = $1,000
  • 2-box then 1-box = $0

$2,002,000 divided by 4 is $500,500. Effectively you're betting a million dollars on two coinflips, the first to get your money back (1-box on the first day) and the second to get $1000 (2-box on the second day). Omega could just use a randomizer if it thinks you will, in which case people would say "Omega always guesses right, unless you use a randomizer. But it's stupid to use one anyway."

Where p is the probability of 1-boxing, E = p^2 * $1,000,000 + p(1-p) * $1,001,000 + (1-p)^2 * $1,000 = $999,000*p + $1,000 (the 2-box-then-1-box outcome pays $0 and drops out). So the smart thing to do is clearly always one-box, unless showing up Omega, who thinks he's so big, is worth $499,500 to you.


I completely agree that to maximise your expected gain you should one-box every time. I was thinking of the specific case where you really, really need $1,001,000 and are willing to reduce your expected gain to maximise the chance of getting it.

"Discuss" is not a sentence. It's also redundant - if you didn't say it, would people not reply? And it moderately annoys me when people tell me to "discuss." If other people feel similarly, could we make a habit of not using it like that?

Well I like it. Make a poll. If it gets over 30 votes I'll honor it.

Well, given that nobody has voted or commented on this so far, I'd guess other people don't care.

I don't see that the scenario is the same. If you one-box every time in your thought experiment, you are guaranteed to get the million; if you two-box every time, you will certainly not get the million. With Omega, there is a high probability but not certainty.

Also, what you do in the first round causes what happens in the second round, but with Omega, it is debatable whether what you end up doing causes there to be a million dollars or not.

You won't one-box every time. There is always some chance, however small, that you will two-box, and vice versa.

You point out perhaps the only potentially meaningful difference, and it is the main salient point in dispute between one-boxers and two-boxers in the Omega problem.

First subpoint: With Omega, you are told (by Omega) that there is certainty--that he is never wrong--and you have a large but finite number of previous experiments that do not refute him. Any uncertainty is merely hoped for/dreaded. (There are versions in which there is definite uncertainty, but those are clearly not similar to the OP.)

Second subpoint: If there is truly, really, actually no uncertainty, then correlation is perfect. It is hard to determine cause and effect in such conditions with no chance to design experiments to separate them. I'd argue that cause is a low-value concept in such a situation.