nshepperd comments on Chocolate Ice Cream After All? - Less Wrong

3 Post author: pallas 09 December 2013 09:09PM


Comment author: nshepperd 13 December 2013 09:45:54AM 2 points [-]

This version of Transparent Newcomb is ill-defined, because Omega's decision process is not well-specified. If you do a different thing depending on what money is in the boxes, there's no unique correct prediction. Normally, Transparent Newcomb involves Omega predicting what you would do with the large box set to either empty or full (the "two arms" of Transparent Newcomb).

Also, I don't think "predict what Omega did to the boxes and make the opposite choice" is much of a problem either. You can't simultaneously be perfect predictors of each other, because that would let you predict yourself, etc etc
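The "you can't simultaneously be perfect predictors of each other" point is a diagonalization argument, and it can be sketched concretely. The names below are hypothetical; the sketch just checks that for an agent whose policy is "do the opposite of whatever is predicted", no prediction can ever come true, so no perfect predictor of that agent can exist.

```python
# Minimal sketch of the diagonalization argument: an agent that learns
# Omega's prediction and does the opposite admits NO correct prediction.

def contrarian_agent(prediction: str) -> str:
    """Choose the opposite of whatever Omega predicts."""
    return "two-box" if prediction == "one-box" else "one-box"

# Check every possible prediction: none of them is self-consistent,
# i.e. there is no fixed point where the prediction comes true.
consistent = [p for p in ("one-box", "two-box")
              if contrarian_agent(p) == p]
# consistent is empty: no prediction about this agent can be correct
```

This is the same structure as the halting-problem diagonal: the contradiction comes from feeding the predictor's output back into the thing being predicted.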

Comment author: Jiro 14 December 2013 05:04:47AM *  -1 points [-]

Omega's decision process is as well-specified as it is in the non-transparent version: Omega predicts your choice of boxes and uses the result of that prediction to decide what to put in the boxes.

You can't simultaneously be perfect predictors of each other, because that would let you predict yourself, etc etc

Yes, of course, but you can't be an imperfect predictor either, unless you're imperfect in a very specific way. Imagine that there's a 25% chance you correctly predict what Omega does--in that case, Omega still can't be a perfect predictor. The only real difference between the transparent and nontransparent versions (if you still like taking Omega down a peg) is that the transparent version guarantees that you can correctly "predict" what Omega did.

Comment author: ArisKatsaris 14 December 2013 02:20:31PM *  0 points [-]

25% chance you correctly predict what Omega does

A flipped coin has a 50% chance to correctly predict what Omega does, if Omega is allowed only two courses of action.

Comment author: nshepperd 14 December 2013 05:43:57AM 0 points [-]

Omega's decision process is as well-specified as it is in the non-transparent version: Omega predicts your choice of boxes and uses the result of that prediction to decide what to put in the boxes.

If your choice of boxes depends on what you observe, he needs to decide whether you see an empty box or a full box before he can predict what you'll do. The non-transparent version does not have this problem.

Comment author: EHeller 14 December 2013 06:18:38AM 0 points [-]

If your choice of boxes depends on what you observe, he needs to decide whether you see an empty box or a full box before he can predict what you'll do. The non-transparent version does not have this problem.

But we can still break it in similar ways. Pre-commit to flipping a coin (or some other random variable) to make your choice, and Omega can't be a perfect predictor, which breaks the specification of the problem.

Comment author: ArisKatsaris 14 December 2013 02:26:29PM 3 points [-]

These are all trivial objections. In the same manner you can "break the problem" by saying "well, what if the player chooses to burn both boxes?" "What if the player walks away?" "What if the player recites Vogon poetry and then shoots himself in the head without taking any of the boxes?".

Player walks in the room, recites Vogon poetry, and then shoots themselves in the head.
We then open Box A. Inside we see a note that says "I predict that the player will walk in the room, recite Vogon poetry and then shoot themselves in the head without taking any of the boxes".

These objections don't really illuminate anything about the problem. There's nothing inconsistent about Omega predicting you're going to do any of these things, and having different contents in the box prefilled according to said prediction. That the original phrasing of the problem doesn't list all of the various possibilities is really again just a silly meaningless objection.

Comment author: EHeller 14 December 2013 04:09:19PM *  -2 points [-]

Your objections are of a different character. Any of these

In the same manner you can "break the problem" by saying "well, what if the players chooses to burn both boxes?" "What if the player walks away?" "What if the player recites Vogon poetry and then shoots himself in the head without taking any of the boxes?"

involve not picking boxes. The point of the coin flip is that there are box-picking algorithms that are unpredictable: methods of picking that make it impossible for Omega to have perfect accuracy. Whether or not Newcomb is coherent depends on your model of how people make choices, and how noisy that process is.
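The coin-flip point can be illustrated with a small simulation (hypothetical names, a sketch rather than anything from the thread): against a chooser that picks by fair coin flip, any predictor that commits to a guess before the flip is right only about half the time, so a perfect Omega is impossible for such a chooser.

```python
# Sketch: a coin-flip chooser defeats any fixed predictor.
# Here the predictor always guesses "one-box"; by symmetry, any
# pre-committed guess fares the same against a fair coin.

import random

def coin_flip_chooser(rng: random.Random) -> str:
    """Pick a box count by fair coin flip."""
    return "one-box" if rng.random() < 0.5 else "two-box"

def predictor_accuracy(trials: int = 100_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    correct = sum(coin_flip_chooser(rng) == "one-box"
                  for _ in range(trials))
    return correct / trials

accuracy = predictor_accuracy()  # hovers near 0.5, never near 1.0
```

The simulation only shows the narrow claim EHeller is making: unpredictability of the input caps the predictor's accuracy at chance, whatever its model of the agent.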

Comment author: ialdabaoth 14 December 2013 04:11:02PM *  1 point [-]

But we can still break it in similar ways. Pre-commit to flipping a coin (or some other random variable) to make your choice, and Omega can't be a perfect predictor, which breaks the specification of the problem.

The premise of the thought experiment is that Omega has come to you and said, "I have two boxes here, and know whether you are going to open one box or two boxes, and thus have filled the boxes accordingly".

If Omega knows enough to predict whether you'll one-box or two-box, then Omega knows enough to predict whether you're going to flip a coin, do a dance, kill yourself, or otherwise break that premise. Since the frame story is that the premise holds, then clearly Omega has predicted that you will either one-box or two-box.

Therefore, this Omega doesn't play this game with people who do something silly instead of one-boxing or two-boxing. Maybe it just ignores those people. Maybe it plays another game. But the point is, if we have the narrative power to stipulate an Omega that plays the "one box or two" game accurately, then we have the narrative power to stipulate an Omega that doesn't bother playing it with people who are going to break the premise of the thought experiment.

In programmer-speak, we would say that Omega's behavior is undefined in these circumstances, and it is legal for Omega to make demons fly out of your nose in response to such cleverness.

Comment author: EHeller 14 December 2013 04:21:33PM -2 points [-]

Therefore, this Omega doesn't play this game with people who do something silly instead of one-boxing or two-boxing

Flipping a coin IS one-boxing or two-boxing! It's just not doing it PREDICTABLY.

Comment author: ialdabaoth 14 December 2013 04:28:35PM *  1 point [-]

ಠ_ಠ

EDIT: Okay, I'll engage.

Either Omega has perfect predictive power over minds AND coins, or it doesn't.

If it has perfect predictive power over minds AND coins, then it knows which way the flip will go, and what you're really saying is "give me a 50/50 gamble with an expected payoff of $500,500", instead of $1,000,000 OR $1,000 - in which case you are not a rational actor and Newcomb's Omega has no reason to want to play the game with you.
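The $500,500 figure follows from straightforward arithmetic, assuming Omega predicts the coin perfectly: a one-box flip is foreseen and wins the full $1,000,000, while a two-box flip is foreseen, finds the big box empty, and nets only the $1,000 in the small box.

```python
# Worked arithmetic behind the "$500,500 gamble" claim, assuming Omega
# predicts the coin perfectly and fills the boxes accordingly.

payoff_one_box = 1_000_000   # coin said one-box: Omega filled the big box
payoff_two_box = 1_000       # coin said two-box: big box empty, small box only

expected_value = 0.5 * payoff_one_box + 0.5 * payoff_two_box
# expected_value == 500_500.0
```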

If it only has predictive power over minds, then neither it nor you know which way the flip will go, and the premise is broken. Since you accepted the premise when you said "if Omega shows up, I would...", then you must not be the sort of person who would pre-commit to an unpredictable coinflip, and you're just trying to signal cleverness by breaking the thought experiment on a bogus technicality.

Please don't do that.

Comment author: EHeller 14 December 2013 04:55:08PM *  1 point [-]

Since you accepted the premise when you said "if Omega shows up, I would...", then you must not be the sort of person who would pre-commit to an unpredictable coinflip, and you're just trying to signal cleverness by breaking the thought experiment on a bogus technicality.

It's not breaking the thought experiment on a "bogus technicality"; it's pointing out that the thought experiment is only coherent if we make some pretty significant assumptions about how people make decisions. The noisier we believe human decision-making is, the less perfect Omega can be.

The paradox still raises the same point for decision algorithms, but the coin flip underscores that the problem can be ill-defined for decision algorithms that incorporate noisy inputs.