dclayh comments on Newcomb's Problem standard positions - Less Wrong

Post author: Eliezer_Yudkowsky 06 April 2009 05:05PM




Comment author: dclayh 06 April 2009 09:12:27PM 2 points

> This passage is instructively wrong. To screw with such an Omega, just ask a different friend who knows you equally well, take their judgement and do the reverse.

I think this reply is also illuminating: the stated goal in Newcomb's problem is to maximize your financial return. If your goal is to make Omega predict wrongly, you are solving a different problem.

I do agree that the problem may be subtly self-contradictory. Could you point me to your preferred writeup of the Unexpected Hanging Paradox?

Comment author: cousin_it 06 April 2009 09:48:33PM 3 points

Uh, Omega has no business deciding what problem I'm solving.

> Could you point me to your preferred writeup of the Unexpected Hanging Paradox?

The solution I consider definitively correct is outlined on the Wikipedia page, but it's simple enough to state here. The judge actually says "you can't deduce the day you'll be hanged, even if you use this statement as an axiom too". This phrase is self-referential, like the phrase "this statement is false". Although not all self-referential statements are self-contradictory, this one turns out to be, and the proof of self-contradiction simply follows the prisoner's reasoning. This line of attack seems to have been first rigorously formalized by Fitch in "A Goedelized formulation of the prediction paradox"; I can't find the full text online. And that's all there is to it.

Comment author: dclayh 06 April 2009 10:09:13PM 1 point

> Uh, Omega has no business deciding what problem I'm solving.

No, but if you're solving something other than Newcomb's problem, why discuss it on this post?

Comment author: cousin_it 06 April 2009 10:18:12PM 2 points

I'm not solving it in the sense of utility maximization. I'm solving it in the sense of demonstrating that the input conditions might well be self-contradictory, using any means available.

Comment author: dclayh 06 April 2009 10:41:50PM 1 point

Okay, yes, I see what you're trying to do, and the comment is retracted.

Comment author: whpearson 06 April 2009 09:36:58PM 1 point

Maximising your financial return entails making Omega's prediction wrong: if you can get it to predict that you will one-box when you actually two-box, you maximise your financial return.

Comment author: dclayh 06 April 2009 10:36:22PM 3 points

My point is merely that getting Omega to predict wrong is easy (flip a coin). Getting an expectation value higher than $1 million is what's hard (and likely impossible, if Omega is much smarter than you, as Eliezer says above).
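The arithmetic behind this point can be sketched quickly. This is only an illustration, assuming the standard payoffs ($1,000 in transparent box A, $1,000,000 in box B iff Omega predicts one-boxing) and assuming Omega responds to a known randomizer by leaving box B empty:

```python
# Expected payoff under the standard Newcomb payoffs (assumed values):
# box A always holds $1,000; box B holds $1,000,000 iff Omega fills it.
A, B = 1_000, 1_000_000

def expected_value(p_two_box: float, p_b_filled: float) -> float:
    """Expected payoff when you two-box with probability p_two_box
    and Omega fills box B with probability p_b_filled."""
    one_box = (1 - p_two_box) * (p_b_filled * B)
    two_box = p_two_box * (A + p_b_filled * B)
    return one_box + two_box

# Coin flip: Omega is wrong half the time, but if it answers a
# randomizer by leaving box B empty, the expectation is only $500.
print(expected_value(0.5, 0.0))  # 500.0
print(expected_value(0.0, 1.0))  # reliable one-boxer: 1,000,000
print(expected_value(1.0, 1.0))  # the hoped-for exploit: 1,001,000
```

So a coin flip trivially makes Omega's prediction wrong half the time, yet its expectation is far below the $1,000,000 a reliable one-boxer gets; beating $1,000,000 requires the exploit case, which a much smarter Omega presumably never grants.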

Comment author: Eliezer_Yudkowsky 06 April 2009 09:38:48PM 3 points

Well, it had better not be predictable that you're going to try that. I mean, at the point where Omega realizes "Hey, this guy is going to try an elaborate clever strategy to get me to fill box B and then two-box", it's pretty much got you pegged.

Comment author: ciphergoth 06 April 2009 11:43:48PM 1 point

That's not so - the "elaborate clever strategy" does include a chance that you'll one-box. What does the payoff matrix look like from Omega's side?

Comment author: whpearson 06 April 2009 09:46:57PM 1 point

I never said it was an easy thing to do. I just meant that that situation is the maximum, if it is reachable, which depends upon the implementation of Omega in the real world.