kaz comments on Newcomb's Problem and Regret of Rationality - Less Wrong

64 Post author: Eliezer_Yudkowsky 31 January 2008 07:36PM



Comment author: kaz 19 August 2011 01:56:48AM 0 points

I see your general point, but the solution to the Omega example seems trivial if Omega is assumed to predict accurately most of the time:
(letting C = "Omega predicted correctly", and assuming for simplicity that Omega's error rate is the same for false positives and false negatives)
- if you choose just one box, your expected utility is $1M * P(C)
- if you choose both boxes, your expected utility is $1K + $1M * (1 - P(C))
Setting these equal to find the break-even point:
1000000 * P(C) = 1000 + 1000000 * (1 - P(C))
Dividing both sides by 1000:
1000 * P(C) = 1001 - 1000 * P(C)
2000 * P(C) = 1001
P(C) = 1001/2000 = 0.5005 = 50.05%
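The arithmetic above is easy to check with a short script; this is just a sketch of the comment's expected-utility calculation (the function names are my own, not anything standard):

```python
def eu_one_box(p_correct):
    # You get the $1M only if Omega correctly predicted one-boxing.
    return 1_000_000 * p_correct

def eu_two_box(p_correct):
    # You always get the $1K, plus the $1M if Omega wrongly
    # predicted one-boxing (probability 1 - P(C)).
    return 1_000 + 1_000_000 * (1 - p_correct)

break_even = 1001 / 2000  # = 0.5005, as derived above

# The two strategies tie exactly at P(C) = 0.5005 ...
print(eu_one_box(break_even), eu_two_box(break_even))
# ... and any higher confidence in Omega favors one-boxing.
print(eu_one_box(0.9) > eu_two_box(0.9))
```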

So as long as you are at least 50.05% sure that Omega's model of the universe describes you accurately, you should take just the one box. It's a little confusing because it seems like effect precedes cause in this situation, but that's not the case: your behaviour affects the behaviour of a simulation of you, run before you chose. Assuming Omega is always right: if you take one box, then you are the type of person who would take one box, Omega will have seen that, and you will win. So it's the clear choice.