
Andreas_Giger comments on Can anyone explain to me why CDT two-boxes? - Less Wrong Discussion

Post score: -12 · Post author: Andreas_Giger 02 July 2012 06:06AM




Comment author: Andreas_Giger 02 July 2012 05:00:35PM, -2 points

CDT can't distinguish between free money boxes and Newcomb paradoxes

No, CDT can in fact distinguish very well: it always concludes that the money is there, and it is always right, because it never actually encounters a Newcomb situation.

we think Newcomb is possible.

To clarify: You are talking about actual Newcomb with an omniscient being, yes? Because in that case, I think several posters have already stated they deem this impossible, and Nozick agrees.

If you're talking about empirical Newcomb, that certainly is possible; but there it is impossible to do better than CDT without also choosing differently in other situations, because if you have acted like CDT in the past, Omega will assume you are CDT, even if you're not.
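To make the point concrete, here is a minimal sketch of an "empirical" predictor that only sees your track record. The predictor rule (predict your most frequent past choice) and all names are my own illustrative assumptions, not anything specified in the thread; any history-based predictor gives the same qualitative result.

```python
from collections import Counter

def empirical_omega(history):
    """Illustrative stand-in for a predictor that only sees past choices:
    predict the agent's most frequent choice so far."""
    if not history:
        return "two-box"  # arbitrary prior when there is no track record
    return Counter(history).most_common(1)[0][0]

def payoff(choice, history):
    # Opaque box holds $1,000,000 only if Omega predicted one-boxing.
    opaque = 1_000_000 if empirical_omega(history) == "one-box" else 0
    # Two-boxing additionally takes the transparent $1,000.
    return opaque if choice == "one-box" else opaque + 1_000

cdt_history = ["two-box"] * 10
# With a CDT-like track record, Omega predicts two-boxing, so even a
# surprise one-box choice in this one game earns nothing:
print(payoff("one-box", cdt_history))   # 0
print(payoff("two-box", cdt_history))   # 1000
```

The sketch shows the comment's claim: against a purely empirical predictor, a single deviation from your CDT-like history cannot beat CDT in this game, because the boxes were filled from the history, not from the current choice.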

I disagree on two points: one, I think a simulator is possible (that is, Omega's impossibility comes from other powers we've given it; we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction)

I agree on the "we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction" part, but this will change what the "correct" answer is. For example, you could substitute Omega with a coin toss and repeat the game whenever the coin's "prediction" turns out wrong. This is still a one-time problem, because Omega is a coin and therefore has no memory; yet CDT, which would two-box in empirical Newcomb, one-boxes in this case and takes the $1,000,000.
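The coin-toss variant above can be sketched in a few lines. This is my own toy model of the setup described (function names and the replay rule are illustrative assumptions): the coin "predicts" at random, mismatched rounds are replayed, and only the correctly predicted round pays out.

```python
import random

PRIZE, BONUS = 1_000_000, 1_000

def play_until_coin_is_right(choice):
    """Replay the game until the coin's random 'prediction' matches the
    agent's fixed choice; only that final round counts."""
    while True:
        prediction = random.choice(["one-box", "two-box"])
        if prediction == choice:
            break
    # The coin predicted correctly, so the boxes were filled accordingly.
    if choice == "one-box":
        return PRIZE          # opaque box contains the $1,000,000
    return BONUS              # opaque box is empty; keep only the $1,000

print(play_until_coin_is_right("one-box"))   # 1000000
print(play_until_coin_is_right("two-box"))   # 1000
```

Since only rounds where the "prediction" matches the choice survive, a committed one-boxer is guaranteed the $1,000,000, which is why even a causal reasoner prefers one-boxing under this replay rule.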

and two, I don't think the priors-and-payoffs approach to an empirical predictor is correct (for game-theoretic reasons which I can explicate if you'd like, but if it's not the point of contention it would only distract).

I don't think this is the point of contention, but after we've settled that, I would be interested in hearing your line of thought on this.

Comment author: Emile 02 July 2012 05:35:01PM, 3 points

To clarify: You are talking about actual Newcomb with an omniscient being, yes? Because in that case, I think several posters have already stated they deem this impossible, and Nozick agrees.

How about the version where agents are computer programs, and Omega runs a simulation of the agent facing the choice, observes its behavior, and fills the boxes accordingly?

I see no violation of causality in that version.