MatthewW comments on Open Thread September, Part 3 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
All these thought experiments are realizable as simple computer programs, not just the Prisoner's Dilemma (PD). In fact, the post I linked to shows how to implement Newcomb's Problem.
The 99% case is not very different from the 100% case: the payoffs vary continuously with Omega's accuracy. If you're facing a 99% Omega (or even a 60% Omega) in Newcomb's Problem, you're still better off being a one-boxer. That's true even if both boxes are transparent and you can see what's in them before choosing whether to take one or two, a fact that should make any intellectually honest CDT-er stop and scratch their head.
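The expected-value arithmetic behind this claim can be sketched as follows, assuming the standard payoffs ($1,000,000 in the opaque box iff Omega predicted one-boxing, $1,000 always in the transparent box) and that Omega's errors are independent of how the player decides:

```python
# Expected value of one-boxing vs. two-boxing against an Omega that
# predicts correctly with probability p, under the standard payoffs:
# the opaque box holds $1,000,000 iff Omega predicted one-boxing;
# the transparent box always holds $1,000.
# Assumes Omega's error rate is independent of the player's reasoning.

def ev_one_box(p):
    # Omega correctly predicts one-boxing with probability p.
    return p * 1_000_000

def ev_two_box(p):
    # Omega wrongly predicts one-boxing with probability 1 - p.
    return (1 - p) * 1_000_000 + 1_000

for p in (1.0, 0.99, 0.6):
    print(f"p={p}: one-box {ev_one_box(p):,.0f}, two-box {ev_two_box(p):,.0f}")

# One-boxing wins whenever p * 1e6 > (1 - p) * 1e6 + 1e3,
# i.e. for any accuracy p above 0.5005.
```

So even a barely-better-than-chance Omega (anything above 50.05% accuracy, given these payoffs) favours one-boxing, provided the independence assumption holds.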
No offense, but I think you should try to understand what's already been done (and why) before criticizing it.
To reach the conclusion that you're better off one-boxing against a 60% Omega, I think you have to put in a strong independence assumption: that the probability of Omega getting it wrong is independent of the way of thinking the player uses to make her choice.
I think that's really the original problem in disguise: it's a 100% Omega who rolls dice and sometimes decides to reward two-boxing instead of one-boxing. The analysis, if all you know is that Omega is right 60% of the time, would look different.
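The "disguise" claim can be checked with a quick Monte Carlo sketch: a perfect predictor who, with probability 0.4, rolls dice and deliberately fills the boxes the wrong way is observationally identical (from the player's side) to an Omega that is simply right 60% of the time with errors independent of the player. The function names and the 60/40 split are just illustrations of that equivalence:

```python
import random

def dice_rolling_omega(choice, rng):
    # Perfect predictor: knows `choice` ("one" or "two") exactly,
    # but with probability 0.4 rolls dice and fills the boxes as if
    # the player had made the opposite choice.
    predicted = choice
    if rng.random() < 0.4:
        predicted = "two" if choice == "one" else "one"
    return predicted

def noisy_omega(choice, rng):
    # Omega that is simply right 60% of the time, with errors
    # independent of anything about the player.
    if rng.random() < 0.6:
        return choice
    return "two" if choice == "one" else "one"

def payoff(choice, predicted):
    # Standard payoffs: opaque box filled iff one-boxing was predicted.
    opaque = 1_000_000 if predicted == "one" else 0
    return opaque if choice == "one" else opaque + 1_000

rng = random.Random(0)
n = 100_000
for omega in (dice_rolling_omega, noisy_omega):
    avg = sum(payoff("one", omega("one", rng)) for _ in range(n)) / n
    print(omega.__name__, round(avg))  # both hover around 600,000
```

As long as the dice are independent of the player, the two models induce the same payoff distribution, which is why the earlier expected-value analysis carries over unchanged.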
How exactly different?
It would become a mind game: you'd have to explicitly model how you think Omega is making the decision.
The problem you're facing is to maximise P(Omega rewards you | all your behaviour that Omega can observe). In the classical problem you can substitute the actual choice of one-boxing or two-boxing for the "all your behaviour" part, because Omega is always right. But in the "imperfect Omega" case you can't.
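One way to make this concrete: each candidate strategy fixes both the final choice and, under some explicit model of Omega, the probability that Omega predicts one-boxing. The best response then depends on the model. All the probabilities below (including the "bluffer" strategy and how well it fools Omega) are invented purely for illustration:

```python
# The "imperfect Omega" case as an explicit modelling problem.
# The player maximises P(Omega rewards you | observable behaviour),
# so the best strategy depends on a model of how behaviour maps to
# Omega's predictions. All numbers here are invented illustrations.

def ev(choice, p_predict_one):
    # Expected payoff given the actual choice and the probability,
    # under some model of Omega, that Omega predicts one-boxing.
    opaque = p_predict_one * 1_000_000
    return opaque + (1_000 if choice == "two" else 0)

# Model A: Omega's 40% errors are pure noise, independent of the player.
model_a = {
    "one-box": ("one", 0.6),
    "two-box": ("two", 0.4),
}
# Model B (hypothetical): Omega can be fooled by a convincing bluffer,
# so looking like a one-boxer and then two-boxing keeps p_predict_one high.
model_b = {
    "one-box": ("one", 0.6),
    "two-box": ("two", 0.4),
    "bluff, then two-box": ("two", 0.6),
}

for name, model in (("model A", model_a), ("model B", model_b)):
    best = max(model, key=lambda s: ev(*model[s]))
    print(name, "->", best)
# Under model A the best response is one-boxing; under model B it is
# bluffing and then two-boxing.
```

Under model A you recover the earlier one-boxing answer; under model B the "mind game" pays. That's the sense in which the imperfect case forces you to model how Omega is actually making its decision.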