Creutzer comments on Two-boxing, smoking and chewing gum in Medical Newcomb problems

Post author: Caspar42 | 29 June 2015 10:35AM | 14 points


Comment author: Unknowns | 29 June 2015 05:14:31PM | 2 points

I think this is addressed by my top level comment about determinism.

But if you don't see how it applies, then imagine an AI reasoning the way you did above.

"My programming is responsible for me reasoning the way I do rather than another way. If Omega is fond of people with my programming, then I'm lucky. But if he's not, then acting like I have the kind of programming he likes isn't going to help me. So why should I one-box? That would be acting like I had one-box programming. I'll just take everything that is in both boxes, since it's not up to me."

Of course, when I examined the thing's source code, I knew it would reason this way, and so I did not put in the million.
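
A minimal sketch of this setup (hypothetical names throughout, with Omega's "examining the source code" reduced to simply running the agent's decision procedure) shows why a program that reasons this way never gets the million:

```python
def one_boxer():
    return "one-box"

def two_boxer():
    # Reasons as above: "my programming is fixed either way, so take both."
    return "two-box"

def omega_fills_million(agent):
    # Omega examines the agent's source code; modeled here as running the
    # decision procedure and predicting its output.
    return agent() == "one-box"

def payoff(agent):
    million_in_box = omega_fills_million(agent)  # prediction happens first
    choice = agent()                             # then the agent chooses
    base = 1_000_000 if million_in_box else 0
    return base if choice == "one-box" else base + 1_000

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```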

Comment author: Creutzer | 30 June 2015 10:45:02AM | -1 points

"Of course, when I examined the thing's source code, I knew it would reason this way, and so I did not put in the million."

Then you're talking about an evil decision problem. But neither in the original nor in the genetic Newcomb's problem is your source code investigated.

Comment author: Unknowns | 30 June 2015 11:14:38AM | 1 point

No, it is not an evil decision problem: I withheld the million not because of the particular reasoning, but because of the predicted outcome (taking both boxes).

The original does not specify how Omega makes his prediction, so it may well be by investigating source code.
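
A small extension of the sketch above (again hypothetical) makes the outcome-versus-reasoning distinction concrete: Omega's rule inspects only the predicted choice, so two agents with entirely different reasoning but the same output are treated identically, which is what distinguishes this from an evil decision problem that targets a particular line of reasoning.

```python
def omega_fills_million(agent):
    # Same rule as in the sketch above: the prediction depends only on
    # the agent's output, never on how the agent arrived at it.
    return agent() == "one-box"

def dominance_two_boxer():
    # "The boxes are already filled or empty, so taking both dominates."
    return "two-box"

def fatalist_two_boxer():
    # A completely different argument ("it's not up to me"), same output.
    return "two-box"

# Both agents are denied the million for the same reason: the predicted
# outcome, not the reasoning that produced it.
assert not omega_fills_million(dominance_two_boxer)
assert not omega_fills_million(fatalist_two_boxer)
```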