Jiro comments on Two-boxing, smoking and chewing gum in Medical Newcomb problems - Less Wrong

Post author: Caspar42, 29 June 2015 10:35AM




Comment author: Unknowns 29 June 2015 04:52:08PM 3 points

The general mistake that many people are making here is to think that determinism makes a difference. It does not.

Let's say I am Omega. The players are AIs: 100% deterministic programs that take no input except an understanding of the game, and that are not allowed to look at their own source code.

I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.

Note that determinism is irrelevant. If a program could not use a decision theory or make a choice merely because it is deterministic, then no AI would ever work in the real world, and there would be no reason to expect people to work in the real world either.

Also note that the only good decision in these cases is to one-box, even though the programs are 100% deterministic.
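The setup above can be sketched in a few lines. Since the agents are deterministic and take no input, Omega can "examine the source code" by simply running the agent; the agent's real play is the same deterministic answer. (This is a hypothetical illustration, not anything from the original thread; the function names and payoffs follow the standard Newcomb amounts of $1,000,000 and $1,000.)

```python
# Minimal sketch of Unknowns' scenario: deterministic, input-free agents,
# and an Omega that predicts by simulating them.

def one_boxer():
    # Always takes only the opaque box.
    return "one-box"

def two_boxer():
    # Always takes both boxes.
    return "two-box"

def omega_payout(agent):
    # Omega's prediction: run the deterministic agent once.
    prediction = agent()
    million_present = (prediction == "one-box")
    # The actual play: the same program gives the same answer.
    choice = agent()
    if choice == "one-box":
        return 1_000_000 if million_present else 0
    # Two-boxing: the transparent $1,000 plus whatever is in the opaque box.
    return 1_000 + (1_000_000 if million_present else 0)

print(omega_payout(one_boxer))  # 1000000
print(omega_payout(two_boxer))  # 1000
```

With a perfectly reliable predictor of this kind, the one-boxing program ends up with $1,000,000 and the two-boxing program with $1,000, which is the point of the comment.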

Comment author: Jiro 29 June 2015 09:29:13PM 0 points

I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.

Omega can solve the halting problem?
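Jiro's objection can also be made concrete. If Omega predicts by simulation, an adversarial program that never commits to a choice defeats it: deciding whether an arbitrary program eventually one-boxes is as hard as the halting problem. The sketch below (my own hypothetical illustration, using agents modeled as generators that yield one "step" at a time) shows the practical workaround of a step-bounded simulation, which can only ever answer "one-box", "two-box", or "undecided".

```python
# Sketch of the halting-problem worry: a simulation-based Omega cannot
# decide, in general, what an arbitrary program will do. Bounding the
# simulation makes Omega computable but fallible.

import itertools

def one_boxer():
    # Commits immediately.
    yield "one-box"

def looper():
    # Never commits to a choice at all.
    while True:
        yield None

def bounded_predict(agent, max_steps=1000):
    # Run the agent for at most max_steps steps; report its first
    # committed choice, or None if it never commits within the budget.
    for value in itertools.islice(agent(), max_steps):
        if value is not None:
            return value
    return None  # undecided within the step budget

print(bounded_predict(one_boxer))  # one-box
print(bounded_predict(looper))     # None
```

A bounded Omega handles the well-behaved programs in Unknowns' scenario, but for programs like `looper` it must fall back on some convention (e.g. leaving the million out), which is exactly the gap Jiro's question points at.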