Jiro comments on Two-boxing, smoking and chewing gum in Medical Newcomb problems - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (93)
The general mistake that many people are making here is to think that determinism makes a difference. It does not.
Let's say I am Omega, and the players are AIs: 100% deterministic programs that take no input except an understanding of the game. They are not allowed to look at their own source code.
I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.
Note that determinism is irrelevant. If a program could not use a decision theory or make a choice merely because it is a deterministic program, then no AI would ever work in the real world, and there would be no reason to expect people to work in the real world either.
Also note that the only good decision in these cases is to one-box, even though the programs are 100% deterministic.
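The toy setup above can be sketched in code. This is only an illustration under the stated assumptions (all function names are hypothetical): each "AI" is a deterministic, zero-input function returning its choice, and Omega predicts by simply running the program, which is possible here because the programs are total and take no input.

```python
# Hypothetical sketch of the Omega game described above.

def one_boxer():
    """A deterministic program that always one-boxes."""
    return "one-box"

def two_boxer():
    """A deterministic program that always two-boxes."""
    return "two-box"

def omega_payout(program):
    """Omega examines (here: simulates) the program, fills the boxes
    according to its prediction, and then the program actually plays."""
    prediction = program()              # Omega's examination of the source
    million = 1_000_000 if prediction == "one-box" else 0
    choice = program()                  # the program's actual choice
    if choice == "one-box":
        return million
    return million + 1_000              # two-boxing adds the visible $1,000

print(omega_payout(one_boxer))   # 1000000
print(omega_payout(two_boxer))   # 1000
```

Because the programs are deterministic, Omega's prediction always matches the actual choice, so the one-boxer ends up with more. This is the point of the comment: determinism does not prevent the programs from choosing, and the choice still matters.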
Omega can solve the halting problem?