Jiro comments on Two-boxing, smoking and chewing gum in Medical Newcomb problems - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If Omega is a program too, then an AI playing against it can contain a subroutine equivalent to "predict Omega". The AI doesn't have to literally read its own source code in order to do things that are equivalent to looking at its own source code; that is exactly the kind of self-reference the halting-problem proof uses!
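A minimal sketch of that diagonalization (all names here are hypothetical, not from the original discussion): an agent that can run the predictor on itself can simply invert whatever the predictor says, so no fixed deterministic prediction rule gets it right.

```python
# Diagonalization sketch: an agent that runs a predictor on itself
# and does the opposite of the prediction. This mirrors the
# halting-problem construction, not any specific Omega implementation.

def omega(a):
    """A stand-in deterministic predictor. Any fixed deterministic
    rule would be defeated the same way."""
    return "one-box"

def agent(predictor):
    """The agent embeds 'predict me' and inverts the answer."""
    prediction = predictor(agent)
    return "two-box" if prediction == "one-box" else "one-box"

# Omega predicts "one-box", so the agent two-boxes: the prediction fails.
print(agent(omega))   # the agent's actual choice
print(omega(agent))   # Omega's prediction of that choice
```

The point is structural: whenever the agent can run a faithful copy of the predictor, the predictor cannot be right about this agent, for the same reason no program decides halting.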
If Omega is not a program and can do things that a program can't do, then this isn't true, but I am skeptical that such an Omega is a meaningful concept.
Of course, since only the program is stipulated to be deterministic, Omega could pick randomly, which the program cannot do; but because Omega is predicting a deterministic program, picking randomly can't help Omega do any better.