Jiro comments on Two-boxing, smoking and chewing gum in Medical Newcomb problems - Less Wrong Discussion

Post author: Caspar42, 29 June 2015 10:35AM

Comment author: Jiro 01 July 2015 08:28:36PM 0 points

"Predict what Omega thinks you'll do, then do the opposite".

Which is essentially what the halting problem amounts to anyway, except that it won't be spelled out; it will be something equivalent to that, but in a non-obvious way.

Saying "Omega will determine what the agent outputs by reading the agent's source code" is going to run into the halting problem.
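To make the parallel concrete, here is a minimal Python sketch of the diagonalization being described. The names (`predict`, `contrarian`, the "one-box"/"two-box" outputs) are illustrative, not anything from the thread; the point is only that whatever concrete predictor you write, the contrarian agent defeats it by construction:

```python
def predict(agent):
    # Stand-in for Omega's predictor; any fixed, concrete rule will do here.
    return "one-box"

def contrarian():
    # Ask what the predictor says about this very agent, then do the opposite.
    return "two-box" if predict(contrarian) == "one-box" else "one-box"

# Whatever predict() returns for contrarian, contrarian does the other thing,
# so no implementation of predict() can be right about this agent:
assert contrarian() != predict(contrarian)
```

This is the same diagonal move used in the standard proof that the halting problem is undecidable: feed the supposed decider a program built to contradict its own verdict.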

Comment author: g_pepper 02 July 2015 03:36:30AM 0 points

Predict what Omega thinks you'll do, then do the opposite

I don't know if that is possible given Unknowns' constraints. Upthread Unknowns defined this variant of Newcomb as:

Let's say I am Omega. The things that are playing are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their source code.

Since the player is not allowed to look at its own (or, presumably, Omega's) code, it is not clear to me that it can implement a decision algorithm that will predict what Omega will do and then do the opposite. However, if you remove Unknowns' restrictions on the players, then your idea will cause some serious issues for Omega! In fact, a player that can predict Omega as effectively as Omega can predict the player seems like a reductio ad absurdum of Newcomb's paradox.

Comment author: Jiro 02 July 2015 04:42:00AM 0 points

If Omega is a program too, then an AI that is playing can have a subroutine that is equivalent to "predict Omega". The AI doesn't have to actually look at its own source code to do things that are equivalent to looking at its own source code--that's how the halting problem works!
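The trick being appealed to here is the standard quine construction (the idea behind Kleene's recursion theorem): a program can build a complete copy of its own text without being handed it as input. A minimal Python sketch, not from the thread itself:

```python
# A program can construct a complete copy of its own text without reading
# any file or being given its source as input -- the standard quine trick:
template = 'template = %r\nsource = template %% template'
source = template % template

# 'source' is now exactly the two lines of code above, and executing that
# copy reproduces the same text again:
namespace = {}
exec(source, namespace)
assert namespace["source"] == source
```

An agent built this way can hand a faithful description of itself to a "predict Omega" subroutine without ever inspecting its own code, which is why the no-self-inspection restriction doesn't block the diagonal argument.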

If Omega is not a program and can do things that a program can't do, then this isn't true, but I am skeptical that such an Omega is a meaningful concept.

Of course, the qualifier "deterministic" applies only to the players, so Omega could pick randomly, which the program cannot do; but since Omega is predicting a deterministic program, picking randomly can't help Omega do any better.