Vladimir_Nesov comments on Example decision theory problem: "Agent simulates predictor" - Less Wrong

Post author: cousin_it, 19 May 2011 03:16PM


Comment author: Vladimir_Nesov 20 May 2011 02:35:47AM 0 points

The world program is completely self-contained; other than through the argument it receives, it may not contain references to the agent's choices at all.

Can you formalize this requirement? Suppose I copy the agent's code, rename all its symbols, obfuscate it, and simulate its execution in a source-code interpreter that runs in a hardware emulator running on an emulated Linux box running on JavaScript inside a browser running on Windows running on a hardware simulator implemented (and then obfuscated again) in the same language as the world program, and insert this thing in the world program (along with a few billion people, a planet, and a universe). How can you possibly make sure that there is no dependence?
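A minimal sketch of the worry above, with all names and payoffs illustrative (not from the original post): the world program below never refers to the agent by name, yet it depends on the agent's strategy because a copy of the agent's source is embedded in it and executed internally.

```python
# Hypothetical embedded copy of the agent's source. After symbol renaming
# and obfuscation, nothing in `world` syntactically mentions the agent.
AGENT_SOURCE = """
def strategy():
    return "one-box"
"""

def world(action):
    # `action` is the agent's official choice, passed in as the argument.
    # But the world ALSO runs the embedded copy, so it depends on the
    # agent's strategy a second time, through a hidden channel.
    namespace = {}
    exec(AGENT_SOURCE, namespace)        # simulate the embedded copy
    predicted = namespace["strategy"]()  # the "predictor" inside the world
    if action == "one-box" and predicted == "one-box":
        return 1_000_000
    return 1_000
```

Detecting that `world` contains a (renamed, obfuscated) copy of the agent amounts to deciding program equivalence, which is undecidable in general, so a syntactic "no references to the agent" check cannot rule this out.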

Comment author: jimrandomh 20 May 2011 02:49:24AM 0 points

Can you formalize this requirement? Suppose I copy the agent's code ... and insert this thing in the world program. How can you possibly make sure that there is no dependence?

You don't get to do that, because when you're writing World, the Strategy hasn't been determined yet. Think of it as a challenge-response protocol; World is a challenge, and Strategy is a response. You can still do agent-copying, but you have to enlarge the scope of World to include the rules by which that copying was done, or else you get unrelated agents instead of copies.
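The challenge-response ordering jimrandomh describes can be sketched as follows; the functions, action names, and payoffs here are illustrative assumptions, not part of the original formalism.

```python
def make_world():
    # The challenge: written first, with no knowledge of the eventual
    # strategy, so its text cannot embed that strategy.
    def world(action):
        return 10 if action == "cooperate" else 1
    return world

def choose_strategy(world):
    # The response: it may inspect and run the world, but the world was
    # fixed before this function was ever chosen.
    return max(["cooperate", "defect"], key=world)

world = make_world()            # step 1: challenge is fixed
action = choose_strategy(world) # step 2: response is chosen
payoff = world(action)          # step 3: payoff is computed
```

Under this protocol, dependence of the world on the strategy can only enter through the argument `action`, which is exactly the self-containment requirement quoted above.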

Comment author: Vladimir_Nesov 20 May 2011 02:57:08AM 0 points

To copy the agent's code, you don't need to know its strategy. The world naturally changes if you modify it, and the strategy may change as well when the agent is run on the changed world, but the agent's code stays the same, and you know that code. The new world depends only on the new strategy, not the old one; still, we now have a world that depends on its agent's strategy, and you won't be able to discover how it does unless you already know.

In any case, all this copying is beside the point. The point is that there can exist very convoluted worlds that depend on the agent's action, where it is not feasible to determine that they do, or how. And we don't get to choose the real world.