gRR comments on Decision Theories: A Less Wrong Primer

69 points. Post author: orthonormal, 13 March 2012 11:31PM




Comment author: orthonormal 12 March 2012 04:27:18PM 0 points

And as I replied there, this depends on its utility function being such that "filling the box for my non-simulated copy" has utility comparable to "taking the extra box when I'm not simulated". There are utility functions for which this works (e.g. maximizing paperclips in the real world) and utility functions for which it doesn't (e.g. maximizing hedons in my personal future, whether I'm being simulated or not), and Omega can slightly change the problem (simulating an agent with the same decision algorithm as X but a different utility function) in a way that makes CDT two-box again. (That trick wouldn't stop TDT/UDT/ADT from one-boxing.)

Comment author: gRR 12 March 2012 05:18:09PM 1 point

I think you missed my point.

> Omega can slightly change the problem (simulate an agent with the same decision algorithm as X but a different utility function)

This is irrelevant. The agent is actually outside, thinking about what to do in Newcomb's problem. But only we know this; the agent itself doesn't. All the agent knows is that Omega always predicts correctly. This means the agent can model Omega as a perfect simulator. The actual method Omega uses to make predictions doesn't matter; the world would look the same to the agent regardless.

Comment author: orthonormal 13 March 2012 04:47:52AM 1 point

Unless Omega predicts without simulating: for instance, this formulation of UDT can be formally proved to one-box without any simulation taking place.

Comment author: gRR 13 March 2012 07:32:28AM 0 points

Errrr. The agent does not simulate anything in my argument. The agent has a "mental model" of Omega, in which Omega is a perfect simulator. It's about the representation of the problem within the agent's mind.

In your link, Omega (the function U()) is a perfect simulator: it calls the agent function A() twice, once to get its prediction and once for the actual decision.
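To make this concrete, here is a minimal sketch (in Python rather than the linked formalism; the names and payoffs are mine, for illustration) of a world program that predicts by perfect simulation:

    def A():
        """The agent program. Returns 1 to one-box, 2 to two-box."""
        return 1  # a one-boxing agent, for illustration

    def U():
        """The world program, with Omega as a perfect simulator."""
        prediction = A()  # first call: Omega's simulation of the agent
        decision = A()    # second call: the agent's actual choice
        box_b = 1000000 if prediction == 1 else 0  # Omega fills box B iff it predicts one-boxing
        box_a = 1000                               # box A always holds $1000
        return box_b if decision == 1 else box_b + box_a

    print(U())  # 1000000; both calls run the same code, so the prediction always matches the decision

Since the "prediction" is literally a second run of A(), it cannot disagree with the actual decision, which is what I mean by Omega being a perfect simulator here.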

Comment author: orthonormal 13 March 2012 09:37:12PM 0 points

The problem would work just as well if the first call went not to A directly but to an oracle queried on whether A()=1. There are ways of predicting that aren't simulation, and if Omega uses one of those, your idea falls apart.
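As a hypothetical sketch (again Python; the oracle here is a toy stand-in for genuine proof search over A's source, and none of these names come from the linked post):

    import inspect

    def A():
        """The agent program. Returns 1 to one-box, 2 to two-box."""
        return 1

    def oracle(agent):
        """Toy predictor: decides whether agent() == 1 without ever calling agent().
        A crude stand-in for proof search over the agent's source code."""
        return "return 1" in inspect.getsource(agent)

    def U():
        """The world program: Omega predicts via the oracle, not by simulation."""
        prediction_one_box = oracle(A)  # no call to A() here
        decision = A()                  # the only actual execution of the agent
        box_b = 1000000 if prediction_one_box else 0
        box_a = 1000
        return box_b if decision == 1 else box_b + box_a

    print(U())  # 1000000; the prediction is derived from A's source, never from running it

The payoffs are the same, but the agent is executed exactly once, so "model Omega as a perfect simulator" no longer describes what the world program actually does.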