JGWeissman comments on Maximise Expected Utility, not Expected Perception of Utility - Less Wrong

Post author: JGWeissman 26 March 2010 04:39AM




Comment author: JGWeissman 08 May 2010 04:01:43PM 0 points

> The typical AI representational system has no way to distinguish a true fact from a believed fact.

What I am arguing it should do is distinguish between believing a proposition and believing that some other AI believes a proposition, especially in the case where the other AI is its future self.

> A rational agent can't detect the existence of option A. It would have to both infer that A leads to utility uTA, and at the same time infer that it leads to uFA.

No. It would have to infer that A leads to utility uTA and that it leads to the AI in the future believing it has led to uFA.
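The distinction being drawn here can be sketched as a toy model. Everything below is a hypothetical illustration (the class, field names, and numbers are mine, not from the discussion): the agent represents both the utility it actually infers an option yields (uTA) and the utility its future self will believe was achieved (uFA), and it ranks options only by the former.

```python
# Toy sketch of "maximise expected utility, not expected perception of
# utility". All names and numbers are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    true_utility: float       # uT: utility the agent infers the option yields
    perceived_utility: float  # uF: utility the future self will believe it yielded


def choose(options):
    """Rank options by true (inferred) utility only.

    The perceived_utility field is still represented -- the agent can
    predict its future self's belief without adopting it -- but it
    carries no weight in the decision.
    """
    return max(options, key=lambda o: o.true_utility)


options = [
    # A wireheading-style trap: the future self will believe A went well.
    Option("A", true_utility=1.0, perceived_utility=10.0),
    Option("B", true_utility=5.0, perceived_utility=5.0),
]

best = choose(options)
print(best.name)  # 'B': true utility decides, not predicted future belief
```

The point of the sketch is that predicting a future belief and holding that belief are separate representations, so the agent can infer "A leads to uTA, and my future self will believe it led to uFA" without any contradiction.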

Comment author: PhilGoetz 10 May 2010 10:43:30PM 0 points

> What I am arguing it should do is distinguish between believing a proposition and believing that some other AI believes a proposition, especially in the case where the other AI is its future self.

It's very important to be able to specify who believes a proposition. But I don't see how the AI can compute that it is going to believe a proposition, without believing that proposition. (We're not talking about propositions that the AI doesn't currently believe because the preconditions aren't yet satisfied; we're talking about an AI that is able to predict that it's going to be fooled into believing something false.)

> A rational agent can't detect the existence of option A. It would have to both infer that A leads to utility uTA, and at the same time infer that it leads to uFA.

> No. It would have to infer that A leads to utility uTA and that it leads to the AI in the future believing it has led to uFA.

Please give an example in which an AI can both infer that A leads to utility uTA, and that the AI will believe it has led to uFA, that does not involve the AI detecting errors in its own reasoning and not correcting them.