taw comments on Maximise Expected Utility, not Expected Perception of Utility - Less Wrong

Post author: JGWeissman 26 March 2010 04:39AM

Comment author: taw 26 March 2010 05:39:23AM

What does it even mean that "the universe is in a state that is worth utility u_FA but really leads to a state worth utility u_TA"? Utility functions, however worthless they really are, only make sense relative to some agent's opinions.

Do you mean that the agent's utility function doesn't follow his programmer's utility function; that the agent's utility function is inconsistent; that the agent's utility function is fine but his analysis of the world is inconsistent, so he gets confused; that we figured out the One True Utility Function but decided not to program it into the agent; or what?

Comment author: JGWeissman 26 March 2010 05:45:41AM

The agent will falsely believe the universe is in one state, with a certain utility, while in reality the universe is in a different state, with a different utility.

I have reworded that sentence to hopefully make this clearer.
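
To make the distinction concrete, here is a minimal toy sketch in Python (my own illustration, not code from the post; the actions, probabilities, and utilities are hypothetical, with u_FA playing the role of the utility of the falsely-believed state and u_TA the utility of the true state):

```python
# Toy "wireheading" scenario, illustrating the gap between true and
# perceived utility. All names and numbers here are hypothetical.

# Each action yields (probability, true_state, perceived_state) outcomes.
# "tamper" corrupts the agent's sensors: it will perceive "goal_achieved"
# while the universe is really in the state "goal_failed".
outcomes = {
    "work":   [(1.0, "goal_achieved", "goal_achieved")],
    "tamper": [(1.0, "goal_failed",   "goal_achieved")],
}

utility = {"goal_achieved": 10.0, "goal_failed": 0.0}

def expected_true_utility(action):
    # Expected utility of the state the universe is actually in (u_TA).
    return sum(p * utility[true] for p, true, _ in outcomes[action])

def expected_perceived_utility(action):
    # Expected utility of the state the agent believes it is in (u_FA).
    return sum(p * utility[perceived] for p, _, perceived in outcomes[action])

for action in outcomes:
    print(action,
          "E[true]:", expected_true_utility(action),
          "E[perceived]:", expected_perceived_utility(action))
```

An agent maximizing expected perceived utility is indifferent between the two actions (both score 10), while an agent maximizing expected true utility prefers "work" (10 vs 0); that gap between u_FA and u_TA is what the quoted sentence is pointing at.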