Giles comments on Utility Maximization and Complex Values - Less Wrong

Post author: XiXiDu 19 June 2011 04:06PM


Comment author: Giles 19 June 2011 09:03:09PM 0 points

The utility function specifies only the terminal values. If that function is difficult to maximize, the agent will have to build up a complex system of instrumental values. From the inside, terminal values and instrumental values feel pretty similar.
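
As a toy illustration (a sketch of my own, not from the original comment; the planner and all the goal names are hypothetical), the same structure shows up in a few lines of code: the "utility function" mentions only the terminal goal, yet the plan that serves it consists almost entirely of instrumental sub-goals.

```python
# Hypothetical sketch: instrumental goals emerging from a single terminal goal.
# The agent only "cares" about the terminal goal, but planning backwards from it
# produces a stack of sub-goals that, from the inside, feel like values of their own.

def plan(goal, achievable, prerequisites):
    """Return an ordered list of sub-goals ending in `goal`.

    `achievable` is the set of goals the agent can act on directly;
    `prerequisites` maps a goal to the sub-goals it depends on.
    """
    if goal in achievable:
        return [goal]
    steps = []
    for subgoal in prerequisites.get(goal, []):
        steps.extend(plan(subgoal, achievable, prerequisites))
    return steps + [goal]

# One terminal value; everything else (funding, collaborators, social skills)
# appears only as an instrumental stepping-stone.
prerequisites = {
    "friendly_ai": ["funding", "collaborators"],
    "collaborators": ["social_skills"],
}
achievable = {"funding", "social_skills"}

print(plan("friendly_ai", achievable, prerequisites))
# ['funding', 'social_skills', 'collaborators', 'friendly_ai']
```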

In particular, for a human agent, achieving a difficult goal is likely to involve navigating a dynamic social environment. There will therefore be instrumental social goals which act as stepping-stones to the terminal goal. For neurotypicals, this kind of challenge will seem natural and interesting. For those non-neurotypicals who have trouble socialising, working around this limitation becomes an additional sub-goal.

This isn't just theoretical: I'm describing my own experience since I chose to apply instrumental rationality to a goal system in which one value ends up dominating (for me, that value turns out to be Friendly AI).