orthonormal comments on The Domain of Your Utility Function - Less Wrong

Post author: Peter_de_Blanc 23 June 2009 04:58AM




Comment author: orthonormal 25 June 2009 06:48:06PM 2 points

I think the point is that any decision algorithm, even one with intransitive preferences over world-states, can be described as optimization of a utility function. However, the objects to which utility is assigned may be ridiculously complicated constructs rather than the things we think should determine our actions.

To show this is trivially true, take your decision algorithm and consider the utility function "1 for acting in accordance with this algorithm, 0 for not doing so". Tim is giving an example where it doesn't have to be this ridiculous, but the utility function still has to be meta-level compared to object-level preferences.
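To make the trivial construction concrete, here's a minimal sketch (all names and the example algorithm are hypothetical, not from the original discussion): a decision algorithm with intransitive pairwise preferences, wrapped in exactly the indicator utility described above, so that an ordinary maximizer reproduces its behavior.

```python
def intransitive_chooser(options):
    """A hypothetical decision algorithm with intransitive preferences:
    rock > scissors > scissors-beats > paper > rock (a cycle)."""
    beats = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
    # Pick the first option that beats some other offered option.
    for a in options:
        if any(beats[a] == b for b in options if b != a):
            return a
    return options[0]

def indicator_utility(action, options):
    """Utility 1 for acting in accordance with the algorithm, 0 otherwise."""
    return 1 if action == intransitive_chooser(options) else 0

def optimizer(options):
    """A plain utility maximizer over the indicator utility."""
    return max(options, key=lambda a: indicator_utility(a, options))

# The maximizer's choices are identical to the original algorithm's,
# even though the underlying preferences form a cycle:
for opts in (["rock", "scissors"], ["scissors", "paper"], ["paper", "rock"]):
    assert optimizer(opts) == intransitive_chooser(opts)
```

The point of the sketch is that the "utility function" here is defined in terms of the algorithm itself, which is exactly why it's a ridiculous, meta-level construct rather than an object-level preference over outcomes.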

Still (I say), if it's less complicated to describe the full range of human behavior by an algorithm that doesn't break down into a utility function plus an optimizer, then we're better off doing so (as a descriptive strategy).