Stuart_Armstrong comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

Post author: lukeprog | 29 January 2011 02:52AM


Comment author: Stuart_Armstrong | 30 January 2011 06:17:57PM | 2 points

AIs without utility functions, but with some other motivational structure, will tend to self-modify into utility-function AIs. Utility-function AIs seem more stable under self-improvement, though there are many reasons such an AI might still want to change its utility function (e.g. speed of access, multi-agent situations).
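A minimal sketch of one standard argument for this instability (added here for illustration; the items, fee, and agent are hypothetical, not from the comment): an agent with cyclic, non-transitive preferences can be "money-pumped" by a trader who charges a small fee per swap, so it loses resources by its own lights and would gain from self-modifying toward a consistent utility function.

```python
# Sketch, assuming cyclic preferences A > B > C > A.
# A trader offers the item the agent currently prefers, for a small fee.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x preferred to y

def accepts_trade(held, offered):
    """The agent accepts whenever it strictly prefers the offered item."""
    return (offered, held) in prefers

money = 10.0
held = "B"
fee = 1.0  # the trader's fee per swap

# Two trips around the preference cycle.
for offered in ["A", "C", "B", "A", "C", "B"]:
    if accepts_trade(held, offered) and money >= fee:
        money -= fee
        held = offered
        print(f"Swapped to {held}; money left: {money}")

# The agent ends up holding what it started with, minus the fees.
print(f"Final holding: {held}, money: {money}")
```

Running this, the agent accepts every trade, cycles back to its starting item, and is strictly poorer: a loss by its own lights under repeated interaction, which is the pressure toward a transitive (utility-function) motivational structure.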

Comment author: Oligopsony | 30 January 2011 06:53:26PM | 0 points

Could you clarify what you mean by an "other motivational structure"? Something with preference non-transitivity?

Comment author: Stuart_Armstrong | 30 January 2011 07:47:18PM | 1 point