RichardKennaway comments on Applying utility functions to humans considered harmful - Less Wrong

Post author: Kaj_Sotala 03 February 2010 07:22PM




Comment author: RichardKennaway 04 February 2010 11:24:14PM 0 points

> Utility functions are a good model to use if we're talking about designing an AI. We want an AI to be predictable, to have stable preferences, and do what we want.

Why would these desirable features be the result? It reads to me as if you're saying that this is a solution to the Friendly AI problem. Surely not?

Comment author: PhilGoetz 08 October 2011 05:01:36PM 0 points

I am afraid he probably does mean that. That's the Yudkowskian notion of "friendly". Not a very good word to describe it, IMHO.