Richard_Hollerith2 comments on That Tiny Note of Discord - Less Wrong

Post author: Eliezer_Yudkowsky | 23 September 2008 06:02AM

Comment author: Richard_Hollerith2 | 26 September 2008 03:33:30PM | 0 points

In other words, there is no way to program a search for objective morality or for any other search target without the programmer specifying or defining what constitutes a successful conclusion of the search.

If you understand this, then I am wholly at a loss to understand why you think an AI should have "universal" goals or a goal system zero or whatever it is you're calling it.

The flip answer is that the AI must have some goal system, and the designer of the AI must choose it. The community contains vocal egoists, such as Peter Voss, Hopefully Anonymous, and perhaps Denis Bider, who want the AI to help them achieve their egoistic ends. Are you any less at a loss to understand them than to understand me?