Eliezer_Yudkowsky comments on That Tiny Note of Discord - Less Wrong

Post author: Eliezer_Yudkowsky, 23 September 2008 06:02AM


Comment author: Eliezer_Yudkowsky, 24 September 2008 07:14:36PM, 2 points

In other words, there is no way to program a search for objective morality, or for any other search target, without the programmer specifying what constitutes a successful conclusion of the search.

If you understand this, then I am wholly at a loss to understand why you think an AI should have "universal" goals or a goal system zero or whatever it is you're calling it.