Eliezer_Yudkowsky comments on Optimization - Less Wrong

Post author: Eliezer_Yudkowsky 13 September 2008 04:00PM


Comment author: Eliezer_Yudkowsky 14 September 2008 07:54:33PM 3 points

Oh, come on, Lara, did you really think I hadn't thought of that? One of the reasons why Friendly AI isn't trivial is that you need to describe human values like autonomy - "I want to optimize my own life, not have you do it for me" - whose decision-structure is nontrivial; e.g., you wouldn't want an AI choosing for you the exact life-course that maximized your autonomy.