Aaron5 comments on What I Think, If Not Why

Post author: Eliezer_Yudkowsky, 11 December 2008 05:41PM


Comment author: Aaron5, 11 December 2008 07:12:29PM, 0 points

I'm just trying to understand the problem you're presenting. Is it that in the event of a foom, a self-improving AI always risks having its values drift far enough from humanity's that it endangers the human race? And your goal is to create a set of values that allows for both self-improvement and friendliness? And to do this, you must not only create the AI architecture but also influence the greater system of AI creation? I'm not involved in AI research in any capacity; I just want to see if I understand the fundamentals of what you're discussing.