jerryL comments on My Childhood Role Model - Less Wrong

29 Post author: Eliezer_Yudkowsky 23 May 2008 08:51AM

Comment author: jerryL 23 May 2008 02:08:00PM 0 points [-]

"Ideals are like stars." All Schurz is doing is defining, yet again, desire. Desire is metonymic by definition, and I think it is one of the most important evolutionary traits of the human mind. This permanent dissatisfaction of the mind must originally have proven very useful in going after more game than we could consume, and it is still useful in scientific pursuits.

How would AI find its ideals? What would be the origin of the desire that would make an AI spend energy finding something as utterly "useless" as general knowledge? If AI evolves, it will focus on energy problems (how to think more, and faster, with lower energy consumption), and it may find interesting answers, but only in that practical area. If you don't solve the problem of AI desire (and this is the path to solving friendliness), AI will evolve very fast in a single direction and will quickly reach the limits of its own "evolutionary destiny". I still think the way to go is to replace biological mass with replaceable material in humans, not the other way around.