Richard_Hollerith2 comments on Ends Don't Justify Means (Among Humans) - Less Wrong

44 Post author: Eliezer_Yudkowsky 14 October 2008 09:00PM





Comment author: Richard_Hollerith2 15 October 2008 02:07:54PM 0 points

But, Nancy, self-preservation can be an instrumental goal. That is, we can make it so that the only reason the AI wants to keep on living is that if it does not, it cannot help the humans.