NancyLebovitz comments on Ends Don't Justify Means (Among Humans) - Less Wrong

44 Post author: Eliezer_Yudkowsky 14 October 2008 09:00PM



Comment author: NancyLebovitz 15 October 2008 12:38:35PM 0 points

It seems to me that an FAI would still be in an evolutionary situation. It will at least need a goal of self-preservation [1], and it might well have a goal of increasing its abilities in order to be more effectively Friendly.

This implies it will somehow have to deal with the possibility of overestimating its own value relative to the humans it's trying to help.

[1] What constitutes the self for an AI is left as a problem for the student.