NancyLebovitz comments on Ends Don't Justify Means (Among Humans) - Less Wrong
It seems to me that an FAI would still be in an evolutionary situation. It's at least going to need a goal of self-preservation [1], and it might well have a goal of increasing its abilities in order to be more effectively Friendly.
This implies it will somehow have to deal with the possibility of overestimating its own value relative to that of the humans it's trying to help.
[1] What constitutes the self for an AI is left as a problem for the student.