Emile comments on Ends Don't Justify Means (Among Humans) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I wonder where this is leading ...
1) Morality is a complex computation that seems to involve a bunch of somewhat independent concerns.
2) Some concerns of human morality may not need to apply to AI.
So it seems that building Friendly AI involves not only correctly capturing (human) morality, but also figuring out which parts don't need to apply to an AI that doesn't share the same flaws.