Emile comments on Ends Don't Justify Means (Among Humans) - Less Wrong

44 Post author: Eliezer_Yudkowsky 14 October 2008 09:00PM


Comment author: Emile 15 October 2008 10:42:43AM 1 point

I wonder where this is leading...

1) Morality is a complex computation that seems to involve a bunch of somewhat independent concerns.

2) Some concerns of human morality may not need to apply to an AI.

So it seems that building a Friendly AI involves not only correctly specifying (human) morality, but also figuring out which parts don't need to apply to an AI that doesn't share our flaws.