DanielLC comments on Ends Don't Justify Means (Among Humans) - Less Wrong

44 Post author: Eliezer_Yudkowsky 14 October 2008 09:00PM


Comment author: DanielLC 03 May 2013 03:10:10AM 4 points

Why would a Friendly AI lose out? It can do anything any other AI can do. It's not like a human, who has to worry about becoming corrupt if they start committing atrocities for the good of humanity.