Caledonian2 comments on Ends Don't Justify Means (Among Humans) - Less Wrong
At present, Eliezer cannot functionally describe what 'Friendliness' would actually entail. It seems likely that any outcome he views as undesirable (including, presumably, his own murder) would simply be declared impermissible for a Friendly AI.
Imagine if Isaac Asimov not only lacked the ability to specify *how* the Laws of Robotics were to be implanted in artificial brains, but couldn't specify what those Laws were supposed to be. You would essentially have Eliezer. Asimov specified his Laws well enough that he and others could analyze them critically and examine their consequences, strengths, and weaknesses. 'Friendly AI' is not so specified and cannot be analyzed. No one can find problems with the concept because it's not substantive enough - it is essentially nothing but one huge, undefined problem.