Snowyowl comments on Less Wrong: Open Thread, September 2010 - Less Wrong

3 Post author: matt 01 September 2010 01:40AM


Comment author: Snowyowl 01 September 2010 11:36:18AM, 2 points

I don’t care if AI is Friendly or not. [...] I am mainly interested in that whatever AI we create does not paperclip the universe

You contradict yourself here. A Friendly AI is an intelligence which attempts to improve the well-being of humanity. A paperclip maximiser is an intelligence which does not, because it cares about something entirely different and unrelated. Any sufficiently advanced AI is either one or the other or somewhere in between.

By "sufficiently advanced", I mean an AI which is intelligent enough to consider the future of humanity and attempt to influence it.

Comment author: PhilGoetz 01 September 2010 04:37:25PM, 4 points

You contradict yourself here. A Friendly AI is an intelligence which attempts to improve the well-being of humanity. A paperclip maximiser is an intelligence which does not, because it cares about something entirely different and unrelated. Any sufficiently advanced AI is either one or the other or somewhere in between.

No; these are two types of AI out of a larger design space. You ignore, at the very least, the most important and most desirable case: an AI that shares many of humanity's values, and attempts to achieve those values rather than to increase the well-being of humanity.