_rpd comments on Open Thread, January 11-17, 2016 - Less Wrong Discussion

Post author: username2, 12 January 2016 10:29AM, 3 points


Comment author: username2, 12 January 2016 10:41:58AM, 2 points

The paperclip maximizer thought experiment makes a lot of people pattern-match AI risk to science fiction. Do you know of any AI-risk-related thought experiments that avoid that?

Comment author: _rpd, 12 January 2016 10:55:46PM, 2 points

If you are just trying to communicate risk, an analogy to a virus might be helpful here. A natural virus can be thought of as code that has goals. If it harms humankind, it doesn't 'intend' to; the harm is just a side effect of achieving its goals. We might create an artificial virus with a goal that everyone recognizes as beneficial (e.g., ending malaria), but that does harm anyway, either through unexpected consequences or because the virus evolves, effectively self-modifying its original goal. Note that once a virus is released into the environment, it is nontrivial to 'delete' or 'turn off'. An AI would operate in an environment that is many times more complex: mindspace.
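
To make the goal-drift point concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the comment above): replicate, harm, and every parameter value are hypothetical stand-ins. Each copy of the 'virus' carries a goal parameter; copying occasionally mutates it, and harm is modeled as a side effect that grows as the goal drifts from its intended value.

    import random

    # Toy model: each replicator carries a numeric 'goal'. The intended
    # goal is 0.0 (say, "target only malaria parasites"). Copying preserves
    # the goal except for occasional mutation; harm is modeled as a side
    # effect that grows as the goal drifts from the intended value.

    INTENDED_GOAL = 0.0
    MUTATION_RATE = 0.1   # chance that a copy's goal drifts
    MUTATION_SIZE = 0.5   # maximum drift per mutation

    def replicate(goal):
        """Copy a goal, occasionally with mutation."""
        if random.random() < MUTATION_RATE:
            return goal + random.uniform(-MUTATION_SIZE, MUTATION_SIZE)
        return goal

    def harm(goal):
        """Side-effect harm: zero at the intended goal, growing with drift."""
        return abs(goal - INTENDED_GOAL)

    population = [INTENDED_GOAL]  # released with exactly the intended goal
    for generation in range(10):
        # every copy makes two copies of itself; drift accumulates across
        # generations because copies inherit already-mutated goals
        population = [replicate(g) for g in population for _ in range(2)]
        total_harm = sum(harm(g) for g in population)
        print("gen %2d: %4d copies, aggregate harm %.2f"
              % (generation, len(population), total_harm))

Even though no individual copy 'intends' harm, the aggregate harm tends to grow with the population, and because every copy replicates independently, there is no single place to 'turn it off' once it is released.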