turchin comments on The virtual AI within its virtual world - Less Wrong Discussion

6 Post author: Stuart_Armstrong 24 August 2015 04:42PM

Comment author: turchin 25 August 2015 07:48:03PM 3 points

I prefer the term "Safe AI", as it is more self-explanatory to an outsider.

Comment author: PhilGoetz 26 August 2015 01:01:59AM *  2 points

I think it's more accurate, though the term "safe" has a much larger positive valence than is justified, and so it is accurate but misleading. In particular, it smuggles in EY's presumptions about whom the AI is safe for, and thus whom we're supposed to be rooting for: humans or transhumans. Safer is not always better. I'd rather get the concept of stasis or homogeneity in there. Stasis and homogeneity are, if not the values at the core of EY's scheme, at least its most salient products.

Comment author: DanielLC 26 August 2015 10:04:06PM 0 points

Safe AI sounds like it does what you say as long as it isn't stupid. Friendly AIs are supposed to do whatever's best.

Comment author: turchin 27 August 2015 08:52:18PM *  0 points

For me, a Safe AI is one that is not an existential risk. "Friendly" reminds me of "friendly user interface", that is, something superficial relative to the core function.