Vladimir_Nesov comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion

18 Post author: ancientcampus 22 January 2013 08:22PM

Comments (354)

Comment author: Vladimir_Nesov 23 January 2013 08:40:05PM 1 point

Perhaps, but we already know that most people (and groups) are not Friendly

It's not clear what you refer to by "Friendly" (I think the term should be tabooed rather than elaborated), and I don't see what the relevance of properties of humans is in this context.

Making them more powerful by giving them safe-for-them genies seems unlikely to sum to Friendly-to-all.

I sketched a particular device for you to evaluate. Whether it's "Friendly-to-all" is a vaguer question than that (and I'm not sure what you understand by that concept), so I think it should be avoided. The relevant question is whether you would prefer the device I described (where you personally get the 1/Nth part of the universe, with a genie to manage it) to deleting the Earth and everyone on it. In that context, even serious flaws (such as some of the other parts of the universe being mismanaged) may become irrelevant to the decision.