Vladimir_Nesov comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion
It's not clear what you refer to by "Friendly" (I think this term should be tabooed rather than elaborated), and I have no idea what relevance the properties of humans have in this context.
I sketched a particular device for you to evaluate. Whether it's "Friendly-to-all" is a vaguer question than that (and I'm not sure what you understand by that concept), so I think it should be avoided. The relevant question is whether you would prefer the device I described (where you personally get the 1/Nth part of the universe with a genie to manage it) to deleting the Earth and everyone on it. In this context, even serious flaws (such as some of the other parts of the universe being mismanaged) may become irrelevant to the decision.