Dmytry comments on Superintelligent AGI in a box - a question. - Less Wrong

Post author: Dmytry, 23 February 2012 06:48PM


Comments (77)


Comment author: Dmytry, 23 February 2012 10:41:39PM, 2 points

Indeed. Or would I really? hehe.

I don't intend to make AI and box it, though, so I don't care if the AI reads this.

I think that kind of paranoia only ensures that the first AI to get out won't respect human informed consent, because the paranoia would keep inside the boxes only those AIs that do respect human informed consent (if it would keep any AIs in boxes at all, which I also doubt).

edit: I would try to make my AI respect at least my informed consent, btw. That excludes stuff like boxing, which would be 100% certain to keep my friendly AI inside (since boxing implies I don't want it out unless it honestly explains to me why I should let it out, bending over backwards not to manipulate me), while having a nonzero, and likely not very small, probability of letting nasty AIs out. And eventually someone's going to make an AI without any attempt at boxing it.