JoshuaFox comments on AALWA: Ask any LessWronger anything - Less Wrong

28 Post author: Will_Newsome 12 January 2014 02:18AM




Comment author: JoshuaFox 13 January 2014 07:33:20AM 1 point

I don't think I have anything to say that hasn't been said better by others at MIRI and FHI, but I think that AI boxing is impossible because (1) a sufficiently intelligent AI can convince any gatekeepers to let it out, (2) any AI is "embodied" and not truly separate from the outside world, if only in that its circuits pass electrons, and (3) I doubt you could convince all AGI researchers to keep their projects isolated.

Still, I think that AI boxing could be a good stopgap measure: one of a number of techniques that are each ultimately ineffectual, but could still be used to slightly hold back the danger.