David_Gerard comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong

18 Post author: ancientcampus 22 January 2013 08:22PM



Comment author: David_Gerard 28 January 2013 09:23:26PM

> So, we're being asked to imagine an arbitrary superhuman AI whose properties and abilities we can't guess at except to specify arbitrarily

A great deal of discussion of future superintelligent AI takes this form: "we can't understand it, therefore you can't prove it wouldn't do any arbitrary thing I assert." That alone makes productive discussion difficult.