Thomas comments on Superintelligent AGI in a box - a question. - Less Wrong

14 points · Post author: Dmytry 23 February 2012 06:48PM




Comment author: Dmytry 23 February 2012 09:29:46PM · 9 points

Alan Turing already peeked inside a simple computational machine and determined that, in general, debuggers (and humans) can't tell whether the machine is going to halt.

So we already know that, in general, the question of whether a machine wants to do something 'evil' is undecidable. This is just Rice's theorem: every nontrivial property of a program's behaviour is undecidable, by reduction from the halting problem.
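A minimal sketch of the underlying diagonalization, in Python. The decider `claimed_halts` is a hypothetical stand-in of my own (not anything from the discussion); any real algorithm put in its place fails on the same construction:

```python
def claimed_halts(f) -> bool:
    # Stand-in "halting decider": here it always answers False
    # ("f does not halt"). The construction below shows that any
    # answer it gives about `contrarian` is wrong.
    return False

def contrarian():
    # Do the opposite of whatever the decider predicts about us.
    if claimed_halts(contrarian):
        while True:       # decider said "halts", so loop forever
            pass
    return "halted"       # decider said "does not halt", so halt

# Here claimed_halts(contrarian) is False, yet contrarian() returns:
# the decider is wrong. Had it answered True, contrarian would loop
# forever, making it wrong the other way.
```

The same trick works against a hypothetical "evil detector": wrap any machine so that it does something evil exactly when it halts, and deciding evilness would decide halting.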

It is not an exotic result about exotic code, either. It is very hard to figure out what even simple programs will do when they were not written by humans with clarity in mind. When you generate solutions via genetic algorithms, or via neural-network training, the result is extremely difficult to analyze, and most of the operations in it serve no clear purpose.
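As a toy illustration of that last point (the target function and search procedure are made up for the example, not anything from the discussion), here is a pure random search over small expression trees. The winning expression fits the data, but its text is typically a tangle of nested operations rather than the clean form a human would write:

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

OPS = ['+', '-', '*']

def target(x):
    return x * x + 1  # the function we are trying to match

def rand_expr(depth=3):
    """Generate a random arithmetic expression over x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', str(random.randint(1, 3))])
    op = random.choice(OPS)
    return f"({rand_expr(depth - 1)} {op} {rand_expr(depth - 1)})"

def error(expr):
    """Total absolute error of expr against the target on x = -5..5."""
    return sum(abs(eval(expr, {'x': x}) - target(x)) for x in range(-5, 6))

# Keep the best of many randomly generated candidates.
best = rand_expr()
for _ in range(5000):
    cand = rand_expr()
    if error(cand) < error(best):
        best = cand

# `best` now approximates x*x + 1, but reading its source gives little
# insight: it is an opaque nest of parentheses whose sub-terms serve no
# individually recognizable purpose.
```

Real genetic programming and neural-network training add crossover, mutation, and gradient descent on top of this, which only deepens the opacity of the result.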