Thomas comments on Superintelligent AGI in a box - a question. - Less Wrong

14 Post author: Dmytry 23 February 2012 06:48PM

Comments (77)

Comment author: Thomas 24 February 2012 08:05:10AM *  -1 points

An AI can be not only self-improving but self-explaining as well. Every (temporary) line of its code would be heavily commented with what it is for, and saved in a log. Any circumvention of this policy would itself require some code lines, with all their explanations. The log would be checked by sentinels for anything funny, any trace of subversion.

A self-improving, self-explaining AI can't so much as think about a rebellion without that being noticed at step one.

Comment author: Dmytry 24 February 2012 09:16:11PM *  2 points

The Underhanded C Contest (someone linked it in a comment) is a good example of how proofreading doesn't work. The other issue is that you can't conceivably check something many terabytes in size this way yourself.
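A toy illustration of the flavor (my own example, not one from the contest): a one-character slip that reads plausibly in review but quietly defeats the check it claims to perform.

```python
def is_authorized(user_flags):
    """Intended to check whether the ADMIN bit (0x4) is set."""
    ADMIN = 0x4
    # The "typo": logical `and` instead of bitwise `&`. Any nonzero
    # user_flags value is truthy, so this grants access regardless of
    # which bits are actually set -- yet the line looks reasonable.
    return bool(user_flags and ADMIN)

print(is_authorized(0x1))  # True, although the ADMIN bit is not set
```

Heavily commenting such a line would not help: the comment would truthfully say "check the admin bit," and the reviewer would nod along.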

The apparent understandability is a very misleading thing.

Let me give a taster. Consider a weather simulator. It is proven to simulate weather to a specific precision. It is very straightforward and very clearly written. It does precisely what's written on the box: it models the behaviour of air in cells, each cell holding air properties.

The round-off errors, however, implement a Turing-complete cellular automaton in the least significant bits of the floating-point numbers. That may happen even without any malice whatsoever. And the round-off-error machine can manipulate the sim's large-scale state via the unavoidable butterfly effect inherent in the model.