Dmytry comments on Superintelligent AGI in a box - a question. - Less Wrong

Post author: Dmytry | 23 February 2012 06:48PM | 14 points




Comment author: Dmytry 25 February 2012 02:58:42AM | 1 point

After a ton of failed attempts, it's a case of extraordinary claims needing extraordinary evidence.

Also, when an AI is downloading stuff off the internet, it's already not boxed. Reading a copy of the internet, maybe. Keep in mind that the dumbest AI can read that stuff the fastest, because it was only, e.g., looking at how the first letter correlates with the last letter. I sure won't assume that a raytracer is working correctly just because it loaded all the objects in a scene. Let alone an experimental AI.

Comment author: Anubhav 26 February 2012 02:32:14AM | 0 points

You can bat aside individual scenarios, but the point is: are there no known reliable indicators that an AI is undergoing FOOM? Not even at the point where AI theory is advanced enough to actually build one?

Comment author: Dmytry 26 February 2012 08:47:56AM | 6 points

We have one example of a seed AI. The seed AI took about 3 hours to progress to the point where it started babbling to itself, 2–3 seconds from there to trying to talk to the outside (except it didn't figure out how to talk to the outside, and was still just babbling to itself), and then 0.036 seconds to FOOM.

The seed AI was biological intelligence (treated as a black box), and I scaled time so that 1 hour = 1 billion years. (And the outside doesn't seem to exist, but the intelligence tried anyway.)
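The scaling arithmetic above can be sketched out directly. This is just a conversion at the stated scale of 1 hour = 1 billion years; the event labels come from the comment, while the real-world year figures are derived here rather than stated in the original.

```python
# Sketch of the time-scaling arithmetic, assuming 1 hour of "AI time"
# corresponds to 1 billion real years, as stated in the comment.

SCALE_YEARS_PER_HOUR = 1e9  # 1 scaled hour = 1 billion years


def scaled_seconds_to_years(seconds):
    """Convert seconds on the compressed clock back to real years."""
    return seconds / 3600.0 * SCALE_YEARS_PER_HOUR


# "about 3 hours to start babbling" -> ~3 billion years
print(scaled_seconds_to_years(3 * 3600))
# "2..3 seconds to trying to talk to outside" -> roughly 560,000-830,000 years
print(scaled_seconds_to_years(2.5))
# "0.036 seconds to FOOM" -> ~10,000 years
print(scaled_seconds_to_years(0.036))
```

At this scale one second corresponds to about 278,000 years, which is how 0.036 seconds comes out to roughly the last ten thousand years.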