John_Maxwell_IV comments on Let's reimplement EURISKO! - Less Wrong

Post author: cousin_it 11 June 2009 04:28PM


Comment author: Henrik_Jonsson 15 June 2009 04:51:36PM (0 points)

But even if the AI discovered some things about our physics, that does not significantly narrow the range of possible minds. It doesn't know whether it's dealing with paperclippers or pebblesorters. It might know roughly how smart we are.

You're using your (human) mind to predict what a postulated potentially smarter-than-human intelligence could and could not do.

It might not operate on the same timescales as us. It might do things that appear like pure magic. No matter how often you took snapshots and checked how far it had gotten in figuring out details about us, there might be no way of ruling out progress, especially if you gave it motives for hiding that progress (such as pulling the plug every time it came close). Sooner or later you'd conclude that nothing interesting was happening and put it on autopilot. A small self-improvement might cascade into an enormous difference in understanding, with the notorious FOOM following.

I don't usually like quoting myself, but

If you had a program that might or might not be on a track to self-improve and initiate an intelligence explosion, you'd better be sure enough that it would remain friendly to, at the very least, give it a robot body and a scalpel, and stand with your throat exposed before it.

If the scenario makes you nervous, you should be pretty much equally nervous at the idea of giving your maybe-self-improving AI, sitting inside thirty nested sandboxes, even 10 milliseconds (10^41 Planck intervals) of CPU time.

Let me be clear here: I'm not assigning any significant probability to someone recreating EURISKO or something like it in their spare time and having it recursively self-improve any time soon. My confidence intervals are spread widely enough that I can spend some time worrying about it, though. I'm just pointing out that sandboxing adds approximately zero extra defense in exactly those situations where we would need it.

The parallel to the simulation argument was interesting though, thanks.

Comment author: John_Maxwell_IV 10 February 2013 10:02:06AM (2 points)

If the scenario makes you nervous, you should be pretty much equally nervous at the idea of giving your maybe-self-improving AI, sitting inside thirty nested sandboxes, even 10 milliseconds (10^41 Planck intervals) of CPU time.

I don't think the number of Planck intervals is especially useful to cite... the relevant quantity is CPU cycles, and while I'm not an expert on CPUs, I'm pretty sure we're not bumping up against Planck-scale clock rates yet.
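
For concreteness, here's a back-of-the-envelope comparison (the 1 GHz clock is an illustrative assumption, not a claim about any particular machine):

```python
# Back-of-the-envelope: how much "time" is 10 ms, counted in
# Planck intervals versus CPU cycles? The 1 GHz clock is an
# illustrative assumption.

PLANCK_TIME_S = 5.39e-44   # Planck time in seconds
CLOCK_HZ = 1e9             # assumed 1 GHz CPU clock
INTERVAL_S = 10e-3         # the 10 milliseconds from the quote

planck_intervals = INTERVAL_S / PLANCK_TIME_S
cpu_cycles = INTERVAL_S * CLOCK_HZ

print(f"Planck intervals: {planck_intervals:.2e}")  # ~1.9e41
print(f"CPU cycles:       {cpu_cycles:.2e}")        # ~1.0e7
```

The same 10 ms that sounds enormous as ~10^41 Planck intervals is only on the order of 10^7 clock cycles, and cycles are what the AI actually computes with.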

Relatedly, if you were worried about self-improving superintelligence, you could give your AI a slow CPU.
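
A minimal sketch of one way to do that, assuming a POSIX system (the child command and the duty cycle below are hypothetical; on Linux, a cgroups cpu.max quota would do the same thing more cleanly): duty-cycle the untrusted process with SIGSTOP/SIGCONT so it only ever sees a small fraction of real CPU time.

```python
import signal
import subprocess
import time

# A crude "slow CPU": alternately run and freeze an untrusted
# child process so it gets only a small fraction of real CPU
# time. POSIX-only (relies on SIGSTOP/SIGCONT); a sketch, not a
# hardened sandbox.

CMD = ["python3", "untrusted_ai.py"]  # hypothetical workload
RUN_S = 0.01    # let the child run for 10 ms...
PAUSE_S = 0.99  # ...then freeze it for 990 ms (~1% duty cycle)

proc = subprocess.Popen(CMD)
try:
    while proc.poll() is None:
        time.sleep(RUN_S)                 # child is running
        proc.send_signal(signal.SIGSTOP)  # freeze the child
        time.sleep(PAUSE_S)
        proc.send_signal(signal.SIGCONT)  # resume the child
finally:
    if proc.poll() is None:
        proc.kill()
```

Throttling limits how much computation the AI gets per wall-clock second, which buys inspection time; it obviously doesn't change what it could do with enough total cycles.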