
timtyler comments on Size of the smallest recursively self-improving AI? - Less Wrong Discussion

4 Post author: alexflint 30 March 2011 11:31PM


Comment author: timtyler 31 March 2011 05:39:00PM *  1 point [-]

Then there is the fact that any algorithm that naively enumerates some space of algorithms qualifies, in some sense, as a FOOM seed: it will eventually hit on some recursively self-improving AI. But that could take gigayears, so it is not really a FOOM in the usual sense.
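The "naively enumerates some space of algorithms" idea is usually made precise by dovetailing: interleave the execution of all programs so that every program eventually receives an unbounded step budget, even though no single non-halting program can block the rest. A minimal sketch of the schedule (the function name `dovetail` and the toy indexing are illustrative, not anything from the thread):

```python
from itertools import count

def dovetail():
    """Yield (program_index, step_budget) pairs so that every program
    index is eventually run with an arbitrarily large step budget,
    even though no single program runs forever before others start."""
    for n in count(1):          # round n
        for i in range(n):      # programs 0 .. n-1
            yield (i, n)        # run program i for up to n steps

gen = dovetail()
first = [next(gen) for _ in range(6)]
# schedule begins: (0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)
```

This is why the enumeration takes gigayears in practice: the schedule is fair but spends almost all of its time re-running short, uninteresting programs.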

If you link it up to actuators? That doesn't work - it bashes its brains in before it does anything interesting. Unless you have mastered spaceships and self-replication - but then you have already built an S.I.S.

Comment author: alexflint 01 April 2011 12:49:20PM 0 points [-]

Hmm, good point.

I think we need an inverse AI-box -- one that only lets AIs out. Something like "prove Fermat's last theorem and I'll let you out". An objection would be that we'll come across a non-AI that just happens to print out the proof before we come across an actual AI that does so, but actually the reverse should be true: an AI represents the intelligence to find that proof, which should be more compressible than a direct encoding of the entire proof (even if we allow the proof itself to be compressed). But it could be that encoding intelligence just requires more bits than encoding the proof of Fermat's last theorem, in which case we can pick a more difficult problem, like "cure cancer in this faithful simulation of Earth". As we increase the difficulty of the problem, the size of the smallest non-AI that solves it should increase quickly, but the size of the smallest true AI that solves it should increase slowly.
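The inverse-box idea can be sketched as a gate that runs each candidate program in a sandbox and releases it only if an independent, cheap verifier accepts its output. Since we can't verify Fermat proofs here, this toy uses factoring a semiprime as a stand-in hard-to-solve, easy-to-check problem; all names (`inverse_box`, `check`, the candidate lambdas) are hypothetical illustrations, not anything proposed in the thread:

```python
def inverse_box(candidates, check, budget):
    """Release the first candidate whose output passes an independent
    verifier; everything else stays in the box. `check` must be a
    cheap, trusted checker for a problem believed hard to *solve*."""
    for program in candidates:
        try:
            answer = program(budget)   # would be sandboxed, step-bounded
        except Exception:
            continue                   # crashing programs stay boxed
        if check(answer):
            return program             # "let it out"
    return None                        # nothing qualified

# Toy stand-in problem: factor N (solving is hard, checking is cheap).
N = 2021  # = 43 * 47
check = lambda ans: (isinstance(ans, tuple)
                     and 1 < ans[0] < N
                     and ans[0] * ans[1] == N)

dumb = lambda budget: 42          # a non-solver: stays boxed
lookup = lambda budget: (43, 47)  # hard-codes the answer: gets out

released = inverse_box([dumb, lookup], check, budget=1000)
```

Note that `lookup` is exactly the objection above: a program that directly encodes the answer passes the gate with no intelligence at all, which is why the problem must be hard enough that the encoded-answer program is larger than the smallest true AI.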

Or perhaps the original AI box would actually function as an inverse AI box too: the human just tries to keep the AI in, so only a sufficiently intelligent AI can escape.