Risto_Saarelma comments on Size of the smallest recursively self-improving AI? - Less Wrong Discussion

4 Post author: alexflint 30 March 2011 11:31PM

Comment author: Tiiba 31 March 2011 05:26:02PM 3 points [-]

What's up with the word "foom", and why is it always in all caps? Can we come up with another name for this that doesn't sound like a sci-fi nerd in need of Ritalin?

Comment author: alexflint 01 April 2011 02:52:04PM 0 points [-]

Yeah I agree. "Intelligence explosion" is bandied about, but I guess that can also refer to Kurzweilian-style exponential growth phenomena.

"Hard take-off singularity" is close, too, but not exactly the same. Again, it refers to a certain magnitude of acceleration, whereas FOOM refers specifically to recursive self-improvement as the mechanism.

I'm open to suggestions.

Comment author: TheOtherDave 01 April 2011 04:30:10PM 1 point [-]

My $0.02: singularities brought about by recursive self-improvement are one concept, and singularities involving really-really-fast improvement are a different concept. (They are, of course, perfectly compatible.)

It may just not be all that useful to have a single word that denotes both.

If I want to talk about a "hard take-off" or a "step-function" scenario caused by recursively self-improving intelligence, I can say that.

But I estimate that 90% of what I will want to say about it will be true of many different step-function scenarios (e.g., those caused by the discovery of a cache of Ancient technology) or true of many different recursively self-improving intelligence scenarios.

So it may be worthwhile to actually stop and think about whether I want to include both clauses.

Comment author: alexflint 02 April 2011 10:42:33AM 1 point [-]

Completely agree with paras 1 and 2.

However, it does seem that we talk about the "hard take-off scenario caused by recursively self-improving intelligence" often enough to warrant a convenience term meaning just that. Much of the discussion about cascades, cycles, insights, AI-boxes, resource overhangs, etc. is specific to the recursive self-improvement scenario, and not to, e.g., the cache of Ancient tech scenario.

Comment author: timtyler 31 March 2011 05:35:41PM *  0 points [-]

See http://lesswrong.com/lw/we/recursive_selfimprovement/ for an attempt at a definition.