TheOtherDave comments on Size of the smallest recursively self-improving AI?

Post author: alexflint, 30 March 2011 11:31PM

Comment author: TheOtherDave, 01 April 2011 04:30:10PM, 1 point

My $0.02: singularities brought about by recursive self-improvement are one concept, and singularities involving really-really-fast improvement are a different concept. (They are, of course, perfectly compatible.)

It may just not be all that useful to have a single word that denotes both.

If I want to talk about a "hard take-off" or a "step-function" scenario caused by recursively self-improving intelligence, I can say that.

But I estimate that 90% of what I will want to say about it will be true of many different step-function scenarios (e.g., those caused by the discovery of a cache of Ancient technology) or true of many different recursively self-improving intelligence scenarios.

So it may be worth having to actually stop and think about whether I want to include both clauses.

Comment author: alexflint, 02 April 2011 10:42:33AM, 1 point

Completely agree with paras 1 and 2.

However, it does seem that we talk about "hard take-off scenarios caused by recursively self-improving intelligence" often enough to warrant a convenience term that means just that. Much of the discussion about cascades, cycles, insights, AI boxes, resource overhangs, etc. is specific to the recursive self-improvement scenario, and not to, e.g., the cache-of-Ancient-tech scenario.