For no reason in particular I'm wondering about the size of the smallest program that would constitute a starting point of a recursively self-improving AI.
The analysis of FOOM as a self-amplifying process would seem to indicate that in principle one could get it started from a relatively modest starting point -- perhaps just a few bytes of the right code could begin the process. Or could it? I wonder whether any other considerations give tighter lower bounds.
One consideration is that FOOM hasn't already happened -- at least not here on Earth. If the smallest FOOM seed were very small (say, a few hundred bytes) then we would expect evolution to have already bumped into it at some point. Although evolution is under no specific pressure to produce a FOOM, it has probably produced, over the last few billion years, all the interesting computations up to some minor level of complexity, and if there were a FOOM seed among those then we would see the results around us.
Then there is the more speculative analysis of what minimal expertise the algorithm constituting the FOOM seed would actually need.
Then there is the fact that any algorithm that naively enumerates some space of algorithms qualifies in some sense as a FOOM seed, since it will eventually hit on some recursively self-improving AI. But that could take gigayears, so it is really not FOOM in the usual sense.
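To make the "gigayears" point concrete, here is my own back-of-envelope sketch (not from the discussion itself) of how long a naive enumerator would take to exhaust all programs of up to n bytes, assuming a hypothetical rate of 10^9 candidates tried per second:

```python
# Back-of-envelope: time for a naive enumerator to try every byte string
# of length 1..n_bytes, at an assumed (hypothetical) 1e9 candidates/sec.

SECONDS_PER_GIGAYEAR = 1e9 * 365.25 * 24 * 3600  # ~3.16e16 seconds

def candidates_up_to(n_bytes):
    """Number of distinct byte strings of length 1 through n_bytes."""
    return sum(256 ** k for k in range(1, n_bytes + 1))

def gigayears_to_enumerate(n_bytes, rate=1e9):
    """Time (in gigayears) to try every candidate at `rate` per second."""
    return candidates_up_to(n_bytes) / rate / SECONDS_PER_GIGAYEAR

# The search space multiplies by 256 per byte: 8-byte programs are
# exhaustible in centuries, but by ~12 bytes the enumeration already
# takes thousands of gigayears -- far longer than the age of the universe.
for n in (4, 8, 12):
    print(n, gigayears_to_enumerate(n))
```

So even under these generous assumptions, blind enumeration blows past cosmological timescales before reaching program sizes of even a dozen bytes, let alone a few hundred -- which is why it doesn't count as a FOOM seed in any practical sense.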
I wonder also whether the fact that mainstream AI hasn't yet produced FOOM could lower-bound the complexity of doing so.
Note that here I'm referring to recursively self-improving AI in general -- I'd be interested if the answers to these questions change substantially for the special case of friendly AIs.
Anyway, just idle thoughts, do add yours.
My $0.02: singularities brought about by recursive self-improvement are one concept, and singularities involving really-really-fast improvement are a different concept. (They are, of course, perfectly compatible.)
It may just not be all that useful to have a single word that denotes both.
If I want to talk about a "hard take-off" or a "step-function" scenario caused by recursively self-improving intelligence, I can say that.
But I estimate that 90% of what I will want to say about it will be true of many different step-function scenarios (e.g., those caused by the discovery of a cache of Ancient technology) or true of many different recursively self-improving intelligence scenarios.
So it may actually be worthwhile to have to stop and think about whether I want to include both clauses.
Completely agree with paras 1 and 2.
However, it does seem that we talk about the "hard take-off scenario caused by recursively self-improving intelligence" often enough to warrant a convenience term meaning just that. Much of the discussion about cascades, cycles, insights, AI-boxes, resource overhangs, etc. is specific to the recursive self-improvement scenario, and not to, e.g., the cache-of-Ancient-tech scenario.