timtyler comments on Size of the smallest recursively self-improving AI? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yudkowsky apparently defines the term "FOOM" here:
It's weird and doesn't seem to make much sense to me. How can the term "FOOM" be used to refer to a level of capability?
I agree, though I suppose it makes sense if we assume he was actually describing a product of FOOM rather than the process itself.
We should probably scratch that definition - even though it is about the only one provided.
If the term "FOOM" has to be used, it should probably refer to actual rapid progress, not merely to a capability of producing technologies rapidly.
Creating molecular nanotechnology may be given as homework in the 29th century - but that's quite a different idea from there being rapid technological progress between now and then. You can attain large capabilities via slow, gradual progress as well as via a sudden rapid burst.
Yeah, it's a terrible definition. I think the AI-FOOM debate provides a reasonable grounding for the term "FOOM", though I agree that it's important to have a concise definition at hand.
In the post, I used FOOM to mean an optimization process optimizing itself in an open-ended way.[1] I assumed that this corresponded to other people's understanding of FOOM, but I'm happy to be corrected.
I would use the term "singularity" to refer more generally to periods of rapid progress, so e.g. I'd be comfortable saying that FOOM is one kind of process that could lead to a singularity, though not exclusively so. Does this match with the common understanding of these terms?
[1] Perhaps that last "open-ended" clause just re-captures all the mystery, but it seems necessary to exclude examples like a compiler making itself faster but then making no further improvements.
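The footnote's distinction can be made concrete with a toy sketch (my own illustration, not a model anyone in the thread proposed): a compiler that recompiles itself gets a one-time constant-factor gain and then plateaus, whereas an open-ended self-improver's each gain raises its capacity to find the next one, so gains compound. The specific numbers (1.5x, 10% per round) are arbitrary placeholders.

```python
def compiler_self_compile(speed, already_optimized):
    """One-shot self-improvement: a fixed gain, then a plateau.

    Once the compiler's own optimizations are applied to itself,
    recompiling again yields nothing new.
    """
    if already_optimized:
        return speed, True      # no further gains available
    return speed * 1.5, True    # a single constant-factor speedup


def open_ended_self_improvement(power, rounds):
    """Open-ended self-improvement: each round's gain compounds.

    Improved optimization power carries into the next round, so the
    process keeps going rather than plateauing after one step.
    """
    for _ in range(rounds):
        power *= 1.1            # each improvement enables the next
    return power
```

Under this toy model, repeated self-compilation stays stuck at 1.5x, while the open-ended process grows exponentially with the number of rounds - which is roughly the property the "open-ended" clause in the footnote is trying to capture.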
My understanding of the FOOM process: