I think this approach to thinking about AI capabilities is quite pertinent. Could be worth including "Nx AI R&D labor AIs" in the list?
Thanks for the recommendation! I liked ryan's sketches of what capabilities an Nx AI R&D labor AI might possess. Makes things a bit more concrete. (Though I definitely don't like the name.) I'm not sure if we want to include this definition, as it is pretty niche. And I'm not convinced of its utility: when I tried drafting a paragraph describing it, I struggled to articulate why readers should care about it.
Here's the draft paragraph.
"Nx AI R&D labor AIs: The level of AI capabilities that is necessary for increasing the effective amount of labor working on AI research by a factor of N. This is not the same thing as the capabilities required to increase AI progress by a factor of N, as labor is just one input to AI progress. The virtues of this definition include: ease of operationalization, [...]"
I think the main value of that operationalization is enabling more concrete thinking/forecasting about how AI might progress. It models some of the relevant causal structure of reality at a reasonable level of abstraction: not too nitty-gritty[1], not too abstract[2].
which would lead to "missing the forest for the trees", make the abstraction too effortful to use in practice, and/or risk making it irrelevant as soon as something changes in the world of AI ↩︎
For example, a higher-level abstraction like "AI that speeds up AI development by a factor of N" might at first glance seem more useful. But as you and ryan noted, the speed of AI development depends on many factors, so that operationalization would mix together many distinct things, hiding relevant causal structure of reality and making it difficult/confusing to think about AI development. ↩︎
This is an article in the featured articles series from AISafety.info, which writes introductory content on AI safety. We'd appreciate any feedback.
The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.
These terms are all related attempts to define AI capability milestones — roughly, "the point at which artificial intelligence becomes truly intelligent" — but with different meanings:
Superintelligence is defined by Nick Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". This is by far the highest bar out of all the concepts listed here, but it may be reached a short time after the others, e.g., because of an intelligence explosion.
Other terms that are sometimes used include:
The term AGI suffers from ambiguity, to the point where some people avoid using it. Still, it remains the most common term for the cluster of concepts discussed on this page.