Background and questions

Since Eric Drexler publicly released his “Comprehensive AI Services” model (CAIS), there has been a series of analyses on LW, from rohinmshah, ricraz, PeterMcCluskey, and others.

Much of this discussion focuses on the implications of this model for safety strategy and resource allocation. In this question I want to focus on the empirical part of the model.

  • What are the boldest predictions the CAIS model makes about what the world will look like in <=10 years?

“Boldest” might be interpreted as those predictions to which CAIS gives a decent chance, but which have the lowest probability under other “worldviews”, such as the Bostrom/Yudkowsky paradigm.

A prediction which all these worldviews agree on, but which is nonetheless quite bold, is less interesting for present purposes (an example might be that we will see faster progress than places like mainstream academia expect).
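One way to make “boldest” concrete is as a likelihood ratio between worldviews: an observation is bold for CAIS to the extent that CAIS assigns it much higher probability than competing views do, so that seeing it happen shifts credence toward CAIS. A minimal sketch, with entirely made-up numbers:

```python
# Toy Bayesian comparison (all numbers made up for illustration):
# a "bold" CAIS prediction is one with a large likelihood ratio
# P(observation | CAIS) / P(observation | Bostrom-Yudkowsky).

def posterior_odds(prior_odds: float, p_obs_given_cais: float,
                   p_obs_given_by: float) -> float:
    """Bayes rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_obs_given_cais / p_obs_given_by)

# Example: even prior odds; CAIS assigns 60% to some near-term observation,
# the Bostrom/Yudkowsky view only 15%.
odds = posterior_odds(prior_odds=1.0, p_obs_given_cais=0.6, p_obs_given_by=0.15)
print(f"posterior odds for CAIS: {odds:.1f} : 1")  # -> 4.0 : 1
```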

Some other related questions:

  • If you disagree with Drexler, but expect there to be empirical evidence within the next 1-10 years that would change your mind, what is it?
  • If you expect there to be events in that timeframe causing you to go “I told you so, the world sure doesn’t look like CAIS”, what are they?

Clarifications and suggestions

I should clarify that answers can be about things that would change your mind about whether CAIS is safer than other approaches (see e.g. the Wei_Dai comment linked below).

But I suggest avoiding discussion of cruxes which are more theoretical than empirical (e.g. how decomposable high-level tasks are), unless you have a neat operationalisation for making them empirical (e.g. whether there will be evidence of large economies of scope among the most profitable automation services).

Also, it might be really hard to get this down to a single prediction, so it might be useful to pose a cluster of predictions with different operationalisations, and/or to use conditional predictions.

Answers

PeterMcCluskey


One clear difference between Drexler's worldview and MIRI's is that Drexler expects progress to continue along the path that recent ML research has outlined, whereas MIRI sees more need for fundamental insights.

So I'll guess that Drexler would predict maybe a 15% chance that AI research will shift away from deep learning and reinforcement learning within a decade, whereas MIRI might say something more like 25%.

I'll guess that MIRI would also predict a higher chance of an AI winter than Drexler would, at least for some definition of winter that focuses more on diminishing IQ-like returns to investment than on overall spending.

jacobjacob


Wei_Dai writes:

A major problem in predicting CAIS safety is to understand the order in which various services are likely to arise, in particular whether risk-reducing services are likely to come before risk-increasing services. This seems to require a lot of work in delineating various kinds of services and how they depend on each other as well as on algorithmic advancements, conceptual insights, computing power, etc. (instead of treating them as largely interchangeable or thinking that safety-relevant services will be there when we need them). Since this analysis seems very hard to do much ahead of time, I think we'll have to put very wide error bars on any predictions of whether CAIS would be safe or unsafe, until very late in the game.

jacobjacob


Ricraz writes:

I'm broadly sympathetic to the empirical claim that we'll develop AI services which can replace humans at most cognitively difficult jobs significantly before we develop any single superhuman AGI (one unified system that can do nearly all cognitive tasks as well as or better than any human).

I’d be interested in operationalising this further, and hearing takes on how many years “significantly before” entails.

He also adds:

One plausible mechanism is that deep learning continues to succeed on tasks where there's lots of training data, but doesn't learn how to reason in general ways - e.g. it could learn from court documents how to imitate lawyers well enough to replace them in most cases, without being able to understand law in the way humans do. Self-driving cars are another pertinent example. If that pattern repeats across most human professions, we might see massive societal shifts well before AI becomes dangerous in the adversarial way that’s usually discussed in the context of AI safety.

The operationalisation which feels most natural to me is something like:

  • Make a list of cognitively difficult jobs (lawyer, doctor, speechwriter, CEO, engineer, scientist, accountant, trader, consultant, venture capitalist, etc...)
  • A job is automatable when there exists a publicly accessible AI service which allows an equally skilled person to do the job just as well in less than 25% of the time it used to take a specialist, OR which allows someone with little skill or training to do the job in about the same time it used to take a specialist (a sketch of this check follows the list below).
  • I claim that [...]
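A minimal rendering of the automatability criterion in the second bullet, purely as an illustration (the function name and the 10% slack on “about the same time” are my own assumptions):

```python
# Hypothetical check for the proposed "automatable" criterion (times in hours).

def job_is_automatable(specialist_hours: float,
                       skilled_person_with_ai_hours: float,
                       novice_with_ai_hours: float) -> bool:
    """True if a publicly accessible AI service lets an equally skilled person
    do the job in <25% of the specialist's time, OR lets someone with little
    skill or training do it in roughly the specialist's time."""
    big_speedup_for_skilled = skilled_person_with_ai_hours < 0.25 * specialist_hours
    novice_matches_specialist = novice_with_ai_hours <= 1.1 * specialist_hours  # "about the same time" (10% slack is an assumption)
    return big_speedup_for_skilled or novice_matches_specialist

# Example: contract review that takes a specialist 8 hours.
print(job_is_automatable(specialist_hours=8,
                         skilled_person_with_ai_hours=1.5,
                         novice_with_ai_hours=12))  # True (first clause fires)
```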
jacobjacob: Why are you measuring it as a proportion of time-until-agent-AGI rather than in years? If it takes 2 years to go from comprehensive services to an agent, and most jobs are automatable within 1.5 years, that seems a lot less striking and important than the claim did before operationalisation.
Richard_Ngo: The 75% figure is from now until single-agent AGI. I measure it proportionately because otherwise it says more about timeline estimates than about CAIS.

Tamay


If research into general-purpose systems stops producing impressive progress, and the application of ML in specialised domains becomes more profitable, we'd soon see much more investment in AI labs that are explicitly application-focused rather than basic-research focused.

Comments

I have a hard time making near/medium-term predictions under the Bostrom/Yudkowsky paradigm. Can you give some examples of what those would be? It seems to me like that paradigm primarily talks about what the limiting behavior is.

Like, I want to say something like "CAIS predicts that data will continue to be important", because the Bostrom/Yudkowsky paradigm says that superintelligent AI systems will be able to extract as much information as possible from data and so will need less of it; but I seriously doubt that anyone would actually consider that to be a "bold" prediction, and Bostrom/Yudkowsky themselves probably would also make that prediction.

EY seems to have interpreted AlphaGo Zero as strong evidence for his view in the AI-foom debate, though Hanson disagrees.

EY:

Showing excellent narrow performance *using components that look general* is extremely suggestive [of a future system that can develop lots and lots of different "narrow" expertises, using general components].

Hanson:

It is only broad sets of skills that are suggestive. Being very good at specific tasks is great, but doesn't suggest much about what it will take to be good at a wide range of tasks. [...] The components look MORE general than the specific problem on which they are applied, but the question is: HOW general overall, relative to the standard of achieving human level abilities across a wide scope of tasks.

It's somewhat hard to hash this out as an absolute rather than conditional prediction (e.g. conditional on there being breakthroughs involving some domain-specific hacks, and major labs continuing to work on them, they will somewhat quickly be superseded by breakthroughs with general-seeming architectures).

Maybe EY would be more bullish on StarCraft without imitation learning, or AlphaFold with only 1 or 2 modules (rather than 4/5 or 8/9 depending on how you count).

The following exchange is also relevant:

Raiden:

Robin, or anyone who agrees with Robin:

What evidence can you imagine would convince you that AGI would go FOOM?

jprwg:

While I find Robin's model more convincing than Eliezer's, I'm still pretty uncertain.

That said, two pieces of evidence that would push me somewhat strongly towards the Yudkowskian view:

  • A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing as Scott Alexander discussed recently, or something along similar lines.
  • Long-run data showing AI systems gradually increasing in capability without any increase in complexity. The AGZ example here might be part of an overall trend in that direction, but as a single data point it really doesn't say much.

RobinHanson:

This seems to me a reasonable statement of the kind of evidence that would be most relevant.

EY seems to have interpreted AlphaGo Zero as strong evidence for his view in the AI-foom debate

I don't think CAIS takes much of a position on the AI-foom debate. CAIS seems entirely compatible with very fast progress in AI.

I don't think CAIS would anti-predict AlphaGo Zero, though plausibly it doesn't predict as strongly as EY's position does.

conditional on there being breakthroughs involving some domain-specific hacks, and major labs continuing to work on them, they will somewhat quickly be superseded by breakthroughs with general-seeming architectures

This is a prediction I make, with "general-seeming" replaced by "more general", and I think of this as a prediction inspired much more by CAIS than by EY/Bostrom.

I don't think CAIS takes much of a position on the AI-foom debate. CAIS seems entirely compatible with very fast progress in AI.

Isn't the "foom scenario" referring to an individual AI that quickly gains ASI status by self-improving?

The equivalent of the "foom scenario" for CAIS would be rapidly improving basic AI capabilities due to automated AI R&D services, such that the aggregate "soup of services" is quickly able to do more and more complex tasks with constantly improving performance. If you look at the "soup" as an aggregate, this looks like a thing that is quickly becoming superintelligent by self-improving.

The main difference from the classical AI foom scenario is that the thing that's improving cannot easily be modeled as pursuing a single goal. Also, there are more safety affordances: there can still be humans in the loop for services that have large real world consequences, you can monitor the interactions between services to make sure they aren't doing anything unexpected, etc.
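A toy way to picture that aggregate dynamic (the functional form and parameters here are entirely made up, purely to illustrate the feedback loop, not anything from the CAIS report):

```python
# Toy model: aggregate capability of the "soup of services" grows faster as
# AI R&D services automate a larger share of AI R&D itself.

def simulate(years: int = 10, capability: float = 1.0,
             base_growth: float = 0.2, feedback: float = 0.3) -> list:
    trajectory = [capability]
    for _ in range(years):
        automated_share = capability / (capability + 10.0)  # saturating share of R&D done by services
        capability *= 1.0 + base_growth + feedback * automated_share
        trajectory.append(capability)
    return trajectory

print([round(c, 2) for c in simulate()])  # growth rate creeps up as the automated share rises
```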

This is a prediction I make, with "general-seeming" replaced by "more general", and I think of this as a prediction inspired much more by CAIS than by EY/Bostrom.

I notice I'm confused. My model of CAIS predicts that there would be poor returns to building general services compared to specialised ones (though this might be more of a claim about economics than a claim about the nature of intelligence).

My model of CAIS predicts that there would be poor returns to building general services compared to specialised ones

Depends what you mean by "general". If you mean that there would be poor returns to building an AGI that has a broad understanding of the world that you then ask to always perform surgery, I agree that that's not going to be as good as creating a system that is specialized for surgeries. If you mean that there would be poor returns to building a machine translation system that uses end-to-end trained neural nets, I can just point to Google Translate using those neural nets instead of more specialized systems that built parse trees before translating. When you say "domain-specific hacks", I think much more of the latter than the former.

Another way of putting it is that CAIS says that there are poor returns to building task-general AI systems, but does not say that there are poor returns to building general AI building blocks. In fact, I think CAIS says that you really do make very general AI building blocks -- the premise of recursive technological improvement is that AI systems can autonomously perform AI R&D which makes better AI building blocks which makes all of the other services better.
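To illustrate the distinction (the names and structure here are hypothetical, not from the paper): the same general building block gets specialised into many services by what it is trained on, rather than one task-general system doing everything.

```python
# Hypothetical sketch: a general *building block* reused across specialised services.

class SequenceModel:
    """Stand-in for a general-purpose trainable component (e.g. an end-to-end
    neural net), specialised only by its training data."""
    def __init__(self, training_data: str):
        self.training_data = training_data

    def run(self, query: str) -> str:
        return f"[model trained on {self.training_data}] output for: {query}"

# CAIS-style: one general block, specialised per service.
translation_service = SequenceModel(training_data="parallel corpora")
contract_review_service = SequenceModel(training_data="court documents")

print(translation_service.run("translate this sentence"))
print(contract_review_service.run("flag risky clauses"))
```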

All of that said, Eric and I probably do disagree on how important generality is, though I'm not sure exactly what the disagreement is, so to the extent that you're trying to use Eric's conception of CAIS you might want to downweight these particular beliefs of mine.

jmh:

I've not read the paper, but did just go to the link. One thing I would be interested in hearing the community's take on is Figure 1, "Classes of intelligent systems".

I am a bit surprised that I don't see any parallel references, at the higher-order levels, to human institutions. If we're putting humans at the individual-agent level, it seems some existing human institutions might fit at the higher, information- or task-oriented, levels.

To answer this question, it would be interesting to see a list of such services. It might include:

  • Text generators and personal assistants (including lawyers)
  • Super Google: better search, email, translation, etc.
  • Self-driving cars and everything around them
  • Home robotics
  • Centralised planning oracles and government support systems
  • Scientists' support services: automatic blueprint generators, article generators
  • Medical sphere: from telemedicine to medical advice by AI
  • Advanced viruses and antiviruses
  • Global police: an advanced form of Palantir
  • Strategic military planning systems and cyberweapons

What else?