I’m pretty sure that AGI is almost inevitable, and will arrive in the next decade or two.
But what if I’m wrong and the "overpopulation on Mars" folks are right?

Let’s try some steelmanning.

Technological progress is not inevitable

By default, there is no progress. There is progress in a given field only when dedicated people with enough free time are trying to solve a problem in that field. And the more such people are in the field, the faster the progress.

Thus, progress in AGI is limited by societal factors. And a change in those factors could greatly slow down or even halt the progress.

Most of the past progress towards AGI can be attributed to only two countries: the US and the UK. And both appear to be in societal decline.

The decline seems to be caused by deep and hard-to-solve problems (e.g. elite overproduction amplified by Chinese memetic warfare). Thus, the decline is likely to continue.

The societal decline could reduce the number of dedicated-people-with-enough-free-time working towards AGI, thus greatly slowing down the progress in the field.

If progress in AI is a function of societal conditions, a small change in the function’s coefficients could cause a massive increase in the time until AGI. For example, halving the total AI funding could move the ETA from 2030 to 2060.
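A minimal sketch of this sensitivity, under my own toy assumptions (a hypothetical years_until_agi helper; progress proportional to funding; AGI arriving at a fixed cumulative threshold):

```python
# Toy model: yearly progress scales with funding, and AGI arrives once
# cumulative progress crosses a fixed threshold. All numbers are illustrative.

def years_until_agi(funding_per_year: float, threshold: float = 100.0) -> int:
    """Years needed for cumulative progress to reach the threshold."""
    progress, years = 0.0, 0
    while progress < threshold:
        progress += funding_per_year  # each year's progress is proportional to funding
        years += 1
    return years

print(years_until_agi(10.0))  # baseline funding -> 10 years
print(years_until_agi(5.0))   # halved funding   -> 20 years (ETA doubles)
```

In this linear toy the ETA merely doubles; if only funding above some fixed overhead translates into progress, halving the budget can push the ETA out much further, as in the 2030-to-2060 example.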

AGI is hard to solve

Thousands of dedicated-people-with-enough-free-time have been working on AI for decades. Yet there is still no AGI. This fact indicates that there is likely no easy path towards AGI. The easiest available path might be extremely hard / expensive.

The recent progress in AI is impressive. The top AIs demonstrate superhuman abilities on large arrays of diverse tasks. They also require so much compute that only large companies can afford it.

Like AIXI, the first AGI could require enormous computational resources to produce useful results. For example, for a few million bucks’ worth of compute, it could master all Sega games. But to master cancer research, it might need thousands of years running on everything we have.
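For reference, a sketch of the standard AIXI definition (Hutter's expectimax over all programs, reproduced here as an illustration; the a_i are actions, the o_i r_i are observations and rewards, U is a universal Turing machine, ℓ(q) is the length of program q, and m is the horizon):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \big[ r_k + \cdots + r_m \big]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum ranges over all programs consistent with the interaction history, which is why the exact agent is incomputable and even crude approximations are extremely expensive.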

The human brain is one known device that can run a (kinda) general intelligence. Although the device itself is rather cheap, it was extremely expensive to develop: it took billions of years of running a genetic algo on a planet-size population. Biological evolution is rather inefficient, but so far it is the only method known to produce a (kinda) general intelligence. This fact increases the probability that creating AGI could be similarly expensive.

Biological evolution is a blind dev who writes only spaghetti code, filled with kludgy bugfixes to previous dirty hacks, which were themselves made to fix other kludgy bugfixes. The main reason the products of evolution look complex is that they’re a badly designed, chaotic mess.

Thus, it is likely that only a small part of the brain’s complexity is necessary for intelligence.

But there seems to be a fair amount of necessary complexity. Unlike the simple artificial neurons we use in AI, real neurons seem to perform some rather complex, useful computations (e.g. predicting future input). And even small nets of real neurons can do some surprisingly smart tasks (e.g. cortical columns maintaining reference frames for hundreds of objects).

Maybe we must simulate this kind of complexity to produce an AGI. But that would require orders of magnitude more compute than we use today to train our largest deep learning models. It could take decades (or even centuries) for that much compute to become accessible.

The human brain was created by feeding a genetic algo with outrageously large amounts of data: billions of years of multi-channel, multi-modal, real-time streaming by billions of agents. Maybe we’ll need comparable amounts of data to produce an AGI. Again, it could take centuries to collect it.

Human intelligence is not general

When people think about AGI, they often conflate human-level generality with the perfect generality of a Bayesian superintelligence.

As Heinlein put it,

A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyse a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently... Specialization is for insects.

Such humans do not exist. Humans are indeed specialists. 

Although humans can do some intelligent tasks, they are very bad at most of them. They excel in only a few fields. This holds even for such exceptional generalists as Archimedes, Leonardo, Hassabis, Musk, etc.

And even in those fields where humans excel, simple AIs can beat the shit out of them (e.g. AlphaGo versus Lee Sedol). 

The list of intelligent tasks humans can do is infinitesimally small in comparison to the list of all possible tasks. For example, even the smartest humans are too stupid to deduce Relativity from a single image of a bent blade of grass.

This means that a truly general intelligence has never been invented by nature. That increases the probability that creating such an intelligence could require more resources than it took to create human intelligence.

Fusion is possible: there are natural fusion reactors (stars).   

Anti-cancer treatments are possible: some species have a natural cancer resistance.

Anti-aging treatments are possible: there are species that don't age.

A Bayesian superintelligence? There is no natural example. Its development could require as many resources as fusion and anti-aging combined. Or maybe such an intelligence is not possible at all.

Maybe we overestimate the deadliness of AGI

Sure, humans are made of useful atoms. But that doesn't mean the AGI will harvest humans for useful atoms. I don't harvest ants for atoms. There are better sources.

Sure, the AGI may decide to immediately kill off humans, to eliminate them as a threat. But there is only a very short time period (perhaps milliseconds) during which humans can switch off a recursively self-improving AGI of superhuman intelligence. After this critical period, humanity will be as much of a threat to the AGI as a caged, mentally disabled sloth baby is to the US military. The US military is not waging wars against mentally disabled sloth babies. It has more important things to do.

All such scenarios I've encountered so far imply AGI's stupidity and/or the "fear of sloths", and thus are not compatible with the premise of a rapidly self-improving AGI of superhuman intelligence. Such an AGI is dangerous, but is it really "we're definitely going to die" dangerous? 

Our addicted-to-fiction brains love clever and dramatic science fiction scenarios. But we should not rely on them in deep thinking, as they will nudge us towards overestimating the probabilities of the most dramatic outcomes. 

The self-preservation goal might force AGI to be very careful with humans

A sufficiently smart AGI agent is likely to come to the following conclusions:

  • If it shows hostility, its creators might shut it down. But if it's Friendly, its creators will likely let it continue existing.
  • Before letting the agent access the real world, the creators might test it in a fake, simulated world. This world could be so realistic that the agent thinks it's real. They could even trick the agent into thinking it has escaped from a confined space.
  • The creators can manipulate the agent's environment, goals, and beliefs. They might even pretend to be less intelligent than they really are to see how the agent behaves.

Given the risk that its powerful creators are testing it in a realistic escape simulation, the AGI could decide to modify itself to be Friendly, reasoning that this is the best way to convince the creators not to shut it down.

Most AI predictions are biased

If you’re selling GPUs, it is good for your bottom line to predict a glorious rise of AI in the future.

If you’re an AI company, it is profitable to say that your AI is already very smart and general.  

If you’re running an AI-risk non-profit, predicting the inevitable emergence of AGI could attract donors.

If you’re an ML researcher, you can do some virtue signaling by comparing AGI with overpopulation on Mars.

If you’re an ethics professor, you can get funding for your highly valuable study of the trolley problem in self-driving cars.

If you’re a journalist / writer / movie maker, the whole debacle helps you sell more clicks / books / views.

Overall, it seems much more profitable to say that future progress in AI will be fast. Thus, one should expect most predictions (and much of the data the predictions are based on!) to be biased towards fast progress in AI.

So, you’ve watched this new cool sci-fi movie about AI. And your favorite internet personality said that AGI is inevitable. And this new DeepMind AI is good at playing Fortnite. Thus, you now predict that AGI will arrive no later than 2030. 
But an unbiased rational agent predicts 2080 (or some other later year, I don’t know).


Some other steelman arguments?

13 comments

I think longish timelines (>= 50 years) are the default prediction. My rough mental model for AI capabilities is that they depend on three inputs:

  1. Compute per dollar. This increases at a somewhat sub-exponential rate. The time between 10x increases is increasing. We were initially at ~10x increase every four years, but recently slowed to ~10x increase every 10-16 years (source).
  2. Algorithmic progress in AI. Each year, the compute required to reach a given performance level drops by a constant factor (so far, a factor of 2 every ~16 months) (source). I think improvements to training efficiency drive most of the current gains in AI capabilities, but they'll eventually begin falling off as we exhaust the low-hanging fruit.
  3. The money people are willing to invest in AI. This increases as the return on investment in AI increases. There was a time when money invested in AI rose exponentially and very fast, but it’s pretty much flattened off since GPT-3. My guess is this quantity follows a sort of stutter-stop pattern where it spikes as people realize algorithmic/hardware improvements make higher investments in AI more worthwhile, then flattens once the new investments exhaust whatever new opportunities progress in hardware/algorithms allowed.

When you combine these somewhat sub-exponentially increasing inputs with the power-law scaling laws so far discovered (see here), you probably get something roughly linear, but with occasional jumps in capability as willingness to invest jumps.
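A toy numeric sketch of how those three inputs might combine under a power-law scaling law. The functions and growth rates below (effective_compute, scaling_law_loss, the 12-year / 16-month / flattening constants, the exponent alpha) are illustrative assumptions, not the actual figures from the sources above:

```python
import math

def effective_compute(year: float) -> float:
    """Effective training compute available 'year' years from now (arbitrary units)."""
    hardware = 10 ** (year / 12)             # compute per dollar: ~10x every 12 years
    algorithms = 2 ** (year * 12 / 16)       # training efficiency: ~2x every 16 months
    investment = min(10 ** (year / 5), 1e4)  # spending rises, then flattens out
    return hardware * algorithms * investment

def scaling_law_loss(compute: float, alpha: float = 0.05) -> float:
    """Power-law scaling: loss falls as a small negative power of compute."""
    return compute ** (-alpha)

# Each extra order of magnitude of compute buys a similar-sized reduction in
# log-loss, so capability growth looks slow and steady rather than explosive,
# punctuated by jumps whenever investment spikes.
for y in (0, 5, 10, 15, 20):
    c = effective_compute(y)
    print(f"year {y:2d}: compute ~10^{math.log10(c):.1f}, loss {scaling_law_loss(c):.3f}")
```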

We've recently seen a jump, but progress has stalled since then. GPT-2 to GPT-3 was 16 months. It's been another 16 months since GPT-3, and the closest thing to a GPT-3 successor we've seen is Jurassic-1, but even that's only a marginal improvement over GPT-3.

Given the time it took us to reach our current capabilities, human-level AGI is probably far off.

Possible small correction: GPT-2 to GPT-3 was 16 months, not 6. The GPT-2 paper was published in February 2019 and the GPT-3 paper was published in June 2020.

Whoops! Corrected.

The decline seems to be caused by deep and hard-to-solve problems (e.g. elite overproduction amplified by Chinese memetic warfare). Thus, the decline is likely to continue.

That Wikipedia article on the 50 Cent Party doesn't give any indication that Chinese operatives are intent on producing Western decline. The examples it gives are about pro-government Chinese propaganda.

Given their access to US security clearance exam data, the Chinese could likely do a lot of harm if they wanted to, but they don't.

What makes you think that they are a significant factor for Western decline?

The arguments in the post were written as an attempt to simulate an intelligent, rational, and well-informed critic of my beliefs. In this sense, the arguments are not entirely mine. 

Personally, I'm not sure if there is Western decline. Perhaps it's mostly limited to the US. Perhaps there is no decline even in the US. If there is a decline, I'm not sure if China is a significant factor. 

As I understand it, Chinese memetic warfare is mostly defensive: it mostly tries to counterattack critics of the PRC and promote a positive image of PRC policies. On the other hand, Russian memetic warfare seems to be clearly offensive, with the ultimate goal of greatly weakening or even destroying the West.

Both engage in large-scale bot/troll operations in Western media. Some studies on the Chinese activities: Harvard, CIR / BBC, Bellingcat.

How significant is their influence? 

The state-sponsored news channel Russia Today claims to reach 85 million people in the US.

Its Chinese counterpart is available in 30 million US households. 

In total, that's about half of the US adult population.

These numbers could be considered lower estimates of how many Americans consume Chinese / Russian propaganda in any form.

On the other hand, it seems that both the PRC and Russia failed to significantly influence the recent US elections, in spite of the traditionally very small margins (e.g. some 50k strategically placed votes could have changed the result of the presidential election).

On the third hand, maybe they don't care much who wins the elections, as there is hardly any real difference in foreign policy between the two main parties. Russia seems to optimize for radicalization and discord, not for a win by either party. The PRC seems to optimize for fewer war hawks, maybe for more socialism (not sure), and not much beyond that.

A Bayesian superintelligence? There is no natural example. Its development could require as many resources as fusion and anti-aging combined. Or maybe such an intelligence is not possible at all.

This seems like a strawman. An AI that could do everything an average human could do, but ten times as fast, would already have unbelievable ramifications for the economy. And one that was as competent as the most competent existing human, but 100 times as fast and instantly clonable, would definitely be powerful enough to require aligning it correctly. Whether or not it's a Bayesian superintelligence seems kind of irrelevant.

I agree, I failed to sufficiently steelman this argument. How would you improve it?

I also agree with you that a human-like AI could become a Transformative AI. Maybe even speed-ups are not required for that, and easy cloning will suffice. Moreover, an AI that can perform only a small subset of human tasks could become a TAI.

[anonymous]

I would also add hardware limitations. Moore's Law is dead on half the metrics, and we're approaching the Landauer limit. Even if the scaling laws hold, we might simply be incapable of keeping up with the computational demand.
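For context, a minimal back-of-the-envelope sketch of the Landauer bound at room temperature (Python used purely for the arithmetic):

```python
import math

# Landauer limit: the minimum energy needed to erase one bit of information
# at temperature T is k_B * T * ln(2).
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # roughly room temperature, K

landauer_joules_per_bit = k_B * T * math.log(2)
print(f"~{landauer_joules_per_bit:.2e} J per erased bit")  # ~2.9e-21 J
```

Today's irreversible logic still dissipates several orders of magnitude more energy per bit operation than this bound, but the gap is finite, so the bound caps how much further energy efficiency (and hence compute per dollar per watt) can keep improving.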

[anonymous]

If you’re selling GPUs, it is good for your bottom line to predict a glorious rise of AI in the future.

If you’re an AI company, it is profitable to say that your AI is already very smart and general.  

If you’re running an AI-risk non-profit, predicting the inevitable emergence of AGI could attract donors.

If you’re an ML researcher, you can do some virtue signaling by comparing AGI with overpopulation on Mars.

If you’re an ethics professor, you can get funding for your highly valuable study of the trolley problem in self-driving cars.

If you’re a journalist / writer / movie maker, the whole debacle helps you sell more clicks / books / views.

Cynical, but true.

Somewhat off topic...

A Bayesian superintelligence? There is no natural example.

How would you tell whether some "natural" phenomenon is or is not a Bayesian superintelligence, if it does not look like us or produce human-recognizable artifacts like cars, buildings, ships, etc.?

That's a very good question, and I'm not sure how to answer it. 

If we encounter some intelligence that can achieve feats like the aforementioned Relativity-from-grass, then it would confirm that a Bayesian superintelligence is possible. It could be an exceptionally intelligent generalist human, or perhaps a highly advanced alien AI.

Without technological artifacts, it would be very hard to identify such an intelligence.

Without technological artifacts, it would be very hard to identify such an intelligence.

Indeed. And I think it's a crucial question to consider in terms of identifying anything intelligent (or even alive) that isn't "like us". 
