Magnus Carlsen is closer in Elo to Stockfish than to the median human.
Chess is a bad example.
Here's a useful rule of thumb: Every 100 Elo points is supposed to give you about a 30% edge (roughly a 64%/36% split in expected score). Or play around with this: https://wismuth.com/elo/calculator.html
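A minimal sketch of the standard logistic Elo expected-score formula, which is one common way to formalise that rule of thumb (draws ignored here):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A (win = 1, draw = 0.5, loss = 0) under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

print(expected_score(1500, 1400))            # ~0.64: the 100-point-stronger player scores ~64%
print(expected_score(1_000_100, 1_000_000))  # also ~0.64: the formula only sees the difference
```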
This means that if a 1400 plays a 1500, the 1500 should win about 30% more than the 1400. Totally normal thing that happens all the time.
It also means that if a one-million Elo AI plays a one-million-one-hundred Elo AI, the one-million-one-hundred should win 30% more than the one-million. This is completely absurd, because actual superintelligences are just going to draw each other 100% of the time. Ergo, there can never be a one-million Elo chess engine.
It's like chess has a ceiling, where, as you get close to that ceiling, all the games become draws and you can't rise further. The ceiling is where all the superintelligences play, but the location of the ceiling is just a function of the rules of chess, not a function of how smart the superintelligences are. Magnus Carlsen is closer to the ceiling than he is to the median human's level, which can be taken as merely a statement about how good he is at chess relative to its rules.
In the game "reality," there's probably still a ceiling, but that ceiling is so high that we don't expect any AIs that haven't turned the Earth into computronium to be anywhere near it.
This is completely absurd, because actual superintelligences are just going to draw each other 100% of the time. Ergo, there can never be a one-million Elo chess engine.
Do you have some idea of where the ceiling might be, that you can say that with confidence?
Just looking at this, it seems like research in chess has slowed down. Makes sense. But did we actually check if we were near a chess capabilities ceiling before we slowed down? I'm wondering if seeing how far we can get above human performance could give us some data about limits to superintelligence.
Do you mean win the game in a fair match? In that case, sure, each player adding intelligence gives them an advantage.
But to show how it's diminishing returns: can any chess algorithm beat an average human player who gets an extra queen? 2 extra queens? Intelligence doesn't necessarily translate to real world ability to win unfair matchups. Sometimes a loss is inevitable no matter the action taken.
I agree with most of these claims. However, I disagree about the level of intelligence required to take over the world, which makes me overall much more scared of AI/doomy than it seems like you are. I think there is at least a 20% chance that a superintelligence with +12 SD capabilities across all relevant domains (esp. planning and social manipulation) could take over the world.
I think human history provides mixed evidence for the ability of such agents to take over the world. While almost every human in history has failed to accumulate massive amounts of power, relatively few have tried. Moreover, when people have succeeded at quickly accumulating lots of power/taking over societies, they often did so with surprisingly small strategic advantages. See e.g. this post; I think that an AI that was both +12 SD at planning/general intelligence and social manipulation could, like the conquistadors, achieve a decisive strategic advantage without having to have some kind of crazy OP military technology/direct force advantage. Consider also Hitler's rise to power and the French Revolution as cases where one actor/a small group of actors was able to surprisingly rapidly take over a country.
While these examples provide some evidence in favor of it being easier than expected to take over the world, overall, I would not be too scared of a +12 SD human taking over the world. However, I think that the AI would have some major advantages over an equivalently capable human. Most importantly, the AI could download itself onto other computers. This seems like a massive advantage, allowing the AI to do basically everything much faster and more effectively. While individually extremely capable humans would probably greatly struggle to achieve a decisive strategic advantage, large groups of extremely intelligent, motivated, and competent humans seem obviously much scarier. Moreover, as compared to an equivalently sized group of equivalently capable humans, a group of AIs sharing their source code would be able to coordinate among themselves far better, making them even more capable than the humans.
Finally, it is much easier for AIs to self modify/self improve than it is for humans to do so. While I am skeptical of foom for the same reasons you are, I suspect that over a period of years, a group of AIs could accumulate enough financial and other resources that they could translate these resources into significant cognitive improvements, if only by acquiring more compute.
While the AI has the disadvantage relative to an equivalently capable human of not immediately having access to a direct way to affect the "external" world, I think this is much less important than the AI's advantages in self-replication, coordination, and self-improvement.
I agree with most of these claims. However, I disagree about the level of intelligence required to take over the world, which makes me overall much more scared of AI/doomy than it seems like you are. I think there is at least a 20% chance that a superintelligence with +12 SD capabilities across all relevant domains (esp. planning and social manipulation) could take over the world.
I specifically said a human with +12 SD g factor. I didn't actually consider what a superintelligence at that level across all domains would mean, but I don't think it would matter because of objection 4: by the time superhuman agents arrive, we would already have numerous superhuman non-agentic AIs, including systems specialised for planning/tactics/strategy.
You'd need to make particular claims about how a superhuman agent performs in a world of humans amplified by superhuman non-agents. It's not at all obvious to me that they can win any ensuing cognitive arms race.
I am sceptical that a superhuman agent/agency would easily attain decisive cognitive superiority to the rest of civilisation.
Hmm... I guess I'm skeptical that we can train very specialized "planning" systems? Making superhuman plans of the sort that could counter those of an agentic superintelligence seems like it requires both a very accurate and domain-general model of the world as well as a search algorithm to figure out which plans actually accomplish a given goal given your model of the world. This seems extremely close in design space to a more general agent. While I think we could have narrow systems which outperform the misaligned superintelligence in other domains such as coding or social manipulation, general long-term planning seems likely to me to be the most important skill involved in taking over the world or countering an attempt to do so.
Well, simulator type systems like GPT-3 do not become agents if amplified to superhuman cognition.
Simulators could be used to generate/evaluate superhuman plans without being agents with independent objectives of their own.
In the intervening period, I've updated towards your position, though I still think it is risky to build systems with capabilities that open-ended which are that close to agents in design space.
Among humans, +6 SD g factor humans do not in general seem to exceed +3 SD g factor humans by as much as +3 SD g factor humans exceed median humans.
I'm sceptical of this. Can you say more about why you think this is true?
Assuming a Gaussian distribution, +6 SD is much rarer than +3 SD, which is already quite rare. There are probably fewer than 10 +6 SD people alive on Earth today, whereas there are ~10 million +3 SD people. Given the role of things like luck, ambition, practical knowledge, etc., it's not surprising that we see several of the +3 SD people accomplishing things far greater than any of the +6 SD g-factor people, purely on the basis of their much greater abundance.
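A quick check of these numbers, assuming a Gaussian distribution (itself an assumption, as other comments here note) and a world population of roughly 8 billion:

```python
from scipy.stats import norm

WORLD_POP = 8e9  # rough current world population

for sd in (3, 6):
    tail = norm.sf(sd)  # P(Z > sd) for a standard normal
    print(f"+{sd} SD: fraction {tail:.2e}, ~{tail * WORLD_POP:,.0f} people worldwide")

# +3 SD: ~1.3e-03 -> roughly 10 million people
# +6 SD: ~1.0e-09 -> fewer than 10 people
```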
And that's ignoring potential trade-off effects. Among humans, increased intelligence often seems to come at the cost of lowered social skills and practicality: there are certainly many intelligent people who are good at sociality and practicality, but there is an inverse correlation (though of course, being intelligent also helps directly to make up for those shortcomings). There's no reason to expect that these same trade-offs will be present in artificial systems, which take completely different physical forms, both in size/form-factor and in the materials and architectures used to build them. And the incentive gradients that govern the development and construction of artificial systems are also quite different from those that shape humans.
The normal distribution is baked into the scoring of intelligence tests. I do not know what the distribution of raw scores looks like, but the calculation of the IQ score is done by transforming the raw scores to make them normally distributed with a mean of 100. There is surely not enough data to do this transformation out to ±6 SD.
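A minimal sketch of that kind of rank-based normalisation (illustrative only, not any particular test's actual norming procedure); it also shows why scores far out in the tail can't be pinned down by data:

```python
import numpy as np
from scipy.stats import norm, rankdata

def raw_to_iq(raw_scores):
    """Map raw test scores to IQ by forcing a normal distribution with mean 100 and SD 15."""
    raw_scores = np.asarray(raw_scores, dtype=float)
    n = len(raw_scores)
    quantiles = rankdata(raw_scores) / (n + 1)  # ranks mapped into (0, 1)
    return 100 + 15 * norm.ppf(quantiles)

# With a norming sample of n people, the highest achievable quantile is ~n/(n+1),
# so e.g. n = 5,000 can only pin the scale down to about +3.5 SD; anything beyond
# that is extrapolation, not measurement.
```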
In general, excluding a few fields, I'm not aware that g-factor beyond +3 SD shows up in an important way in life outcomes.
The richest/most powerful/most successful aren't generally the smartest (again, excluding a few fields).
It has been pointed out to me that the lack of such evidence of cognitive superiority may simply be because there's not enough data on people above +3 SD g factor.
But regardless, when I look at our most capable people, they just don't seem to be all that smart.
This is a position I might change my mind on, if we were able to get good data quantifying the gains to real world capabilities moving further out on the human spectrum.
The richest/most powerful/most successful aren't generally the smartest (again, excluding a few fields).
That is exactly addressed by the comment you are replying to:
There are probably fewer than 10 +6 SD people alive on Earth today, whereas there are ~10 million +3 SD people.
Imagine a world containing exactly 10 people with IQ 190, each of them having a 100% chance to become one of "the best"; and 10,000,000 people with IQ 145, each of them having a 0.001% chance to become one of "the best".
In such a world, we would have 110 people who are "the best", and 100 of them would have IQ 145.
Just because they are a majority in the category doesn't mean that their individual chances are similar.
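Spelling out the toy arithmetic:

```python
n_190, p_190 = 10, 1.0                   # 10 people at IQ 190, each certain to become one of "the best"
n_145, p_145 = 10_000_000, 0.001 / 100   # 10,000,000 people at IQ 145, 0.001% chance each

best_190 = n_190 * p_190   # 10
best_145 = n_145 * p_145   # 100
print(best_190 + best_145)               # 110 people who are "the best"
print(best_145 / (best_190 + best_145))  # ~0.91: most of "the best" have the lower IQ
```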
No, I wasn't directly comparing +6 SD to +3 SD.
It's more that gains from higher g factor beyond +3 SD seem to be minimal/nonexistent in commerce, politics, etc.
Hard science research and cognitive sports are domains in which the most successful seem to be above +3 SD g factor.
I'm not compelled by the small sample size objection because there are actually domains in which the most successful are on average > +3 SD g factor. Those domains just aren't commerce/politics/other routes of obtaining power.
As best as I can tell, your reply seems like a misunderstanding of my objection?
The richest/most powerful/most successful aren't generally the smartest (again, excluding a few fields).
Bill Gates has more than +3 SD g factor given his SAT scores. With Bezos, we don't know his SAT scores but we do know that he was valedictorian. According to Wikipedia the school he attended features in lists of the top 1000 schools in the US. This suggests that the average student at the school is significantly smarter than the average US citizen, so being a valedictorian in that school likely also suggests >3 SD g factor.
Ben Bernanke and Yellen, as chairs of the Federal Reserve, also seem like examples of people with significantly more than +3 SD g factor.
I don't think you get Jewish people making up 22.4% of Nobel prize winners without IQ beyond +3 SD g factor helping with winning Nobel prizes.
Wait, how are you estimating Ben Bernanke's and Yellen's g factor? Your reasons for guessing it seem much less compelling to me than for Gates and Bezos.
I mean inferring from SAT seems sensible. Valedictorian status is also not as sketchy. I won't necessarily trust it, but the argument is plausible, and I expect we could later see it validated.
Our hard science superstars/chess superstars seem to have a mean and median g factor that's +3 SD.
This does not seem to be the case for self made billionaires, politicians, bureaucrats or other "powerful people".
g factor seems to have diminishing marginal returns in how much power it lets you attain?
For Ben Bernanke it's his SAT score. For Yellen, there's a New York Times story where they asked a colleague to describe her and they said "small lady with a large IQ". There are a few headlines that describe her that way as well.
Chess is not an IQ-driven activity. The same goes for Go. One Go player, who I don't think would have qualified for Mensa himself, once visited a professional Go school in Korea, and his impression was that the average professional Go player isn't very smart.
I'm not sure who you mean with hard science superstars. There seems to be an analysis of the best scientists in 1952 that suggests mean IQ of around 154 for them.
It's hard to know the average IQ for self-made billionaires. If we just look at the top tech billionaires, however, people like Bill Gates (perfect math SAT score), Steve Ballmer (perfect math SAT score), Jeff Bezos (valedictorian at a top school) and Mark Zuckerberg (perfect SAT score) suggest that IQ is helping very much.
I'm not aware of any data from that class of people that speaks about people who have just 130 IQ.
I'm under the impression that many of the best chess players are +4 SD and beyond in IQ.
For scientists, I was thinking of that study that claimed an average IQ of around 154, yeah.
Players at a Go school not being very smart has little bearing on my point. If we found out that the average IQ of the best Go players was e.g. < 130, that would be a relevant counterargument, but the anecdote you presented doesn't sound particularly relevant.
Out of curiosity, what IQ range does a perfect SAT score map to?
Do you have a specific counterexample in mind when you say "when I look at our most capable people, they just don't seem to be all that smart"?
If we consider the 10 richest people in the world, all 10 of them (last time I checked) seem incredibly smart, in addition to being very driven. Success in politics seems less correlated with smarts, but I still perceive politicians in general to have decent intelligence (which is particularly applied in their ability to manipulate people), and to the extent that unintelligent people can succeed in politics, I attribute that to status dynamics largely unrelated to a person's capability.
When it comes to US presidents, I don't think status dynamics largely unrelated to a person's capability really fit it.
While they might not have significantly more than +3 SD g factor, they often have skills that distinguish them. Bill Clinton had his legendary charisma for one-on-one interactions. Barack Obama managed to give speeches that made listeners feel something deeply emotional. Trump has his own kind of charisma skills.
Charisma skills are capabilities of people even when they are not largely driven by IQ.
Quoting myself from elsewhere:
Our hard science superstars/chess superstars seem to have a mean and median g factor that's +3 SD.
This does not seem to be the case for self made billionaires, politicians, bureaucrats or other "powerful people".
g factor seems to have diminishing marginal returns in how much power it lets you attain?
Among humans, +6 SD g factor humans do not in general seem to exceed +3 SD g factor humans by as much as +3 SD g factor humans exceed median humans.
The smartest humans might be more likely to be mentally ill, just as the smartest monkeys may be more likely to be mentally ill.
There is no reason to think human intelligence is normally distributed. But if we go with that assumption, +3 SD is 1 in a thousand. Fairly common in intellectual circles. +6 is 1 in a billion. There should be 8 people worldwide that are +6 SD. Even if you knew who those people were (it isn't like the world has a standardized global g factor test that is applied to all humans), you would still have only 8 datapoints, leaving a lot of room for chance. Subjective impressiveness is a subjective measure.
Discovering something like relativity takes around +3 SD, plus a fair bit of "luck". Some of that luck is being born in a position where you get maths lessons and time to think. Some is deciding to focus your intelligence on physics. Some is all over the place. Like whether or not you read a particular textbook, whether your thinking style is more geometric or formal symbolic. Whether an apple falls on your head. Neurochemical noise.
When general relativity was discovered, there was probably one +6 SD human (smaller population), and statistically they were a peasant farmer never taught anything beyond arithmetic, who may well have been conscripted into some war and shot at. Saying "this +6 SD human failed to invent relativity" is putting an extremely weak upper bound on the capabilities of +6 SD humans.
If we go into the logic of evolution, selection pressure applies mostly near the median. +3 SD is already getting to the region where selection pressure is minimal. Anything beyond that is just getting lucky on the genetic dice. Consider a model where there are 650 genes, each 99% likely to have the correct version, and with a 1% chance of having a deleterious mutation. A +3 SD human has all the genes correct. Thus the only way to go beyond that is to not only have all the genes correct, but for randomness to produce a new gene that's even better. The average human has 6.5 bad genes, so in this model, you would need 6.5 beneficial mutations (and no deleterious ones) to get the +6 SD human. This is far too unlikely to ever happen.
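Running the numbers on this toy model (the 650-gene setup is the comment's hypothetical, not real genetics):

```python
from scipy.stats import norm

n_genes, p_bad = 650, 0.01

print((1 - p_bad) ** n_genes)  # ~0.0015: chance of having all genes correct (~1 in 690)
print(norm.sf(3))              # ~0.00135: rarity of +3 SD (~1 in 740), so "all correct" lands near +3 SD
print(n_genes * p_bad)         # 6.5: expected number of bad genes for the average person
print(norm.sf(6))              # ~1e-9: rarity of +6 SD, about a million times rarer than "all correct"
```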
So without hard data, we can't put reasonable upper bounds on what a +6 SD human is capable of. Even with any data you might gather, I don't think it would be easy to learn much about the limits of intelligence. Any signal is mostly about the upper tail produced by evolution. And possibly about who is best at Goodharting your g test.
we can't put reasonable upper bounds on what a +6 SD human is capable of.
We know they still think using neurons connected by axons that have a max propagation velocity (and this person has the best variant available). They have one set of eyes, hands, and mouth.
These put a hard limit on what they could be capable of regardless of their gifts.
FWIW thank you for posting this! It's good to see where different people are coming from on this, and I like several of your other writings
On many useful cognitive tasks (chess, theoretical research, invention, mathematics, etc.), beginner/dumb/unskilled humans are closer to a chimpanzee/rock than peak humans (for some fields, only a small minority of humans are able to perform the task at all, or perform the task in a useful manner
This seems due to the fact that most tasks are "all or nothing", or at least have a really steep learning curve. I don't think that humans differ that much in intelligence, but rather that small differences result in hugely different abilities. This is part of why I expect foom. Small improvements to an AI's cognition seem likely to deliver massive payoffs in terms of their ability to affect the world.
On many useful cognitive tasks (chess, theoretical research, invention, mathematics, etc.), beginner/dumb/unskilled humans are closer to a chimpanzee/rock than peak humans
All of these tasks require some amount of learning. AIXI can't play chess if it has never been told the rules or seen any other info about chess ever.
So a more reasonable comparison would probably involve comparing people of different IQ's who have made comparable effort to learn a topic.
Intelligence often doesn't look like solving the same problems better, but solving new problems. In many cases, problems are almost boolean, either you can solve them or you can't. The problems you mentioned are all within the range of human variation. Not so trivial any human can do them, nor so advanced no human can do them.
Among humans, +6 SD g factor humans do not in general seem to exceed +3 SD g factor humans by as much as +3 SD g factor humans exceed median humans.
This is a highly subjective judgement. But there is no particularly strong reason to think that human intelligence has a Gaussian distribution. The more you select for humans with extremely high g factors, the more you Goodhart to the specifics of the g factor tests. This Goodharting is relatively limited, but still there at +6 SD.
3.0. I believe that for similar levels of cognitive investment narrow optimisers outperform general optimisers on narrow domains.
I think this is both trivially true, and pragmatically false. Suppose some self-modifying superintelligence needs to play chess. It will probably largely just write a chess algorithm and put most of its compute into that. This will be near equal to the same algorithm without the general AI attached (probably slightly worse at chess: the superintelligence is keeping an eye out just in case something else happens; a pure chess algorithm can't notice a riot in the spectator stands, while a superintelligence would probably devote a little compute to checking for such possibilities).
However, this is an algorithm written by a superintelligence, and it is likely to beat the pants off any human written algorithm.
4.1. I expect it to be much more difficult for any single agent to attain decisive cognitive superiority to civilisation, or to a relevant subset of civilisation.
Being smarter than civilization is not a high bar at all. The government often makes utterly dumb decisions. The average person often believes a load of nonsense. Some processes in civilization seem to run on the soft minimum of the intelligences of the individuals contributing to them. Others run on the mean. Some processes, like the stock market, are hard for most humans to beat, but still beaten a little by the experts.
My intuition is that the level of cognitive power required to achieve absolute strategic dominance is crazily high.
My intuition is that the comparison to a +12 SD human is about as useful as comparing heavy construction equipment to top athletes. Machines usually operate on a different scale to humans. The +12 SD runner isn't that much faster than the +6 SD runner, especially because, as you reach into the peaks of athletic performance, the humans are running close to biological limits, and the gap between top competitors narrows.
This is reasonably close to my beliefs. An additional argument I'd like to add is:
There needs to be an economically viable entity pushing AI development forward every step of the way. It doesn't matter if AI can "eventually" produce 30% worldwide GDP growth. Maybe diminishing returns kick in around GPT-4, or we run out of useful training data to feed to the models (we have very few examples of +6 SD human reasoning, as MikkW points out in a sibling comment).
Analogy: It's not the same to say that a given species with X,Y,Z traits can survive in an ecosystem, than to say it can evolve from its ancestor in that same ecosystem.
Fascinating insight with the chimpanzee. I believe this connects to the argument that is sometimes made: an ASI takeover is unlikely because of how widely distributed the physical levers of power are in the real world. Ok... but chimpanzees don't have access to massively scalable industrial weapons technology, such as nuclear, biological etc. They don't live in an increasingly electronically connected world. They don't rely on an electricity grid, among other things, for their many daily needs. Also, they live in groups or troops - not tribes, but that was more for the pedantic kick of it.
Thinking back on it: that was actually an interesting slip of the tongue with the chimp tribe vs. troop. Tribes are highly, highly human social structures. What the slip of the tongue reveals is that pop culture has generally assimilated them with less sophisticated, lower IQ, more primitive people. Hence we now find our chimps in a tribe. But if you think about it, there is a specific group, at the heart of our Western, sophisticated, industrial and capitalist world, that distinguishes itself through two essential features: i) it is high IQ, and ii) it is unique among the groups of that world in precisely that it has retained much of its ancient tribal structure as a form of social organisation.
Narrow Optimisers Outperform General Optimisers on Narrow Domains
That's true sometimes but not always. Notably, GATO is better at controlling a Sawyer arm than more specialized optimizers. Given that the company that sells the Sawyer arm spent a lot of time developing software to control it, that's impressive.
If we threw a few billion dollars' worth of compute at them, they would likely get significantly better.
I have the totally opposite take on chess engines (see my comment).
These takes aren't totally opposite. Elo is capped due to the way it treats draws, but there are other metrics that can be devised where "significantly better" is still viable. For example, how close to a perfect game (with no drawn positions becoming game-theoretically lost, or winning positions becoming game-theoretically drawn) does the AI play? And by ignoring drawn games and only paying attention to games that either player wins, you remove the ceiling.
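A toy illustration with made-up numbers: two engines that draw 98% of their games look nearly identical to Elo, but comparing only their decisive games still separates them cleanly:

```python
import math

def elo_gap_from_score(score: float) -> float:
    """Elo difference implied by an expected score under the logistic model."""
    return 400 * math.log10(score / (1 - score))

wins_a, wins_b, draws = 0.015, 0.005, 0.98   # hypothetical near-ceiling engines

overall_score = wins_a + draws / 2           # draws count as half a point
decisive_score = wins_a / (wins_a + wins_b)  # throw away the draws entirely

print(elo_gap_from_score(overall_score))     # ~3.5 Elo: the gap the rating system sees
print(elo_gap_from_score(decisive_score))    # ~191 "Elo": the gap once draws are excluded
```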
I did say given similar levels of cognitive investment.
My guess is that the cognitive work put into GATO's architectures/algorithms was much greater than that put into the specialised arm-control systems it dominates.
That or GATO was running on a much larger compute budget.
I expect Magnus Carlsen to be closer in Elo to a bounded superintelligence than to a median human.
Seems like this sort of claim could be something tractable that would qualify as material progress on understanding bounds to superintelligence? I'm thinking about results such as this.
However, I think that post's title oversells the result; from the paper:
This paper has demonstrated that even superhuman agents can be vulnerable to adversarial policies. However, our results do not establish how common such vulnerabilities are: it is possible Go-playing AI systems are unusually vulnerable.
There may be superhuman Go-playing models that are more robust.
I'm also just noting my thoughts here, as I'm also very interested in foom dynamics and wondering how the topic can be approached.
Also - I don't really get "the general intelligence is composite anyway" argument. Ok - I also believe that it is. But what would prevent an ASI from being developed as a well-coordinated set of many narrow optimizers?
Also - why the fixation on 12 SD? It's not that high really. It sounds high to a human evaluating another human. Bostrom made a good point on this - the need to step out of the anthropomorphic scale. This thing could very well reach 120 SD (the fact that we wouldn't even know how to measure and recognize 120 SD is just an indication of our own limitations, nothing more), and make every human look like a clam.
Thinking about it - I think a lot of what we call general intelligence might be the part of the function which, after it analyses the nature of the problem, strategizes and selects the narrow optimizer, or set of narrow optimizers, that must be used to solve it: in what order, with what type of logical connections between the outputs of one and the inputs of the next, etc. Since the narrow optimizers are run sequentially rather than simultaneously in this type of process, the computing capacity required is not overly large.
Disclaimer
Written quickly[1]. It's better to draft my objections poorly than to not draft them at all.
Introduction
I am sceptical of "foom"[2]; I suspect it is some combination of not physically possible, not feasible, and not economically viable.
[Not sure yet what level of scepticism I endorse.]
I have a few object level beliefs that bear on it. I'll try and express them succinctly below (there's a summary at the end of the post for those pressed for time).
Note that my objections to foom are more disjunctive than they are conjunctive. Each is independently a reason why foom looks less likely to me.
Beliefs
I currently believe/expect the following to a sufficient degree that they inform my position on foom.
Diminishing Marginal Returns
1.0. Marginal returns to cognitive investment (e.g. compute) decay at a superlinear rate (e.g. exponential) across some relevant cognitive domains (e.g. some of near human, human spectrum, superhuman, strongly superhuman).
1.1. Marginal returns to real world capabilities from cognitive amplification likewise decay at a superlinear rate across relevant cognitive domains.
Among humans, +6 SD g factor humans do not in general seem to exceed +3 SD g factor humans by as much as +3 SD g factor humans exceed median humans.
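A toy numerical illustration of 1.0/1.1 (the decay rate and units are purely illustrative, not a claim about actual returns to compute): if the marginal return to each successive unit of cognitive investment decays exponentially, total capability converges to a ceiling no matter how much is invested.

```python
r = 0.9                # each unit of investment returns 90% of what the previous unit did
ceiling = r / (1 - r)  # limit of the geometric series: 9.0
capability = 0.0
for k in range(1, 201):
    capability += r ** k
    if k in (10, 50, 200):
        print(k, round(capability, 2), "of a maximum", ceiling)
# 10 -> 5.86, 50 -> 8.95, 200 -> 9.0: the first few units buy most of what is attainable
```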
Broad Human Cognitive Spectrum
2. The human cognitive spectrum (1st percentile human to peak human) is broad in an absolute sense.
On many useful cognitive tasks (chess, theoretical research, invention, mathematics, etc.), beginner/dumb/unskilled humans are closer to a chimpanzee/rock than peak humans (for some fields, only a small minority of humans are able to perform the task at all, or perform the task in a useful manner[3]; for others, like chess, beginners are simply closer to the lowest attainable scores than to the scores obtained by peak humans [600 - 800 is a lot closer to 0 than to 2700 - 2900]).
Median humans are probably also closer to a rock than to peak humans (on e.g. inventing general relativity pre 1920).
Peak humans may be closer to bounded superintelligences than beginner/median humans.
E.g. Magnus Carlsen is closer in Elo to Stockfish than to the median human.
I expect Magnus Carlsen to be closer in Elo to a bounded superintelligence than to a median human.
Narrow Optimisers Outperform General Optimisers on Narrow Domains
3.0. I believe that for similar levels of cognitive investment narrow optimisers outperform general optimisers on narrow domains.
This is because they are not constrained by the Pareto frontier across many domains and are more able to pursue the optimum in their narrow domains.
I expect this to translate to many narrow domains (I wouldn't be surprised if we get superhuman language performance without "dangerously capable" systems [we got superhuman art without dangerously capable systems].
E.g. future LLMs may be able to write very compelling ("bestseller" status) long-form fiction in an hour.)
I expect a superintelligence to not win against dedicated chess/Go bots with comparable cognitive endowments (compute budgets, comparably efficient cognitive algorithms/architectures).
"Not win" is too conservative: I expect the ASI to lose unless it adopts the strategy of just running the bot (or depending on the level of superhuman, it might be able to force a tie). I simply do not think a general optimiser (no matter how capable) with comparable cognitive endowment can beat a narrow optimiser at their own game. Optimisation across more domains constrains the attainable optimum in any domain; the pareto frontier is an absolute limit.
I wouldn't be surprised if this generalises somewhat beyond Go.
Are superhuman narrow-AI real-world strategists viable?
The answer is not obviously "no" to me.
3.1. I believe that general intelligence is not compact.
Deployment Expectations and Strategic Conditions
4.0. I expect continuous progress in cognitive capabilities for several years/decades more.
There may be some paradigm shifts/discontinuous jumps, but I expect that the world would have already been radically transformed when superhuman agents arrive.
4.1. I expect it to be much more difficult for any single agent to attain decisive cognitive superiority to civilisation, or to a relevant subset of civilisation.
Especially given 3.
Superhuman agents may not be that much more capable than humans amplified by superhuman narrow AIs.
4.2. Specifically, I expect a multipolar world in which many actors have a suite of superhuman narrow AIs that make them "dangerously capable" relative to 2020s earth, but not relative to their current time (I expect the actors to be in some sort of equilibrium).
I'm not convinced the arrival of superhuman agents in such a world would necessarily shatter such an equilibrium.
Or be unilaterally "existentially dangerous" relative to said world.
Hence, I expect failure to materialise as dystopia not extinction.
"Superintelligence" is a High Bar
5. "Superintelligence" requires a "very high" level of strongly superhuman cognitive capabilities.
Reasons:
My intuition is that the level of cognitive power required to achieve absolute strategic dominance is crazily high.
And it's a moving target that would rise with the extant effective level of civilisation.
Summary
Courtesy of chatGPT:
[1] Half an hour to touch up a stream-of-consciousness Twitter thread I wrote yesterday.
[2] An "intelligence explosion" scenario where there's a very short time period in which AI systems rapidly grow in intelligence until their cognitive capabilities far exceed humanity's.
[3] E.g. inventing the dominant paradigm in a hard science seems beyond the ability of most humans. I'm under the impression that pre-1920 fewer than 1,000 (and plausibly fewer than 100) people could have invented general relativity. Some have claimed that without Einstein we may not have gotten general relativity for decades.