I think a lot of this discussion becomes clearer if we taboo "intelligence" as something like "ability to search and select a high-ranked option from a large pool of strategies".
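For concreteness, a minimal sketch of that framing (the strategy pool, the scoring function, and the location of the "peak" are all arbitrary toy choices): under this definition, a "smarter" agent is simply one that can evaluate more candidates before committing.

```python
import random

def best_strategy(pool, score, budget):
    """Sample up to `budget` strategies and return the highest-ranked one found."""
    sampled = random.sample(pool, min(budget, len(pool)))
    return max(sampled, key=score)

# Toy setup: strategies are integers, quality peaks at an arbitrary point.
pool = list(range(10_000))
score = lambda s: -(s - 7_321) ** 2  # hypothetical quality function

print(best_strategy(pool, score, budget=10))     # small search: likely far from 7321
print(best_strategy(pool, score, budget=5_000))  # big search: likely close to 7321
```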
Thank you for posting this! There's a lot of stuff I'm not mentioning because confirming agreements all the time makes for a lot of comment clutter, but there's plenty of stuff to chew on here. In particular, the historical rate of scientific progress seems like a real puzzle that requires some explanation.
- I'm pretty surprised by the position that "intelligence is [not] incredibly useful for, well, anything". This seems much more extreme than the position that "intelligence won't solve literally everything", and like it requires an alternative explanation of the success of homo sapiens.
I guess it depends on how many "intelligence-driven issues" remain to be solved and how important they are; my intuition is that the answer is "not many", but I have very low trust in that intuition. It might also just be that "useful" is fuzzy, and my "not super useful" might be your "very useful"; quantifying useful gets into the thorny issue of quantifying intuitions about progress.
The problem (even in humans) is rarely the ability to identify the right answer, or even the speed at which answers can be evaluated, but rather the ability to generate new possibilities. And that is a skill that is both hard and not well understood.
I would agree with "superintelligence is not literally omnipotence" but I think you're making overly strong claims in the opposite direction. My reasons are basically contained in Intelligence Explosion Microeconomics, That Alien Message, and Scott Alexander's Superintelligence FAQ. For example...
power seems to be very unrelated to intelligence
I think "very" is much too strong, and insofar as this is true in the human world, that wouldn't necessarily make it true for an out-of-distribution superintelligence, and I think it very much wouldn't be. For example, all you need is superintelligence and an internet connection to find a bunch of zero-day exploits, hack into whatever you like, use it for your own purposes (and/or make tons of money), etc. All you need is superintelligence and an internet connection to carry on millions of personalized charismatic phone conversations simultaneously with people all around the world, in order to convince them, con them, or whatever. All you need is superintelligence and an internet connection to do literally every remote-work job on earth simultaneously.
Also, there are already robot bodies capable of doing, I think, the vast majority of physical labor jobs. The only reason they're not doing those jobs today is inadequate algorithms.
The apparatus humans use to “understand” other humans is not just a complex probabilistic function based on observing them, but rather it’s an immensely complex simulation which we adjust based on our observations, a simulation that we might never be able to efficiently run on a computer.
I think being charismatic over the internet is easier than you're suggesting ... if people would open up to ELIZA, I think they would open up to an AGI that has studied ELIZA and also has had extensive practice talking to people. Also, I don't think that the algorithms underlying human empathy are as computationally intensive as you do, but that's a more complicated story, maybe not worth getting into here.
That aside, the question remains of whether or not solving all “thinking bottlenecks” would leave us with a process of scientific advancement that is somewhat faster than what we have today (slow road to progress) or exponentially faster (singularity).
I think you're overly focused on "scientific advancement". Existing scientific and technological knowledge plus AGI could bring the entire world up to first-world standards of living, and eliminate the need for any human to ever work again. That's nothing to scoff at!
The vast majority of "good thinkers" (under an IQ/math/language/memory == intelligence paradigm) are funnelled towards internet companies, no extra requirements, not even a diploma, if you have enough "raw intelligence". Under the EMH that would indicate those companies have the most need for them. Yet internet companies are essentially devoid of any practical implications when it comes to reality; they aren't always engaged in "zero-sum" games, but they are still "competitive", in that their ultimate reason is to convince people they want/need more things and that those things are more valuable. They aren't "creating" any tangible things. On the other hand, research universities and companies interested in exploring the real world seem to care much less about intelligence...
I'm not sure you're applying EMH properly. EMH would imply that the most intelligent people (if choosing jobs purely based on pay) would go to jobs where they have the highest marginal impact on firm revenue, compared to a marginally less intelligent person. If research universities don't offer salaries as high as Facebook, that doesn't mean that research universities don't "care" about getting intelligent people. It probably means, for example, that the exchange rate between marginal professor intelligence and marginal grant revenue isn't high enough to support Facebook-level salaries, and moreover universities are part of a job market where lots of smart people will definitely apply for a professor job even if the salary is much lower than Facebook's. The fact that academia has rampant credentialism and Facebook doesn't is, umm, not related to the EMH, I would suggest. I think it's more related to Eliezer's "Inadequate Equilibria" stuff.
I think "very" is much too strong, and insofar as this is true in the human world, that wouldn't necessarily make it true for an out-of-distribution superintelligence, and I think it very much wouldn't be. For example, all you need is superintelligence and an internet connection to find a bunch of zero-day exploits, hack into whatever you like, use it for your own purposes (and/or make tons of money), etc. All you need is superintelligence and an internet connection to carry on millions of personalized charismatic phone conversations simultaneously with people all around the world, in order to convince them, con them, or whatever. All you need is superintelligence and an internet connection to do literally every remote-work job on earth simultaneously.
You're thinking "one superintelligence against modern spam detection"... or really against spam detection from 20 years ago. It's no longer possible to mass-call everyone in the world because, well, everyone is doing it.
Same with 0-day exploits: they exist, but most companies have, e.g., IP-based rate limiting on various endpoints that makes it prohibitively expensive to exploit things like Spectre.
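For concreteness, here's a toy sketch of the kind of per-IP rate limiting being described (a token bucket; the class name and parameters are illustrative, not any real product's API): past a small burst, each additional request from a single address has to wait, which is what makes mass probing expensive.

```python
import time
from collections import defaultdict

class IpRateLimiter:
    """Toy token bucket: each IP gets `rate` requests/second, bursting up to `burst`."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)  # available tokens per IP
        self.last = defaultdict(time.monotonic)   # last refill timestamp per IP

    def allow(self, ip: str) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens[ip] = min(self.burst, self.tokens[ip] + (now - self.last[ip]) * self.rate)
        self.last[ip] = now
        if self.tokens[ip] >= 1:
            self.tokens[ip] -= 1
            return True
        return False

limiter = IpRateLimiter(rate=5, burst=10)
# Of 100 back-to-back requests from one IP, only ~10 get through immediately.
print(sum(limiter.allow("203.0.113.7") for _ in range(100)))
```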
And again, that's with current tech; by the time a superintelligence exists you'd have equally matched spam detection.
That's my whole point: intelligence works, but only in zero-sum games against intelligence, and those games aren't entirely fair, thus safeguarding the status quo.
<Also, I'd honestly suggest that you at least read AI alarmists with some knowledge in the field, there are plenty to find, since it generates funding, but reading someone that "understood AI" 10 years ago and doesn't own a company valued at a few hundred millions is like reading someone that "gets how trading works", but works at Walmart and lives with his mom>
reading someone that "understood AI" 10 years ago and doesn't own a company valued at a few hundred millions is like reading someone that "gets how trading works", but works at Walmart and lives with his mom
Such an interesting statement. Do you mean this literally? You believe that everyone on Earth who "understood AI" ten years ago became a highly successful founder?
Roughly speaking, yes. I'd grant some % error, and I assume most would be cofounders, or one of the first researchers or engineers.
Back then people literally made single-niche image-recognition startups that worked.
I mean, even now there are so many niches for ML where a team of rather mediocre thinkers (compared to, say, the guys at DeepMind) can get millions in seed funding with basically 0 revenue and very aggressive burn, by just proving very abstractly they can solve one problem or another nobody else is solving.
I'm not sure what the deluge of investment and contracts was like in 2008, but basically everyone publishing stuff about convolutions on GPUs is a millionaire now.
It's obviously easy to "understand that it was the right direction"... with the benefit of hindsight. Much like now everyone "understands" transformers are the future of NLP.
But in general the field of "AI" has very few real visionaries that by luck or skill bring about progress, and even being able to spot said visionaries and get on the bandwagon early enough is a way to get influential and wealthy beyond belief.
I don't claim I'm among those visionaries, nor that I found a correct bandwagon. But some people obviously do, since the same guys are implicated in an awful lot of industry-shifting orgs and research projects.
I'm not saying you should only listen to those guys, but for laying the groundwork, forming mental models on the subject, and distilling facts from media fiction, those are the people you should listen to.
<Also, I'd honestly suggest that you at least read AI alarmists with some knowledge in the field, there are plenty to find, since it generates funding, but reading someone that "understood AI" 10 years ago and doesn't own a company valued at a few hundred millions is like reading someone that "gets how trading works", but works at Walmart and lives with his mom>
A person who runs a company worth a few hundred millions is mainly spending his time managing people. There are plenty of cases where it makes more sense to listen to scientists who spend their time studying the subject than to managers when it comes to predicting future technology.
One key point here is reminiscent of Amdahl's law in computing: if you are trying to make some computation faster by parallelizing it, and a fraction x of it is stuff you don't have a way to parallelize, then no amount of extra hardware will make the speedup bigger than 1/x.
Similarly, if you are trying to make some discovery / invention / optimization faster by doing it more intelligently, and a fraction x of the work is something other than pure thinking, then no amount of better thinking will make the speedup bigger than 1/x.
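A minimal sketch of that bound (the 20% figure is just an illustrative assumption):

```python
def speedup(x: float, s: float) -> float:
    """Overall speedup when a fraction x of the work can't be accelerated
    and the remaining (1 - x) is done s times faster (Amdahl's law)."""
    return 1 / (x + (1 - x) / s)

# If 20% of the work is outside "pure thinking" (x = 0.2), even infinitely
# better thinking caps the overall speedup at 1/x = 5x.
for s in (2, 10, 1_000, 1_000_000):
    print(f"s = {s:>9,}: overall speedup = {speedup(0.2, s):.3f}")
```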
I think the usual rejoinder on the "AI go foom" side is that we are likely to overestimate x by underestimating what really effective thinking can do; Eliezer's story "That Alien Message" is intended as an intuition pump in that direction. (Along with his non-fictional comments in the same post, suggesting e.g. that something sufficiently smart looking at webcam footage of a falling apple would be contemplating GR as a plausible hypothesis by the third frame.) I don't find that my own intuition is very compellingly pumped by this, for what it's worth.
I think the usual rejoinder on the "AI go foom" side is that we are likely to overestimate x by underestimating what really effective thinking can do
Well, yeah, and on the whole, it's the kind of assumption that one can't scientifically prove or disprove. It's something that can't be observed yet and that we'll see play out (hopefully) this century.
I guess the main issue I see with this stance is not that it's unfounded, but that its likely cause is something like <childhood indoctrination to hold good grades, analytical thinking, etc. as the highest values in life>. That would explain why it seems to be readily believed by almost anyone who stumbles upon Less Wrong, whereas few or no other beliefs (that don't have a real-world observation to prove/disprove them) are so widely shared here (or, more generally, in a lot of nerdy communities).
Granted, I can't "prove" this one way or another, but I think it helps to have some frameworks of thinking that are able to persuade people that start from an "intelligence is supreme" perspective towards the centre, much like the alien story might persuade people that start from an "intelligence can't accomplish much" perspective.
Superior intelligence isn't the only trait of an AGI. Our corporations are severely limited by being mazes where a significant amount of human effort doesn't go into achieving the goals of the corporation but into everyone doing what's best for themselves.
An AGI that replaces all the management of a corporation has the ability to align the efforts of the corporation much better.
This inefficiency is part of the reason why Karlsson found that a job that doesn't need any skill and can be done in two days with a friend costs 20 days of wages.
In a world with internally aligned corporations that analyse all business niches and profit from serving them, you wouldn't see such opportunities.
But power seems to be very unrelated to intelligence.
I don't think it's an accident that Ben Bernanke and Bill Gates are at the top of the intelligence scale: Bill Gates was for a long time the richest man, and Ben Bernanke had immense power as Fed chairman (and while we don't have test information about Bernanke's successor, given that she was called a "small lady with a large IQ", she's likely also smart).
The fact that people make a point to note Yellen's IQ suggests that it's unusual given the selection pool.
While years of social conditioning tell me otherwise, I remain rather unconvinced that intelligence is incredibly useful for, well, anything. With small and pointless exceptions such as chess, Go or writing articles about the pointlessness of intelligence.
Intelligence (as measured by IQ) seems to be much less central to success at Go than it is to becoming Fed Chairman.
Stereotypes suggest that Go skill would correlate more with intelligence but that's not what you see when you look at the actual people.
In chess, even though Garry Kasparov qualifies for Mensa, he's far off the level of Gates or Bernanke.
If you want to move past individuals, the economic success of Jewish people, with a higher average IQ, happened even in environments where they faced strong discrimination.
The question you should be asking is not whether IQ is correlated with success, but whether it's correlated with success in spite of other traits. E.g., being taller than your siblings, facial symmetry and having few coloured spots on your skin are also correlated with success... but they are not direct markers, they simply point to some underlying "causes" ("good" embryonal env, which correlates with being born into wealth/safety/etc | lack of cellular damage and/or ability to repair said damage | proper nutrition growing up... etc).
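The statistical point here, that a marker can correlate with success purely through a shared hidden cause, and that the correlation vanishes once you control for that cause, can be sketched in a few lines (all effect sizes are arbitrary toy numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

env = rng.normal(size=n)            # hidden cause, e.g. embryonal/childhood environment
marker = env + rng.normal(size=n)   # trait driven by the cause (height, symmetry, ...)
success = env + rng.normal(size=n)  # outcome driven by the same cause

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(corr(marker, success))              # ~0.5: the marker "predicts" success...
print(corr(marker - env, success - env))  # ~0.0: ...but not in spite of the cause
```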
Also, my claim is not that humans don't fetishize or value intelligence; my claim is that this fetish specifically pertains to "intelligence of people that are similar enough to me".
I think Gates/Bernanke being a standard deviation higher in IQ than Kasparov does suggest that IQ is more important for what they did than it is for chess. The ability to spend a significant amount of time training for chess also comes with a privileged background.
Your model suggests that Bill Gates and Bernanke should be taller than average. Ben Bernanke is 5-foot 8-inch while Gates is 5′ 10″. As far as Yellen goes, she's called a "small lady with a large IQ" for a reason. Rumors have it that she wasn't reappointed by Trump because Trump thought that she was too small.
We don't have test scores for Bezos, but he seems to have been smarter than his childhood environment, given that he was valedictorian, and at 5′ 7″ he's smaller than his brother.
How do you think your model that predicts those people to be tall should be updated, given that the prediction of them being tall doesn't match reality?
How do you explain that being born into wealth correlates with IQ? I think the best explanation is that high-IQ parents are more likely to be economically successful. All the points about being taller or having better facial symmetry don't explain Jewish wealth, which is best explained by intelligence being economically useful. If you don't believe that explanation, what's your alternative?
Also, my claim is not that humans don't fetishize or value intelligence; my claim is that this fetish specifically pertains to "intelligence of people that are similar enough to me".
I think the thing you call fetishizing intelligence is fetishizing things like being good at chess or Go, where IQ isn't very central, while at the same time ignoring that it's central for other domains. Thinking of Go/chess as high-intelligence activities is part of a certain stereotype, and that stereotype is not helpful, but it's distinct from IQ being useful.
But power seems to be very unrelated to intelligence.
On the level of individuals, perhaps. But one argument is that the more relevant question is that of species-level comparisons; if you need to understand people, to know them, befriend them, network with them, get them to like you, etc., then a human brain may be able to do it, but while a mouse or dog brain might manage some of it, it's not going to get to a position of real power that way.
Eliezer makes an argument for why one should explicitly not think of "intelligence" as corresponding to conceptual intelligence, but rather to "the thing that makes humans different from other animals":
General intelligence is a between-species difference, a complex adaptation, and a human universal found in all known cultures. There may as yet be no academic consensus on intelligence, but there is no doubt about the existence, or the power, of the thing-to-be-explained. There is something about humans that let us set our footprints on the Moon.
But the word “intelligence” commonly evokes pictures of the starving professor with an IQ of 160 and the billionaire CEO with an IQ of merely 120. Indeed there are differences of individual ability apart from “book smarts” which contribute to relative success in the human world: enthusiasm, social skills, education, musical talent, rationality. Note that each factor I listed is cognitive. Social skills reside in the brain, not the liver. And jokes aside, you will not find many CEOs, nor yet professors of academia, who are chimpanzees. You will not find many acclaimed rationalists, nor artists, nor poets, nor leaders, nor engineers, nor skilled networkers, nor martial artists, nor musical composers who are mice. Intelligence is the foundation of human power, the strength that fuels our other arts.
The danger of confusing general intelligence with g-factor is that it leads to tremendously underestimating the potential impact of Artificial Intelligence. (This applies to underestimating potential good impacts, as well as potential bad impacts.) Even the phrase “transhuman AI” or “artificial superintelligence” may still evoke images of booksmarts-in-a-box: an AI that’s really good at cognitive tasks stereotypically associated with “intelligence,” like chess or abstract mathematics. But not superhumanly persuasive; or far better than humans at predicting and manipulating human social situations; or inhumanly clever in formulating long-term strategies. So instead of Einstein, should we think of, say, the 19th-century political and diplomatic genius Otto von Bismarck? But that’s only the mirror version of the error. The entire range from village idiot to Einstein, or from village idiot to Bismarck, fits into a small dot on the range from amoeba to human.
If the word “intelligence” evokes Einstein instead of humans, then it may sound sensible to say that intelligence is no match for a gun, as if guns had grown on trees. It may sound sensible to say that intelligence is no match for money, as if mice used money. Human beings didn’t start out with major assets in claws, teeth, armor, or any of the other advantages that were the daily currency of other species. If you had looked at humans from the perspective of the rest of the ecosphere, there was no hint that the squishy things would eventually clothe themselves in armored tanks. We invented the battleground on which we defeated lions and wolves. We did not match them claw for claw, tooth for tooth; we had our own ideas about what mattered. Such is the power of creativity.
On a species level, though, human intelligence arose and filled an evolutionary niche, but that is not proof that more of the same strategy would fare better.
Bears fill an evolutionary niche by being able to last long times without food, having a wide diet and being very powerful, but it doesn't follow that a bear that's 3x bigger, can eat even more things and can survive even longer without food would fare any better.
Indeed, quite the opposite: if a "better" version of a trait doesn't exist, that likely means the trait is already optimized to an extreme.
And in terms of inter-species "achievements": if the core thing every species wants to do is "survive", then, well, it's fairly easy to conclude cockroaches will outlive us, various grasses will outlive us or at least die with us, same goes for cats... and let's not even get into extremophiles; those things might have conquered planets far away from ours billions of years before we even existed, and will certainly outlive us.
Now, our goals obviously diverge from those of these animals, so we think "Oh, poor dumb cockroaches, they shan't ever advance as a species lacking x/y/z", but in the umwelt of the cockroach its species has been prospering at an astonishing rate in the directions most relevant to it.
Similarly, we are already subpar at many tasks compared to various algorithms, but that is rather irrelevant, since those algorithms aren't made to fit the niches we do; the very need for them comes from us being unable to fill those niches.
Similarly, conceptual thinking can be used to control the nonconceptual parts of the brain, I can’t “think away pain”
While you can't, it's doable with training. I have met multiple people who used self-hypnosis to remove their pain when they had their wisdom teeth pulled (and no painkillers).
I guess I should have worded it as "while most people can't"; I certainly agree some people can "think the pain away", and hypnosurgery is a thing and has been for over 100 years, so yeah.
Two years ago I wrote some pragmatic arguments that “human-like AI” is hard to develop and would be fairly useless. My focus was on the difficulty of defining a metric for evaluation and the cost-effectiveness of human brains.
But I think I failed to stress another fundamental point, which is that "intelligence" as commonly thought of may not be critical to acquiring knowledge about or power to change the reality external to our own body.
I – Conceptual intelligence
If I’m allowed to psychoanalyse just a tiny bit, the kind of people that think a lot about “AI” are the kind of people that value conceptual intelligence too much, because that's the best part of their thinking.
A rough definition of conceptual thinking is: the kind of thinking that can easily be put into symbols, such as words, code or math. I’m making this distinction because there are many networks in the brain that accomplish unimaginably complex (intelligent) tasks which are very nonconceptual. Most people that think about “AI” don’t view these functions as “worth imitating” (rightfully so, in most cases).
There’s a ton of processing dedicated to going from “photochemical activation of retinal neurons” to “conscious experience of seeing”. But we use digital cameras to record the world for computer vision, so we don’t have to imitate most of the brain processes involved in sight.
I should make a note here that everything in the brain is interconnected. We don’t see a world of light and shadow, we see a world of objects. Conceptual thinking gets involved in the process of seeing at some point, detects various patterns, and overlays concepts on them to focus and enrich perception. Similarly, hearing, smell and memory all play a role in “seeing”.
But I think it’s permissible to think of an abstract “conceptual thinking” complex in the brain, which “AI” is trying to replicate, and treat the other bits, such as senses, motor control or homeostasis maintenance as separate from this, a prerequisite to run the platform that allows for conceptual thinking. I realize this is a naive dualist view that kind of puts conceptual thinking on a pedestal, but it's useful for conveying some ideas.
That being said, it’s important to note that conceptual thinking in humans does have access to the rest of the brain. Language can query the limbic system and say something like “I am feeling sad”. This is a very narrow abstraction that doesn’t encompass the changes in neuronal firing over billions of synapses which constitutes “being sad”. But I don't have to convey all of that to another person, since they themselves know what sadness feels like.
Similarly, conceptual thinking can be used to control the nonconceptual parts of the brain, I can’t “think away pain” but I can think “Oh, this is my leg muscles hurting because I just did some deadlifts” and the pain will probably feel “less bad” or “Oh, this is my leg muscle hurting because I have cancer" and the pain will probably feel “worse”, even if the signals coming from the leg muscles in both cases are identical.
II – Thinking reality away
Moving onwards, it’s important to remember that our understanding of the world is often limited by processes that can’t be sped up with conceptual thinking.
Take the contrived example of physicists running experiments at the LHC. There’s a finite amount of experiments you can run in a day, a finite amount of hard disks on which you can store a finite amount of data, the gathering of which is limited by a finite amount of sensors connected to your computers by a finite amount of wiring.
You can build more LHCs, more hard disks and more sensors, but those require access to the finite amount of rare metals and assembly plants required to build them, which in turn require raw materials from a finite amount of sources… etc.
The experiment might reveal something on a scale from “very useful” to “very useless”, and this will dictate the direction of the following experiments. Iterate over this process thousands of times and you’ve got a build-up of scientific knowledge that allows for magnetic toroids to contain fusion reactions and the building of artery-repairing nanobots, or whatever. All the advancements also lead to improvements down the supply chain: hard disks becoming cheaper, metal easier to mine and refine, assembly plants faster to build… etc.
Part of this process can be improved with “better conceptual thinking”, i.e. intelligent people or superintelligent “AIs” can help it along. Part of this process is limited by reality itself. You can’t smelt widgets infinitely fast, because it takes time to transport the ingredients, melt and mix them, cast them into shapes, wait for them to cool down… etc. The limitations placed upon processes by reality can be lessened by understanding reality better, but that is an iterative experimental process that is time-constrained due to our current limitations when it comes to manipulating reality.
In other words, you can have a perfect intelligence analyzing the data and optimizing the supply chains and doing every other form of “thinking” required to advance physics. But the next advancement in our theory of special-low-temperature-weak-nuclear-forces-in-niobium (or whatever) might boil down to “we have to wait 4 days for the next shipment of niobium to arrive from the DRC, and if the experiments using that don’t yield any relevant data to advance the theory we need to wait until next month to get more funding to buy more niobium from the DRC”.
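A toy model of that bottleneck (all the numbers below are made up for illustration): speed up the thinking part arbitrarily, and the total timeline stays pinned by the physical part of each iteration.

```python
def campaign_days(iterations: int, physical_days: float, thinking_days: float, speedup: float) -> float:
    """Total duration of a chain of experiments when only the thinking part is sped up."""
    return iterations * (physical_days + thinking_days / speedup)

# Say each iteration needs 4 days of shipping/smelting/cooling plus 4 days of analysis.
for s in (1, 10, 1_000_000):
    print(f"thinking {s:>9,}x faster: {campaign_days(100, 4, 4, s):,.0f} days")
# 800 -> 440 -> 400 days: perfect thinking at best halves this particular timeline.
```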
I know that some of you might consider me daft for spending six paragraphs to essentially say “perfect thinking doesn’t solve all problems”, but I think this is a point that is really hard to get to sink in for some people, in part due to an education system that entrenches the idea of “thinking” as the sole driver of the world, and sometimes leads to a fallacy of “if only we could think better, every problem would be instantly resolved”.
That aside, the question remains of whether or not solving all “thinking bottlenecks” would leave us with a process of scientific advancement that is somewhat faster than what we have today (slow road to progress) or exponentially faster (singularity).
I suspect that our reality falls under the former scenario, where more intelligent agents will somewhat speed things up, but not enough to yield a “singularity”, i.e. exponential scientific advancements that keep compounding.
I base this suspicion in part on the following imperfect narratives:
1. There are much better “thinking tools” which science could be using in many areas. Computational biology only started using machine learning a few years ago, although the techniques used have existed for a long time (decades, in the case of the “novel” algorithms used by Horvath for his epigenetic clock, and 4 to 8 years for the algorithms now used to determine in-vitro protein folding). More importantly, the vast majority of the hiring budget at companies profiting from scientific research is not spent on people that can build and use “thinking tools”, but rather on lab workers, salespeople, marketing, trial funding, etc. The crème de la crème in terms of programming, mathematics, statistics and machine learning work predominantly at tech companies in the advertising and logistics spaces, rather than at companies creating novel drugs, materials, engines and so on.
2. The historical advancement of science seems to indicate that “more intelligence” was not the answer. In the first half of the 20th century scientific theory was constructed by a very small sample from the niche demographic of “relatively rich European or American men”… consisting of maybe a few hundred thousand candidates. Nowadays the doors are open to virtually everyone with the correct aptitudes, increasing our pool of potential scientists to half a dozen billion people: a 10,000-fold increase in the number of people. Even if 90% of the “potential scientists” lack the early-childhood environment needed to develop in such a direction, and even if discrimination means candidates outside the “relatively rich European or American men” demographic are only 10% as likely to be hired despite their merit, we are still left with a 100-fold increase (the arithmetic is spelled out after this list). Add to that the fact that there are hundreds of times more scientists operating nowadays… and, well, a physicist from 1950 would be forgiven for thinking that the current conditions would have led to a “singularity” of sorts in physics, due to this exponential increase in intelligence. Yet the last 70 years have brought a relative stagnation. The only reprieve from stagnation seems to be due to large experimental setups (read: expensive and slow to build). This seems to indicate that “adding intelligence” was not the answer: it was enough to have 100 bohemian thinkers pondering the nature of space-time to come up with general relativity; increasing that number to 100,000 and having the top ones be much smarter did not yield “general relativity but 100 times better”.
3. Human intelligence has been stumbling upon the same scientific ideas since Aristotle and Lucretius. The structure of the brain has remained relatively similar in that time. The “thing” that led to modern scientific advancement seems to be much more related to humanity crossing a barrier in terms of resource exploitation and numbers that makes war very suboptimal compared to trade and allows most resources to be diverted to things other than subsistence and war. The key to getting here doesn’t seem to be intelligence-related, otherwise one would expect the industrial revolution to have arisen in ancient Rome or Athens, not in the intellectually policed and poorly educated medieval England and Prussia.
4. The vast majority of “good thinkers” (under an IQ/math/language/memory == intelligence paradigm) are funnelled towards internet companies, no extra requirements, not even a diploma, if you have enough “raw intelligence”. Under the EMH that would indicate those companies have the most need for them. Yet internet companies are essentially devoid of any practical implications when it comes to reality; they aren’t always engaged in “zero-sum” games, but they are still “competitive”, in that their ultimate reason is to convince people they want/need more things and that those things are more valuable. They aren’t “creating” any tangible things. On the other hand, research universities and companies interested in exploring the real world seem to care much less about intelligence and much more about…
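For what it’s worth, here is the back-of-envelope arithmetic behind point 2, spelled out (the pool sizes and both 10x discounts are the assumptions stated above, not measured values):

```python
pool_1950 = 6e5  # "a few hundred thousand" candidate scientists
pool_now = 6e9   # "half a dozen billion" people

raw = pool_now / pool_1950  # ~10,000x more potential scientists
usable = raw * 0.1 * 0.1    # 90% lack the upbringing; 90% filtered out by discrimination
print(f"{raw:,.0f}x raw increase, ~{usable:,.0f}x after both discounts")  # 10,000x -> 100x
```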
III – Entrenched power
The other idea worth addressing is that “intelligence” could help one achieve power over reality by achieving power in society. For some people social power is a terminal goal; others would use it to divert societal resources from yachts and watches and war towards more productive goals, such as understanding the natural world.
But power seems to be very unrelated to intelligence.
The king might have 10 sons, and the bravest and smartest of his ten sons will be chosen to inherit the throne. But the king has 100,000,000 subjects, among which there are over 1,000,000 braver and smarter than his bravest and smartest son… yet they don’t get selected.
This is a naive example of the “global vs local” fallacy. People see intelligent people achieving positions of power and think “ah, intelligence is the key”, when in fact the key is “intelligence + being part of a very small selection pool”, and being part of that selection pool might be the much more important trait.
This entrenched power bias might be more or less present in various places. The extreme examples are places like India, with its caste system, or Italy, where the same families have controlled most wealth for hundreds or thousands of years. The more “meritocratic” places are areas like the US, where most powerful people come from a rather diverse pool, and where generational wealth is rare and means “having a rich grandparent” rather than “being a direct descendant of one of the families in the republican Roman senate”.
So maybe a superintelligence is born and it can be a 20% better trader than any man on Wall Street, but that in itself is pretty irrelevant, because the best trader will be one working for a bank like Goldman Sachs, which has influence over the regulators, insider information, control over the exchanges and billions for traders to play with… and while “being a good trader” is a selection criterion for working at a Wall Street bank, so is “being the kind of street-smart Brooklyn guy that our CEO likes”.
Still, you might say, a superintelligent agent could “mimic” the behaviours which get people to think of them as “part of their tribe”, part of the small selection pool they want to hand over power to. I would beg to differ.
IV – Simulating humans
The broader point, of which entrenched power is one part, is that in order to operate in the world you need to understand people: to know them, befriend them, network with them, get them to like you, etc.
People have a very large advantage in understanding other people, besides their physical actions and appearance, which are rather hard to mimic down to the level of pheromonal changes and subtle facial muscle twitches.
We have an incredibly complex machine that can be used to simulate other people, our brain. This goes back to the idea of our conceptual thinking having the ability to communicate with the rest of our thinking. When asking “What can I do to impress Joe”, some concepts like “I should wear a blue tie” come to mind, but the reasoning behind those concepts is abstract and involves the ability to empathise with someone like Joe, to literally try and think like Joe.
Given that we still can’t run even a very naive simulation of a worm with a few hundred neurons, this might indicate that simulating a human brain on a computer could be impossible. Not in the “impossible to comprehend” sense, but in the sense that the number of experiments required to build the simulation, and the resources required for it to run, would be prohibitive.
On the other hand, humans are able to very accurately simulate other people by sheer similarity, requiring just a few seconds and a few milligrams of glucose to do so astoundingly well.
See someone prick their finger with a needle and your finger becomes more sensitive; you “feel” their pain, and the same areas of the brain activate in both of you (at the very rough level of what can be seen with an fMRI). Have a friend that you’re in constant contact with (e.g. a spouse) and you’ll be able to predict their words better than a phone keyboard app that has an exact history of their last 10 years of writings. The apparatus humans use to “understand” other humans is not just a complex probabilistic function based on observing them; rather, it’s an immensely complex simulation which we adjust based on our observations, a simulation that we might never be able to efficiently run on a computer.
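For reference, the keyboard-app baseline in that comparison is roughly this kind of purely statistical next-word predictor (a minimal bigram sketch; the tiny training string stands in for “10 years of writings”):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count next-word frequencies: the crude statistics a keyboard app leans on."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict(table, word):
    """Most frequent follower of `word`, or None if the word was never seen."""
    followers = table.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("i will call you tomorrow . i will call the plumber tomorrow .")
print(predict(model, "will"))  # -> 'call' (seen twice after 'will')
print(predict(model, "call"))  # -> 'you' ('you' and 'the' tie; first-seen wins)
```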
So one of the key traits required to operate in society, understanding people, might lie outside the reach of any computer-based system for rather pragmatic reasons, not because “computers don’t have souls” or any bullshit like that, but purely because the only environment that can efficiently and accurately simulate people, is found in other people.
V – Back to intelligence
Overall this whole article is my attempt to counter this partially unspoken view that many “believers” in the efficiency of “AI” have, a view that holds conceptual intelligence as the key to unlocking most problems.
On the whole, I don’t believe I can disprove this view, but I think that taking it for granted is often the result of the sort of conditioning that people get through education, rather than of any rational thought on the subject.
It seems very probable that the limits to our having power over nature or influencing society often reside in processes that are either impossible to significantly influence or where only gradual improvements can be made. And in the areas where reality is not the limiting factor, other humans are the limiting factor, and the human brain might be, almost by definition, the best tool for coordinating other humans.
Granted, I still think that “superintelligence”, be it the whimsical kind envisioned by Bostrom or a more sober approach to the subject, is probably economically unfeasible and very difficult to engineer, and I point back to my first article on the subject for the reasons given there.
But even if we created a machine that was as intelligent as the brightest person, but able to “think” at a rate thousands of times as fast, we might still only get limited improvements in a surprisingly small niche of problems.
While years of social conditioning tell me otherwise, I remain rather unconvinced that intelligence is incredibly useful for, well, anything. With small and pointless exceptions such as chess, Go or writing articles about the pointlessness of intelligence.
I hope you will forgive me for being rather direct in this article; I am aware that I can’t take a stance here with anything but minute confidence. Heck, the “issue” itself is rather blurry and revolves around vague ideas such as intelligence, progress, knowledge, power and good.
This is the kind of topic where a data-backed answer is impossible to give; it’s nuanced, and each of the thousand subproblems that would clarify it might only be solved with experiments that we could never run, due to limitations in time and resources (and potentially for ethical reasons).
But if you do have a strong belief about the supremacy of intelligence, I would invite you to question that belief further, see where it might come from, and look at the observations anchoring it to reality.