This feels like precisely the type of wrong-but-clever thinking that LW teaches people to avoid.
A brain is just a piece of biological tissue, there is nothing intrinsically intelligent about it.
Assuming the author is serious about this sentence, this would be the right moment to stop reading the article. Sure, you can show how brains are not "intrinsically intelligent" by using a proper definition of "intrinsically intelligent", but that's playing with definitions, and says little about the territory.
In particular, there is no such thing as “general” intelligence.
This is trivial to prove. If brains are not even "intelligent", they can hardly be "generally intelligent". ;)
In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. (...) The intelligence of a human is specialized in the problem of being human.
Yeah, someone has a clever definition of "highly specialized". Using this definition, even AIXI would be "highly specialized" in the problem of being AIXI. And the hypothetical recursively self-improving general artificial intelligence is also "highly specialized" in the problem of being a recursively self-improving general artificial intelligence. No need to worry about it becoming too smart.
If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.
Following the logic of previous paragraphs, if you cannot operate a computer without using a keyboard or a mouse, then you "cannot hope" to increase the computer's operating speed merely by buying faster processors and disks -- there will be no gains in the computing power unless you also upgrade the keyboard and the mouse.
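The intuition here, that accelerating one component still yields real gains until the untouched components dominate, is essentially Amdahl's law. A minimal sketch with made-up illustrative numbers:

```python
def overall_speedup(component_speedup, fraction_accelerated):
    """Amdahl's law: speedup of the whole system when only part of the
    work is accelerated. The un-accelerated part (the "keyboard and
    mouse") caps the total gain, but does not eliminate it."""
    return 1.0 / ((1.0 - fraction_accelerated)
                  + fraction_accelerated / component_speedup)

# If 90% of the work is processor/disk-bound and we make that part
# 10x faster, the system as a whole still gets much faster despite
# the untouched 10%:
print(overall_speedup(10, 0.9))  # ≈ 5.26x
```

So the bottleneck argument only bounds the gains; it does not show they are negligible.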
There is no evidence that a person with an IQ of 170 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130. In fact, many of the most impactful scientists tend to have had IQs in the 120s or 130s
I guess someone never heard about this "base rates" stuff... (Highly specialized stuff, I guess.)
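The base-rate point can be made concrete. Under the textbook Normal(100, 15) model of IQ (an assumption for illustration, not an endorsement of the model), the 120-130 band outnumbers the 170+ tail by tens of thousands to one, so even a large per-capita advantage at 170+ would still leave most "impactful scientists" coming from the far more populous band:

```python
from math import erfc, sqrt

def fraction_above(iq, mean=100.0, sd=15.0):
    # Gaussian upper-tail probability under the Normal(100, 15) model
    return 0.5 * erfc((iq - mean) / (sd * sqrt(2)))

band_120_130 = fraction_above(120) - fraction_above(130)
above_170 = fraction_above(170)
print(band_120_130 / above_170)  # on the order of tens of thousands
```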
A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems— which they don’t in practice.
Today I learned: Exceptionally high-IQ humans are incapable of solving major problems.
...giving up in the middle of the article, because I expect the rest to be just more of the same.
Going to reply here. I think the author is completely wrong, but you're missing several things.
Interpret this as a steelman. I do not agree with the author's conclusions or its argument, but I think the essay was of pedagogical value. I think you're prematurely dismissing it.
---
This is trivial to prove. If brains are not even "intelligent", they can hardly be "generally intelligent". ;)
There is no generally intelligent algorithm. If you accept that intelligence is defined in terms of optimisation power, there is no intelligent algorithm that outperforms random search on all problems.
Worse, there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.
If you define general intelligence as an intelligent algorithm that can optimise on all problems, then random search (and its derivatives) are the only generally intelligent algorithms.
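The averaging argument behind NFL can be checked directly at toy scale. Treating any fixed, non-repeating visiting order as a stand-in for a deterministic search algorithm, and enumerating every possible objective function from a 4-point domain to {0, 1}, all orders achieve identical aggregate performance (a sketch of the theorem's flavor, not its full statement):

```python
from itertools import product

ALL_FUNCTIONS = list(product([0, 1], repeat=4))  # every f: {0,1,2,3} -> {0,1}

def total_best_found(order, steps):
    # Summed over ALL functions: best value seen after `steps`
    # non-repeating evaluations in the given visiting order.
    return sum(max(f[x] for x in order[:steps]) for f in ALL_FUNCTIONS)

for steps in range(1, 5):
    # Any two visiting orders perform identically when averaged
    # over every possible objective function.
    assert total_best_found([0, 1, 2, 3], steps) == \
           total_best_found([2, 0, 3, 1], steps)
```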
Yeah, someone has a clever definition of "highly specialized". Using this definition, even AIXI would be "highly specialized" in the problem of being AIXI. And the hypothetical recursively self-improving general artificial intelligence is also "highly specialized" in the problem of being a recursively self-improving general artificial intelligence. No need to worry about it becoming too smart.
This follows from the fact that there is no generally intelligent algorithm (save random search). The vast majority of potential optimisation problems are intractable (I would say pathological, but I'm not sure that makes sense when I'm talking about the majority of problems). Most optimisation problems cannot be solved except via exhaustive search. Humanity's cognitive architecture is highly specialised in the problems it can solve. This is true for all non-exhaustive search methods.
Today I learned: Exceptionally high-IQ humans are incapable of solving major problems.
The majority of exceptionally high-IQ humans do not in fact solve major problems. There are millions of people in the IQ 150+ range. How many of them are academic heavyweights (Nobel laureates, Fields medalists, ACM Turing Award winners, etc.)?
...giving up in the middle of the article, because I expect the rest to be just more of the same.
I think you should finish it.
there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.
I am not familiar with the context of this theorem, but I believe that this is a grave misinterpretation. From a brief reading, my impression is that the theorem says something like "you cannot find useful patterns in random data; and if you take all possible data, most of them are (Kolmogorov) random".
This is true, but it is relevant only for situations where any data is equally likely. Our physical universe seems not to be that kind of place. (It is true that in a completely randomly behaving universe, intelligence would not be possible, because any action or belief would have the same chance of being right or wrong.)
When I think about superintelligent machines, I imagine ones that would outperform humans in this universe. The fact that they would be equally helpless in a universe of pure randomness doesn't seem relevant to me. Saying that an AI is not "truly intelligent" unless it can handle the impossible task of skillfully navigating completely random universes... that's trying to win a debate by using silly criteria.
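This objection can be illustrated directly: a trivial predictor beats chance on structured data and falls back to chance on incompressible noise, which is all that NFL-style averaging over all possible data actually guarantees. A sketch (the predictor and data are invented for illustration):

```python
import random

def repeat_last_accuracy(bits):
    # A trivial "pattern exploiter": predict each bit equals the previous one.
    hits = sum(bits[i] == bits[i - 1] for i in range(1, len(bits)))
    return hits / (len(bits) - 1)

structured = [(i // 5) % 2 for i in range(1000)]  # long runs: compressible
random.seed(1)
noise = [random.randint(0, 1) for _ in range(1000)]  # incompressible

print(repeat_last_accuracy(structured))  # well above chance, ≈ 0.8
print(repeat_last_accuracy(noise))       # hovers around chance, ≈ 0.5
```

In a lawful universe the "structured" case is the one that matters, so NFL's average over all conceivable universes says little about achievable intelligence in ours.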
So, Chollet is the author of the important deep-learning library Keras ( https://keras.io/), so at first glance there's reason to take him seriously. But I don't think this is a good essay. Some counter-arguments:
Basically, the only thing here which is a valid argument against an intelligence explosion is the last section, which mentions bottlenecks and antagonistic processes (like the fact that collaboration among more people is more difficult, so more workers doesn't mean proportionately more progress.) This is basically Robin Hanson's argument against FOOM, and is several years old, and Chollet doesn't really add anything new here.
As an argument considered in a vacuum, I don't think this article provides any new reason to update away from believing in an intelligence explosion.
The fact that Chollet believes there won't be an intelligence explosion is, of course, an update both against Chollet's credibility on AI futurism (if you already think intelligence explosions are likely) and against the likelihood of an intelligence explosion (if you're impressed with Chollet's achievements), but that doesn't tell you where belief propagation is going to converge, or even whether it will.
I really liked Chollet's earlier essay, The Future of Deep Learning, which is more technical and agrees with a lot of the conclusions that I came to independently. I'm inclined to believe that Chollet writing for a general audience on Medium may be practicing propaganda, while his more concrete futurist predictions seem very credible.
In the above article, he says, "As such, this perpetually-learning model-growing system could be interpreted as an AGI—an Artificial General Intelligence. But don't expect any singularitarian robot apocalypse to ensue: that's a pure fantasy, coming from a long series of profound misunderstandings of both intelligence and technology. This critique, however, does not belong here."
You could take this to mean something like "Yes, AGI is possible and I have just laid out a rough path to getting there; but I want to strongly disaffiliate with science-fiction geeks."
I agree with all your criticisms. I also think the article is wrong and didn't update except against Chollet, but I found the article educational.
Things I learned.
The article may have seemed of significant pedagogical value to me, because I hadn't met these ideas before. For example, I have just started reading the Yudkowsky-Hanson AI foom debate.
The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body.
I see no good reason to believe this. In vivo experiments in monkeys suggest that even after development it's possible to add an additional color channel to the eyes via gene therapy in a way that gets integrated into vision.
Nonvisual senses, like a feel for cardinal directions or magnetoreception, can also be learned in vivo by adding sensors.
Off the top of my head, I don't have concrete examples of humans or apes managing to deal with an eight-legged body, but I wouldn't be surprised if a human brain were fine at learning to do so, especially if it had that number of limbs from birth.
I would guess that part of the reason a six-month-old human child is much less capable than a puppy of the same age is that the human child depends a lot less on "hardcoded conceptions" than the puppy does.
The author of the article even notes that a human child who didn't grow up in the "nurturing environment of human culture" is radically different from one who did, which suggests that a lot isn't hardcoded. Unfortunately, the author doesn't notice the contradiction.
There are currently about seven million people with IQs higher than 150 — better cognitive ability than 99.9% of humanity — and mostly, these are not the people you read about in the news.
There are fewer than seven million people with IQs of 150 or higher. IQ is not normed so that the average citizen of the world scores 100.
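Under the standard Normal(100, 15) assumption (and the norming caveat means the true worldwide figure is even murkier), the 150+ tail works out to roughly three million people, not seven:

```python
from math import erfc, sqrt

def fraction_above(iq, mean=100.0, sd=15.0):
    # upper-tail probability of a Normal(100, 15) IQ distribution
    return 0.5 * erfc((iq - mean) / (sd * sqrt(2)))

world_population = 7.6e9  # rough 2017-era figure
print(fraction_above(150) * world_population)  # roughly 3 million people
```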
Saying that very high IQ doesn't matter when the richest man has something like a 160 IQ (translated from his 1590/1600 SAT score) is also misleading.
Consider also the US Federal Reserve leadership: Ben Bernanke, who scored similarly to Gates, and Janet Yellen, for whom IQ or SAT numbers aren't public, but who was reportedly described by a colleague as a "small lady with a large IQ".
Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time.
We measure GDP growth as a percentage of last year's value, not in absolute numbers, because we believe GDP growth is exponential, not linear.
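The distinction matters quantitatively; a constant percentage rate compounds, and over decades the gap versus constant absolute growth becomes large (illustrative numbers only):

```python
def compound(start, rate, years):
    # constant *percentage* growth each year: exponential over time
    value = start
    for _ in range(years):
        value *= 1 + rate
    return value

# 3% a year for 50 years vs. adding a flat 3 units a year:
print(compound(100.0, 0.03, 50))  # ≈ 438, exponential
print(100.0 + 3.0 * 50)           # = 250, linear
```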
Lastly, even if it were true that you couldn't improve AI software, no software improvement is necessary for FOOM. In The Age of Em, Robin Hanson lays out how ems could go FOOM simply by increasing the production of the hardware on which they run, without any improvement in their cognitive ability, even if they are only as smart as the smartest humans. An AGI can play all of the same tricks, and can likely improve its cognition too, if the progress of AlphaGo is any indication.
I think you wanted to link to this recent essay by François Chollet (AI researcher and designer of Keras, a well-known deep learning framework). The essay has also been discussed on Hacker News and on Twitter.
I'm currently writing an answer to this one. I think it would be beneficial to have extra material about intelligence explosion which is disconnected from the "what should be done about it" question, which is so often tied to "sci-fi" scenarios.
I don't agree with everything here—or even the central argument—but this post was informative for me, and I think others would benefit from it.