It is true that a Harpy Eagle can lift more than three-quarters of its body weight while the Boeing 747 Large Cargo Freighter has a maximum take-off weight of almost double its operating empty weight (I suspect that insects can do better).
Which doesn't (automatically) mean that the 747 has a worse design than the eagle. Smaller things (constructions, machines, animals) are relatively stronger than bigger things not because of their superior design, but because the physics of materials is not scale-invariant (unless you somehow managed to scale the size of atoms too). If you made a 1000:1 scaled copy of an ant, it wouldn't be able to lift objects twenty times heavier than itself.
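For what it's worth, here is the back-of-the-envelope version of that square-cube point (a rough sketch; the 20x lifting figure for the original ant is an assumption, and the rest is just proportional reasoning):

```latex
% Square-cube sketch: scale every linear dimension by k
\[
\text{weight} \propto L^{3}, \qquad
\text{muscle strength} \propto \text{cross-sectional area} \propto L^{2}
\quad\Rightarrow\quad
\frac{\text{strength}}{\text{weight}} \propto \frac{1}{L}.
\]
\[
k = 1000:\qquad
\underbrace{20\times \text{body weight}}_{\text{original ant (assumed)}}
\;\longrightarrow\;
\frac{20}{1000} = 0.02\times \text{body weight for the scaled-up ant.}
\]
```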
Why haven’t we seen a learning algorithm teaching itself chess intelligence starting with nothing but the rules?
We have now, depending on how you interpret "teach itself". It was given nothing but the rules and the ability to play against itself.
One issue I've mentioned before and that I think is worth addressing is how much the ability to self-improve quickly might depend on strict computational limits from theoretical computer science (especially complexity theory). If P != NP in a strong sense, then recursive self-improvement may be very difficult.
More explicitly, many problems that are relevant for recursive self-improvement (circuit design and memory management, for example) involve graph coloring and traveling salesman variants, which are NP-hard or NP-complete. In that context, it could well be that designing new hardware and software will quickly hit diminishing marginal returns. If P, NP, coNP, PSPACE, and EXP are all distinct in a strong sense, then this sort of result is plausible.
There are problems with this line of argument. One major one is that the standard distinctions between complexity classes are all asymptotic, stated in big-O terms. So it could well be that the various classes are distinct but that the constants are small enough that, for all practical purposes, one can do whatever one wants. There are also possible loopholes. For example, Scott Aaronson has shown that if one has access to closed time-like cu...
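To make the point about constants concrete, here is a toy comparison with made-up numbers (not a claim about any real algorithm): an asymptotically worse running time with a tiny constant can beat an asymptotically better one with a huge constant at every input size anyone is likely to care about.

```python
# Toy illustration only: the constants below are arbitrary.
exponential = lambda n: 1e-6 * 2 ** n   # "bad" asymptotics, tiny constant
polynomial  = lambda n: 1e10 * n ** 3   # "good" asymptotics, huge constant

# Find the first input size at which the exponential algorithm becomes slower.
crossover = next(n for n in range(1, 200) if exponential(n) > polynomial(n))
print(crossover)  # 72 with these made-up constants; below that size, the
                  # asymptotically "worse" algorithm is the faster one
```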
Why haven’t we seen a learning algorithm teaching itself chess intelligence starting with nothing but the rules?
AIXI-MC can teach itself Pacman; Pacman by default is single-player, so the game implementation already has to exist for AIXI-MC. I suppose you could set up a pair of AIXI-MCs, as is sometimes done for training chess programs, and the two would gradually teach each other chess.
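For what that setup could look like in practice, here is a minimal sketch of two agents teaching each other chess through self-play. It is emphatically not AIXI-MC: it uses the python-chess library for the rules, and the agent structure and toy value-table update are my own illustrative assumptions.

```python
# Minimal self-play sketch (illustrative only, not AIXI-MC).
# Requires the python-chess package: pip install chess
import random
import chess

class SelfPlayAgent:
    def __init__(self):
        self.value = {}  # position (FEN string) -> estimated value for White

    def pick_move(self, board, epsilon=0.2):
        moves = list(board.legal_moves)
        if random.random() < epsilon:
            return random.choice(moves)           # explore
        best, best_score = None, None
        for move in moves:
            board.push(move)
            v = self.value.get(board.fen(), 0.0)  # unseen positions default to 0
            board.pop()
            score = v if board.turn == chess.WHITE else -v
            if best_score is None or score > best_score:
                best, best_score = move, score
        return best

    def learn(self, positions, outcome, lr=0.1):
        # Nudge the value of every visited position towards the final outcome.
        for fen in positions:
            old = self.value.get(fen, 0.0)
            self.value[fen] = old + lr * (outcome - old)

def play_one_game(white, black, max_plies=200):
    board, positions = chess.Board(), []
    while not board.is_game_over() and len(positions) < max_plies:
        agent = white if board.turn == chess.WHITE else black
        board.push(agent.pick_move(board))
        positions.append(board.fen())
    result = board.result(claim_draw=True)        # "1-0", "0-1", "1/2-1/2" or "*"
    outcome = 1.0 if result == "1-0" else -1.0 if result == "0-1" else 0.0
    white.learn(positions, outcome)
    black.learn(positions, outcome)

if __name__ == "__main__":
    a, b = SelfPlayAgent(), SelfPlayAgent()
    for _ in range(10):   # a real experiment would need vastly more games
        play_one_game(a, b)
```

Whether anything like this "gradually teaches itself chess" in a meaningful sense is exactly the open question the post raises; the sketch only shows that the two-agent setup itself is easy to wire up.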
Evolution versus Intelligence
I didn't read all of this post, unfortunately; I didn't have time before class. But I wanted to mention my thoughts on this section. This seemed like a very unfortunate analogy. For one, the specific example of flying is wildly biased against humans, who are handicapped by orders of magnitude by the square-cube law. Secondly, you can think of any number of arbitrary counterexamples where the advantage is on the side of intelligence. For a similar counterexample, intelligence has invented cars, trains, and boats which allow huma...
Even if the AGI is not told to hold, e.g. it is simply told to compute as many digits of Pi as possible, I consider it a far-fetched assumption that any AGI intrinsically cares to take over the universe as fast as possible in order to compute as many digits of Pi as possible. Sure, if all of that is presupposed, then it will happen, but I don’t see that most AGI designs are like that. Most designs that have the potential for superhuman intelligence, but which are given simple goals, will in my opinion just bob up and down as slowly as possible.
It seems to be a kind-of irrelevant a...
Before I dive into this material in depth, a few thoughts:
First, I want to sincerely congratulate you on being (it seems to me) the first in our tribe to dissent.
Second, it seems your problem isn't with an intelligence explosion as a risk all on its own, but rather as a risk among other risks, one that is farther from being solved (both in terms of work done and in resolvability), and so this post could use a better title, i.e., "Why an Intelligence Explosion is a Low-Priority Global Risk", which does not a priori exclude SIAI from potential dona...
I can't really accept innovation as random noise. That doesn't seem to account for the incredible growth in the rate of new technology development. I think a lot of developments are in fact based on sophisticated analysis of known physical laws - e.g. a lot of innovation is engineering rather than discovery. Many foundational steps do seem to be products of luck, such as the acceptance of the scientific method.
given all of the apparent disadvantages of the blind idiot God, evolution was able to come up with altruism, something that works two levels above the individual and one level above society.
Much human altruism is fictional - humans are nice to other humans mostly because being nice pays.
There are low-level selection explanations for most of the genuine forms of human altruism. IMHO, the most promising explanations are:
The critical similarity is that both rely on dumb luck when it comes to genuine novelty.
Someone pointed out that a sufficiently powerful intelligence could search all of design space rather than relying on "luck".
I read it on the Web, but can't find it - search really sucks when you don't have a specific keyword or exact phrasing to match.
This should be in Main. I'd much rather have this than "how I broke up with my girlfriend" there.
(Otherwise, I don't have much to say because I basically agree with you. I find your arguments kinda weak and speculative, but much less so than arguments for the other side. So your skepticism is justified.)
Many humans are not even capable of handling the complexity of the brain of a worm.
I don't think that's the right reference class. We're not asking if something is sufficient, but if something is likely.
...Our “irrationality” and the patchwork-architecture of the human brain might constitute an actual feature. The noisiness and patchwork architecture of the human brain might play a significant role in the discovery of unknown unknowns because it allows us to become distracted, to leave the path of evidence based exploration...The noisiness of the human b
A lot of this post sounds like fake ignorance. If you just read over it, you might think the answers to the questions asked are genuinely unknown, but if you think for a bit, you can see that we have quite a lot of evidence and can give a rough answer.
We simply don’t know if intelligence is instrumental or quickly hits diminishing returns.
Well, humans are doing OK for themselves; intelligence seems to have accelerating returns up to the level of a smart human. What's more, intelligence gets more valuable with increasing scale and with cheaper compute. When controlling a roomba, you are ...
Just a reminder that risk from AI can occur without recursive self-improvement. Any AGI with a nice model of our world and some goals could potentially be extremely destructive. Even if intelligence has diminishing returns, there is a huge hardware base to be exploited and a huge number of processors working millions of times faster than brains to be harnessed. Maybe intelligence won't explode in terms of self-improvement, but it can nevertheless explode in terms of pervasiveness and power.
Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI is not the most dangerous existential risk that we should worry about.
I view this as one of the best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.
I suspect the answer may be something to do with anthropics - but I'm not really certain of exactly what it is.
Intelligence is no solution in itself; it is merely an effective searchlight for unknown unknowns, and who says that the brightness of the light increases in proportion to the distance between unknown unknowns? To enable an intelligence explosion, the light would have to reach out much farther with each increase in intelligence than the distance between unknown unknowns grows. I just don’t see that as a reasonable assumption.
We do have some data on historical increases in intelligence due to organic and cultural evolution. There's the fo...
I agree with the above, yet given all of the apparent disadvantages of the blind idiot God, evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven’t been able to show such ingenuity by incorporating successes that are not evident from an individual or even societal position.
I don't understand this paragraph. What does "something that works two levels above the individual and one level above society" mean? Or the follow-up sentence?
It seems that if you increase intelligence you also increase the computational cost of its further improvement and the distance to the discovery of some unknown unknown that could enable another quantum leap. It seems that you need to apply a lot more energy to get a bit more complexity.
I don't necessarily think it's true that you need to discover an unknown unknown to reach a "quantum leap". This is very qualitative reasoning about intelligence. You could simply increase the speed. Also, evolution didn't produce intelligence by knowing some unknown unknown; it was a result of trial and error. Further intelligence improvement could use the same method, just faster.
The AGI would have to acquire new resources slowly, as it couldn’t just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources. The AGI could not use its ability to self-improve to acquire the very resources it would need in order to self-improve in the first place.
If the AGI creates a sufficiently convincing business plan / fake company front, it might well be able to command a significant share of the world's resources on credit and either repay after improving or grab power and leave it at that.
The first several points you make seem very weak to me; however, starting with the section on embodied cognition, the post gets better.
Embodied cognition seems to me like a problem for programmers to overcome, not an argument against FOOM. However, it serves as a good basis for your point about constrained resources; I suspect that with sufficient time and leeway and access to AIM, an AGI could become an extremely effective social manipulator. However, this seems like the only avenue in which it would obviously have the ability to get and process responses easil...
If you accept the premise that life is not unique and special then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering.
Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI is not the most dangerous existential risk that we should worry about.
It seems like an argument for DOOM - but what if getting this far is simply very difficult?
Then we could be locally...
We simply don’t know if intelligence is instrumental or quickly hits diminishing returns.
General intelligence -- defined as the ability to acquire, organize, and apply information -- is definitionally instrumental. Greater magnitudes of intelligence yield greater ability to acquire, organize, and apply said information.
Even if we postulate an increasing difficulty or threshold of "contemplative-productivity" per new "layer" of intelligence, the following remains true: Any AGI which is designed as more "intelligent" than th...
There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs:
- Intelligence is goal-oriented.
- Intelligence can think ahead.
- Intelligence can jump fitness gaps.
- Intelligence can engage in direct experimentation.
- Intelligence can observe and incorporate solutions of other optimizing agents.
Much of this seems pretty inaccurate. The first three points are true, but not really the issue - and explaining why would go uncomfortably close to the topic I am forbidden from talking about ...
(The following is a summary of some of my previous submissions that I originally created for my personal blog.)
"There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know."
— Donald Rumsfeld, Feb. 12, 2002, Department of Defense news briefing
Intelligence, a cornucopia?
It seems to me that those who believe in the possibility of catastrophic risks from artificial intelligence act on the unquestioned assumption that intelligence is a kind of black box, a cornucopia that can sprout an abundance of novelty. But this implicitly assumes that if you increase intelligence you also decrease the distance between discoveries.
Intelligence is no solution in itself; it is merely an effective searchlight for unknown unknowns, and who says that the brightness of the light increases in proportion to the distance between unknown unknowns? To enable an intelligence explosion, the light would have to reach out much farther with each increase in intelligence than the distance between unknown unknowns grows. I just don’t see that as a reasonable assumption.
Intelligence amplification, is it worth it?
It seems that if you increase intelligence you also increase the computational cost of its further improvement and the distance to the discovery of some unknown unknown that could enable another quantum leap. It seems that you need to apply a lot more energy to get a bit more complexity.
If any increase in intelligence is vastly outweighed by its computational cost and by the expenditure of time needed to discover it, then it might not be instrumental for a perfectly rational agent (such as an artificial general intelligence), as imagined by game theorists, to increase its intelligence, as opposed to using its existing intelligence to pursue its terminal goals directly or investing its given resources to acquire other means of self-improvement, e.g. more efficient sensors.
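As a toy illustration of that trade-off, here is a small model in which an agent repeatedly chooses between pursuing its terminal goal directly and paying for another increment of intelligence. Every functional form in it (exponential improvement cost, linear productivity gain, one-step lookahead) is an arbitrary assumption made for illustration, not a claim about real minds:

```python
# Toy model: when does a rational agent stop self-improving?
def improvement_cost(level):
    return 2.0 ** level            # assumed: each increment costs twice as much

def productivity(level):
    return 1.0 + 0.5 * level       # assumed: each increment adds a fixed bonus

def run(budget=1000.0, horizon=50):
    level, goal_progress = 0, 0.0
    for step in range(horizon):
        remaining = horizon - step
        cost = improvement_cost(level + 1)
        marginal_gain = (productivity(level + 1) - productivity(level)) * remaining
        if cost <= budget and marginal_gain > cost:
            budget -= cost                         # self-improve: pay now, profit later
            level += 1
        else:
            goal_progress += productivity(level)   # pursue the terminal goal directly
    return level, goal_progress

print(run())  # with these assumptions the agent stops improving after a few levels
```

With the assumed curves the agent self-improves only a handful of times and then spends the rest of its effort on its terminal goal; with other curves it would never stop. The point is only that whether self-improvement is instrumental depends entirely on how the cost of the next increment compares to its payoff.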
What evidence do we have that the payoff of intelligent, goal-oriented experimentation yields enormous advantages (enough to enable an intelligence explosion) over evolutionary discovery relative to its cost?
We simply don’t know if intelligence is instrumental or quickly hits diminishing returns.
Can intelligence be effectively applied to itself at all? How do we know that any given level of intelligence is capable of handling its own complexity efficiently? Many humans are not even capable of handling the complexity of the brain of a worm.
Humans and the importance of discovery
There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs:
- Intelligence is goal-oriented.
- Intelligence can think ahead.
- Intelligence can jump fitness gaps.
- Intelligence can engage in direct experimentation.
- Intelligence can observe and incorporate solutions of other optimizing agents.
But when it comes to unknown unknowns, what difference is there between intelligence and evolution? The critical similarity is that both rely on dumb luck when it comes to genuine novelty. And where, if not in the dramatic improvement of intelligence itself, would the discovery of novel unknown unknowns be required?
We have no idea about the nature of discovery and its importance when it comes to what is necessary to reach a level of intelligence above our own, by ourselves. How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?
Our “irrationality” and the patchwork architecture of the human brain might constitute an actual feature. The noisiness and patchwork architecture of the human brain might play a significant role in the discovery of unknown unknowns because they allow us to become distracted, to leave the path of evidence-based exploration.
A lot of discoveries were made by people who were not explicitly trying to maximize expected utility. A lot of progress is due to luck, in the form of the discovery of unknown unknowns.
A basic argument in support of risks from superhuman intelligence is that we don’t know what it could possibly come up with. That is also why it is called a “Singularity”. But why does nobody ask how a superhuman intelligence knows what it could possibly come up with?
It is not intelligence in and of itself that allows humans to accomplish great feats. Even people like Einstein, geniuses who were apparently able to come up with great insights on their own, were simply lucky to be born into the right circumstances: the time was ripe for great discoveries, thanks to previous discoveries of unknown unknowns.
Evolution versus Intelligence
It is argued that the mind-design space must be large if evolution could stumble upon general intelligence, and that there are low-hanging fruits that are much more efficient at general intelligence than humans are; evolution simply went with the first that came along. It is further argued that evolution is not limitlessly creative, since each step must increase the fitness of its host, and that there are therefore artificial mind designs that can do what no product of natural selection could accomplish.
I agree with the above, yet given all of the apparent disadvantages of the blind idiot God, evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven’t been able to show such ingenuity by incorporating successes that are not evident from an individual or even societal position.
The example of altruism provides evidence that intelligence isn’t many levels above evolution. Therefore the crucial question is, how great is the performance advantage? Is it large enough to justify the conclusion that the probability of an intelligence explosion is easily larger than 1%? I don’t think so. To answer this definitively we would have to fathom the significance of the discovery (“random mutations”) of unknown unknowns in the dramatic amplification of intelligence versus the invention (goal-oriented “research and development”) of an improvement within known conceptual bounds.
Another example is flight. Artificial flight is not even close to the energy efficiency and maneuverability of birds or insects. We didn’t go straight from no artificial flight to flight that is generally superior to the natural flight produced by biological evolution.
Take a dragonfly, for example. Even if we were handed the complete design for a perfect artificial dragonfly, minus the design for its flight, we wouldn’t be able to build a dragonfly that could take over the world of dragonflies, all else equal, by means of superior flight characteristics.
It is true that a Harpy Eagle can lift more than three-quarters of its body weight while the Boeing 747 Large Cargo Freighter has a maximum take-off weight of almost double its operating empty weight (I suspect that insects can do better). My whole point is that we have never achieved artificial flight that is strongly above the level of natural flight. An eagle can, after all, catch its cargo under various circumstances, on the slope of a mountain or from beneath the surface of the sea, thanks to its superior maneuverability.
Humans are biased and irrational
It is obviously true that our expert systems are better than we are at their narrow range of expertise. But that expert systems are better at certain tasks does not imply that you can effectively and efficiently combine them into a coherent agency.
The noisiness of the human brain might be one of the important features that allow it to exhibit general intelligence. Yet the same noise might be the reason that no task a human can accomplish is executed with maximal efficiency. An expert system that features a single stand-alone ability can reach the unique equilibrium for that ability, whereas systems that have not fully relaxed to equilibrium have the characteristics required to exhibit general intelligence. In this sense a decrease in efficiency is a side effect of general intelligence. If you integrate a certain ability into a coherent framework of agency, you decrease its efficiency dramatically. That is the difference between a tool and the ability of the agent that uses the tool.
In the above sense, our tendency to be biased and act irrationally might partly be a trade-off between plasticity, efficiency, and the necessity of goal stability.
Embodied cognition and the environment
Another problem is that general intelligence is largely a result of an interaction between an agent and its environment. It might in principle be possible to arrive at various capabilities by means of induction, but this is only a theoretical possibility given unlimited computational resources. To achieve real-world efficiency you need to rely on slow environmental feedback and make decisions under uncertainty.
AIXI is often quoted as a proof of concept that it is possible for a simple algorithm to improve itself to such an extent that it could in principle reach superhuman intelligence. AIXI proves that there is a general theory of intelligence. But there is a problem: AIXI is as far from real-world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn’t get you anywhere in terms of real-world general intelligence, just as you won’t be able to upload yourself to a non-biological substrate merely because you have shown that, in some abstract sense, every physical process can be simulated.
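For reference, and to make the incomputability point concrete, this is roughly AIXI's action rule as I recall it from Hutter's work (take the exact notation with a grain of salt):

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\big[\, r_t + \cdots + r_m \,\big]
\sum_{q \;:\; U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The innermost sum ranges over all programs q for a universal Turing machine U, weighted by their length. That sum is what makes AIXI incomputable, and it is exactly the part that any real-world approximation has to throw away.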
Just imagine that you emulated a grown-up human mind and it wanted to become a pickup artist. How would it do that with only an Internet connection? It would need some sort of avatar, at least, and would then have to wait for the environment to provide a lot of feedback.
Therefore, even if we’re talking about the emulation of a grown-up mind, some capabilities will be really hard to acquire. How, then, is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI, which lacks all of the hard-coded capabilities of a human toddler, going to do it?
Can we even attempt to imagine what it is about a boxed emulation of a human toddler that makes it unable to become a master of social engineering in a very short time?
Can we imagine what is missing that would enable one of the existing expert systems to quickly evolve vastly superhuman capabilities in its narrow area of expertise? Why haven’t we seen a learning algorithm teaching itself chess intelligence starting with nothing but the rules?
In a sense an intelligent agent is similar to a stone rolling down a hill: both are moving towards a sort of equilibrium. The difference is that an intelligent agent follows more complex trajectories, since its ability to read and respond to environmental cues is vastly greater than that of a stone. Yet, intelligent or not, the environment in which an agent is embedded plays a crucial role. There is a fundamental dependency on unintelligent processes. Our environment is structured in such a way that we use information within it as an extension of our minds. The environment enables us to learn and improve our predictions by providing a testbed and a constant stream of data.
Necessary resources for an intelligence explosion
If an artificial general intelligence is unable to seize the resources necessary to undergo explosive recursive self-improvement, then the ability and cognitive flexibility of superhuman intelligence alone would have to be sufficient for it to self-modify its way up to massive superhuman intelligence within a very short time.
Without advanced real-world nanotechnology it will be considerably more difficult for an AGI to undergo quick self-improvement. It will have to make use of existing infrastructure, e.g. buy stock in chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won’t be able to create new computational substrate without the whole economy of the world supporting it. It won’t be able to create an army of robot drones overnight without it either.
To do so it would have to make use of considerable amounts of social engineering without its creators noticing. But, more importantly, it would have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn’t just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources. The AGI could not use its ability to self-improve to acquire the very resources it would need in order to self-improve in the first place.
Therefore the absence of advanced nanotechnology constitutes an immense blow to the possibility of explosive recursive self-improvement and risks from AI in general.
One might argue that an AGI will solve nanotechnology on its own and find some way to trick humans into manufacturing a molecular assembler and granting it access to it. But this might be very difficult.
There is a strong interdependence of resources and manufacturers. The AGI won’t be able to simply trick some humans into building a high-end factory to create computational substrate, let alone a molecular assembler. People will ask questions and soon get suspicious. Remember, it won’t be able to coordinate a world conspiracy, because it hasn’t yet been able to self-improve to that point; it is still trying to acquire enough resources, which it has to do the hard way, without nanotech.
Anyhow, you’d probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.
People associated with the SIAI would at this point claim that if the AI can’t make use of nanotechnology it might make use of something we haven’t even thought about. But what, magic?
Artificial general intelligence, a single breakthrough?
Another point to consider when talking about risks from AI is how quickly the invention of artificial general intelligence will take place. What evidence do we have that there is some principle that, once discovered, allows us to grow superhuman intelligence overnight?
If the development of AGI takes place slowly, as a gradual and controllable development, we might be able to learn from small-scale mistakes while having to face other risks in the meantime. This might for example be the case if intelligence cannot be captured by a discrete algorithm, or is modular, and therefore never allows us to reach a point where we can suddenly build the smartest thing ever, one that just extends itself indefinitely.
To me it doesn’t look like that we will come up with artificial general intelligence quickly, but rather that we will have to painstakingly optimize our expert systems step by step over long periods of times.
Paperclip maximizers
It is claimed that an artificial general intelligence might wipe us out inadvertently while undergoing explosive recursive self-improvement to more effectively pursue its terminal goals. I think it is unlikely that most AI designs would fail to hold, that is, to stop within whatever bounds they are given.
I agree with the argument that any AGI that isn’t made to care about humans won’t care about humans. But I also think that the same argument applies to spatio-temporal scope boundaries and resource limits. Even if the AGI is not told to hold, e.g. it is simply told to compute as many digits of Pi as possible, I consider it a far-fetched assumption that any AGI intrinsically cares to take over the universe as fast as possible in order to compute as many digits of Pi as possible. Sure, if all of that is presupposed, then it will happen, but I don’t see that most AGI designs are like that. Most designs that have the potential for superhuman intelligence, but which are given simple goals, will in my opinion just bob up and down as slowly as possible.
Complex goals need complex optimization parameters (the design specifications of the subject of the optimization process against which it will measure the success of its self-improvement).
Even the creation of paperclips is a much more complex goal than telling an AI to compute as many digits of Pi as possible.
For an AGI that was designed to produce paperclips to pose an existential risk, its creators would have to be capable enough to enable it to take over the universe on its own, yet forget, or fail, to define time, space, and energy bounds as part of its optimization parameters. Therefore, given the large number of restrictions that are inevitably part of any advanced general intelligence, the nonhazardous subset of all possible outcomes might be much larger than the subset where the AGI works perfectly yet fails to hold before it could wreak havoc.
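Purely as a hypothetical illustration of what "time, space and energy bounds as part of its optimization parameters" might look like, here is a sketch of a goal specification with explicit limits; every field name is invented for this example:

```python
# Hypothetical sketch of a bounded goal specification (field names invented).
from dataclasses import dataclass

@dataclass(frozen=True)
class GoalSpec:
    objective: str                  # what to optimize, e.g. "digits of pi computed"
    max_wall_clock_seconds: float   # stop after this much time has elapsed
    max_energy_joules: float        # hard cap on total energy expenditure
    allowed_resources: tuple        # the only hardware the agent may use at all

bounded_pi_goal = GoalSpec(
    objective="digits of pi computed",
    max_wall_clock_seconds=3600.0,
    max_energy_joules=1.0e6,
    allowed_resources=("local-cluster",),
)
```

The argument above is that forgetting every bound of this kind, while getting everything else about a universe-conquering optimizer right, is a fairly specific failure mode.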
Fermi paradox
The Fermi paradox provides the only data we can analyze that amounts to empirical criticism of concepts like the paperclip maximizer, and of risks from superhuman AIs with non-human values in general, without working directly on AGI to test those hypotheses ourselves.
If you accept the premise that life is not unique and special then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering.
Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI is not the most dangerous existential risk that we should worry about.
Summary
In principle we could build antimatter weapons capable of destroying worlds, but in practice it is much harder to accomplish.
There are many question marks when it comes to the possibility of superhuman intelligence, and many more about the possibility of recursive self-improvement. Most of the arguments in favor of those possibilities solely derive their appeal from being vague.
Further reading