On this page I will collect criticisms of (1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization. Please suggest your own, citing the original source when possible.

[Under construction.]

 

"AGI won't be a big deal; we already have 6 billion general intelligences on Earth."

Example: "I see no reason to single out AI as a mould-breaking technology: we already have billions of humans." (Deutsch, The Beginning of Infinity, p. 456.)

Response: The advantages of mere digitality (speed, copyability, goal coordination) alone are transformative, and will increase the odds of rapid recursive self-improvement in intelligence. Meat brains are badly constrained in ways that non-meat brains need not be.

 

"Intelligence requires experience and learning, so there is a limit to the speed at which even a machine can improve its own intelligence."

Example: "If you define the singularity as a point in time when intelligent machines are designing intelligent machines in such a way that machines get extremely intelligent in a short period of time--an exponential increase in intelligence--then it will never happen. Intelligence is largely defined by experience and training, not just by brain size or algorithms. It isn't a matter of writing software. Intelligent machines, like humans, will need to be trained in particular domains of expertise. This takes time and deliberate attention to the kind of knowledge you want the machine to have." (Hawkins, Tech Luminaries Address Singularity)

Response: Intelligence defined as optimization power doesn't necessarily need experience or learning from the external world. Even if it did, a superintelligent machine spread throughout the internet could gain experience and learning from billions of sub-agents all around the world simultaneously, while near-instantaneously propagating these updates to its other sub-agents. 

 

"There are hard limits to how intelligent a machine can get."

Example: "The term 'singularity' applied to intelligent machines refers to the idea that when intelligent machines can design intelligent machines smarter than themselves, it will cause an exponential growth in machine intelligence leading to a singularity of infinite (or at least extremely large) intelligence. Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run... Exponential growth requires the exponential consumption of resources (matter, energy, and time), and there are always limits to this." (Hawkins, Tech Luminaries Address Singularity)

Response: There are physical limits to how intelligent something can get, but they easily allow the intelligence required to transform the solar system.

 

"AGI won't be malevolent."

Example: "No intelligent machine will 'wake up' one day and say 'I think I will enslave my creators.'" (Hawkins, Tech Luminaries Address Singularity)

Example: "...it's more likely than not in my view that the two species will comfortably and more or less peacefully coexist--unless human interests start to interfere with those of the machines." (Casti, Tech Luminaries Address Singularity)

Response: True. But most runaway machine superintelligence designs would kill us inadvertently. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

 

"If intelligence explosion was possible, we would have seen it by now."

Example: "I don't believe in technological singularities. It's like extraterrestrial life--if it were there, we would have seen it by now." (Rodgers, Tech Luminaries Address Singularity)

Response: Not true. An intelligence explosion requires a machine smart enough to redesign its own cognitive architecture, and no such machine has yet been built, so its absence to date is not evidence against its possibility.

 

"Humanity will destroy itself before AGI arrives."

Example: "the population will destroy itself before the technological singularity." (Bell, Tech Luminaries Address Singularity)

Response: This is plausible, though there are many reasons to think that AGI will arrive before other global catastrophic risks do.

 

"The Singularity belongs to the genre of science fiction."

Example: "The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived." (Pinker, Tech Luminaries Address Singularity)

Response: This is not an issue of literary genre, but of probability and prediction. Science fiction becomes science fact several times every year. In the case of technological singularity, there are good scientific and philosophical reasons to expect it.

 

"Intelligence isn't enough; a machine would also need to manipulate objects."

Example: "The development of humans, what evolution has come up with, involves a lot more than just the intellectual capability. You can manipulate your fingers and other parts of your body. I don't see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they're intellectually more capable than humans." (Moore, Tech Luminaries Address Singularity)

Response: Robotics is making strong progress in addition to AI.

 

"Human intelligence or cognitive ability can never be achieved by a machine."

Example: "Goedel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system-but which we can see to be true. It follows that no machine can be a complete or adequate model of the mind, that minds are essentially different from machines." (Lucas, Minds, Machines and Goedel)

Example: "Instantiating a computer program is never by itself a sufficient condition of [human-liked] intentionality." (Searle, Minds, Brains, and Programs)

Response: "...nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain... As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on.... we can set aside these objections by stipulating that for the purposes of the argument, intelligence is to be measured wholly in terms of behaviour and behavioural dispositions, where behaviour is construed operationally in terms of the physical outputs that a system produces." (Chalmers, The Singularity: A Philosophical Analysis)

 

"It might make sense in theory, but where's the evidence?"

Example: "Too much theory, not enough empirical evidence." (MileyCyrus, LW comment)

Response: "Papers like How Long Before Superintelligence contain some of the relevant evidence, but it is old and incomplete. Upcoming works currently in progress by Nick Bostrom and by SIAI researchers contain additional argument and evidence, but even this is not enough. More researchers should be assessing the state of the evidence."

 

"Humans will be able to keep up with AGI by using AGI's advancements themselves."

Example: "...an essential part of what we mean by foom in the first place... is that it involves a small group accelerating in power away from the rest of the world. But the reason why that happened in human evolution is that genetic innovations mostly don't transfer across species. [But] human engineers carry out exactly this sort of technology transfer on a routine basis." (rwallace, The Curve of Capability)

Response: Human engineers cannot take a powerful algorithm from AI and implement it in their own neurobiology. Moreover, once an AGI is improving its own intelligence, it's not clear that it would share the 'secrets' of these improvements with humans.

 

"A discontinuous break with the past requires lopsided capabilities development."

Example: "a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided... [But] the lopsidedness is not occurring [in computers]. Obviously computer technology hasn't lagged in symbol processing - quite the contrary." (rwallace, The Curve of Capability)

Example: "Some species, such as humans, have mostly taken over the worlds of other species. The seeming reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing." (Katja Grace, How Far Can AI Jump?)

Response: It doesn't seem that symbol processing was the missing capability that made humans so powerful. Calculators have superior symbol processing, but have no power to rule the world. Also: many kinds of lopsidedness are occurring in computing technology that may allow a sudden discontinuous jump in AI abilities. In particular, we are amassing vast computational capacities without yet understanding the algorithmic keys to general intelligence.

 

"No small set of insights will lead to massive intelligence boost in AI."

Example: "...if there were a super mind theory that allowed vast mental efficiency gains all at once, but there isn’t.  Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city.  Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations." (Robin Hanson, Is the City-ularity Near?)

Example: "Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities... But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare." (Robin Hanson, The Betterness Explosion)

Response: An intelligence explosion doesn't require a breakthrough that improves all capabilities at once. Rather, it requires an AI capable of improving its intelligence in a variety of ways. Then it can use the advantages of mere digitality (speed, copyability, goal coordination, etc.) to improve its intelligence in dozens or thousands of ways relatively quickly.

 

 

To be added:

 

Comments:

My problem with the focus on the idea of intelligence explosion is that it's too often presented as motivating the problem of FAI, but it's really not, it's a strategic consideration right there besides Hanson's malthusian ems, killer biotech and cognitive modification, one more thing to make the problem urgent, but still one among many.

What ultimately matters is implementing humane value (which involves figuring out what that is). The specific manner in which we lose ability to do so is immaterial. If intelligence explosion is close, humane value will lose control over the future quickly. If instead we change our nature through future cognitive modification tech, or by experimenting on uploads, then the grasp of humane value on the future will fail in orderly manner, slowly but just as irrevocably yielding control over to wherever the winds of value drift blow.

It's incorrect to predicate the importance, or urgency of gaining FAI-grade understanding of humane value on possibility of intelligence explosion. Other technologies that would allow value drift are for all purposes similarly close.

(That said, I do believe AGIs lead to intelligence explosions. This point is important to appreciate the impact and danger of AGI research, if complexity of humane value is understood, and to see one form that implementation of a hypothetical future theory of humane value could take.)

RomeoStevens:
The question of "can we rigorously define human values in a reflectively consistent way" doesn't need to have anything to do with AI or technological progress at all.
Giles:
This is a good point. I think there's one reason to give special attention to the intelligence explosion concept though... it's part of the proposed solution as well as one of the possible problems. The two main ideas here are:
* Recursive self-improvement is possible and powerful
* Human values are fragile; "most" recursive self-improvers will very much not do what we want
These ideas seem to be central to the utility-maximizing FAI concept.

Too much theory, not enough empirical evidence. In theory, FAI is an urgent problem that demands most of our resources (Eliezer is on the record saying that the only two legitimate occupations are working on FAI, and earning lots of money so you can donate money to other people working on FAI).

In practice, FAI is just another Pascal's mugging / Lifespan Dilemma / St. Petersburg Paradox. From XiXiDu's blog:

To be clear, extrapolations work and often are the best we can do. But since there are problems such as the above, that we perceive to be undesirable and that lead to absurd consequences, I think it is reasonable to ask for some upper and lower bounds regarding the use and scope of certain heuristics.

[...]

Taking into account considerations of vast utility or low probability quickly leads to chaos theoretic considerations like the butterfly effect. As a computationally bounded and psychical unstable agent I am unable to cope with that. Consequently I see no other way than to neglect the moral impossibility of extreme uncertainty.

Until [various rationality puzzles] are resolved, or sufficiently established, I will continue to put vastly more weight on empirical evidence and my intu

[...]
lukeprog:
Added.
djcb:
I would also be very interested in seeing some smaller stepping stones implemented -- I imagine that creating an AGI (let alone FAI) will require massive amounts of maths, proofs and the like. It seems very useful to create artificially intelligent mathematics software that can 'discover' and prove interesting theorems (and explain its steps). Of course, there is software that can prove relatively simple theorems, but there's nothing that could prove e.g. Fermat's Last Theorem -- we still need very smart humans for that. Of course, it's extremely hard to create such software, but it would be much easier than AGI/FAI, and at the same time it could help with constructing those (and help in some other areas, say QM). The difficulty in constructing such software might also give us some understanding of the difficulties of constructing general artificial intelligence.

You should have entitled this post "Criticisms of Criticisms of intelligence explosion" :-)

(2) the claim that intelligence explosion is likely to occur within the next 150 years

A "scientific" prediction with a time-frame of several decades and no clear milestones along the way is equivalent to a wild guess. From 20 Predictions of the Future (We’re Still Waiting For):

Weather Control: In 1966, a radio documentary, 2000 AD, was aired as a forum for various media and science personalities to discuss what life might be like in the year 2000. The primary theme running through the show concerned a prediction that no one in the year 2000 would have to work more than a day or two a week, and our leisure time would go through the roof. With so much free time, you can imagine that we would not want our vacations or day trips ruined by nasty weather, and therefore we should quickly develop a way to control the weather, shaping it to our needs. Taking the lightning from the clouds or the wind from the tornadoes were among the predictions, yet they were careful to note that we might not take weather control too far because of political reasons. Unfortunately, we here in the 2000’s still work full weeks, and we still get our picnics rained out from time to time.

One could argue that weather control is an easier problem than AGI (e.g. powerful enough equipment could "unwind" storms and/or redirect weather masses).

magfrump:
Perhaps this is a poor place to begin this, but I'll propose a couple of things I would think count as milestones toward a theory of AGI.
* AIs producing original, valuable work (automated proof systems are an example; I believe there is algorithmically generated music as well that isn't awful, though I'm not sure)
* parsing natural language queries (Watson is a huge accomplishment in this direction)
* systems which reference different subroutines as appropriate (this is present in any OS I'm sure) and which are modular in their subroutines
* automated search for new appropriate subroutines (something like: if I get a new iPhone and say "start a game of Words with Friends with Danny", the phone automatically downloads the Words with Friends app and searches for Danny's profile -- I don't think this exists at present but it seems realistic soon)
* emulation of living beings (i.e. a way of parsing a computation so that it behaves exactly like, for starters, C. elegans; then more complex beings)
* AI that can learn "practical" skills (i.e. AIXI learning chess against itself)
* robotics that can accomplish practical tasks like all-terrain maneuvering and fine manipulation (existent)
* AI that can learn novel skills (i.e. AIXI learning chess by being placed in a "chess environment" rather than having the rules explained to it)
* good emulation or API reverse engineering (like WINE), and especially theoretical results about reverse engineering
* automated bug-fixing programs (I don't program enough to know how good debugging tools are)
* chatbots winning at Turing tests (iirc there are competitions and humans do not always shut out the chatbots)
These all seem like practical steps which would make me think that AGI was nearer; many of them have come to pass in the past decade, very few came before that, some seem close, some seem far away but achievable. There are certainly many more, although I would guess many would be technical and I'm not sufficiently expert
Normal_Anomaly:
Here is some computer-generated music. I don't have particularly refined taste, but I enjoy it. Note: the first link with all the short MP3s is from an earlier version of the program, which was intended only to imitate other composers.
shminux:
My guess is that it will be not so much milestones as seemingly unrelated puzzle pieces suddenly and unexpectedly coming together. This is usually how things happen. From VCRs to USB storage to app markets, you name it. Some invisible threshold gets crossed, and different technologies come together to create a killer product. Chances are, some equivalent of an AGI will sneak up on us in a form no one will have foreseen.
magfrump:
I agree that the eventual creation of AGI is likely to come from seemingly unrelated puzzle pieces coming together. On the other hand, anything that qualifies as an AGI is necessarily going to have the capabilities of a chat bot, a natural language parser, etc. etc. So these capabilities existing makes the puzzle of how to fit pieces together easier. My point is simply that if you predict AGI in the next [time frame] you would expect to see [components of AGI] in [time frame], so I listed some things I would expect if I expected AGI soon, and I still expect all of those things soon. This makes it (in my mind) significantly different than just a "wild guess".
orthonormal:
If the "seed AI" idea is right, this claim can't be taken for granted, especially if there's no optimization for Friendliness.
magfrump:
I would make the case that anything that qualifies as an AGI would need to have some ability to interact with other agents, which would require an analogue of natural language processing, but I certainly agree that it isn't strictly necessary for an AI to come about. I do still think of it as (weak) positive evidence though.
orthonormal:
Two things. First, a seed AI could present an existential risk without ever requiring natural language processing, for example by engineering nanotech. Second, the absence of good natural language processing isn't great evidence that AI is far off, since even if it's a required component of the full AGI, the seed AI might start without it and then add that functionality after a few iterations of other self-improvements.
magfrump:
I don't think that we disagree here very much but we are talking past each other a little bit. I definitely agree with your first point; I simply wouldn't call such an AI fully general. It could easily destroy the world though. I also agree with your second point, but I think in terms of a practical plan for people working on AI natural language processing would be a place to start, and having that technology means such a project is likely closer as well as demonstrating that the technical capabilities aren't extremely far off. I don't think any state of natural language processing would count as strong evidence but I do think it counts as weak evidence and something of a small milestone.
orthonormal:
I agree.
magfrump:
Yay!
Logos01:
Such equipment, however, would have to have access to more power than our civilization currently generates. So while it may be more of an engineering problem than a theoretical one, I believe that AGI is more accessible.

I boldly claim that my criticisms are better than those of the Tech Luminaries:

Also see:

Desrtopa:
In light of the actual arguments collected on this page so far, I don't think that's such a bold claim.
lukeprog:
I agree, as of the time of your comment, though I'm adding new ones as time passes.
XiXiDu:
Also see a summary of Robin Hanson's positions here.
timtyler:
Regarding the "also see" material in the parent: The Curve of Capability makes only a little sense, IMO. The other articles are mostly about a tiny minority breaking away from the rest of civilization. That seems rather unrealistic to me too - but things become more plausible when we consider the possibility of large coalitions winning, or most of the planet.
[anonymous]:

Crocker’s rules declared, because I expect this may agitate some people:

(1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization.

I accept (1) and (3). Where I depart somewhat from the LW consensus is in the belief that anyone is going to accept the idea that the singularity (in its intelligence explosion form) should go ahead, without some important intervening stages that are likely to last for longer than 150 years.

CEV is a bad idea. I am sympathetic towards the mindset of the people who advocate it, but even I would be in the pitchfork-wielding gang if it looked like someone was actually going to implement it. Try to imagine that this was actually going to happen next year, rather than being a fun thing discussed on an internet forum – beware far mode bias. To quote Robin Hanson in a recent OB post:

Immersing ourselves in images of things large in space, time, and social distance puts us into a “transcendant” far mode where positive feelings are strong, our basic ideals are more visible than practica

[...]
Giles:
Seconded - I'd like to see some material from lukeprog or somebody else at SI addressing these kinds of concerns. A "Criticisms of CEV" page maybe? [Edit: just to clarify, I wasn't seconding the part about the pitchforks and I'm not sure that either IA or an AGI ban is an obviously better strategy. But I agree with everything else here]
falenas108:
The problem is that for a nuclear explosion to take place, the higher-ups of some country have to approve it. For AGI, all that has to happen is for the code to be leaked or hacked; then preventing the AGI from being implemented somewhere is an impossible task. And right now, no major online institution in the entire world is safe from being hacked.
[anonymous]:
Perhaps that is a motivation to completely (and effectively) prohibit research in the direction of creating superintelligent AI, licensed or otherwise, and concentrate entirely on human intelligence enhancement.
drethelin:
Ignoring how difficult this would be (due to ease of secret development considering it would largely consist of code rather than easy to track hardware) even if every country in the world WANTED to cooperate on it, the real problem comes from the high potential value of defecting. Much like nuclear weapons development, it would take a lot more than sanctions to convince a rogue nation NOT to try and develop AI for its own benefit, should this become a plausible course of action.
[anonymous]:
What would anyone think they stood to gain from creating an AI, if they understood the consequences as described by Yudkowsky et al? The situation is not "much like nuclear weapons development", because nuclear weapons are actually a practical warfare device, and the comparison was not intended to imply this similarity. I just meant to say that we manage to keep nukes out of the hands of terrorists, so there is reason to be optimistic about our chances of preventing irresponsible or crazy people from successfully developing a recursively self-improving AI - it is difficult, but if creating and successfully implementing a provably safe FAI (without prior intelligence enhancement) is hopelessly difficult - even if only because the large majority of people wouldn't consent to it - then it may still be our best option.
drethelin:
The same things WE hope to gain from creating AI. I do not trust North Korea (for example) to properly decide on the relative risks/rewards of any given course of action it can undertake.
[anonymous]:
OK but it isn't hard (or wouldn't be in the context we are discussing) to come to the understanding that creating an intelligence explosion renders the interests of any particular state irrelevant. I've seen no evidence that the North Koreans are that crazy. The problem would be people who think that something like CEV, implemented by present-day humans, is actually safe - and the people liable to believe that are more likely to be the type of people found here, not North Koreans or other non-Westerners. I'd also be interested in hearing your opinion on the security concerns should we attempt to implement CEV, and find that it shut itself down or produced an unacceptable output.
orthonormal:
If you're correct, then the best way to stave off the optimists from trying is to make an indisputable case for pessimism and disseminate it widely. Otherwise, eventually someone else will get optimistic, and won't see why they shouldn't give it a go.
[anonymous]:
I expect that once recognition of the intelligence explosion as a plausible scenario becomes mainstream, pessimism about the prospects of (unmodified) human programmers safely and successfully implementing CEV or some such thing will be the default, regardless of what certain AI researchers claim. In that case, optimists are likely to have their activities forcefully curtailed. If this did not turn out to be the case, then I would consider "pro-pessimism" activism to change that state of affairs (assuming nothing happens to change my mind between now and then). At the moment however I support the activities of the Singularity Institute, because they are raising awareness of the problem (which is a prerequisite for state involvement) and they are highly responsible people. The worst state of affairs would be one in which no-one recognised the prospect of an intelligence explosion until it was too late. ETA: I would be somewhat more supportive of a CEV in which only a select (and widely admired and recognised) group of humans was included. This seems to create an opportunity for the CEV initial dynamic implementation to be a compromise between intelligence enhancement and ordinary CEV, i.e. a small group of humans can be "prepared" and studied very carefully before the initial dynamic is switched on. So really it's a complex situation, and my post above probably failed to express the degree of ambivalence that I feel regarding this subject.
drethelin:
yeah, that'll work.

This is like the kid version of this page. Where are the good opposing arguments? These are all terrible...

Something about this page bothers me - the responses are included right there with the criticisms. It just gives off the impression that a criticism isn't going to appear until lukeprog has a response to it, or that he is going to write the criticism in a way that makes it easy to respond to, or something.

Maybe it's just me. But if I wanted to write this page, I would try and put myself into the mind of the other side and try to produce the most convincing smackdown of the intelligence explosion concept that I could. I'd think about what the responses would be, but only so that I could get the obvious responses to those responses in first. In other words, aim for DH7.

The responses could be collected and put on another page, or included here when this page is a bit more mature. Does anyone think this approach would help?

antigonus:
I had the same reaction. The post reads like singularity apologetics.
lavalamp:
That is much more constructively put than my comment.

Distinguish positive and negative criticisms: Those aimed at demonstrating the unlikelihood of an intelligence explosion and those aimed at merely undermining the arguments/evidence for the likelihood of an intelligence explosion (thus moving the posterior probability of the explosion closer to its prior probability).

Here is the most important negative criticism of the intelligence explosion: Possible harsh diminishing returns of intelligence amplification. Let f(x, y) measure the difficulty (perhaps in expected amount of time to complete development) for an intelligence of IQ x to engineer an intelligence of IQ y. The claim that intelligence explodes is roughly equivalent to the thesis that f(x, x+1) decreases relatively quickly. What is the evidence for this claim? I haven't seen a huge amount. Chalmers briefly discusses the issue in his article on the singularity and points to how amplifying a human being's intelligence from average to Alan Turing's level has the effect of amplifying his intelligence-engineering ability from more or less nil to being able to design a basic computer. But "nil" and "basic computer" are strictly stupider than "average human... [...]

antigonus:
Another thing: We need to distinguish between getting better at designing intelligences vs. getting better at designing intelligences which are in turn better than one's own. The claim that "the smarter you are, the better you are at designing intelligences" can be interpreted as stating that the function f(x, y) outlined above is decreasing for any fixed y. But the claim that the smarter you are, the easier it is to create an intelligence even smarter is totally different and equivalent to the aforementioned thesis about the shape of f(x, x+1). I see the two claims conflated shockingly often, e.g., in Bostrom's article, where he simply states: and concludes that superintelligence inevitably follows with no intermediary reasoning on the software level. (Actually, he doesn't state that outright, but the sentence is at the beginning of the section entitled "Once there is human-level AI there will soon be superintelligence.") That an IQ 180 AI is (much) better at developing an IQ 190 AI than a human is doesn't imply that it can develop an IQ 190 AI faster than the human can develop the IQ 180 AI.
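To make the role of f(x, x+1) concrete, here is a minimal sketch in Python (purely illustrative; the functional forms and constants are assumptions, not claims about real AI development). It sums the assumed per-step development times for successive one-point improvements under three shapes of f(x, x+1): if the per-step time stays constant or shrinks only slowly, total development time keeps growing without bound, while if it shrinks geometrically the total converges -- the "explosion" regime.

```python
# Toy model: f(x) is the assumed time for an intelligence at level x to build
# one at level x + 1. Every functional form below is an illustrative assumption.

def cumulative_time(f, start=100, steps=500):
    """Total time to climb `steps` levels, one level at a time."""
    return sum(f(x) for x in range(start, start + steps))

constant  = lambda x: 1.0               # no returns: total time grows linearly with levels
harmonic  = lambda x: 100.0 / x         # mild returns: per-step time shrinks, but the total still diverges in the limit
geometric = lambda x: 0.9 ** (x - 100)  # strong returns: the total converges (to 10 here), i.e. an "explosion"

for name, f in [("constant", constant), ("harmonic", harmonic), ("geometric", geometric)]:
    print(f"{name:9s} total time for 500 levels: {cumulative_time(f):8.2f}")
```

The sketch only illustrates the distinction drawn above: "smarter systems are better at designing intelligences" (f(x, y) decreasing in x for fixed y) does not by itself fix the shape of f(x, x+1), which is what the explosion claim turns on.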
torekp:
Here's a line of reasoning that seems to suggest the possibility of an interesting region of decreasing f(x, x+1). It focuses on human evolution and evolutionary algorithms. Human intelligence appeared relatively recently through an evolutionary process. There doesn't seem to be much reason to believe that if the evolutionary process were allowed to continue (instead of being largely pre-empted by memetic and technological evolution) that future hominids wouldn't be considerably smarter. Suppose that evolutionary algorithms can be used to design a human-equivalent intelligence with minimal supervision/intervention by truly intelligent-design methods. In that case, we would expect with some substantial probability that carrying the evolution forward would lead to more intelligence. Since the evolutionary experiment is largely driven by brute-force computation, any increase in computing power underlying the evolutionary "playing field" would increase the rate of increase of intelligence of the evolving population. I'm not an expert on or even practitioner of evolutionary design, so please criticize and correct this line of reasoning.
antigonus:
I agree there's good reason to imagine that, had further selective pressure on increased intelligence been applied in our evolutionary history, we probably would've ended up more intelligent on average. What's substantially less clear is whether we would've ended up much outside the present observed range of intelligence variation had this happened. If current human brain architecture happens to be very close to a local maximum of intelligence, then raising the average IQ by 50 points still may not get us to any IQ 200 individuals. So while there likely is a nearby region of decreasing f(x, x+1), it doesn't seem so obvious that it's wide enough to terminate in superintelligence. Given the notorious complexity of biological systems, it's extremely difficult to extrapolate anything about the theoretical limits of evolutionary optimization.
jacob_cannell:
Those insights are relevant and interesting for the type of self-improvement feedback loop which assumes unlimited improvement potential in algorithmic efficiency. However, there's the much more basic intelligence explosion which is just hardware driven. Brain architecture certainly limits maximum practical intelligence, but does not determine it. Just as the relative effectiveness of current chess AI systems is limited by hardware but determined by software, human intelligence is limited by the brain but determined by acquired knowledge. The hardware is qualitatively important only up to the point where you have something that is turing-complete. Beyond that the differences become quantitative: memory constrains program size, performance limits execution speed. Even so, having AGI's that are 'just' at human level IQ can still quickly lead to an intelligence explosion by speeding them up by a factor of a million and then creating trillions of them. IQ is a red herring anyway. It's a baseless anthropocentric measure that doesn't scale to the performance domains of super-intelligences. If you want a hard quantitative measure, simply use standard computational measures: ie a human brain is a roughly < 10^15 circuit and at most does <10^18 circuit ops per second.

the claim that intelligence explosion is likely to occur within the next 150 years

"We have made little progress toward cross-domain intelligence."

That is, while human-AI comparison is turning to the advantage of AIs, so-called, in an increasing number of narrow domains, the goal of cross-domain generalization of insight seems as elusive as ever, and there doesn't seem to be a hugely promising angle of attack (in the sense that you see AI researchers swarming to explore that angle).

Meat brains are badly constrained in ways that non-meat brains need not be.

Agreed; and there's an overbroad reading of this claim, which I'm kind of worried people encountering it (e.g. in the guise of Eliezer's argument on "the space of all possible minds") can inadvertently fall into: assuming that just because we can't imagine them, there are no constraints that apply to any class of non-meat brains.

The movie that runs through our minds when we imagine "AGI recursive self-improvement" goes something like a version of Hollywood hacker movies, except with the AI in the role of the hacker. It's sitting at a desk wearing mirrorshades, and looking for the line in its own code that has the parameter for "number of working memory items". When it finds it, it goes "aha!" and suddenly becomes twice as powerful as before.

That is vivid, but probably not how it works. For instance, "number of working memory items" can be a functional description of the system, without having an easily identifiable bit of the code where it's determined, just as well in an AI's substrate as in a human mind.

[anonymous]:

"Response: There are physical limits to how intelligent something can get, but they easily allow the intelligence required to transform the solar system."

How do you know this? I would like some more argument behind this response. In particular, what if some things are impossible? For instance, it might be true that cold fusion is unachievable, we will never travel faster than the speed of light (even by cheating), and nanotech suffers some hard limits.

Science fiction becomes science fact

I grate my teeth whenever someone intentionally writes this cliche; "science fact" isn't a noun phrase one would use in any other context.

"Intelligence requires experience and learning, so there is a limit to the speed at which even a machine can improve its own intelligence."

It's not like pure thought alone could have ruled out, say, Newtonian mechanics.

gwern:
Close but maybe not quite right (because it does require observation of the night sky or at least noticing the fact that you have not been blasted to plasma) would be Olbers' paradox.
lessdazed:
Can you add to that? Perhaps it is worth making a separate post collecting things people intuitively wouldn't think solvable by thought and only a small amount of evidence, but that actually can be solved that way.
gwern:
Mm. I have an old draft of an article going through quotes from the Presocratic philosophers, explaining how they logically proceeded to Atomism, which is a pretty impressive feat. But I haven't worked on it in a long time.

What phrase would you use to describe the failure to produce an AGI over the last 50 years? I suspect that 50 years from now we might be saying "Wow that was hard, we've learnt a lot, specific kinds of problem solving work well, and computers are really fast now, but we still don't really know how to go about creating an AGI". In other words, the next 50 years might strongly resemble the last 50 from a very high level view.

In particular, we are amassing vast computational capacities without yet understanding the algorithmic keys to general intelligence.

We are amassing vast computational capacities without yet understanding much of how to use them effectively, period. The Raspberry Pi exceeds the specs of a Cray-1, and costs a few hundred thousand times less. What will it be used for? Playing games, mostly. And probably not games that make us smarter.

A common criticism is that intelligence isn't defined, is poorly defined, cannot be defined, can only be defined relative to human beings, etc.

I guess you could lump general criticism of the computationalist approach to cognition in here (dynamicism, embodiment, ecological psychology, etc). Perhaps intelligence explosion scenarios can be constructed for alternative approaches but they seem far from obvious.


Example: "...it's more likely than not in my view that the two species will comfortably and more or less peacefully coexist--unless human interests start to interfere with those of the machines." (Casti, Tech Luminaries Address Singularity)

Even for someone who hasn't read the sequences, that sounds like a pretty huge "unless." If the two don't have exactly the same interests, why wouldn't their interests interfere?

shminux:
Machines might not be interested in the messy human habitat, but would instead decide to go their own way (space, simulations, nicer subbranches of MWI, baby universes, etc.)

"Humans will be able to keep up with AGI by using AGI's advancements themselves."

Response: Human engineers cannot take a powerful algorithm from AI and implement it in their own neurobiology. Moreover, once an AGI is improving its own intelligence, it's not clear that it would share the 'secrets' of these improvements with humans.

Why not? I can think of a couple possible explanations, but none that hold up to scrutiny. I'm probably missing something.

  1. Humans can't alter their neural structure. This strikes me as unlikely. It is possible to cre

[...]
Vaniver:
This isn't directly related to engineering, but consider the narrow domain of medicine. You have human doctors, who go to medical school, see patients one at a time, and so on. Then you have something like Doctor Watson, one of IBM's goals for the technology they showcased in the Jeopardy match. By processing human speech and test data, it could diagnose diseases on comparable timescales as human doctors, but have the benefit of seeing every patient in the country / world. With access to that much data, it could gain experience far more quickly, and know what to look for to find rare cases. (Indeed, it would probably notice many connections and correlations current doctors miss.) The algorithms Watson uses wouldn't be useful for a human doctor- the things they learned in medical school would be more appropriate for them. The benefits Watson has- the ability to accrue experience at a far faster rate, and the ability to interact with its memory on a far more sophisticated level- aren't really things humans can learn. In creative fields, it seems likely that human/tool hybrids will outperform tools alone, and that's the interesting case for intelligence explosion. (Algorithmic music generation seems to generally be paired with a human curator who chooses the more interesting bits to save.) Many fields are not creative fields, though.

"AGI won't be a big deal; we already have 6 billion general intelligences on Earth."

Problem of scale. AGI would let us have 6 trillion general intelligences. Having a thousand intelligences that like working without pay for every human that exists "won't be a big deal"?

I think otherwise. And that's even assuming these AGIs are 'merely' "roughly human-equivalent".

faul_sname:
These intelligences would still require power to run. Right now, even 1 trillion computers running at 100 watts would cost somewhere upwards of 50 billion dollars an hour, which is a far cry from "working without pay". Producing these 6 trillion general intelligences you speak of would also be nontrivial. That said, even one "human equivalent" AI could (and almost certainly would) far exceed human capabilities in certain domains. Several of these domains (i.e. self-improvement, energy production, computing power, and finance) would either directly or indirectly allow the AI to improve itself. Others would be impressive, but not particularly dangerous (natural language processing, for example).
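As a quick sanity check on the electricity figure in this comment, here is a rough sketch; the machine count and per-machine wattage are taken from the comment, while the electricity price is an assumption of mine, and the result scales linearly with it.

```python
# Back-of-the-envelope electricity cost for one trillion 100 W machines.
computers = 1e12        # one trillion machines (from the comment above)
watts_each = 100        # assumed draw per machine (from the comment above)
price_per_kwh = 0.50    # assumed electricity price in $/kWh; the "$50B/hour"
                        # figure implies a price around this level

total_kw = computers * watts_each / 1000    # 1e11 kW, i.e. 100 TW
cost_per_hour = total_kw * price_per_kwh    # ~5e10 dollars
print(f"{total_kw:.2e} kW -> about ${cost_per_hour / 1e9:.0f} billion per hour")
```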
Logos01:
Humans spend roughly 10% of their caloric intake on their brains; and Americans spend roughly the same amount as a percentage of their post-tax income. -- so 1% of the pay currently spent on Americans goes towards their cognition, on average. The average American worker also works 46 (out of 168) hours per week. We have no way of knowing the material costs of constructing these devices, nor do we know how energy-efficient they will be compared to modern human brains. Given how much heat energy a brain produces, how far a given brain is from the theoretical limits on computational efficiency and computational density, it's fairly safe to say that the comparative costs of said brains is essentially negligible compared to the average worker today. If we compare the electrical-operational costs as equivalent to the energy costs of a human, then AGIs will have 1% those of a human. And they will work 4x as long -- so that's already a 400:1 ratio of cost per human and cost per AGI for operational budget. Then factor in the absence of travel energy expenditures, the absence of plumbing investment and other human-foible-related elements that machines just don't have -- and the picture painted quickly transitions towards that 1,000 AGIs per person being a "reasonable" number for economic reasons. (Especially since at least a large minority of them will, early on, be applied towards economic expansion purposes.) So certainly, these intelligences would still require power to run. But they'd require vastly less -- for the same economic output -- than would humans. And all that economic output will be bent towards economic goals ... such as generating energy. I don't find this to be a given at all. Brain-emulations would possess, most likely, equivalent capacities to human brains. There is no guarantee that any given AGI will be capable of examining its own code and coming up with better solutions than the people that created it. Nor is there a guarantee that an AGI will be mo
faul_sname:
We can come up with at least a preliminary estimate of cost. The lowest estimate I have seen for the computational power of a brain is 38 pflops. The lowest cost of processing power is currently $1.80/gflops. This puts the cost of a whole-brain emulation at a bit under $70M in the best-case scenario. Assuming Moore's law holds, that number should halve every year. Comparatively speaking, human brains are far more energy-efficient than our computers. The best we have is about 2 gflops/watt, as opposed to at least 3,800,000 gflops/watt (assuming 10 W) for the human brain. So unless there is a truly remarkable decrease (several orders of magnitude) in the cost of computing power, operating the equivalent of a human brain will be costly. I was unclear. I consider brain-emulations to be humans, not AIs. The majority of possible AGIs that are considered to be at the human level will almost certainly have different areas of strength and weakness from humans. In particular, they should be far superior in those areas where our specialized artificial intelligences already exceed human ability (math, chess, Jeopardy, etc.). I did stipulate "human-equivalent" AGI. I am well aware of the possibility that people will augment themselves before AGI comes about. We already do, just not through direct neural interfaces. I'm studying neuroscience with the goal of developing tools to augment intelligence.
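The arithmetic behind these estimates, as a minimal sketch (the 38 pflops brain estimate, the $1.80/gflops price, the 2 gflops/watt efficiency, and the 10 W brain power budget are all the commenter's assumptions, reused here only to check the numbers):

```python
# Reproduce the hardware-cost and energy-efficiency comparison above.
brain_pflops = 38                      # assumed computational power of a brain
brain_gflops = brain_pflops * 1e6      # 3.8e7 gflops

cost_per_gflops = 1.80                 # assumed dollars per gflops
hardware_cost = brain_gflops * cost_per_gflops
print(f"hardware cost: ${hardware_cost / 1e6:.1f} million")   # ~$68M, "a bit under $70M"

computer_gflops_per_watt = 2           # assumed best current hardware efficiency
brain_watts = 10                       # assumed brain power budget
brain_gflops_per_watt = brain_gflops / brain_watts
print(f"brain: {brain_gflops_per_watt:,.0f} gflops/W vs hardware: {computer_gflops_per_watt} gflops/W")

# Running a brain-equivalent at 2 gflops/W would draw brain_gflops / 2 watts:
power_draw_mw = brain_gflops / computer_gflops_per_watt / 1e6  # ~19 MW
print(f"power draw of a brain-equivalent on current hardware: ~{power_draw_mw:.0f} MW")
```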
Logos01:
Verbal sleight of hand: "human-equivalent" includes Karl Childers just as much as it does Sherlock Holmes.

A couple of points here:
1. In 1961 that same cost would have been 38 * ($1.1*10^12) * 10^6 -- ~4.2 million trillion dollars.
2. The cost per gflop is decreasing exponentially, not linearly, unlike what Moore's Law would extrapolate to.
3. Moore's Law hasn't held for several years now regardless. (See: "Moore's Gap".)
4. This all rests on the notion of silicon as a primary substrate. That's just not likely moving forward; a major buzz amongst theoretical computer scientists is "diamondoid substrate" -- by which they mean chemical-vapor-deposited graphene doped with various substances to create a computing substrate that is several orders of magnitude 'better' than silicon, due to several properties including its ability to retain semiconductive status at high temperatures, higher frequencies of operation for its logic gates, and potential transistor density. (Much of the energy cost of modern computers goes into heat dissipation, by the way.)
5. If the cost per gflop continues to trend similarly over the next forty years, and if AGI doesn't become 'practicable' until 2050 (a common projection) -- then the cost per gflop may well be so negligible that the 1000:1 ratio would seem conservative.

Fair enough. I include emulations as a form of AGI, if for no other reason than there being a clear path to the goal.

This does not follow. Fritz -- the 'inheritor' to Deep Blue -- was remarkable not because it was a superior chess player to Deep Blue ... but because of the way in which it was worse. Fritz initially lost to Kasparov, yet was more interesting. Why? What made it so interesting? Fritz had the ability to be fuzzy, unclear, and 'forget'. To 'make mistakes'. And this made it a superior AI implementation than the perfect monolithic number-cruncher. I see this sentiment in people in AGI all the time -- that AG
faul_sname:
My original point was that, based on current trends, AGIs would remain prohibitively expensive to run, as power requirements have not been dropping with Moore's law. The graphene transistors look like they could solve the power requirement problem, so it looks like I was wrong. When I said 'one "human equivalent" AI could (and almost certainly would) far exceed human capabilities in certain domains', I simply meant that it is unlikely that a (nonhuman) human-level AI would possess exactly the same skillset as a human. If it were better than humans at something valuable, it would be a game changer, regardless of it being "only" human-level. This idea seems not to be as clear to readers as it is to me, so let me explain. A human and a pocket calculator are far better at arithmetic than a human alone. Likewise, a human and a notebook are better at memory than an unassisted human. This does not mean notebooks are very good at storing information; it means that people are bad at it. An AI that is as computationally expensive as a human will almost certainly be much better at the things people are phenomenally bad at.
Logos01:
I'm sorry, this is just plain not valid. I've already explained why. An AI that is "as computationally expensive as a human" is no more likely to be "much better at the things people are phenomenally bad at" than is a human. All of the computation that goes on in a human would quite likely need to be replicated by that AGI. And there is simply no guarantee that it would be any better than a human when it comes to how it accesses narrow AI mechanisms (storage methods, calculators, etc., etc..). I really do wish I knew why you folks all always seem to assume this is an inerrant truth of the world. But based on what I have seen -- it's just not very likely at all.
faul_sname:
I'm not sure exactly what part of my statement you disagree with.
1. People are phenomenally bad at some things. A pocket calculator is far better than a human when it comes to performing basic operations on numbers. Unless you believe that a calculator is amazingly good at arithmetic, it stands to reason that humans are phenomenally bad at it.
2. An AGI would be better than people in the areas where humans suck.

I am aware of the many virtues of fuzzy, unclear processes to arrive at answers to complex questions through massively parallel processes. However, there are some processes that are better done through serial, logical processes. I don't see why an AGI wouldn't pick these low-hanging fruits. My reasoning is as follows: please tell me which part is wrong.

I. An emulation (not even talking about nonhuman AGIs at this point) would be able to perform as well as a human with access to a computer with, say, Python.
II. The way humans currently interact with computers is horribly inefficient. We translate our thoughts into a programming language, which we then translate into a series of motor impulses corresponding to keystrokes. We then run the program, which displays the feedback in the form of pixels of different brightness, which are translated by our visual cortex into shapes, which we then process for meaning.
III. There exist more efficient methods that, at a minimum, could bypass the barriers of typing speed and visual processing speed. (I suspect this is the part you disagree with.)

What have you seen that makes you think AGIs with some superior skills to humans won't exist?
Logos01:
Human-equivalent AGIs. That's a vital element, here. There's no reason to expect that the AGIs in question would be better-able to achieve output in most -- if not all -- areas. There is this ingrained assumption in people that AGIs would be able to interface with devices more directly -- but that just isn't exactly likely. Even if they do possess such interfaces, at the very least the early examples of such devices are quite likely to only be barely adequate to the task of being called "human-equivalent". Karl Childers rather than Sherlock Holmes.
faul_sname:
I said some, not most or all. I expect there to be relatively few of these areas, but large superiority in some particular minor skills can allow for drastically different results. It doesn't take general superiority. There is a reason we have this assumption. Do you think that translating our thoughts into motor nerve impulses that operate a keyboard and processing the output of the system through our visual cortex before assigning meaning is the most efficient system? Why is a superior interface unlikely?
lessdazed:
Humans can improve their interfacing with computers too... though we will likely interact more awkwardly than AGIs will be able to. From The Onion, my favorite prediction of man-machine interface.
faul_sname:
Is that "Humans can also improve their interfacing with computers" or "Humans can improve their interfacing with computers as well as AGI could"?
lessdazed:
Edited.
Logos01:
Because it will also require translation from one vehicle to another. The output of the original program will require translation into something other than logging output. Language, and the processes that formulate it, do not happen much more quickly than the act of speaking itself. And we have plenty of programs out there that translate speech into text. Shorthand typists are able to keep up with multiple conversations, in real time, no less. And, as I have also said, early AGIs are likely to be idiots, not geniuses. (If for no other reason than the fact that Whole Brain Emulations are likely to require far more time per neuronal event than a real human does. I have justification in this belief; that's how neuron simulations currently operate.)
faul_sname:
Even if this is unavoidable, I find it highly unlikely that we are at or near maximum transmission speed for that information, particularly on the typing/speaking side of things. Yes. Early AGIs may well be fairly useless, even with the processing power of a chimpanzee brain. Around the time it is considered "human equivalent", however, a given AGI is quite likely to be far more formidable than an average human.
Logos01:
I strongly disagree, and I have given reasons why this is so.
faul_sname:
Basically what you are saying is that any AGI will be functionally identical to a human. I strongly disagree, and find your given reasons fall far short of convincing me.
Logos01:
No. What I have said is that "human-equivalent AGI is not especially likely to be better at any given function than a human is likely to." This is nearly tautological. I have explained that the various tasks you've mentioned already have methodologies which allow for the function to be performed at nearly- or -equal-to- realtime speeds. There is this deep myth that AGIs will automatically -- necessarily -- be "hooked into" databases or have their thoughts recorded into terminals which will be able to be directly integrated with programs, and so on. That is a myth. Could those things be done? Certainly. But is it guaranteed? By no means. As the example of Fritz shows -- there is just no justification for this belief that merely because it's in a computer it will automatically have access to all of these resources we traditionally ascribe to computers. That's like saying that because a word-processor is on a computer it should be able to beat video games. It just doesn't follow. So whether you're convinced or not, I really don't especially care at this point. I have given reasons -- plural -- for my position, and you have not justified yours at all. So far as I can tell, you have allowed a myth to get itself cached into your thoughts and are simply refusing to dislodge it.
0faul_sname12y
This is nowhere near tautological, unless you define "human-level AGI" as "AGI that has roughly equivalent ability to humans in all domains" in which case the distinction is useless, as it basically specifies humans and possibly whole brain emulations, and the tiny, tiny fraction of nonhuman AGIs that are effectively human. Integration is not a binary state of direct or indirect. A pocket calculator is a more direct interface than a system where you mail in a query and receive the result in 4-6 weeks, despite the overall result being the same. I don't hold that belief, and if that's what you were arguing against, you are correct to oppose it. I think humans have access to the same resources, but the access is less direct. A gain in speed can lead to a gain in productivity.

There are two kinds of people here: those who think that an intelligence explosion is unlikely, and those who think it is uncontrollable.

I think it is likely AND controllable.

From which we can infer that you aren't here.

0Thomas12y
Is there a fourth kind also? Those who think the IE is unlikely but controllable?
0TheOtherDave12y
I suspect that such people would not be terribly motivated to post about the IE in the first place, so available evidence is consistent with both their presence and their absence.
1Normal_Anomaly12y
But it's weak evidence of their absence, because them posting would be strong evidence of their presence.
0TheOtherDave12y
(nods) Certainly. But weak enough to be negligible compared to most people's likely priors. I sometimes feel like we should simply have a macro that expands to this comment, its parent, and its grandparent.
0Normal_Anomaly12y
I'm not sure what you mean here. Is it something like, ? The closest thing we currently have is linking to the Absence of Evidence post.
0TheOtherDave12y
Something like that, but more: "the evidence is consistent with X and ~X, but favors X very weakly (because absence of evidence is evidence of absence), though sufficiently weakly that the posterior probability of X is roughly equal to the prior probability of X." But I was mostly joking.
0amcknight12y
I think it's likely, controllable, but unlikely to be controlled. That means I'm in your faction and I would bet we're in the largest one.
0Thomas12y
You are right. An additional bit is needed for this description. How likely is it that it will be controlled? Agreed, not that likely.

On this page I will collect criticisms of (1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization. Please suggest your own, citing the original source when possible.

Wouldn't this make more sense on the wiki?

6shminux12y
People are more likely to reply to a post than to edit the wiki. Presumably once this post thread settles, its content will be adapted into a wiki page.
2lukeprog12y
Yes.

Another criticism that's worth mentioning is an observational rather than a modeling issue: if AI were a major part of the Great Filter, then we'd see it in the sky once AGIs started to control the space around them at a substantial fraction of c. This should discount the probability of AGIs undergoing intelligence explosions. How much it should do so is a function of how much of the Great Filter one thinks is behind us and how much one thinks is in front of us.

0Douglas_Knight12y
No, that's backwards. If something takes over space at c, we never see it. The slower it expands, the easier it is to see, so the more our failure to observe it is evidence that it doesn't exist.
1JoshuaZ12y
In the hypothetical, it is expanding at a decent fraction of c, not at c. In order for us to see it, it needs to expand at a decent fraction of c. For example, suppose it expands at 1 meter/s. That's fast enough to easily wipe out a planet before you can run away effectively, but how long will it take before it has a noticeable effect on even the nearest star? Well, if the planet is the same distance from its sun as the Earth (8 light-minutes), it would take around 8*60*3*10^8 seconds, or roughly 4,500 years, just to cover that distance. So we'd only notice it if we saw something odd about that one star, and at that speed it will never reach the next star on any relevant timescale. The most easily noticeable things are things that travel at a decent fraction of c: fast enough for us to notice, but not so fast that it's impossible for us to notice before we get wiped out. AGIs expanding at a decent fraction of c would fall into that category. If something does expand at c, you are correct that we won't notice.
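To make the arithmetic in this comment easy to check, here is a quick back-of-the-envelope script (my own addition, not part of the original comment; the 1 m/s speed and the 8-light-minute distance come from the text above, the rest are standard constants). It gives roughly 4,600 years to cover one Earth-Sun distance at 1 m/s, and over a billion years to reach the nearest star.

```python
# Sanity check of the expansion-time arithmetic above. All figures are rough.
C = 2.998e8              # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

def years_to_cover(distance_m, speed_m_per_s):
    """Years needed to cover a distance at a constant expansion speed."""
    return distance_m / speed_m_per_s / SECONDS_PER_YEAR

eight_light_minutes = 8 * 60 * C       # roughly the Earth-Sun distance, in metres
nearest_star = 4.25 * 9.46e15          # ~4.25 light-years, in metres

print(years_to_cover(eight_light_minutes, 1.0))  # ~4.6e3 years at 1 m/s
print(years_to_cover(nearest_star, 1.0))         # ~1.3e9 years at 1 m/s
```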
1Douglas_Knight12y
Something that expands at a fixed 1 m/s in all three settings (on a planet, within a solar system, and between stars) qualifies as an artificial stupidity. Something that expands at 0.1c can be observed, but it carries a heavy anthropic penalty: we should not be surprised not to see it.
0JoshuaZ12y
We don't have a good idea how quickly something can expand between stars. The gaps between stars are big, and launching things fast is tough. The fastest we've ever launched something is the Helios probes, which at maximum velocity were a little over 0.0002c. I agree that 1 m/s would probably be artificial stupidity. There's clearly a sweet range here. If, for example, your AI expanded at 0.01c, then it won't ever reach us in time if it started in another galaxy. Even your example of 0.1c (which is an extremely fast rate of expansion) means that one has to believe that most of the Filtration is not occurring due to AI. If AI is the general filter and it is expanding at 0.1c, then we would have to live in an extremely rare lightcone for us not to see any sign of it. This argument is of course weak (and nearly useless) if one thinks that the vast majority of filtration is behind us. But either way, it strongly suggests that most of the Filter is not fast-expanding AI.
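A companion calculation for the speeds quoted in this comment (again my own sketch; the 0.0002c, 0.01c, and 0.1c figures come from the thread, the distances are standard round numbers):

```python
# Crossing times at the expansion speeds discussed above, in years.
def years_to_cross(distance_light_years, fraction_of_c):
    """At a constant fraction of c, time in years is distance in light-years / that fraction."""
    return distance_light_years / fraction_of_c

print(years_to_cross(4.25, 0.0002))       # nearest star at a Helios-like 0.0002c: ~21,000 yr
print(years_to_cross(100_000, 0.1))       # across the Milky Way at 0.1c: ~1,000,000 yr
print(years_to_cross(2_500_000, 0.01))    # from the Andromeda galaxy at 0.01c: ~250,000,000 yr
```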
0Douglas_Knight12y
Yes, if things expanding at 0.1c are common, then we should see galaxies containing them, but would we notice them? Would the galaxy look unnatural from this distance? Not directly relevant, but I'm not sure how you're using filtration. I use it in a Fermi paradox sense: a filter is something that explains the failure to expand. An expanding filter is thus nonsense. I suppose you could use it in a doomsday argument sense - "Where does my reference class end?" - but I don't think that is usual.
1JoshuaZ12y
This would depend on what exactly they are doing to those galaxies. If they are doing stellar engineering (e.g. making Dyson spheres, Matrioshka brains, or doing stellar lifting), then we'd probably notice it in any nearby galaxy. But conceivably something might try to deliberately hide its activity. Yes, I think I'm using it in a form closer to the second. In the context of the first sense, with regard solely to the Fermi problem, AGI is simply not a filter at all, which if anything makes the original point stronger.

You are missing a line of argument which trumps all of these lines of criticism: the intelligence explosion is already upon us. Creating a modern microprocessor chip is a stupendously complex computational task that is far, far beyond the capabilities of any number of un-amplified humans, no matter how intelligent. There was a time when chips were simple enough that a single human could do all of this intellectual work, but we are already decades past that point.

Today new chips are created by modern day super-intelligences (corporations) which in turn ar... (read more)

Computational complexity may place strong limits on how much recursive self-improvement can occur, especially in a software context. See e.g. this prior discussion and this ongoing one. In particular, if P is not equal to NP in a strong sense this may place serious limits on software improvement.

7Vladimir_Nesov12y
Why oh why do you still believe this? In my mind, this is strongly analogous to pointing out that there are physical limits on how intelligent an AI can get, which is true, but for all practical purposes irrelevant, since these limits are way above what humans can do, given our state of knowledge. This would only make sense if we see a specific reason that all algorithms can't exhibit superintelligent competence in the real world (as opposed to ability to solve randomly generated standard-form problems whose complexity can be analyzed by human mathematicians), but we don't understand intelligence nearly enough to carry out such inferences.
-2JoshuaZ12y
This is not a good analogy at all. The probable scale of difference is what matters here. In this sort of context, we're extremely far from physical limitations mattering, as one can see, for example, from the fact that Koomey's law can continue for about forty years before hitting physical limits. (It will likely break down before then, but that's not the point.) In contrast, the limits suggested by computational complexity are in some respects stricter, though weaker in other respects. The conjectured limits from, for example, strong versions of the exponential time hypothesis place much more severe restrictions on what can occur. It is important to note that these sorts of limits are relevant primarily in the context of a software-only, or primarily software-only, recursive self-improvement. For essentially the reasons you outline (the large amount of apparent room for physical improvement), it seems likely that this will not matter much for an AGI that has much ability to discover or construct new physical systems. (This does imply some limits even in that case, but they are likely to be comparatively weak.)

The main problems that I see are the ones Eliezer described at the Singularity Summit: there are problems regarding AGI that we don't know how to solve even in principle (I'm not sure if this applies to AGI in general or only to Friendly AGI). So it might well be that we never solve these problems.

The most difficult part will be to ensure the friendliness of the AI. The biggest danger is someone else carelessly making an AGI that is not friendly.

I have a reason to believe it has less than a 50% chance of being possible. Does that count?

I figure that after the low-hanging fruit is taken care of, it simply becomes a question of whether a unit of additional intelligence is enough to add another additional unit of intelligence. If the feedback constant is less than one, intelligence growth stops. If it is greater, the intelligence grows until the constant falls below one. It will vary somewhat with intelligence, and it would have to fall below one eventually. We have no way of knowing what the feedback constant is, so we... (read more)
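A toy version of this argument, to make the role of the feedback constant explicit (my own sketch; the starting boost and the values of k are illustrative numbers, not claims about real systems):

```python
# Toy model: each round of self-improvement yields k times the previous round's gain.
def total_gain(initial_boost, k, rounds=1000):
    gain, step = 0.0, initial_boost
    for _ in range(rounds):
        gain += step
        step *= k           # the feedback constant k scales the next round's improvement
    return gain

print(total_gain(10, 0.5))  # k < 1: converges to 2 * initial_boost = 20
print(total_gain(10, 1.0))  # k = 1: grows linearly with the number of rounds (10,000 here)
print(total_gain(10, 1.1))  # k > 1: grows without bound (astronomically large after 1000 rounds)
```

In this toy model everything hinges on whether k stays above or below one, which is exactly the question the comment raises.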

4jsteinhardt12y
Not being able to determine what the constant is doesn't mean that there is a 50-50 chance that it is larger than 1. In particular, what in your logic prevents one from also concluding that there is a 50-50 chance of it being larger than 2?
0DanielLC12y
It can't be less than zero. From what I understand about priors, the maximum entropy prior would be a logarithmic prior. A more reasonable prior would be a log-normal prior with the mean at 1 and a high standard deviation.
0jsteinhardt12y
By logarithmic do you mean p(x) = exp(-x)? That would only have an entropy of 1, I believe, whereas one can easily obtain unboundedly large amounts of entropy, or even infinite entropy (for instance, p(x) = a exp(-a x) has entropy 1-log(a), so letting a go to zero yields arbitrarily large entropy). Also, as I've noted before, entropy doesn't make that much sense for continuous distributions.
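For reference, the differential-entropy figure quoted here checks out; a short derivation (my own addition) for p(x) = a e^{-ax} on x > 0:

```latex
h(p) = -\int_0^\infty a e^{-a x}\,\ln\!\left(a e^{-a x}\right)dx
     = -\ln a \int_0^\infty a e^{-a x}\,dx + a \int_0^\infty x\, a e^{-a x}\,dx
     = -\ln a + a \cdot \frac{1}{a}
     = 1 - \ln a .
```

As a goes to zero, -ln a grows without bound, which is the "arbitrarily large entropy" point.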
0DanielLC12y
I mean p(x) = 1/x. I think it's the Jeffreys prior or something. Anyway, it seems like a good prior. It doesn't have any arbitrary constants in it like you'd need with p(x) = exp(-x). If you change the scale, the prior stays the same.
0jsteinhardt12y
p(x) = 1/x isn't an integrable function (diverges at both 0 and infinity). (My real objection is more that it's pretty unlikely that we really have so little information that we have to quibble about which prior to use. It's also good to be aware of the mathematical difficulties inherent in trying to be an "objective Bayesian", but the real problem is that it's not very helpful for making more accurate empirical predictions.)
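To spell out the two properties being debated in this exchange (my own addition): the 1/x density is scale-invariant, which is what makes it attractive as an "uninformative" prior, but it cannot be normalized, which is the objection here:

```latex
\text{scale invariance: } \frac{d(cx)}{cx} = \frac{dx}{x} \quad \text{for any } c > 0, \qquad
\text{non-normalizability: } \int_\epsilon^1 \frac{dx}{x} = -\ln\epsilon \to \infty
\ \text{ and } \
\int_1^M \frac{dx}{x} = \ln M \to \infty .
```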
0DanielLC12y
Which is why I said a log-normal prior would be more reasonable. How much information do we have? We know that we haven't managed to build an AI in 40 years, and that's about it. We probably have enough information if we can process it right, but because we don't know how, we're best off sticking close to the prior.
0jsteinhardt12y
Why a log-normal prior with mu = 0? Why not some other value for the location parameter? Log-normal makes pretty strong assumptions, which aren't justified if, for all practical purposes, we have no information about the feedback constant. We may have little specific information about AIs, but we have tons of information about feedback laws, and some information about self-improving systems in general*. I agree that it can be tricky to convert this information to a probability, but that just seems to be an argument against using probabilities in general. Whatever makes it hard to arrive at a good posterior should also make it hard to arrive at a good prior. (I'm being slightly vague here for the purpose of exposition. I can make these statements more precise if you prefer.) (* See for instance the Yudkowsky-Hanson AI Foom Debate.)
0lessdazed12y
You should distinguish between exponential and linear growth. First, the feedback constant is different for every level of intelligence. Whenever the constant is greater than one, the machine is limited by the speed of making the transformations involved; its growth is not well characterized as limited by its intelligence and should instead be thought of as limited by its resources. Whenever the constant is less than one and greater than zero, intelligence growth is only linear, but it is not zero. If the constant remains low enough for long enough, then whole periods of time (series of iterations) that include some stretches where the constant is above one can still show sub-exponential growth overall. The relationship between the AI's growth rate and our assisted intelligence growth rate (including FAIs, paper and pen, computers, etc.) is most of what is important, with the tie-breaker being our starting resources. An AI with fast linear growth between patches of exponential growth, or even one with only fast linear growth, would quickly outclass human thinking.
3DanielLC12y
I meant to mention that, but I didn't. It looks like I didn't so much forget as write an answer so garbled you can't really tell what I'm trying to say. I'll fix that. Anyway, the constant will move around as the intelligence changes, but I figure it would be far enough from one that it won't cross it for a while. Either the intelligence is sufficiently advanced before the constant goes below one, or there's no way you'd ever be able to get something intelligent enough to recursively self-improve. No, the growth isn't linear: it goes to zero, or at least the intelligence approaches an asymptote. If each additional IQ point allows you to work out how to grant yourself half an IQ point, you'll only ever get twice as many extra IQ points as you started with. Having extra time will be somewhat helpful, but this is limited. If you get extra time, you'd be able to accomplish harder problems, but you won't be able to accomplish all problems. This will mean that the long-term feedback constant is somewhat higher, but if it's nowhere near one to begin with, that won't matter much.
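The "twice as many extra IQ points" figure is just the geometric series (my gloss, not the commenter's wording): an initial boost of E points with feedback constant k < 1 yields a total of

```latex
E \sum_{n=0}^{\infty} k^{\,n} = \frac{E}{1-k}, \qquad \text{which for } k = \tfrac{1}{2} \text{ gives } 2E .
```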
1lessdazed12y
Were you using "feedback constant" to mean the second derivative of intelligence, and assuming each increase in intelligence will be more difficult than the previous one (accounting for size difference)? I took "feedback constant" to mean the first derivative. I shouldn't have used an existing term and should have said what I meant directly.
1DanielLC12y
I used "feedback constant" to mean the amount of intelligence an additional unit of intelligence would allow you to bring (before using the additional unit of intelligence). For example, if at an IQ of 1000, you can design a brain with an IQ of 1010, but with an IQ of 1001, you can design a brain with an IQ of 10012, the feedback constant is two. It's the first derivative of the most intelligent brain you can design in terms of your own intelligence. Looking at it again, it seems that the feedback constant and whether or not we are capable of designing better brains aren't completely tied together. It may be that someone with an IQ of 100 can design a brain with an IQ of 10, and someone with an IQ of 101 can design a brain with an IQ of 12, so the feedback constant is two, but you can't get enough intelligence in the first place. Similarly, the feedback constant could be less than one, but we could nonetheless be able to make brains more intelligent than us, just without an intelligence explosion. I'm not sure how much the two correlate.

This could be its own web site.