http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/

Briefly surveys the main bottlenecks that have been proposed for an intelligence explosion, and argues that none of them is likely to be a major one:

  1. Economic growth rate
  2. Investment availability
  3. Gathering of empirical information (experimentation, interacting with an environment)
  4. Software complexity
  5. Hardware demands vs. available hardware
  6. Bandwidth
  7. Lightspeed lags
7 comments

Minor note: they also discuss the idea that the human brain uses exotic computation (they correctly don't spend much time on this objection).

They don't spend enough time addressing software complexity issues at all. In particular, if the complexity hierarchy strongly fails to collapse (that is, P, NP, co-NP, EXP, and PSPACE are all distinct) and hardware design requires difficult computation, then improvements in hardware will likely yield diminishing marginal returns when applied to designing the next generation of hardware. The latter premise seems plausible, since graph coloring, an NP-complete problem, shows up in memory optimization, while the traveling salesman problem, also NP-complete, shows up in circuit design. Earlier discussions I've had here with cousin_it (e.g. see this) make me less inclined to think that this is as bad a barrier as I previously thought, but simply saying that it will be handled by hardware improvements (which is mainly what they do in this article) seems insufficient.
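To make the combinatorial flavour of that claim concrete, here is a minimal sketch (the interference graph, the brute-force solver, and the greedy heuristic are all illustrative toys, not anything from the article): register allocation can be cast as graph coloring, so finding the true optimum takes exponential time in the worst case, while a fast heuristic only approximates it.

```python
import itertools

# Toy register-allocation instance: variables are nodes, and an edge means two
# variables are live at the same time and so cannot share a register.
# Deciding whether k registers suffice is exactly graph k-coloring (NP-complete).
interference = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def exact_min_registers(graph):
    """Brute-force the optimal coloring; time grows exponentially with graph size."""
    nodes = sorted(graph)
    for k in range(1, len(nodes) + 1):
        for assignment in itertools.product(range(k), repeat=len(nodes)):
            coloring = dict(zip(nodes, assignment))
            if all(coloring[u] != coloring[v] for u in graph for v in graph[u]):
                return k, coloring

def greedy_registers(graph):
    """Fast heuristic (highest-degree first); may use more registers than necessary."""
    coloring = {}
    for node in sorted(graph, key=lambda n: -len(graph[n])):
        taken = {coloring[nbr] for nbr in graph[node] if nbr in coloring}
        coloring[node] = next(c for c in itertools.count() if c not in taken)
    return max(coloring.values()) + 1, coloring

print(exact_min_registers(interference))  # optimal, exponential time
print(greedy_registers(interference))     # approximate, polynomial time
```

If the hierarchy does not collapse, the exact route stays exponential no matter how much hardware you throw at it, which is the sense in which faster chips alone would not dissolve the design problem.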

[-]XiXiDu-30

simply saying that it will be handled by hardware improvements (which is mainly what they do in this article) seems insufficient.

Their presumption seems to be an algorithmic human-level artificial general intelligence that can run on a digital computer without diminishing returns and can handle the complexity of its own design parameters. You can't argue with that, because they simply assume all the necessary presuppositions.

What is still questionable, in my opinion, is whether any level of intelligence would be capable of explosive recursive self-improvement, since it has to use its own intelligence to become more intelligent, which is by definition the same problem we currently face in inventing superhuman intelligence. Sure, clock speed is really the killer argument here, but to increase clock speed it can only use its current intelligence, just as we humans have to use our intelligence to increase clock speeds, and we don't call that an explosion. Why are they so sure that increasing the amount of available subjective time significantly accelerates the discovery rate? They mention micro-experiments and nanotechnology, but it would have to invent those as well, and without micro-experiments and nanotechnology to help it do so. Merely thinking about it might help it read the available literature faster, but it will not produce new data, which requires real-world feedback and a huge amount of dumb luck. Humans can do all of this as well, using the same technology in combination with expert systems, which again diminishes the relative acceleration offered by an AGI.

To talk about an intelligence explosion, one has to know what one means by “intelligence” as well as by “explosion”. So it’s worth reflecting that there are currently no measures of general intelligence that are precise, objectively defined and broadly extensible beyond the human scope. However, since “intelligence explosion” is a qualitative concept, we believe the commonsense qualitative understanding of intelligence suffices.

Some people probably stopped reading after that. Intelligence might very well depend upon the noise of the human brain. A lot of progress is due to luck, in the form of the discovery of unknown unknowns. Intelligence is a goal-oriented evolutionary process equipped with a memory; it is evolutionary insofar as it still needs to stumble upon novelty. Intelligence is not a meta-solution but an efficient searchlight that helps to discover unknown unknowns. It is also a tool that can efficiently exploit previous discoveries, combining and permuting them. But claiming that you just have to be sufficiently intelligent to solve a given problem implies that it is more than that, and I don't see that. I think that if something crucial is missing, something you don't even know is missing, you will have to discover it first; you cannot invent it by the sheer power of intelligence. And here the noisiness and patchwork architecture of the human brain might play a significant role, because it allows us to become distracted, to follow routes that no rational, perfectly Bayesian agent would take because no prior evidence exists to do so. The complexity of human values might very well be a key feature of our success. There is no evidence that intelligence is fathomable as a solution that can be applied to itself effectively.

[-]sark40

You expect that noisy 'non-Bayesian' exploration will yield greater success. If you are correct, then this is what the perfect Bayesian would expect as well. You seem to be thinking that a 'rational' agent needs some rationale or justification for pursuing a given path of exploration, and that this requirement might lead it astray. Well, if it does that, it's just stupid, and not a perfect Bayesian.

I don't think you managed to establish that a perfect Bayesian would do worse than a human. But I think you hit upon an important point: it is quite possible for the solutions in the search space to be so sparse that no process whatsoever can reliably hit them and yield consistent recursive self-improvement.

So, one possible bottleneck they missed:

Sparsity of solutions in the search space
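To put a rough number on what that sparsity would mean, here is a back-of-the-envelope sketch (the blind-search model and the densities are illustrative assumptions, not anything from the thread): if candidate self-modifications carry no structure a searcher can exploit and only a fraction p of them are improvements, then any search process degenerates to sampling, and the expected number of evaluations per improvement grows like 1/p.

```python
import random

def draws_per_improvement(solution_fraction, trials=5000, seed=0):
    """Monte Carlo estimate of how many blind draws it takes to hit an
    improvement when only solution_fraction of the search space qualifies."""
    rng = random.Random(seed)
    total_draws = 0
    for _ in range(trials):
        draws = 1
        while rng.random() > solution_fraction:
            draws += 1
        total_draws += draws
    return total_draws / trials

# As improvements get sparser, the cost per improvement scales like 1/p.
for p in (1e-1, 1e-2, 1e-3):
    print(f"solution density {p:g}: ~{draws_per_improvement(p):.0f} evaluations per improvement")
```

Whether real design spaces are that unstructured is exactly the open question, but this is the regime in which no amount of cleverness buys consistent self-improvement.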

[-]XiXiDu-20

If you are correct, then this is what the perfect Bayesian would expect as well.

I tried to say that being irrational aids discovery. If Bayesian equals winning, then you are correct. Here is another example. If everyone were perfectly rational, then a lot of explorations that unexpectedly yielded new insights would never have happened. Saying that a perfect Bayesian would have expected this sounds like hindsight bias to me.

[-]sark20

Yes, the way we define 'perfect Bayesian' is unfair, but is this really a problem?

I tried to say that being irrational aids discovery.

If discovery contributes to utility, then our Bayesian (expected utility maximizer) will take note of this (see the sketch at the end of this comment).

Here is another example.

You are here relying on a definition of rational which excludes being good at coordination problems.

this sounds like hindsight bias to me

Au contraire, I think our pride in our 'irrationality' is where the hindsight bias is! Like you said, we got lucky. That would be fine if our 'luck' were of the consistent type, but in all likelihood the way we have exposed ourselves to serendipity was suboptimal.

It's entirely possible for our Bayesian to lose to you. It's just improbable.
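To make the 'will take note of this' point concrete, here is a minimal sketch (Thompson sampling on a made-up two-armed bandit; the payoff numbers and parameters are illustrative, not anything from the thread) of a Bayesian expected-utility agent whose exploration falls out of its posterior uncertainty rather than out of injected irrationality:

```python
import random

def thompson_bandit(true_payoffs, steps=2000, seed=1):
    """Beta-Bernoulli Thompson sampling: keep a posterior over each option's
    payoff and pick the option that looks best under one posterior draw.
    Exploration emerges from uncertainty, not from added noise."""
    rng = random.Random(seed)
    successes = [1] * len(true_payoffs)  # Beta(1, 1) uniform priors
    failures = [1] * len(true_payoffs)
    reward = 0
    for _ in range(steps):
        samples = [rng.betavariate(successes[i], failures[i])
                   for i in range(len(true_payoffs))]
        arm = samples.index(max(samples))
        win = rng.random() < true_payoffs[arm]
        reward += win
        successes[arm] += win
        failures[arm] += not win
    return reward / steps

# The second option pays slightly better; the agent discovers this because its
# posterior keeps that option worth sampling until the evidence settles.
print(thompson_bandit([0.45, 0.55]))
```

The agent keeps sampling the apparently worse option exactly as long as its posterior says that option might still pay off, which is the Bayesian counterpart of the lucky detours discussed above.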

I once tried to put all possible bottlenecks into a single short story; although there are more objections, I think I captured all of the above.