I'm not sure an intelligence explosion can happen without significant speed or computational power improvements.
I guess it boils down to what happens if you let a human-level intelligence self-modify without modifying the hardware (i.e. how heavily optimised human intelligence already is). So far the ratio of results to computational power used has been significantly in favor of humans compared to AI, but the latter is improving fast, and an AI doesn't need to be as versatile as a human. Is there any work on the limits of optimisation for intelligence?
This may look like a nitpick, since hardware capacity is increasing steadily and will soon exceed that of the human brain, but it would be a lot easier to prevent an intelligence explosion by putting a limit on computational power.
It's unclear, but in narrow AI we've seen software get smarter even in cases where the hardware is kept constant, or even made worse. For example, the top chess engine of 2014 beats a top engine from 2006, even when you give the 2014 engine 2% the computing power of the 2006 engine. That would seem to suggest that an intelligence explosion without hardware improvements might be possible, at least in principle.
In practice I would expect an intelligence explosion to lead to hardware improvements as well, though. No reason for the AI to constrain itself just to the software side.
Arguments for risks from general AI are sometimes criticized on the grounds that they rely on a linear chain of events, each of which has to occur for the proposed scenario to go through. For example, that a sufficiently intelligent AI could escape from containment, that it could then go on to become powerful enough to take over the world, that it could do this quickly enough to avoid detection, and so on.
The intent of this series of posts is to briefly demonstrate that AI risk scenarios are in fact disjunctive: composed of multiple possible pathways, any one of which could be sufficient by itself. To successfully control AI systems, it is not enough to block just one of the pathways: all of them need to be dealt with.
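To make the difference concrete, here is a minimal sketch with made-up probabilities, purely to illustrate the arithmetic: a conjunctive scenario needs every step in the chain to succeed, so the probabilities multiply down, whereas a disjunctive scenario fails only if every pathway is blocked.

```python
# Hypothetical numbers, only to illustrate conjunctive vs. disjunctive risk.

step_probs = [0.5, 0.5, 0.5]      # each step in a single chain of events
pathway_probs = [0.5, 0.5, 0.5]   # each of several independent pathways

# Conjunctive: every step must occur, so multiply the step probabilities.
p_conjunctive = 1.0
for p in step_probs:
    p_conjunctive *= p             # 0.5 * 0.5 * 0.5 = 0.125

# Disjunctive: the scenario goes through unless *every* pathway is blocked.
p_all_blocked = 1.0
for p in pathway_probs:
    p_all_blocked *= (1 - p)
p_disjunctive = 1 - p_all_blocked  # 1 - 0.125 = 0.875

print(p_conjunctive, p_disjunctive)
```

With the same per-item numbers, blocking one step collapses the conjunctive scenario, while blocking one pathway barely dents the disjunctive one.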
I've got two posts in this series up so far:
AIs gaining a decisive advantage discusses four different ways by which AIs could achieve a decisive advantage over humanity. The one-picture version is:
AIs gaining the power to act autonomously discusses ways by which AIs might come to act as active agents in the world, despite possible confinement efforts or technology. The one-picture version (which you may wish to click to enlarge) is:
These posts draw heavily on my old paper, Responses to Catastrophic AGI Risk, as well as some recent conversations here on LW. Upcoming posts will try to cover more new ground.