
derekz2 comments on Recursive Self-Improvement - Less Wrong

Post author: Eliezer_Yudkowsky, 01 December 2008 08:49PM



Comment author: derekz2, 02 December 2008 03:46:23AM, 2 points

From a practical point of view, a "hard takeoff" would seem to be defined by self-improvement and expansion of control at a rate too fast for humans to cope with. For example, it is often put forward as obvious that the AI would invent molecular nanotechnology in a matter of hours.

Yet there is no reason to think it is even possible to improve molecular simulation (which would be required to search molecular process-space) much beyond our current algorithms, and on any near-term hardware those algorithms are nowhere near up to the task. The only explanation is that you are hypothesizing rather incredible increases in abilities such as this, without any reason to think they are possible.
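The claim about simulation cost can be made concrete with a back-of-the-envelope calculation. The numbers below (atom count, FLOPs per pair interaction, hardware speed) are my own illustrative assumptions, not from the comment; they sketch why naive all-pairs molecular dynamics, the brute-force baseline for exploring molecular process-space, is far beyond near-term hardware.

```python
# Back-of-the-envelope sketch with illustrative (assumed) numbers:
# cost of naive all-pairs force evaluation in molecular dynamics.
# Naive MD evaluates every atom pair at every timestep, so each
# step costs O(n^2) pair interactions.

def md_flops(n_atoms, sim_time_ns, flops_per_pair=50, timestep_fs=1.0):
    """Rough floating-point-operation count for a naive MD simulation."""
    n_steps = sim_time_ns * 1e6 / timestep_fs       # 1 ns = 1e6 fs
    pairs_per_step = n_atoms * (n_atoms - 1) / 2    # all unordered pairs
    return n_steps * pairs_per_step * flops_per_pair

# A modest protein-sized system for one microsecond of simulated time:
flops = md_flops(n_atoms=100_000, sim_time_ns=1000)
years_at_1_tflops = flops / 1e12 / (3600 * 24 * 365)
print(f"{flops:.2e} FLOPs, roughly {years_at_1_tflops:.0f} years at 1 TFLOP/s")
```

Even at a (2008-era, high-end) teraflop, a single microsecond-scale trajectory of one candidate system takes years; searching a large space of molecular designs multiplies that by the number of candidates, which is the gap the comment is pointing at.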

It's this sort of leap that makes the scenario difficult to believe. Too many miracles seem necessary.

Personally I can entertain the possibility of a "takeoff" (though it is no sure thing that one is possible), but the level of optimization required for a hard takeoff seems unreasonable. Even compiling a large software project, a comparatively trivial transformation, is a lengthy process. There are limits to what a particular computer can do.