
Eliezer_Yudkowsky comments on Recursive Self-Improvement

Post author: Eliezer_Yudkowsky, 01 December 2008 08:49PM

Comment author: Eliezer_Yudkowsky, 02 December 2008 06:52:44PM, 3 points

John: Given any universe whose physics even resembles our current Standard Model in character, there will be limits to what you can do on fixed hardware, and limits to how much hardware you can create in finite time.

But if those limits are far, far, far above the world we think of as normal, then I would consider the AI-go-FOOM prediction to have been confirmed. I.e., if an AI builds its own nanotech and runs off to disassemble Proxima Centauri, that is not infinite power, but it is a whole lot of power and worthy of the name "superintelligence".