
Aron comments on Hard Takeoff - Less Wrong

Post author: Eliezer_Yudkowsky, 02 December 2008 08:44PM (14 points)


Comment author: Aron, 03 December 2008 04:27:44AM (0 points)

What could an AI do, yet still be unable to self-optimize? Quite a bit, it turns out: at a minimum, everything a modern human can do, and possibly a great deal more, since *we* have yet to demonstrate that we can engineer intelligence. (I admit that engineering intelligence may turn out to be college-level material once it is discovered.)

If we define the singularity as the wall beyond which events are unpredictable, I think we can have an effective singularity without FOOM. This follows from admitting that we could have computers superior to us in every way without those computers ever achieving recursive self-modification. Such machines would still have all the attendant advantages: limitless hardware, replicability, perfect and expansive memory, deep serial computation, rationality by design, limitless external sensors, and so on.

*If* it is useless to predict past the singularity, and *if* FOOM is unlikely to occur prior to the singularity, does this make the pursuit of friendliness irrelevant? Do we have to postulate FOOM = singularity in order to justify friendliness?