
michael_vassar3 comments on Hard Takeoff - Less Wrong

14 points | Post author: Eliezer_Yudkowsky | 02 December 2008 08:44PM





Comment author: michael_vassar3 | 03 December 2008 05:52:23AM | 1 point

Phil: It seems to me that the qualitative analysis above is sufficient to strongly suggest that six months is an implausibly long high-end estimate for the time a takeoff would require, but even if takeoff took six months I still wouldn't expect humans to be able to react. The AGI would probably be able to remain hidden until it was in a position to create a singleton extremely suddenly.

Aron: It's rational to plan for the most dangerous survivable situations. However, it doesn't really make sense to claim that we can build computers superior to ourselves yet incapable of improving themselves, since making them superior to us blatantly involves improving them. That said, yes, it is possible that some other path to the singularity could produce transhuman minds that can't quickly self-improve and which we can't quickly improve, for instance drug-enhanced humans; in that case, hopefully those transhumans would share our values well enough to solve Friendliness for us.