SteveG comments on Superintelligence 6: Intelligence explosion kinetics - Less Wrong Discussion

9 Post author: KatjaGrace 21 October 2014 01:00AM

Comment author: SteveG 21 October 2014 03:30:54AM 3 points [-]

I want to try to break down the chain of events in this region of the graph. Just to start the ball rolling:

So, one question is the degree to which additional computer hardware has to be built in order to support additional levels of recursive self-improvement.

If the rate of takeoff is constrained by the AI needing to participate, with human assistance, in the manufacture of new hardware, then (correct me if I am missing something) we have a slow or moderate takeoff.

If there is already a "hardware overhang" when key algorithms are created, then perhaps a great deal of recursive self-improvement can occur rapidly within existing computer systems.

Comment author: SteveG 21 October 2014 03:41:09AM 5 points [-]

Manufacturing computer components is quite involved. A hardware/software system that could independently manufacture components similar to the ones it runs on would already need the abilities of dozens of different human specialists. It would already be a superintelligence.