Tim_Tyler comments on Recursive Self-Improvement - Less Wrong

Post author: Eliezer_Yudkowsky 01 December 2008 08:49PM


Comment author: Tim_Tyler 01 December 2008 09:37:00PM 2 points

the initial recursive cascade of an intelligence explosion can't race through human brains because human brains are not modifiable until the AI is already superintelligent

I am extremely sceptical about whether we will see much modification of human brains by superintelligent agents. Once we have superintelligence, human brains will go out of fashion the way the horse-and-cart did.

Brains will not suddenly become an attractive platform for future development with the advent of superintelligence - rather they will become even more evidently obsolete junk.