
somervta comments on Open thread, September 2-8, 2013 - Less Wrong Discussion

0 Post author: David_Gerard 02 September 2013 02:07PM




Comment author: brandyn 09 September 2013 02:06:35AM 1 point

"The Intelligence Explosion Thesis says that an AI can potentially grow in capability on a timescale that seems fast relative to human experience due to recursive self-improvement. This in turn implies that strategies which rely on humans reacting to and restraining or punishing AIs are unlikely to be successful in the long run, and that what the first strongly self-improving AI prefers can end up mostly determining the final outcomes for Earth-originating intelligent life. " -- Eliezer Yudkowsky, IEM.

I.e., Eliezer thinks it'll take less time than it takes you to hit Ctrl-C. (Granted it takes Eliezer a whole paragraph to say what the essay captures in a phrase, but I digress.)

Comment author: somervta 09 September 2013 02:44:54AM 3 points

Eliezer's position is somewhat more nuanced than that. He admits the possibility of a FOOM on a timescale of seconds, but a timescale of weeks, months, or years is also consistent with the IE thesis.