somervta comments on Open thread, September 2-8, 2013 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"The Intelligence Explosion Thesis says that an AI can potentially grow in capability on a timescale that seems fast relative to human experience due to recursive self-improvement. This in turn implies that strategies which rely on humans reacting to and restraining or punishing AIs are unlikely to be successful in the long run, and that what the first strongly self-improving AI prefers can end up mostly determining the final outcomes for Earth-originating intelligent life. " -- Eliezer Yudkowsky, IEM.
I.e., Eliezer thinks it'll take less time than it takes you to hit Ctrl-C. (Granted, it takes Eliezer a whole paragraph to say what that one phrase captures, but I digress.)
Eliezer's position is somewhat more nuanced than that. He admits the possibility of a FOOM on a timescale of seconds, but timescales on the order of weeks, months, or years are also consistent with the IE thesis.