Phil_Goetz6 comments on Hard Takeoff - Less Wrong

Post author: Eliezer_Yudkowsky 02 December 2008 08:44PM

Comment author: Phil_Goetz6 03 December 2008 05:45:14PM 3 points

Eliezer: So really, the whole hard takeoff analysis of "flatline or FOOM" just ends up saying, "the AI will not hit the human timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM.

But the AI is tied to the human timescale at the start. All of the work on improving the AI, possibly for many years until it reaches very high intelligence, will be done by humans. And even after that, it will remain entangled with the human economy for a time, relying on humans to build parts for it, etc. Remember that I'm only questioning the trajectory for the first year or decade.

(BTW, the term "trajectory" implies that only the state of the entity at the top of the heap matters. One of the human race's backup plans should be to look for a niche in the rest of the heap. But I've already said my piece on that in earlier comments.)

Thomas: Even if it is wrong - I think it is correct - it is the most important thing to consider.

I think most of us agree it's possible. I'm only arguing that other possibilities should also be considered. It would be unwise to adopt a strategy that has a 1% chance of making 90%-chance situation A survivable, if that strategy will make the otherwise-survivable 10%-chance situation B deadly.
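To make that arithmetic concrete (using only the illustrative numbers from the sentence above, and assuming situation A is not survivable without the strategy): with the strategy, P(survival) = 0.9 × 0.01 + 0.1 × 0 = 0.009; without it, P(survival) = 0.9 × 0 + 0.1 × 1 = 0.1. Adopting the strategy would cut overall survival odds by roughly a factor of ten, even though it targets the more probable scenario.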