
Phil_Goetz6 comments on Hard Takeoff - Less Wrong

Post author: Eliezer_Yudkowsky, 02 December 2008 08:44PM


Comment author: Phil_Goetz6, 02 December 2008 10:21:40PM, 2 points

"All these complications is why I don't believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff. You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights - and the "fold the curve in on itself" paradigm of recursion is going to amplify even small roughnesses in the trajectory."

Wouldn't that be a reason to say, "I don't know what will happen"? And to disallow you from saying, "An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely"?

If you can't make quantitative predictions, then you can't say that the foom might take an hour or a day, but not six months.

A lower-bound analysis of the growth curve could be sufficient to argue the inevitability of a foom.
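To illustrate what such a lower-bound argument might look like (a toy sketch with made-up parameters, not a model anyone in this thread proposed): if capability C feeds back into its own growth rate superlinearly, say dC/dt = kC², the continuous solution diverges in finite time t = 1/(kC₀), whereas merely proportional feedback dC/dt = kC gives ordinary exponential growth with no finite-time blow-up. So even a conservative superlinear lower bound on the feedback would bound the time to a foom.

```python
# Hypothetical lower-bound growth models for recursive self-improvement.
# All parameters (k, c0, the feedback forms) are illustrative assumptions.

def time_to_reach(target, feedback, c0=1.0, k=0.01, dt=0.01, t_max=2000.0):
    """Euler-integrate dC/dt = k * feedback(C); return the time at which
    capability first reaches `target`, or None if t_max is hit first."""
    c, t = c0, 0.0
    while c < target and t < t_max:
        c += k * feedback(c) * dt
        t += dt
    return t if c >= target else None

linear = lambda c: c          # dC/dt = kC   -> exponential, no finite blow-up
superlinear = lambda c: c**2  # dC/dt = kC^2 -> diverges near t = 1/(k*c0) = 100
```

With these made-up numbers, the superlinear model reaches any capability target shortly after t ≈ 100, while the exponential model takes an order of magnitude longer and keeps slowing (in doubling-equivalent terms) relative to it.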

I agree there's a time coming when things will happen too fast for humans. But "hard takeoff", to me, means foom without warning. If the foom doesn't occur until the AI is smart enough to rewrite an AI textbook, that might give us years or decades of warning. If humans add and improve the AI's cognitive skills one by one, that will start a more gently-sloping process of recursive self-improvement (RSI).