Eliezer_Yudkowsky comments on Hard Takeoff - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (33)
AC, "raise the question" isn't strong enough. But I am sympathetic to this plea to preserve technical language, even if it's a lost cause; so I changed it to "demand the question". Does anyone have a better substitute phrase?
These are different problems, akin to "predict exactly where Apophis will go" and "estimate the size of the keyhole it has to pass through in order to hit Earth". Or "predict exactly what this poorly designed AI will end up with as its utility function after it goes FOOM" versus "predict that it won't hit the Friendliness keyhole".
A secret of a lot of the futurism I'm willing to try to put any weight on is that it involves the startling, amazing, counterintuitive prediction that something ends up in the not-human space instead of the human space - humans think their keyholes are the whole universe, because it's all they have experience with. So if you say, "It's in the (much larger) not-human space," it sounds like an amazing futuristic prediction and people will be shocked, and try to dispute it. But livable temperatures are rare in the universe - most of it is either much colder or much hotter. A place like Earth is an anomaly, though it's the only place beings like us can live; the interior of a star is much denser than the materials of the world we know, and the rest of the universe is much closer to vacuum.
So really, the whole hard takeoff analysis of "flatline or FOOM" just ends up saying, "the AI will not hit the human-timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. When you look at it that way, it's not so radical a prediction, is it?