Vladimir_Nesov comments on The mind-killer - Less Wrong
I don't believe in the feasibility of any scenario like an AGI foom.
First, I fail to see how anybody taking an outside view on AI research - a clear instance of the class of sciences with extraordinary claims and a very long history of failure to deliver in spite of unusually adequate funding - can think otherwise. To me it seems like an extreme case of insider bias to assign non-negligible probabilities to scenarios like that. Virtually no science with these characteristics has delivered what it promised (even if it delivered something useful and vaguely related).
Even if AGI happens, it is extraordinarily unlikely to be any kind of foom, again based on the outside-view argument that virtually no disruptive technology has ever been foom-like.
Both of these extraordinarily unlikely events would have to occur before we would be exposed to the risk of AGI-caused destruction of humanity, and even then that outcome is far from certain.
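To make the conjunction in this argument explicit (a sketch with purely illustrative numbers, not figures endorsed anywhere in this thread): the risk only materializes if every step in the chain occurs, so the probabilities multiply.

$$P(\text{doom}) = P(\text{AGI}) \times P(\text{foom} \mid \text{AGI}) \times P(\text{doom} \mid \text{foom})$$

If the factors were, say, 0.1, 0.01, and 0.5, the product would be 0.0005; the actual values are disputed, but the multiplicative shape is the point of the argument.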
A reasonable position, so long as you remain truly ignorant of what AI is specifically about.
I don't know if inside-view forecasting can ever be more reliable than outside-view forecasting. It seems that insiders, as a general and very robust rule, tend to be strongly overconfident, and see all kinds of reasons why their particular instance is different and will have a better outcome than the reference class.
http://www.overcomingbias.com/2007/07/beware-the-insi.html
http://en.wikipedia.org/wiki/Reference_class_forecasting
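A minimal sketch of what reference class forecasting looks like in practice, in Python. Every number below (the past overrun figures, the insider estimate) is a hypothetical placeholder; the sketch only illustrates the mechanics of reading a forecast off past outcomes rather than off an insider's model of why this case is different.

```python
import numpy as np

# Reference class forecasting, roughly in the sense of the Wikipedia article
# linked above: replace an insider's bottom-up estimate with the empirical
# distribution of outcomes from similar past cases.
# All numbers here are hypothetical placeholders, not real project data.
past_cost_overruns = np.array([0.05, 0.10, 0.15, 0.20, 0.30, 0.45, 0.60, 0.80])

insider_estimate = 0.0  # "our project is different; no overrun expected"

# Outside view: read the forecast off the reference-class distribution,
# e.g. the median and a conservative 80th percentile.
median_overrun = np.percentile(past_cost_overruns, 50)
p80_overrun = np.percentile(past_cost_overruns, 80)

print(f"Insider estimate of overrun: {insider_estimate:.0%}")
print(f"Reference-class median:      {median_overrun:.0%}")
print(f"Reference-class 80th pct:    {p80_overrun:.0%}")
```

The point of the technique is that the distribution of past outcomes, not the insider's story about why this instance is special, carries the forecast.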
Try applying that to physics, engineering, biology, or any other technical field. In many cases, the outside view doesn't stand a chance.