loqi comments on The mind-killer - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's not reverse stupidity - it's "reference class forecasting", which is a more specific instance of our generic "outside view" concept. I treat AI research as one instance, gather data about other cases with similar characteristics (hyped, overpromised, and underdelivered over a very long time span), and estimate based on that. It has been shown to work better than the inside view of estimating based on the details of the particular case.
http://en.wikipedia.org/wiki/Reference_class_forecasting
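To make the contrast concrete, here's a toy sketch of the idea (the numbers and function names are made up for illustration, not drawn from any real dataset): rather than forecasting from a project's internal details, you adjust the inside-view estimate by the outcome distribution of a reference class of similar past projects.

```python
# Toy sketch of reference class forecasting (illustrative numbers only).
# Inside view: estimate from the particulars of this project.
# Outside view: read the forecast off the outcome distribution of a
# reference class of comparable past projects.

import statistics

# Hypothetical schedule-overrun ratios (actual / planned duration)
# for past projects in the same reference class.
reference_class_overruns = [1.4, 2.0, 1.1, 3.5, 1.8, 2.6, 1.2]

def outside_view_forecast(inside_view_estimate, overruns):
    """Scale an inside-view estimate by the reference class's
    median overrun ratio."""
    return inside_view_estimate * statistics.median(overruns)

planned_years = 5  # inside-view estimate
print(outside_view_forecast(planned_years, reference_class_overruns))
# The median overrun here is 1.8, so the outside view says ~9 years.
```

The point of the outside view is precisely that the adjustment comes from the reference class, not from arguments about the case at hand.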
Oops. You're totally right.
That said, I still take issue with reference class forecasting as support for this statement:
Considering that the general question "is the foom scenario feasible?" doesn't have any concrete timelines attached to it, the speed and direction of AI research don't bear heavily on it. All reference class forecasting lets you say is that, if foom is both possible and requires substantial AI research progress, it's a long way off.
I'm not sure "disruptive technology" is the obvious category for AGI. The term basically dereferences to "engineered human-level intelligence", easily suggesting comparisons to various humans, hominids, primates, etc.