loqi comments on The mind-killer - Less Wrong

Post author: ciphergoth, 02 May 2009 04:49PM (23 points)

Comment author: taw 04 May 2009 09:52:29AM 3 points

It's not reverse stupidity - it's "reference class forecasting", a specific instance of our generic "outside view" concept. I take AI research as an instance, gather data about other cases with similar characteristics (hyped, overpromised, and underdelivered over a very long time span), and estimate based on those. This has been shown to work better than the inside view, i.e., estimating from the details of the particular case.

http://en.wikipedia.org/wiki/Reference_class_forecasting
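
A minimal sketch of the method taw describes, in Python. Everything here is a hypothetical illustration: the reference class is an invented list of schedule-overrun ratios for past hyped technologies, and the forecast is simply a quantile read off that empirical distribution, deliberately ignoring the details of the case at hand (the inside view).

```python
# A minimal sketch of reference class forecasting (outside view).
# All data below are hypothetical; this shows the method, not a real estimate.

def reference_class_forecast(outcomes, quantile=0.5):
    """Forecast by reading a quantile off the empirical distribution
    of outcomes observed in the reference class."""
    ordered = sorted(outcomes)
    # Clamp the index so quantile=1.0 stays in bounds.
    index = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[index]

# Hypothetical reference class: ratio of actual to promised delivery time
# for past technologies that were hyped, overpromised, and underdelivered.
overrun_ratios = [1.5, 2.0, 3.0, 4.0, 10.0]

inside_view_years = 20  # case-specific estimate from the details of AI research
outside_view_years = inside_view_years * reference_class_forecast(overrun_ratios)
print(outside_view_years)  # 60.0 at the median overrun ratio
```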

Comment author: loqi 06 May 2009 05:06:38AM 2 points

Oops. You're totally right.

That said, I still take issue with reference class forecasting as support for this statement:

> I don't believe in feasibility of any scenario like AGI foom.

Since the general question "is the foom scenario feasible?" doesn't have any concrete timeline attached to it, the speed and direction of AI research don't bear heavily on it. All reference class forecasting lets you say is that, if foom is possible at all and requires substantial AI research progress, it's a long way off.

> Even if AGI happens, it is extraordinarily unlikely it will be any kind of foom, again based on outside view argument that virtually none of disruptive technologies were ever foom-like.

I'm not sure "disruptive technology" is the obvious category for AGI. The term basically dereferences to "engineered human-level intelligence", which more readily suggests comparisons to humans, hominids, primates, etc.