I decided to finally start reading The Hanson-Yudkowsky AI-Foom Debate. I am not sure how much time I will have, but I will post my thoughts along the way as replies to this comment. This is also an opportunity for massive downvotes :-)
In The Weak Inside View, Eliezer Yudkowsky writes that it never occurred to him that his views about optimization ought to produce quantitative predictions.
Eliezer further argues that we can't use historical evidence to evaluate completely new ideas.
I'm not sure what he means by "loose qualitative conclusions".
He says that he can't predict how long it will take an AI to solve various problems.
He also writes: "One thing that makes me worry that something is 'surface' is when it involves generalizing a level N feature across a shift in level N-1 causes."
Ar...