In The Weak Inside View, Eliezer Yudkowsky writes that it never occurred to him that his views about optimization ought to produce quantitative predictions.
Eliezer further argues that we can't use historical evidence to evaluate completely new ideas.
Not sure what he means by "loose qualitative conclusions".
He says that he can't predict how long it will take an AI to solve various problems.
One thing which makes me worry that something is "surface", is when it involves generalizing a level N feature across a shift in level N-1 causes.
Argh...I am getting the impression that it was a really bad idea to start reading this at this point. I have no clue what he is talking about.
Now, if the Law of Accelerating Change were an exogenous, ontologically fundamental, precise physical law, then you wouldn't expect it to change with the advent of superintelligence.
I don't know what the Law of "Accelerating Change" is, what "exogenous" means, what "ontologically fundamental" means, or why not even such laws can break down beyond a certain point.
Oh well... I'll give up and come back to this when I have time to look up every term and concept and decipher what he means.
Not sure what he means by "loose qualitative conclusions".
Some context:
In this case, the best we can do is use the Weak Inside View - visualizing the causal process - to produce loose qualitative conclusions about only those issues where there seems to be lopsided support.
He means that, because the inside view is weak, it cannot predict exactly how powerful an AI would foom, exactly how long it would take for an AI to foom, exactly what it might first do after the foom, exactly how long it will take for the knowledge necessary to make a fo...