One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess, which will invariably turn out to be highly biased, one looks at the outcomes of situations that are similar in some essential way.
Figuring out the correct reference class can sometimes be difficult, but even then it's far more reliable than guessing while ignoring the evidence of similar cases. In some situations we have precise enough data that the inside view might give the correct answer - but even in almost all such cases I'd expect the outside view to be just as usable and not far behind in accuracy.
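As a minimal sketch of what this looks like in practice - with a made-up reference class and purely hypothetical data - the outside-view forecast is just the empirical base rate of the past cases you judge relevantly similar:

```python
# Purely hypothetical data: an illustrative reference class of past projects.
past_outcomes = {
    "project_a": True,   # finished on budget
    "project_b": False,
    "project_c": False,
    "project_d": True,
    "project_e": False,
}

successes = sum(past_outcomes.values())
n = len(past_outcomes)

# Outside-view forecast: the empirical base rate of the reference class,
# used instead of an inside-view guess about the case at hand.
base_rate = successes / n
print(f"Outside-view estimate of success: {base_rate:.0%}")  # 40%
```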
Something that keeps puzzling me is the persistence of certain beliefs on LessWrong. Take the belief in the effectiveness of cryonics - the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology that isn't even remotely here has a perhaps non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise the belief in the singularity - the reference class of beliefs in the coming of a new world, be it good or evil, is huge and has a consistent 0% success rate. The reference class of beliefs in almost omnipotent good or evil beings has a consistent 0% success rate as well.
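To make explicit what a "consistent 0% success rate" cashes out to as a probability, one standard move is Laplace's rule of succession: with n observed failures and no successes, the estimate for the next case is 1/(n+2). The count below is purely illustrative, not a real tally of past promises:

```python
# Purely illustrative counts - not real data. Laplace's rule of succession
# turns a reference class with zero observed successes into a small but
# explicit probability: P(next success) = (successes + 1) / (trials + 2).
n_past_promises = 1000  # hypothetical number of past promises of very long life
successes = 0           # none of them delivered

estimate = (successes + 1) / (n_past_promises + 2)
print(f"Outside-view probability for the next such promise: {estimate:.2%}")  # ~0.10%
```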
And many fellow rationalists not only believe that the chances of cryonics, the singularity, or superhuman AI are far from the negligible levels indicated by the outside view - they consider them highly likely or even nearly certain!
There are a few ways this situation can be resolved:
- Biting the outside-view bullet, as I do, and assigning very low probability to all of them.
- Finding a convincing reference class in which cryonics, the singularity, superhuman AI, etc. are highly probable - I invite you to try in the comments, but I doubt this will lead anywhere.
- Or arguing that there is a class of situations for which the outside view is consistently and spectacularly wrong, the data is not good enough for precise predictions, and yet we somehow think we can predict them reliably.
How do you reconcile these beliefs with the outside view?
Alter these reference classes even a tiny bit and the result you get is basically the opposite. For cryonics, just use the reference class of cases where people thought either a) that technology X could prolong the life of a patient, b) that technology X could preserve wanted items, or c) that technology X could restore wanted media. Comparing cryonics to technologies like these seems much more reasonable than taking its single peculiar property (that it could theoretically, for the first time, grant us immortality) and using only that as the reference class. You could use the same move - taking the peculiar property as the reference class - against any developing technology and consistently reach a ~0% chance for it, so it works as a fully general counterargument too.
The coming of a new world seems like a more reasonable reference class for the singularity, but you seem to be interpreting it a bit more strictly than I would. I'd rephrase it as the reference class of enormous changes in society, and there have indeed been many of those. Note also that processing and spreading information has been crucial to many of them, so by narrowing the reference class to the crucial properties of the singularity (which basically just means "a huge change in society caused by an artificial being that can process information better than we can"), we actually get the opposite result from yours.
We also have a fairly good track record of making artificial beings that replicate parts of human behavior.