One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess, which will invariably turn out to be highly biased, one looks at the outcomes of situations which are similar in some essential way.
Figuring out the correct reference class can sometimes be difficult, but even then it's far more reliable than trying to guess while ignoring the evidence of similar cases. Admittedly, in some situations we have precise enough data that the inside view might give the correct answer - but in almost all such cases I'd expect the outside view to be just as usable and not far behind in accuracy.
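To make the technique concrete, here is a minimal sketch of turning a reference class into a probability. It assumes we use Laplace's rule of succession, so that zero observed successes in the class yields a small but nonzero estimate; the function name and the count of 1000 past promises are placeholders for illustration, not data from any actual survey.

```python
from fractions import Fraction

def outside_view_estimate(successes: int, trials: int) -> Fraction:
    """Outside-view probability of success via Laplace's rule of succession.

    Adding one pseudo-success and one pseudo-failure keeps a reference
    class with zero observed successes from collapsing to a literal 0%.
    """
    return Fraction(successes + 1, trials + 2)

# Hypothetical reference class: 1000 past "this will grant very long life"
# promises, none of which succeeded.
p = outside_view_estimate(0, 1000)
print(p, float(p))  # 1/1002, roughly 0.1%
```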
Something that keeps puzzling me is the persistence of certain beliefs on lesswrong. Take the belief in the effectiveness of cryonics: the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology which isn't even remotely here has a perhaps non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise the belief in the singularity: the reference class of beliefs in the coming of a new world, be it good or evil, is huge and has a consistent 0% success rate. The reference class of beliefs in almost omnipotent good or evil beings has a consistent 0% success rate.
And many fellow rationalists not only believe that the chances of cryonics, the singularity, or superhuman AI are far from the negligible levels indicated by the outside view, they consider them highly likely or even nearly certain!
There are a few ways this situation can be resolved:
- Biting the outside-view bullet, as I do, and assigning very low probability to them.
- Finding a convincing reference class in which cryonics, the singularity, superhuman AI, etc. are highly probable - I invite you to try in the comments, but I doubt this will lead anywhere.
- Claiming that there is a class of situations for which the outside view is consistently and spectacularly wrong, where the data is not good enough for precise predictions, and yet we somehow think we can predict them reliably.
How do you reconcile these beliefs with the outside view?
If you actually look a little deeper into cryonics, you can find some more useful reference classes than "things promising eternal (or very long) life":
http://www.alcor.org/FAQs/faq01.html#evidence
(a) Cells and organisms need not operate continuously to remain alive. Many living things, including human embryos, can be successfully cryopreserved and revived. Adult humans can survive cardiac arrest and cessation of brain activity during hypothermia for up to an hour without lasting harm. Other large animals have survived three hours of cardiac arrest near 0°C (+32°F) (Cryobiology 23, 483-494 (1986)). There is no basic reason why such states of "suspended animation" could not be extended indefinitely at even lower temperatures (although the technical obstacles are enormous).
(b) Existing cryopreservation techniques, while not yet reversible, can preserve the fine structure of the brain with remarkable fidelity. This is especially true for cryopreservation by vitrification. The observations of point (a) make clear that survival of structure, not function, determines survival of the organism.
(c) It is now possible to foresee specific future technologies (molecular nanotechnology and nanomedicine) that will one day be able to diagnose and treat injuries right down to the molecular level. Such technology could repair and/or regenerate every cell and tissue in the body if necessary. For such a technology, any patient retaining basic brain structure (the physical basis of their mind) will be viable and recoverable.
I up-voted the post because you talked about two good, basic thinking skills. Paying attention to the weight of priors is a good thinking technique in general, and your examples of cryonics and AI are good points, but your conclusion fails: the argument you made does not mean they have zero chance of happening. A more useful takeaway would be that any given person claiming to have created an AI has close to zero chance of having actually done it, unless you have some incredibly good evidence:
"Sorry Arthur, but I'd guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon." - Dan Clemmensen
The thinking technique of abstracting and "stepping back from" or "outside of" your current situation - that is, reference class forecasting - also works very generally. It's a short post, though; I was hoping you would expand more.