One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess, which will invariably turn out to be highly biased, one looks at the outcomes of situations that are similar in some essential way.
Figuring out the correct reference class might sometimes be difficult, but even then it's far more reliable than trying to guess while ignoring the evidence of similar cases. In some situations we have precise enough data that the inside view might give the correct answer - but for almost all such cases I'd expect the outside view to be just as usable and not far behind in accuracy.
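To make the mechanics concrete, here is a minimal sketch (not from the original post) of how an outside-view estimate might be computed: gather the outcomes of similar past cases and use them as a base rate, with Laplace's rule of succession so that a class with zero observed successes still yields a small non-zero probability rather than exactly zero. The reference classes and counts below are hypothetical placeholders, not real data.

```python
# Minimal sketch of reference class forecasting (the outside view).
# The reference classes and counts below are hypothetical placeholders.

def outside_view_estimate(successes, trials):
    """Base-rate estimate from a reference class.

    Uses Laplace's rule of succession, (successes + 1) / (trials + 2),
    so an observed 0% success rate still maps to a small non-zero
    probability instead of exactly zero.
    """
    return (successes + 1) / (trials + 2)

# Hypothetical reference classes: (observed successes, observed cases)
reference_classes = {
    "promises of eternal or very long life": (0, 1000),
    "predictions relying on technology that doesn't exist yet": (5, 500),
}

for name, (successes, trials) in reference_classes.items():
    p = outside_view_estimate(successes, trials)
    print(f"{name}: estimated success probability ~ {p:.4f}")
```

The rule-of-succession adjustment is only there to keep the estimate from being literally zero; the argument in the post doesn't hinge on it.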
Something that keeps puzzling me is the persistence of certain beliefs on Less Wrong. Take belief in the effectiveness of cryonics - the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology that isn't even remotely here has a perhaps non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise belief in the singularity - the reference class of beliefs in the coming of a new world, be it good or evil, is huge and has a consistent 0% success rate. The reference class of beliefs in nearly omnipotent good or evil beings has a consistent 0% success rate.
And many fellow rationalists not only believe that the chances of cryonics, the singularity, or superhuman AI are far from the negligible levels indicated by the outside view - they consider them highly likely or even nearly certain!
There are a few ways this situation can be resolved:
- Biting the outside-view bullet, as I do, and assigning them a very low probability.
- Finding a convincing reference class in which cryonics, the singularity, superhuman AI, etc. are highly probable - I invite you to try in the comments, but I doubt this will lead anywhere.
- Or arguing that there is a class of situations for which the outside view is consistently and spectacularly wrong, where the data is not good enough for precise predictions, and yet where we somehow think we can predict outcomes reliably.
How do you reconcile these beliefs with the outside view?
Reference class forecasting might be an OK way to criticise an idea (that is, in situations where you've done something a bunch of times, you're doing the exact same thing again, and you expect a different outcome despite having no explanation for why the outcome should differ), but using it in all situations is problematic, and it's easy to misapply:
It's basically saying 'the future will be like the past', which isn't always true. In cases like cryonics -- cases that depend on new knowledge being created (which is inherently unpredictable, because if we could predict it, we'd have that knowledge now) -- you can't say the future will be like the past.
To say the future will be like the past, you need an explanation for why. You can't just say, look, this situation is like that situation and therefore they'll have the same outcome.
The reason I think cryonics is likely is that a) death is a soluble problem and medical advances are being made all the time, and b) even if it isn't solved a couple of hundred years from now, it would be pretty shocking if it were never solved at all (even thousands or millions of years from now). There would need to be some great catastrophe that prevents humans from making progress. Why wouldn't it be solved at some point?
Applying reference class forecasting to cryonics and saying it has a 0% success rate amounts to saying that we're not going to solve the death problem because we haven't solved it before. But that reasoning can be applied to anything that involves progress beyond what has been done in the past. As Roko said, try the reference class of shocking things science hasn't done before.
None of this reasoning depends on the future being like the past. It depends on explanations for why we think our predictions about the future are good, and the validity of those explanations doesn't depend on the outcomes of predictions about other things (though, again, an explanation can be criticised by pointing out, 'the outcome of this other prediction, which relied on the same explanation you're using, turned out to be false - so you need an explanation of why that criticism doesn't apply').
In short: you can't draw comparisons between things without an explanation of why the comparison applies, and it's the explanation that matters rather than the comparison.