Meta question here: why does reference-class forecasting work at all?
Presumably, the process is that when you cluster objects by visible features, you also cluster their invisible features, and the invisible features are what determine the time evolution of those objects.
If the category boundary of the "reference class" is a simple one, then you can't fool yourself by interfering with the statistical correlation between visible and hidden attributes.
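To make that mechanism concrete, here is a minimal sketch (invented data and names, purely illustrative): reference-class forecasting as nearest-neighbour prediction on visible features alone, which works only insofar as the visible features correlate with the hidden ones that drive outcomes.

```python
# Minimal sketch of reference-class forecasting, using invented data.
# Hidden attributes never appear explicitly: they matter only through
# their correlation with the visible features we cluster on.
import statistics

# Past cases: (visible_features, observed_outcome)
past_cases = [
    ((2.0, 1.0), 14),
    ((2.1, 0.9), 16),
    ((5.0, 4.0), 40),
    ((1.9, 1.1), 15),
]

def forecast(visible, k=3):
    """Outside view: median outcome of the k most similar past cases."""
    def sq_distance(case):
        features, _ = case
        return sum((a - b) ** 2 for a, b in zip(features, visible))
    nearest = sorted(past_cases, key=sq_distance)[:k]
    return statistics.median(outcome for _, outcome in nearest)

print(forecast((2.0, 1.0)))  # -> 15, the cluster's typical outcome
```

Draw the class boundary badly (cluster on the wrong visible features) and the correlation with the hidden attributes breaks, which is exactly the failure mode at issue below.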
For example, reference class forecasting predicts that cryo will not work because cryo got clustered with all the theistic religious afterlives, and things like the alchemists' search for the Elixir of Life. The visible attribute we're clustering on is "actions that people believe will result in an infinite life, or >200 year life".
But a cryonics advocate might complain that this argument is riding roughshod over all the careful inside-view reasoning that cryonicists have done about why cryo is different from religion or superstition: namely, that we have a scientific theory for what is going on.
If you drew the boundary around "medical interventions that have a well-accepted scientific theory backing them up", then cryo fares better. The different boundaries you can draw lead you to focus on different hidden attributes of the object in question: cryonics is like religion in some ways, but it is also like heart transplants.
I suggest a reference class "Predictions of technologies which will allow humans to recover from what would formerly have been considered irreversible death", with successful members such as heart transplants, CPR, and that shock thing medical shows are so fond of. (You know, where they shout "Clear!" before using it.)
An example regarding the brain would be successful resuscitation of people who have drowned in icy water. At one time they would have been given up for dead, but now it is known that for some reason the brain often survives for a long time without air, even as much as an hour.
So to sum up, you think you have a heuristic "On average, nothing ever happens for the first time" which beats any argument that something is about to happen for the first time. Cases like the Wright Brothers (reference class: "attempts at heavier-than-air flight") are mere unrepeatable anomalies. To answer the fundamental rationalist question, "What do you think you know and how do you think you know it?", we know the above is so because experiments show that people could do better at predicting how long it will take them to do their Christmas shopping by asking "How long did it take last time?" instead of trying to visualize the details. Is that a fair summary of your position?
But the fact that no perpetual motion machine has been built is not the reason we believe the feat to be impossible. We have independent, well-understood reasons for thinking the feat impossible.
For example, I've heard plenty of people being absolutely certain that the fall of the Soviet Union was virtually certain and caused by something they like to believe - usually without even a basic understanding of the facts, though many experts make the identical mistake. The fact is, nobody predicted it (ignoring the background noise of people who "predict" such things year in, year out) - and the relevant reference classes showed quite a low probability of it happening (not zero, but far lower than one).
Everyone I knew from the Intelligence community in 1987-1989 was of the opinion that the Soviet Union had less than 5 years, 10 at most. Between 1985 and 1989, they saw massive yearly increases in contacts from Soviets wishing either to defect or to pass information about the toppling of the control structures. None of them were people who made yearly predictions about a fall, and not one of them was happy about the situation (as every one of us lost our jobs as a result). I'd hardly call that noise.
I entertain the notion that the outside view might be a bad way of analyzing some situations; the post is a question about what this class might look like, and how we know a situation belongs to it.
The inside view by definition has no evidence behind it, not even so much as evidence of a lack of systemic bias.
Not 'by definition'; if you justify using IV by noting that it's worked on this class of problems before, you're still using IV. Semantic quibbles aside, this really sounds to me like someone trying to believe something interpersonally justifiable (or more justifiable than their opponent's), not trying to be right.
Works great when you're drawing from the same barrel as previous occasions. Project prediction, traffic forecasts, which way to drive to the airport... Predicting the far future from the past - you can't call that the Outside View and give it the privileges and respect of the Outside View. It's an attempt to reason by analogy, no more, no less.
I certainly do. It's my strong impression that so does almost everyone outside of the Less Wrong community and a majority of people in this community, so according to the outside view of majoritarianism I'm probably right.
Taleb's "The Black Swan" is basically a treatise on failures from uses of the outside view.
I hereby assign all your skepticism to "beliefs the future will be just like the past" with associated correctness frequency zero.
PONG.
Your move in the wonderful game of Reference Class Tennis.
It doesn't. I simply don't believe in Reference Class Tennis. Experiments show that the Outside View works great... for predicting how long Christmas shopping will take. That is, the Outside View works great when you've got a dozen examples that are no more dissimilar to your new case than they are to each other. By the time you start trying to predict the future 20 years out, choosing one out of a hundred potential reference classes is assuming your conclusion, whatever it may be.
How often do people successfully predict 20 years out - let alone longer - by picking some convenient reference class and saying "The Outside View is best, now I'm done and I don't want to hear any more arguments about the nitpicky details"?
Very rarely, I'd say. It's more of a conversation-halter than a proven mode of thinking about that level of problem, and things in the reference class "unproven conversation halter on difficult problems" don't usually do too well. There, now I'm done and I don't want to hear any more nitpicky details.
WTF?!? Nukes didn't change international relations? We HAVE world peace. No declarations of war, no total wars. Current occupations are different in kind from real wars.
Also, flight continued a trend in transport speeds which corresponded to continuing trends in GDP.
Finding a convincing reference class in which cryonics, singularity, superhuman AI etc. are highly probable
I'll nominate hypotheses or predictions predicated on materialism, or maybe the Copernican/mediocrity principle. In an indifferent universe, there's nothing special about the current human condition; in the long run, we should expect things to be very different in some way.
Note that a lot of the people around this community who take radically positive scenarios seriously, also take human extinction risks seriously, and seem to try to carefully analyze their uncertainty. The attitude seems markedly different from typical doom/salvation prophecies.
(Yes, predictions of human extinction events have never come true either, but there are strong anthropic reasons to expect this: if there had been a human extinction event in our past, we wouldn't be here to talk about it!)
cryonics
The human brain is made out of matter (materialism). Many people's brains are largely intact at the time of their deaths. By preserving the brain, we give possible future advanced neuroscience and materials technology a chance at restoring the original person. There are certainly a number of good reasons to think that this probably won't happen, but it doesn't belong in the same reference class of "predictions promising eternal life," because most previous predictions about eternal life didn't propose technological means in a material universe. Cryonics isn't about rapturing people's souls up to heaven; it's about reconstruction of a damaged physical artifact. Conditional on continued scientific progress (which might or might not happen), it seems plausible. I do agree that "technology which isn't even remotely here" is a good reference class. Similarly ...
superhuman AIs
Intelligence doesn't require ontologically fundamental things that we can't create more of, only matter appropriately arranged (materialism). Humans are not the most powerful possible intelligences (mediocrity). Conditional on continued scientific progress, it's plausible tha...
Things not prohibited by physics that humans want to happen don't happen eventually? Very far from clear.
Alter these reference classes even a tiny bit, and the result you get is basically the opposite. For cryonics, just use the reference class of cases where people thought either a) that technology X could prolong the life of a patient, b) that technology X could preserve wanted items, or c) that technology X could restore wanted media. Comparing it to technologies like these seems much more reasonable than taking the single peculiar property of cryonics (that it could, theoretically, for the first time grant us immortality) and using only that as a reference class. You could use the same move of taking the peculiar property as the reference class against any developing technology and consistently reach a ~0% chance for it, so it works as a perfectly general counterargument too.
The coming of a new world seems a more reasonable reference class for the singularity, but you seem to be interpreting it in a stricter way than I would. I'd rephrase it as the reference class of enormous changes in society, and there have indeed been many of those. Also, we note that processing and spreading information has been crucial to many of them, so narrowing our reference class to the crucial properties of the singularity (which basically just means "a huge change in society due to an artificial being that can process information better than we can"), we actually reach the opposite result from yours.
We do have a fairly good track record of making artificial beings that replicate parts of human behavior, too.
The problem with a lot of faulty outside view arguments is that the choice of possible features to focus on is too rich. For a reference class to be a good explanation of the expected conclusion, it needs to be hard to vary. Otherwise, the game comes down to rationalization, and one may as well name "Things that are likely possible" as the reference class and be done with it.
You don't even have to go as far as cryonics and AI to come up with examples of the outside view's obvious failure. For example, mass production of 16nm processors has never happened in the course of history. Eh, technological advancement in general is a domain where the outside view is useless, unless you resort to 'meta-outside views' like Kurzweil's, such as predicting an increase in computing power because in the past computing power has increased.
Ultimately, I think the outside view is a heuristic that is sometimes useful and sometimes not, since actual outcomes are fully determined by "inside" causality.
A general observation: a reference class and the outside view are only useful if the class members are similar enough to the case at hand in the relevant characteristics, whatever those may be.
For general predictions of the future, the best rule is to predict only general classes of futures; the more detailed the prediction, even slightly, the significantly lower the odds of that future coming about.
I think the odds of cryonics succeeding are about even. A Singularity of some sort happening within 50 years, slightly better than even. And a FOOM, substantially less, though...
"Now in some situations we have precise enough data that inside view might give correct answer - but for almost all such cases I'd expect outside view to be as usable and not far away in correctness."
Why? The above statement seems spectacularly wrong to me, and to be contradicted by all commonplace human experience, on a small scale or on a large scale.
"Reference class of predictions based on technology which isn't even remotely here has perhaps non-zero but still ridiculously tiny success rate."
What? Of such a tech, fairly well und...
Why is this post so highly rated? As far as I can tell, the author is essentially saying that immortality will not happen in the future because it has not already happened. This seems obviously, overtly false.
One possibility among many: I suspect that lots of people, even among those who agree with him, see EY on some level as overconfident / arrogant / claiming undeserved status, and get a kick out of seeing him called out on it, even if not by name.
This seems to me to argue yet again that we need to collect an explicit dataset of prior big long-term sci/tech based forecasts and how they turned out. If I assign a ~5+% chance to cryonics working, then for you to argue that this goes against an outside view, you need to show that substantially less than 5% of similar forecasts turned out to be right.
If you actually look a little deeper into cryonics you can find some more useful reference classes than "things promising eternal (or very long) life"
http://www.alcor.org/FAQs/faq01.html#evidence
Cells and organisms need not operate continuously to remain alive. Many living things, including human embryos, can be successfully cryopreserved and revived. Adult humans can survive cardiac arrest and cessation of brain activity during hypothermia for up to an hour without lasting harm. Other large animals have survived three hours of cardiac arrest...
Finding a convincing reference class in which cryonics, singularity, superhuman AI etc. are highly probable - I invite you to try in comments, but I doubt this will lead anywhere.
Try the reference class of shocking things that science was predicted to never do, e.g. flying machines or transmutation of elements or travel to the planets.
I don't know if those are the right reference classes for prediction, but those two beliefs definitely fall into those two categories. That should set off some warning signals.
Most people seem to have a strong need to believe in life after death and godlike beings. Anything less than ironclad disproof leads them to strong belief. If you challenge their beliefs, they'll often vigorously demonstrate that these things are not impossible and declare victory. They ignore the distinction between "not impossible" and "highly likely" even wh...
It is not a good idea to try and predict the likelihood of the emergence of future technologies by noting how these technologies failed to emerge in the past. The reason is that cryonics, singularities, and the like, are very obviously more likely to exist in the future than they were in the past (due to the invention of other new technologies), and hence the past failures cease to be relevant as the years pass. Just prior to the successful invention of most new technologies, there were many failed attempts, and hence it would seem (looking backward and applying the same reasoning) that the technology is unlikely ever to be possible.
I think we should taboo the words "outside" and "inside" for purposes of this discussion. They obscure the actual reasoning processes being used, and they bring along analogies to situations that are qualitatively very different.
I put cryonics in the reference class of "success of a technical project on a poorly understood system", which means that most of medical research comes under that heading. So: not good odds, but not very small either.
I put AGI in the same class, although it has the off-putting property of possible recursion (in that it is trying to understand understanding, which is just a little hairy). That means it might be a special case in how easy it is to solve, with the evidence so far pointing at the harder end of the spectrum.
For FOOM and the singularity I ...
The main use of outside views is to argue that people with inside views are overconfident, presumably because they haven't considered enough failure modes or delays. Thus your reference class should include some details of the inside view.
Thus I reject your reference classes "things promising eternal life" and "beliefs in almost omnipotent good or evil beings" as not having inside views worth speaking of. "Predictions based on technology which isn't even remotely here" is OK.
likewise the belief in a singularity - the reference class of beliefs in the coming of a new world, be it good or evil, is huge, with a consistent 0% success rate.
A new world did come to be following the Industrial Revolution. Another one came about twenty years ago or so, when the technology that allows us to argue this very instant came into its own. People with vision saw that these developments were possible and exerted themselves to accomplish them, so the success rate of the predictions isn't strictly nil. I'd put it above epsilon, even.
Our proposed complicated object here is "cryonics, singularity, superhuman AI etc." and I'm looking for a twist that decomposes it into separate parts with obvious reference classes of objects taw finds highly probable. (Maybe. There are other ways to transform a problem.) How about this: take the set of people who think all of those things are decently likely, then for each person apply the outside view to find out how likely you should consider them to be. Or instead of people, use journals. Or instead take the set of people who think none of those...
Biting the outside view bullet like me, and assigning very low probability to them.
I am going to stop using the term 'bite the bullet'. It seems to be changing meaning with repeated use and abuse.
For some things (especially concrete things like animals or toothpaste products), it is easy to find a useful reference class, while for other things it is difficult to find which possible reference class, if any, is useful. Some things just do not fit nicely enough into an existing reference class to make the method useful - they are unclassreferencable, and it is unlikely to be worth the effort attempting to use the method, when you could just look at more specific details instead. ("Unclassreferencable" suggests a dichotomy, but it's more of...
Reference class forecasting is meant to overcome the human bias toward optimism, whereas a perfect rationalist would render void the distinction between "inside view" and "outside view" -- it's all evidence.
Therefore a necessary condition to even consider using reference class forecasting for predicting an AI singularity or cryonics is that the respective direct arguments are optimistically biased. If so, which flaws do you perceive in the respective arguments, or are we humans completely blind to them even after applying significant scrutiny?
I'm perfectly willing to grant that, over the scope of human history, the reference classes for cryo/AGI/Singularity have produced near-0 success rates. I'd modify the classes slightly, however:
I cannot think of any reference class in which cryonics does well. ... I invite you to try in comments
Okay: "Technologies whose success is predicated only on a) the recoverability of biological information from a pseudo-frozen state, and b) the indistinguishability of fundamental particles."
b) is well-established by repeated experiments, and a) is a combination of proven technologies.
I'd say this clashes with the sense that more should be possible in the world, and it has the problem that these reference classes are based on specific results. You almost sound like Lord Kelvin.
The reference class of things promising eternal life is huge, but it's also made of stuff that is amazingly irrational, entirely based on narrative, and propped up with the greatest anti-epistemology the world has ever known. Typically there were no moving parts.
The reference class for coming of a new world, to me, includes predictions like talk about the...
Reference class forecasting might be an OK way to criticise an idea (that is, in situations where you've done something a bunch of times, and you're doing the exact same thing and expect a different outcome despite not having any explanations that say there should be a different outcome), but the idea of using it in all situations is problematic, and it's easy to misapply:
It's basically saying 'the future will be like the past'. Which isn't always true. In cases like cryonics -- cases that depend on new knowledge being created (which is inherently unpredic...
This is a real question concerning this quote:
the reference class of beliefs in the coming of a new world, be it good or evil, is huge, with a consistent 0% success rate.
Are you saying that the Industrial Revolution did not have a success rate of greater than 0% of coming to pass? The beliefs associated with it may not have been accurate when looking at some of the most critical or most enthusiastic supporters of the Industrial Revolution, but most of the industrialists who made their fortunes from the event understood quite well that it was the end of a...
Any wrongness can be explained as referencing a suboptimal reference class compared to an idealised reference class.
I recognize this is an old post, but I just wanted to point out that cryonics doesn't promise an escape from all forms of death, while Heaven does, meaning Heaven has a much higher burden of proof. Cryonics won't save you from a gunshot or a bomb or an accident, before or after you get frozen. Cryonics promises (a possibility of) an end to death by non-violent brain failure, specifically old age.
Science has been successful in the past at reducing the instances of death by certain non-violent failures of various organs. Open heart surgery and bypass surgery...
I knew that there was something else that I wanted to ask.
How closely is the Optimism Bias similar to the Dunning-Kruger Effect?
the reference class of beliefs in the coming of a new world, be it good or evil, is huge, with a consistent 0% success rate.
That's odd. This would imply that you don't believe in the evolution of Homo sapiens sapiens from previous hominids, or the invention of agriculture. Heck, read the descriptions of heaven in the New Testament: the description of the ultimate better world (literally heaven!) is a step backward from everyday life in the West for most people.
It seems we have lots of examples of the transformation of the world for better, though I'd say there's not much room for worse than the lowest-happiness lives already in history.
"Many economic and financial decisions depend crucially on their timing. People decide when to invest in a project, when to liquidate assets, or when to stop gambling in a casino. We provide a general result on prospect theory decision makers who are unaware of the time-inconsistency induced by probability weighting. If a market offers a sufficiently rich set of investment strategies, then such naive investors postpone their decisions until forever. We illustrate the drastic consequences of this “never stopping” result, and conclude that probability distortion in combination with naivité leads to unrealistic predictions for a wide range of dynamic setups.""
The whole issue of "singularity" needs a bit of clarification. If this is a physical singularity, i.e. a breakdown of a theory's ability to predict, then this is in the reference class of "theories of society claiming current models have limited future validity", which makes it nearly certain to be true.
If it's a mathematical singularity (reaching infinity in finite time), then its reference class is composed almost solely of erroneous theories (see the worked example below).
You can get compromises between the two extremes (such as nuclear chain reactions - massive self feeding increase until a resource is exhausted), but it's important to define what you mean by singularity before assigning it to a reference class.
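To pin down the second case, here is a standard worked example (textbook math, not specific to this thread): a quantity whose growth rate scales with its own square reaches infinity at a finite time.

```latex
% Finite-time blowup: the simplest "mathematical singularity".
\frac{dx}{dt} = x^2, \quad x(0) = x_0 > 0
\quad\Longrightarrow\quad
x(t) = \frac{x_0}{1 - x_0 t}
```

This diverges as t approaches 1/x_0: the "singularity" arrives at a definite finite date. Exponential growth, and resource-limited chain reactions like the nuclear example, stay finite at every finite time, which is why the choice of reference class turns on which of these behaviors "singularity" is supposed to mean.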
"You will have an eternal life in heaven after your death" isn't a real prediction.
A real prediction is something where you have a test to see whether or not the prediction turned out to be true. There's no test to decide whether someone has eternal life in heaven.
A prediction is something that lets you update your judgement after the predicted event happens or fails to happen.
One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess which will invariably turn out to be highly biased, one looks at the outcomes of situations which are similar in some essential way.
Figuring out the correct reference class might sometimes be difficult, but even then it's far more reliable than trying to guess while ignoring the evidence of similar cases. Now, in some situations we have precise enough data that the inside view might give the correct answer - but for almost all such cases I'd expect the outside view to be just as usable and not far behind in correctness.
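As a toy illustration of the difference (invented numbers, in the spirit of the Christmas-shopping experiments mentioned elsewhere in this thread): the outside view simply consults one's own past record instead of a detailed mental walkthrough.

```python
# Toy comparison of inside vs. outside view, using invented numbers.
import statistics

# Outside view: how long did Christmas shopping actually take in past years?
past_shopping_hours = [5.5, 6.0, 4.5, 7.0, 6.5]

# Inside view: a detailed visualization of the task, typically optimistic.
inside_view_guess = 3.0

outside_view_forecast = statistics.median(past_shopping_hours)
print(f"inside view:  {inside_view_guess:.1f} h")      # 3.0 h
print(f"outside view: {outside_view_forecast:.1f} h")  # 6.0 h
```

The entire dispute below is over when a past record like this is actually drawn from the same barrel as the case being predicted.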
Something that keeps puzzling me is the persistence of certain beliefs on lesswrong. Like the belief in the effectiveness of cryonics - the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology which isn't even remotely here has a perhaps non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise the belief in a singularity - the reference class of beliefs in the coming of a new world, be it good or evil, is huge, with a consistent 0% success rate. The reference class of beliefs in almost omnipotent good or evil beings has a consistent 0% success rate.
And many fellow rationalists not only believe that the chances of cryonics or a singularity or AI are far above the negligible levels indicated by the outside view - they consider them highly likely or even nearly certain!
There are a few ways this situation could be resolved:
How do you reconcile them?