'Tis remarkable how many disputes between would-be rationalists end in a game of reference class tennis. I suspect this is because our beliefs are partially driven by "intuition" (i.e., subcognitive black boxes giving us advice - not that there's anything wrong with that), and when it comes time to try and share our intuition with other minds, we point to cases that "look similar": the examples from which our brain learned to pattern-recognize and judge "that sort" of case.
My own cached rule for such cases is to try and look inside the thing itself, rather than comparing it to other things - to drop into causal analysis, rather than trying to hit the ball back into your own preferred concept boundary of similar things. Focus on the object level, rather than the meta; and try to argue less by similarity, for the universe itself is not driven by Similarity and Contagion, after all.
Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matthew Simmons (peak oil) for some examples, both treated extremely seriously by the mainstream media at the time.
I think you are confusing mainstream media with mainstream science. Most people do, unless they're the actual scientists having their claims distorted, misrepresented, and sensationalised by the media.
When has there been a consensus in the established scientific literature about either the certainty of catastrophic overpopulation or an imminent peak in oil production?
We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics.
Hm. Apparently you also have non-conventional definitions of "overwhelming" and "completely wrong".
A great project: collect a history of such topics and code each of them for these various features, including who we now think was right. Then do a full statistical analysis to see which of these proposed heuristics is actually supported by the data.
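Even a crude version would be informative. A minimal sketch of the coding-and-scoring step, with entirely hypothetical topics, features, and verdicts (a real analysis would want many more topics and a proper regression):

```python
# Each historical topic is coded for candidate features, plus our
# present-day verdict on whether the mainstream position held up.
# Every row below is hypothetical, purely to show the shape of the data.
topics = [
    ({"politicized": True,  "doomsday": False}, False),
    ({"politicized": False, "doomsday": False}, True),
    ({"politicized": True,  "doomsday": True},  False),
    ({"politicized": False, "doomsday": True},  True),
]

def heuristic_support(feature):
    """Fraction of feature-flagged topics where the mainstream was wrong,
    i.e. a crude check of whether the feature predicts wrongness."""
    flagged = [not verdict for flags, verdict in topics if flags[feature]]
    return sum(flagged) / len(flagged) if flagged else None

for feature in ("politicized", "doomsday"):
    print(feature, heuristic_support(feature))
# On this toy data: politicized 1.0, doomsday 0.5
```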
I think taw's problem is just a special case of the more general problem of what kind of similarity is required for induction.
And it's unwise to use political issues as case studies of unsolved philosophical problems.
I think you're completely right: this is a special case of the problem of induction. The Stanford Encyclopedia of Philosophy has a wonderfully exhaustive article about it that also discusses subjective Bayesianism at length. Among other things, that article offers a simple recommendation for taw's original problem: intersect your proposed reference classes to get a smaller and more relevant reference class.
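To make the intersection advice concrete, here's a toy sketch; the cases and class labels are made up for illustration:

```python
# Hypothetical historical cases: which reference classes each belongs
# to, and whether the mainstream position turned out to be right.
cases = [
    ("case_a", {"mainstream", "politicized"}, True),
    ("case_b", {"mainstream"}, True),
    ("case_c", {"mainstream", "politicized"}, False),
    ("case_d", {"mainstream"}, True),
]

def base_rate(required_classes):
    """Base rate of 'mainstream was right' among cases belonging to
    every class in required_classes (i.e. their intersection)."""
    matching = [right for _, classes, right in cases
                if required_classes <= classes]
    return sum(matching) / len(matching) if matching else None

print(base_rate({"mainstream"}))                 # 0.75 (4 cases)
print(base_rate({"mainstream", "politicized"}))  # 0.5  (2 cases)
```

The intersected class is more relevant but has fewer members, so the estimate it yields is noisier; that trade-off seems unavoidable.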
Your examples of "highly politicised science" are very one-sided (consider autism and vaccines, GM crops, stem cell research, water fluoridation, evolution), which I suppose reinforces your point.
In your set-up, some reference classes correspond to systematic biases, and some to increased/decreased variance: they don't all change your probability distribution in the same way.
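A minimal sketch of that distinction, with hypothetical numbers: a "bias" class shifts the centre of your estimate, while a "variance" class only changes its spread.

```python
from statistics import NormalDist

# Hypothetical reported effect, modelled as a normal distribution.
raw = NormalDist(mu=1.0, sigma=0.5)

# A "bias" reference class (say, politicized fields overstate effects):
# shift the mean, keep the spread.
debiased = NormalDist(mu=raw.mean - 0.3, sigma=raw.stdev)

# A "variance" reference class (say, young fields are just noisier):
# keep the mean, widen the spread.
widened = NormalDist(mu=raw.mean, sigma=raw.stdev * 2)

for name, d in [("raw", raw), ("debiased", debiased), ("widened", widened)]:
    print(f"{name}: mean={d.mean:.2f}, sd={d.stdev:.2f}")
```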
For example: it takes extreme levels of arrogance to conclude, in ignorance, that most scientists are incorrect in the area of their speciality. By this argument, you should pla...
You encounter a bear. On the one hand, it's in the land mammals reference class, most of which are not dangerous. On the other hand, it's in the carnivorous predators reference class, most of which are.
Is the bear dangerous? I'm sure if you thought hard enough, you could come up with other plausible reference classes, each leading to any conclusion you desire.
I am confused by your inclusion of nuclear winter in the list of failed scientific predictions.
You can always look at the argument at the object level carefully enough to figure out which components fit into each category. That's not too difficult.
Also, IMHO the cornucopians haven't been right either. Rather, the last 40 years have been the age of "things will stay much the same as they are today" being a better predictor than either cornucopian or doomsday predictions - at least for people unlike us, for whom the internet doesn't count as much of a cornucopia.
Can't you just put the situation in every reference class where you think it fits, and multiply your prior odds by the Bayes factor for each? Then, of course, you would have to discount for the correlation between the reference classes. That is, with two reference classes you couldn't use the full factor for each if membership in one were already evidence of membership in the other.
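In odds form it would look something like this (a minimal sketch; the prior and factors are hypothetical, and the independence assumption is exactly where your correlation discount would have to come in):

```python
def posterior_probability(prior, bayes_factors):
    """Combine prior odds with one likelihood ratio per reference class.

    Treats the factors as independent (the naive-Bayes assumption).
    If membership in one class is already evidence of membership in
    another, the factors are correlated and this overcounts; the
    redundant factor should be shrunk toward 1 before multiplying.
    """
    odds = prior / (1 - prior)
    for bf in bayes_factors:
        odds *= bf
    return odds / (1 + odds)

# Hypothetical numbers: a 10% prior, one class whose members are
# usually right (factor 4), one whose members are usually wrong (0.5).
print(posterior_probability(0.10, [4.0, 0.5]))  # ~0.18
```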
Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. ...
This is just the Doomsday argument, which has been discu...
Just so we are clear: What do you think about climate science?
It is important to remember that most of its work was done before it became political. Just because energy (mainly coal and oil) companies don't like the policy implications of climate science, and are willing to pay lots of people to speak ill of it, shouldn't make it a politicized science. Indeed, that criterion would place evolutionary biology into the highly politicized science category.
Allowing a subject's ideological enemies to have a say in its status without having hard evidence is not rational at all.
Good reference classes should be uncontroversial - most people will agree about what constitutes "mainstream scientists", but you'll probably get more disagreement about which parts of science are highly politicized.
We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics.
What overwhelming evidence has there been against the hypothesis that differences in average IQ among ethnic groups are at least partly genetic? Am I missing something? And what about nuclear winter? From a glance at the Wikipedia article I can't see such big differences between 21st-century predictions and 20th-century ones as to call the latter “completely wrong”.
We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ[,]
The science was never wrong in this case. Stephen Jay Gould is certainly a scientist, but differential psychology and psychometrics are not his areas of scientific expertise. Jensen's views today are essentially what they were 40 years ago, and among the relevant community of experts they have remained relatively uncontroversial throughout this period.
I really liked Robin's point that mainstream scientists are usually right, while contrarians are usually wrong. We don't need to get into the details of the dispute - and usually we cannot make an informed judgment without spending too much time anyway - just figuring out who's "mainstream" lets us know who's right with high probability. It's a type of thinking related to reference class forecasting - find a reference class of similar situations with known outcomes, and we get a pretty decent probability distribution over possible outcomes.
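The forecasting step itself is almost trivially simple in code - all the difficulty hides in choosing the class, as below. A minimal sketch, with a track record invented purely for illustration:

```python
from collections import Counter

def outcome_distribution(outcomes):
    """Empirical probability distribution over observed outcomes."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

# Invented track record for a "mainstream vs. contrarians" class.
past_disputes = ["mainstream_right"] * 18 + ["contrarian_right"] * 2
print(outcome_distribution(past_disputes))
# {'mainstream_right': 0.9, 'contrarian_right': 0.1}
```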
Unfortunately deciding what's the proper reference class is not straightforward, and can be a point of contention. If you put climate change scientists in the reference class of "mainstream science", it gives great credence to their findings. People who doubt them can be freely disbelieved, and any arguments can be dismissed by low success rate of contrarianism against mainstream science.
But, if you put climate change scientists in the reference class of "highly politicized science", then the chance of them being completely wrong becomes orders of magnitude higher. We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics. The chances of the mainstream being right and of the contrarians being right are not too dissimilar in such cases.
Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matthew Simmons (peak oil) for some examples, both treated extremely seriously by the mainstream media at the time. So far, in spite of countless cases of science predicting doom and gloom, not a single one has turned out to be true - and usually not just barely wrong, in a way that could be discounted by the anthropic principle, but spectacularly so. Cornucopians were virtually always right.
It's also possible to use multiple reference classes - to view the impact on climate according to the "highly politicized science" reference class, and the impact on human well-being according to the "science-y Doomsday predictors" reference class, which is more or less how I think about it.
I'm sure if you thought hard enough, you could come up with other plausible reference classes, each leading to any conclusion you desire. I don't see how one of these reference class reasonings is obviously more valid than the others, nor do I see any clear criteria for choosing the right reference class. It seems as subjective as Bayesian priors, except we know in advance that we won't have the evidence necessary for our views to converge.
The problem only goes away if you agree on reference classes in advance, as you reasonably can with the original application of forecasting the costs of public projects. Does this kill reference class forecasting as a general technique, or is there a way to save it?