
Contrarianism and reference class forecasting

Post author: taw | 25 November 2009 07:41PM

I really liked Robin's point that mainstream scientists are usually right, while contrarians are usually wrong. We don't need to get into the details of the dispute - and usually we cannot really make an informed judgment without spending too much time anyway - just figuring out who's "mainstream" tells us who's right with high probability. It's a type of thinking related to reference class forecasting - find a reference class of similar situations with known outcomes, and you get a pretty decent probability distribution over possible outcomes.
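As a toy illustration of the underlying mechanic (the numbers are invented, not taken from any real survey of disputes): once you have committed to a reference class, the forecast is just the empirical distribution of outcomes within it.

```python
from collections import Counter

def forecast(outcomes):
    # Empirical probability distribution over the known outcomes
    # of the chosen reference class.
    counts = Counter(outcomes)
    total = len(outcomes)
    return {outcome: n / total for outcome, n in counts.items()}

# Hypothetical reference class: 50 past disputes between the mainstream
# and contrarians, with 47 resolved in favor of the mainstream.
reference_class = ["mainstream"] * 47 + ["contrarian"] * 3

print(forecast(reference_class))
# {'mainstream': 0.94, 'contrarian': 0.06}
```

The entire dispute in this post is about which list of past cases to feed in; the calculation itself is trivial.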

Unfortunately, deciding on the proper reference class is not straightforward, and can be a point of contention. If you put climate change scientists in the reference class of "mainstream science", that gives great credence to their findings. People who doubt them can be freely disbelieved, and any of their arguments can be dismissed by pointing to the low success rate of contrarianism against mainstream science.

But, if you put climate change scientists in the reference class of "highly politicized science", then the chance of them being completely wrong becomes orders of magnitude higher. We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics. The chances of the mainstream being right and of the contrarians being right are not too dissimilar in such cases.

Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by the mainstream media at the time. So far, in spite of countless cases of science predicting doom and gloom, not a single such prediction has turned out to be true - and usually not by a margin small enough to be discounted by the anthropic principle, but spectacularly so. Cornucopians were virtually always right.

It's also possible to use multiple reference classes - to view the impact on climate according to the "highly politicized science" reference class, and the impact on human well-being according to the "science-y Doomsday predictors" reference class, which is more or less how I think about it.

I'm sure if you thought hard enough, you could come up with other plausible reference classes, each leading to any conclusion you desire. I don't see how any one of these reference class arguments is obviously more valid than the others, nor do I see any clear criteria for choosing the right reference class. It seems as subjective as Bayesian priors, except that we know in advance we won't have the evidence necessary for our views to converge.

The problem only goes away if you can agree on reference classes in advance, as you reasonably can with the original application of forecasting the costs of public projects. Does this kill reference class forecasting as a general technique, or is there a way to save it?

Comments (90)

Comment author: Eliezer_Yudkowsky 27 November 2009 12:31:42AM 16 points [-]

'Tis remarkable how many disputes between would-be rationalists end in a game of reference class tennis. I suspect this is because our beliefs are partially driven by "intuition" (i.e. subcognitive black boxes giving us advice) (not that there's anything wrong with that), and when it comes time to try and share our intuition with other minds, we try to point to cases that "look similar", or the examples whereby our brain learned to pattern-recognize and judge "that sort" of case.

My own cached rule for such cases is to try and look inside the thing itself, rather than comparing it to other things - to drop into causal analysis, rather than trying to hit the ball back into your own preferred concept boundary of similar things. Focus on the object level, rather than the meta; and try to argue less by similarity, for the universe itself is not driven by Similarity and Contagion, after all.

Comment author: whpearson 27 November 2009 01:16:37AM 3 points [-]

How should we unpack black boxes we don't have yet? For example, a non-neural, language-capable, self-maintaining, goal-oriented system*.

We have a surfeit of potential systems (with different capabilities of self-inspection and self-modification) with no way to test whether they will fall into the above category or how big the category actually is.

*I'm trying to unpack AGI here somewhat

Comment author: CronoDAS 27 November 2009 01:33:12AM *  4 points [-]

Sometimes "looking at the thing itself" is too costly or too difficult. How can the proverbial "bright sixteen-year-old" sitting in a high school classroom figure out the truth about, say, the number of protons in an atom of gold, without having to accept the authority of his textbooks and instructors? If there were a bunch of well-funded nutcases dedicated to arguing that gold atoms have seventy-eight protons instead of seventy-nine, the only way you can really judge who's correct is to judge the relative credibility of the people presenting the evidence. After all, one side's evidence could be completely fraudulent and you'd have no way of knowing that.

Far too often, reference classes and meta-level discussions are all we have.

Comment author: Eliezer_Yudkowsky 27 November 2009 01:46:26AM 6 points [-]

Then let us try to figure out whose authority is to be trusted about experimental results, and work from there. Cases where you can reduce the dispute to a direct conflict about easily observable facts are much more likely to have one dramatically more trustworthy party.

Comment author: taw 27 November 2009 10:17:53AM 2 points [-]

I estimate that even fairly bad reference class / outside view analysis is still far more reliable than the best inside view that can be realistically expected. People are just spectacularly bad at inside view analysis, and reference class analysis puts hard boundaries within which truth is almost always found.

Comment author: Eliezer_Yudkowsky 27 November 2009 11:56:40AM 0 points [-]
Comment author: taw 27 November 2009 05:54:14PM 3 points [-]

Is there any evidence that in cases where neither "outside view" nor "strong inside view" can be applied, "weak inside view" is at least considerably better than pure chance? I have strong doubts about it.

Comment author: RobinHanson 29 November 2009 01:26:18AM 6 points [-]

Yes, it would be good to have a clearer data set of topics and dates, the views suggested by different styles of analysis, and what we think now about who was right. I'm pretty skeptical about this weak inside view claim, but will defer to some more systematic data. Of course, that is my suggesting we take an outside view in evaluating this claim about which view is more reliable.

Comment author: RobinZ 27 November 2009 03:17:10PM *  5 points [-]

If I may attempt to summarize the link: Eliezer maintains that, while the quantitative inside view is likely to fail in cases where the underlying causes are not understood or planning biases are likely to be in effect, the outside view cannot be expected to work when the underlying causes undergo sufficiently severe alterations. Rather, he proposes what he calls the "weak inside view" - an analysis of underlying causes noting the most extreme of changes and stating qualitatively their consequences.

Comment author: RobinHanson 25 November 2009 11:09:36PM 11 points [-]

A great project: collect a history of topics and code each of them for these various features, including what we think today about who was right. Then do a full stat analysis to see which of these proposed heuristics is actually supported by the data.

Comment author: Bongo 26 November 2009 09:01:09AM *  9 points [-]

I think taw's problem is just a case of the more general and simple problem of what kind of similarity is required for induction.

And it's unwise to use political issues as case studies of unsolved philosophical problems.

Comment author: cousin_it 26 November 2009 10:59:28AM *  10 points [-]

I think you're completely right, this is a special case of the problem of induction. The Stanford Encyclopedia of Philosophy has a wonderfully exhaustive article about it that also discusses subjective Bayesianism at length. Among other things, that article offers a simple recommendation for taw's original problem: intersect your proposed reference classes to get a smaller and more relevant reference class.

Comment author: MichaelVassar 26 November 2009 04:45:47PM 0 points [-]

Agreed with the first part and with the heuristic, but taw is using the possibility of politicization as an element of reference class membership. Honestly, I wouldn't even consider global warming to be a "political issue". The science seems completely trivial to understand at the object level.

Comment author: DanArmak 26 November 2009 05:13:48PM *  5 points [-]

The logic used and the predictions made are trivial. But the underlying facts and observations have been (politically, I presume) called into question. For instance in the recent CRU possibly-scandal, see Eric Raymond saying CRU published fake data and Willis Eschenbach describing how the CRU illegally denied FOIA requests for their weather data and even threatened to destroy them to prevent others from trying to replicate their studies.

Because this issue is so heavily politicized, I for one have no clear idea of the real extent of GW danger.

Comment author: Vladimir_Nesov 27 November 2009 06:16:31PM 4 points [-]

Honestly, I wouldn't even consider global warming to be a "political issue". The science seems completely trivial to understand at the object level.

I'd be shocked if it is.

Comment author: taw 26 November 2009 04:23:31PM -2 points [-]

Not really; the induction problems philosophers talk about are pure theory, and totally irrelevant to daily life. Everybody knows blue/green are correct categories, while grue/bleen are not.

Figuring out the proper reference class, on the other hand, is a serious problem of applied rationality.

Comment author: Toby_Ord 26 November 2009 05:20:57PM *  6 points [-]

Everybody knows blue/green are correct categories, while grue/bleen are not.

Philosophers invented grue/bleen precisely to be obviously incorrect categories that are nevertheless difficult to formally separate from the intuitively correct ones. There are of course less obvious cases, but elucidating the problem required a particularly clear example.

Comment author: dclayh 26 November 2009 06:39:46PM 2 points [-]

I don't know about "bleen", but "grue" is perfectly sensible as the category of "things that may eat you if you venture around Zork without a light".

Comment author: Bongo 26 November 2009 08:39:20AM 6 points [-]

You encounter a bear. On the one hand, it's in the land mammals reference class, most of whom are not dangerous. On the other hand, it's in the carnivorous predators reference class, most of whom are.

Is the bear dangerous? I'm sure if you thought hard enough, you could come up with other plausible reference classes, each leading to any conclusion you desire.

Comment author: taw 27 November 2009 10:35:42AM 3 points [-]

I would estimate that the vast majority of carnivorous predators are tiny insects and such, so the class is even less dangerous than the land mammals class. ;-) On the other hand, the class of "animals bigger than me" tends to be quite dangerous.

Comment author: DanArmak 27 November 2009 11:51:08AM 3 points [-]

"Animals bigger than me" are dangerous once you've encountered them up close, but normally there's no reason to do so unless you're hunting them. The total life risk of "being hurt by a carnivore" is much greater than the total life risk of "being hurt by an animal bigger than me".

This is true both today and in prehistoric environments: most of the predators who tend to tangle with humans aren't much bigger than us - snakes and leopards, mostly. OTOH, predators who are much bigger than humans don't routinely hunt humans (tigers, lions). (Although tigers may have done so long ago??? I don't really know.)

Comment author: gwern 05 December 2009 05:38:08PM *  4 points [-]

Hippopotamuses are the most dangerous mammals in Africa, and they are much bigger than humans.

Note that its closest competitor is the Cape Buffalo. Also bigger than humans.

Comment author: gwern 06 December 2009 12:28:16AM 4 points [-]

To downvoters: It is customary to explain unobvious downvotes. I've just demonstrated with multiple references that both of the top human killers on the second most populated continent in the world are larger than humans, and they are herbivores to boot. This would seem, to me anyway, to argue pretty decisively against Armak's theory that carnivores are more dangerous than large animals.

Comment author: Alicorn 06 December 2009 01:02:46AM 0 points [-]

I didn't downvote you, but the example didn't seem to contradict the claim, which was:

The total life risk of "being hurt by a carnivore" is much greater than the total life risk of "being hurt by an animal bigger than me".

Being hurt =/= being killed. Even in Africa, I'm sure people get scratched by housecats or bitten by dogs sometimes, and I don't think so many people are attacked (fatally or no) by hippos that hippos are more likely to hurt any given person than small carnivores. (Heck, if we count mosquitoes...) DanArmak's point seems to be that large animals are mostly avoidable if you want to avoid them. Small carnivores are not necessarily as easy to avoid.

Comment author: gwern 06 December 2009 01:25:59AM 3 points [-]

Literally read, 'hurt' doesn't mean being killed. But look at the examples Dan was using: tigers, snakes, leopards, lions. Is it unreasonable to infer that he was really talking about mortal dangers & hurts?

Comment author: DanArmak 09 December 2009 06:35:02PM 0 points [-]

Good point. I couldn't find any statistics on human deaths or injuries by animal type in a minute's search, and I don't have time to spare right now. But I agree that my hypothesis needs to be fact checked. (Just two animal examples, hippos and buffalos, in a single continent in a couple of decades don't make a theory. And all four of your links don't refer to any actual data, they just state that hippos are the most dangerous.)

Comment author: RobinZ 25 November 2009 08:48:40PM *  4 points [-]

I am confused by your inclusion of nuclear winter in the list of failed scientific predictions.

Comment author: taw 25 November 2009 09:45:11PM 10 points [-]

As far as I understand the history of this claim, back during the Cold War it was common to predict that even a small-scale nuclear exchange would send the world into a long-term Ice Age, due to widespread urban fires. Similar predictions were even made about the Kuwait oil well fires in 1991 (a good test case, as the effect was supposedly driven not by the nuclear explosions as such, but by the resulting fires).

It turns out, from more recent models and actual data from the Gulf War, that the actual magnitude of cooling is orders of magnitude smaller than what was predicted, and there was never any genuine research that really suggested the levels that were widely claimed; the most straightforward explanation is that people opposed to nuclear weapons wanted to exaggerate their effects to scare people off. It might have been a calculated lie, or something they genuinely wanted to believe - the point is that politicized science is not very accurate even if you agree with its political goals.

Comment author: CarlShulman 24 February 2012 03:17:52AM 2 points [-]

It turns out, from more recent models and actual data from the Gulf War, that the actual magnitude of cooling is orders of magnitude smaller than what was predicted,

Wikipedia and some searching didn't show these models. Do you have citations?

Comment author: RobinZ 25 November 2009 11:50:10PM 2 points [-]

I see. Thank you.

Comment author: toto 26 November 2009 02:55:24PM *  11 points [-]

Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by the mainstream media at the time.

I think you are unduly conflating mainstream media with mainstream science. Most people do. Unless they're the actual scientists having their claims distorted, misrepresented, and sensationalised by the media.

This says it all.

When has there been a consensus in the established scientific literature about either certitude of catastrophic overpopulation, or imminent turnaround in oil production?

We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics.

Hm. Apparently you also have non-conventional definitions of "overwhelming" and "completely wrong".

Comment author: Zachary_Kurtz 25 November 2009 08:46:00PM 4 points [-]

Isn't Robin Hanson a contrarian economist? Or does he not include economists in that?

Comment author: SilasBarta 25 November 2009 08:04:53PM *  4 points [-]

Can't you just put the situation in all reference classes where you think it fits and multiply your prior by the Bayes factor for each? Then, of course, you would have to discount for the correlation between the reference classes. That is, if there were two reference classes, you couldn't use the full factors if membership in one were already evidence of membership in the other.
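A minimal sketch of the naive version of this (ignoring the correlation discount, and with invented numbers): convert the prior to odds, multiply by each class's Bayes factor as if the classes were independent evidence, and convert back.

```python
def posterior_prob(prior_prob, bayes_factors):
    # Naive combination: treats each reference class as independent
    # evidence, which overcounts whenever the classes are correlated.
    odds = prior_prob / (1 - prior_prob)
    for bf in bayes_factors:
        odds *= bf
    return odds / (1 + odds)

# Hypothetical numbers: 0.5 prior that the claim is right;
# membership in "mainstream science" gives a 10:1 factor in favor,
# membership in "highly politicized science" a 2:1 factor against.
print(posterior_prob(0.5, [10, 0.5]))  # 5:1 odds, i.e. ~0.833
```

The correlation discount would shrink the later factors toward 1, since overlapping classes partly report the same evidence twice.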

Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. ...

This is just the Doomsday Problem, which has been discussed ad infinitum. Yes, most of the time, people will be right, but if population grows exponentially and at some point everyone dies, then most people will be wrong. Which counts more?

Comment author: taw 25 November 2009 08:40:10PM 2 points [-]

That's not the Doomsday I was talking about, just predictions of massive suffering due to one cause or another, be it overpopulation, nuclear war, biological war, food shortages, water shortages, oil shortages, phosphorus shortages, guano shortages, whale oil shortages, rare earth metal shortages, shortages of virtually every commodity, flu pandemic, AIDS pandemic, mass scale terrorism, workers' revolutions, Barbarian takeover, Catholic takeover, Communist takeover, Islamic takeover, or whatnot, just to name a few.

Pretty much none of them caused the massive suffering and collapse of civilization predicted. Most do not involve the end of humanity, so invocations of the anthropic principle are misguided.

Comment author: DanArmak 25 November 2009 11:36:43PM 2 points [-]

Barbarian takeover

Well, there was the (drawn out) fall of the Western Roman Empire. It was quite a collapse of civilization, with a lot of death and suffering.

Comment author: taw 26 November 2009 12:14:08AM 1 point [-]

Stories of collapse of the Roman Empire are greatly exaggerated.

A more accurate description would be that the center of Roman civilization shifted from Italy to the Eastern Mediterranean long before that (Wikipedia says that the population of Rome fell from almost a million to a mere 30 thousand in Late Antiquity, making it really just a minor town before the Barbarians moved in).

Yes, the Western peripheries of the Empire were lost to Barbarians (who became increasingly civilized in the process), and the southern peripheries to Arabs (who also became increasingly civilized in the process). In neither case did civilization really collapse, and most importantly, at least until the battle of Manzikert in 1071 the central parts of the Roman (Byzantine) Empire were doing just fine.

Comment author: DanArmak 26 November 2009 01:20:00PM 2 points [-]

A more accurate description would be that the center of Roman civilization shifted from Italy to the Eastern Mediterranean long before that (Wikipedia says that the population of Rome fell from almost a million to a mere 30 thousand in Late Antiquity, making it really just a minor town before the Barbarians moved in).

Roman civilization had had several major centers. The ones in the West gradually ceased to exist. That's the only sense in which the center of civilization "shifted". Some wealthy citizens of the city of Rome may have fled east, but the vast majority of the population of the western empire (Italy, Gaul, Iberia, Britain, Africa, not to mention the western Balkans and adjacent areas which were also conquered by barbarians in the 4th century) were agricultural and could flee only if they left all their possessions behind.

IOW, the fall of population by 60-80% in these areas during the 4th and 5th centuries wasn't accomplished by emigration. (Not to mention the immigration of barbarians.)

As for the city of Rome, it was sacked by barbarians in the years 410 and 455. WP suggests that its population declined from several hundred thousand to 80,000 during approximately the fifth century, but this is unsourced and I would like better information. At any rate, at the time of the 410 sack the population was already far below its 2nd century peak of 2 million. By the 4th century the emperors didn't live there anymore (some of the 5th century ones apparently did though), so its decline started before the invasions. Still, it was much more than a "minor town" in 410, containing many riches to plunder and rich and noble people to hold for ransom.

All in all, the Roman Empire did collapse. In ~400 the Western parts of the empire existed as they had for >200 years. By 450 it was effectively restricted to Italy and parts of southern Gaul, and in 476 it was officially terminated by the death of the last Western Emperor.

Compare this map of the entire empire in 117 (not much different than in 400). That's a loss, inside 60 years, of all of Europe west of the Balkans (including Italy), and all of Africa west of Egypt (the province of Africa, around Carthage, had been a major source of agricultural produce).

The Eastern empire did reconquer some of the West in mid 6th century. It lost half of that again by 600, and most of the other half by 650. In any case its rule there wasn't very much like the original Roman system in terms of culture (the barbarians were the local rulers) or economics, taxes and representation.

at least until the battle of Manzikert in 1071 the central parts of the Roman (Byzantine) Empire were doing just fine.

Those central parts were on the order of one-twentieth the area ruled by Romans pre-collapse, and many of them were to the east of the original Empire. Just because they preserved unbroken political succession and the name of Romans doesn't mean we should identify them with the original Empire.

Comment author: DanArmak 26 November 2009 06:37:00PM 0 points [-]

In ~400 the Western parts of the empire existed as they had for >200 years. By 450 it was effectively restricted to Italy and parts of southern Gaul, and in 476 it was officially terminated by the death of the last Western Emperor.

Not quite accurate; in 376 a big bunch of barbarians half-forced, half-negotiated their way into the Empire, became disloyal subjects, and subsequently pillaged the Balkans and defeated and killed an (Eastern) emperor and his army. So it's better to say that the Western Empire declined almost entirely during the 100 years 376-476. (Politically, militarily, and at the level of local rule this is true. Culturally the collapse did take longer in some places.)

Comment author: cabalamat 29 November 2009 12:51:15PM 0 points [-]

Culturally the collapse did take longer in some places

I'd argue that culturally the Roman Empire didn't end: today 200 million Europeans (and even more outside Europe) speak languages descended from Latin; to a first approximation, all writing is in the Roman script; and the Roman Catholic Church is the largest religion across areas and populations much greater than ancient Rome's.

Oh and that last paragraph included c.15 words derived from Latin.

Comment author: DanArmak 29 November 2009 06:01:16PM 3 points [-]

A few small, scattered, out of context, highly mutated facets of Roman culture have survived here and there. None of these, except Christianity, were among those most important to Romans, or those they saw as primarily distinguishing them from other cultures.

And RC Christianity, apart from the name, is vastly different today than in 500 CE (and both are vastly different from RC Christianity in, say, 1300 CE). A modern Catholic would certainly be considered a sinner and a heretic many times over in 500 CE, and probably vice versa as well (I haven't checked).

Incidentally, we are corresponding in a language that has much more in common with old Germanic tongues than with Latin, but it doesn't follow that we retain any of their culture. And here in Israel I talk and write a Hebrew which is quite similar to late Roman-era Hebrew - certainly more so than English is to German or Latin - and Orthodox Jews are the biggest religious segment in the country, but it doesn't follow that we (the non-religious people) have anything in common with ancient Jewish culture. (Consider that the vast majority of Europeans don't strictly follow RC rules either.)

Comment author: SilasBarta 25 November 2009 08:57:03PM 2 points [-]

Nevertheless, if most everyone in the world is affected by such a disaster, then a large fraction of people will be right, so the point still applies.

Comment author: DanArmak 25 November 2009 11:38:01PM 2 points [-]

On the other hand, if many disasters are predicted and (at most) one actually happens, then averaging over separate predictions or scenarios (instead of over people), we should expect any one scenario to be very improbable.

Comment author: SilasBarta 26 November 2009 04:37:38AM *  2 points [-]

Why does that measure matter? You care about the risk of any existential threat. The fact that it happened by grey goo rather than Friendliness failure is little consolation.

Comment author: DanArmak 26 November 2009 01:21:30PM 1 point [-]

It may matter because, if many scenarios have costly solutions that are very specific and don't help at all with other scenarios, and you can only afford to build a few solutions, you don't know which ones to choose.

Comment author: SilasBarta 26 November 2009 01:59:08PM 2 points [-]

Yes, I know that reasons exist to distinguish them, but I was asking for a reason relevant to the present discussion, which was discussing how to assess total existential risk.

Comment author: DanArmak 26 November 2009 04:02:51PM 1 point [-]

Well, it has to do more with the original discussion. If you're going to discount doomsday scenarios by putting them in appropriate reference classes and so forth, then either you automatically discount all predictions of collapse (which seems dangerous and foolish); or you have to explain very well indeed why you're treating one scenario a bit seriously after dismissing ten others out of hand.

Comment author: SilasBarta 26 November 2009 04:18:17PM 1 point [-]

The original discussion was on this point:

Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by the mainstream media at the time. So far, in spite of countless cases of science predicting doom and gloom, not a single such prediction has turned out to be true - and usually not by a margin small enough to be discounted by the anthropic principle, but spectacularly so. Cornucopians were virtually always right.

taw was saying that you should discount existential risk as such because it (the entire class of scenarios) is historically wrong. So it is the existential risk across all scenarios that was relevant.

We'd see the exact same type of evidence today if a doomsday (of any kind) were coming, so this kind of evidence is not sufficient.

Comment author: taw 27 November 2009 10:48:00AM -1 points [-]

I thought I addressed this with "usually not just barely enough to be discounted by anthropic principle, but spectacularly so" part. Anthropic principle style of reasoning can only be applied to disasters that have binary distributions - wipe out every observer in the universe (or at least on Earth), or don't happen at all - or at least extremely skewed power law distributions.

I don't see any evidence that most disasters would follow such a distribution. I expect any non-negligible chance of the destruction of humanity by nuclear warfare to imply a near-certainty of limited-scale nuclear warfare, with millions dying every couple of years.

I think anthropic principle reasoning is so overused here, and so sloppily, that we'd be better off throwing it away completely.

Comment author: wedrifid 26 November 2009 02:03:40PM 0 points [-]

It may matter because, if many scenarios have costly solutions that are very specific and don't help at all with other scenarios, and you can only afford to build a few solutions, you don't know which ones to choose.

This is a good point. Fortunately as it happens we can just create an FAI and pray unto Him to 'deliver us from evil'.

Comment author: timtyler 25 November 2009 10:05:48PM 2 points [-]

A relative absence of smaller disasters must count as evidence against the views of those predicting large disasters. There are some disasters which are all-or-nothing - but most disasters are variable in scale. We have had some wars and pandemics - but civilization has mostly brushed them off so far.

Comment author: SilasBarta 26 November 2009 04:45:23AM 2 points [-]

What absence of smaller disasters? Why don't the brushes with nuclear war and other things you mention count?

Also, civilizations have fallen. Not in the sense of their genes dying out [1] but in the sense of losing the technological level they previously had. The end of the Islamic Golden Age, China after the 15th century, the fall of Rome (I remember reading one statistic that said that the glass production peak during the Roman empire wasn't surpassed until the 19th century, and it wasn't from lack of demand.)

[1] unless you count the Neanderthals, which were probably more intelligent than h. sapiens. And all the other species in genus homo.

Comment author: wedrifid 26 November 2009 04:49:08AM 2 points [-]

unless you count the Neanderthals, which were probably more intelligent than h. sapiens. And all the other species in genus homo.

Really? Can you give more detail (or a link) please?

Comment author: SilasBarta 26 November 2009 02:11:30PM 3 points [-]

Here's a summary of the different species in homo -- note the brain volumes. (I didn't mean to say all were intelligent, just that they were all near-human and went extinct, but the Neanderthals were likely more intelligent.)

And here:

Neanderthals, according to Jordan (2001), appear to have had psychological traits that worked well in their early history but finally placed them at a long-term disadvantage with regards to modern humans. Jordan is of the opinion that the Neanderthal mind was sufficiently different from that of Homo sapiens to have been "alien" in the sense of thinking differently from that of modern humans, despite the obvious fact that Neanderthals were highly intelligent, with a brain as large or larger than our own. This theory is supported by what Neanderthals possessed, and just as importantly, by what they lacked, in cultural attributes and manufactured artifacts.

Comment author: DanArmak 26 November 2009 03:58:37PM 1 point [-]

The WP table you link to gives these cranial volume ranges: H. sapiens, 1000-1850. H. neanderthalensis, 1200-1900.

Given the size of the ranges and > 70% overlap, the difference between 1850 and 1900 at the upper end doesn't seem necessarily significant. Besides, brain size correlates strongly with body size, and Neanderthals were more massive, weren't they?

More importantly, if the contemporary variation for H. sapiens (i.e. us) is all or most of that huge range (1000-1850 cc), do we know how it correlates with various measures of intellectual and other capabilities? Especially if you throw away the upper and lower 10% of variation.

Comment author: SilasBarta 26 November 2009 04:11:35PM 1 point [-]

It wasn't just the brain size, but the greater technological and cultural achievements that are evidenced in their remains, which are listed and cited in the articles.

Comment author: DanArmak 26 November 2009 04:41:34PM *  1 point [-]

By greater do you mean greater than those of H. sapiens who lived at the same time? AFAICS, the Wikipedia articles seem to state the opposite: that Neanderthals, late ones at least, were technologically and culturally inferior to H. sapiens of the same time.

The paragraph right after the one you quoted from your second link states:

There once was a time when both human types shared essentially the same Mousterian tool kit and neither human type had a definite competitive advantage, as evidenced by the shifting Homo sapiens/Neanderthal borderland in the Middle East. But finally Homo sapiens started to attain behavioural or cultural adaptations that allowed "moderns" an advantage.

The following paragraphs (through to the end of that section of the article) detail tools and cultural or social innovations that were (by conjecture) exclusive to H. sapiens. There are no specific things listed that were exclusive to Neanderthals. What "greater achievements" do you refer to?

Also, I see no basis (at least in the WP article) for "the obvious fact that Neanderthals were highly intelligent", except for brain size which is hardly conclusive. Why can't we conclude that they were considerably less intelligent than their contemporary H. sapiens?

Comment author: Douglas_Knight 26 November 2009 05:04:03PM 0 points [-]

More importantly, if the contemporary variation for H. sapiens (i.e. us) is all or most of that huge range (1000-1850 cc), do we know how it correlates with various measures of intellectual and other capabilities?

.2

Comment author: DanArmak 26 November 2009 05:17:53PM 1 point [-]

Can you expand please? Exactly what measurement is correlated with cranial capacity at .2?

Comment author: timtyler 26 November 2009 09:54:08AM -1 points [-]

This is still civilisation's very first attempt, really. I did acknowledge the existence of wars and pandemics. However, disasters that never happened (such as nuclear war) are challenging to accurately assess the probability of.

Comment author: Stuart_Armstrong 26 November 2009 09:26:30PM 6 points [-]

Your examples of "highly politicised science" are very one-sided (consider autism-vaccines, GM crops, stem cell research, water fluoridation, evolution), which I suppose reinforces your point.

In your set-up, some reference classes correspond to systematic biases, and some to increased/decreased variance: they don't all change your probability distribution in the same way.

For example: it takes extreme levels of arrogance to conclude, in ignorance, that most scientists are incorrect on the area of their speciality. By this argument, you should place your own estimate of the most probable future course of the climate close to the scientific consensus (and the scientific consensus on global warming is pretty consistent). However, the science has been very politicised, so alternative theories may have been undermined; therefore your estimate can accept a large variance. "I agree with the scientific consensus, but tentatively", in other words.

Other example: doomsday-ee predictions. Overpopulation, peak oil, overfishing: in each case, the science in the prediction was pretty right. Population went up; there would have been peak oil with the methods of extraction of the time; fish stocks were destroyed. What was wrong was underestimating future progress and the ability of people to adapt to new situations. So scientific doomsday-ee predictions are a reference class with a special status: correct on their merits, incorrect on their supposed consequences.

So you've not only to figure out which reference classes your issue belongs to, but also the type of that reference class, and its complete effect on your estimates.

I'd say reference class reasoning can be saved, but it's more of an art than an easily shared rational tool. If used honestly, you can get good information - but it's easy to abuse, and very unlikely to be convincing to others.

Comment author: taw 27 November 2009 10:30:02AM 8 points [-]

Other example: doomsday-ee predictions. Overpopulation, peak oil, overfishing: in each case, the science in the prediction was pretty right.

This is not what they were about. What they predicted was massive suffering in each case. Overpopulation doomsdayers predicted food and resource shortages, wars for land and water, and such; peak oilers predicted total collapse of the economy, death of over half of humanity, and such. Other than for their supposedly massive consequences, peak oil is about as interesting as peak typewriters - that is, not at all, unless you work in the oil or typewriter industry.

By the way, the predictions about the underlying processes were false in all three cases you mention - population growth has been sublinear for quite some time, peak oil reliably fails to occur on any of the predicted dates, and total fish production is increasing via aquaculture - or true in only the most restricted way, far more restricted than what was claimed - population did increase at all, old oil fields are depleting, wild fish production is not increasing - but this is irrelevant: the core of doomsdayer predictions is the doom part, which almost invariably doesn't happen.

Comment author: Stuart_Armstrong 30 November 2009 02:42:55PM 0 points [-]

or true in only the most restricted way, far more restricted than what was claimed

That's exactly my position. Doomsday predictions are combinations of reasonable science and unwarranted conclusions. They're like the mirror image of homeopathy, which has wild craziness leading to a partially correct conclusion: "take this pill, and you'll feel better".

Comment author: MichaelVassar 26 November 2009 04:44:06PM 2 points [-]

You can always look at the argument at an object level carefully enough to figure out which components fit into each category. That's not too difficult.

Also, the cornucopians haven't been right either, IMHO, for the last 40 years. Rather, the last 40 years have been the age of "things will stay just the same as they are today" being a much better predictor than cornucopian or doomsday predictions, at least for people unlike us, for whom the internet doesn't count as much of a cornucopia.

Comment author: taw 27 November 2009 10:33:55AM 5 points [-]

"things will stay just the same as they are today" would be a horrible horrible predictor for the last 40 years.

Check Gapminder 1969 to 2009 for the poorest end and see how drastic and cornucopian the changes were for most of them. For the rich end, the Internet, mobiles, and other wonderful technologies count very much as cornucopia.

Comment author: rortian 26 November 2009 03:22:21AM -1 points [-]

Just so we are clear: What do you think about climate science?

It is important to remember that most of its work was before it was political. Just because energy (mainly coal and oil) companies don't like the policy implications of climate science and are willing to pay lots of people to speak ill of it, shouldn't make it a politicized science. Indeed this would place evolutionary biology into the highly politicized science category.

Allowing a subject's ideological enemies to have a say in its status without having hard evidence is not rational at all.

Comment author: Jayson_Virissimo 28 November 2009 12:48:19PM 3 points [-]

"Just because energy (mainly coal and oil) companies don't like the policy implications of climate science and are willing to pay lots of people to speak ill of it, shouldn't make it a politicized science."

It seems as though energy companies have an incentive to downplay science that provides justification for limiting CO2, but don't scientists with government funding have an incentive to play up science that provides justification for an increase in government power? How could we find out the magnitude of these effects without actually understanding the research ourselves?

Comment author: taw 26 November 2009 06:03:52AM 3 points [-]

You just confirm my point. The very fact that you use phrases like "policy implications of climate science", and "subject's ideological enemies" shows it's a highly politicized field. You wouldn't say "policy implications of quantum physics" or "chemistry's ideological enemies".

In case you didn't follow Climategate, it looks like scientists from the University of East Anglia engaged in politics a lot, including dirty politics; they were nothing like neutral scientists merely seeking the truth and letting others deal with policy. You may find their actions warranted due to some greater good, or not, but it's not normal scientific practice, and I'd be willing to bet at pretty high odds that you would not find anything like that in any evolutionary biology department.

That doesn't mean their findings are wrong. There are plenty of highly politicized issues where the mainstream is right or mostly right, but this rate is significantly lower than for non-politicized science. For example mainstream accounts of histories of most nations tend to be strongly whitewashed, as it's politically convenient. They are mostly accurate when it comes to events, but you can be fairly sure there are some systemic distortions. That's the reference class in which I put climate science - most likely right on main points, most likely with significant distortions, and with non-negligible chance of being entirely wrong.

On the other hand, the moment climate scientists switch from talking about climate to talking about policy or the impact of climate change on human well-being, I estimate that they're almost certainly wrong. There is no reference class I can think of which suggests otherwise, and the closest reference class of Doomsday predictors has just this kind of track record.

If you want some more, I did blog a bit about climate change recently: 1, 2, 3.

Comment author: SilasBarta 26 November 2009 08:44:20PM 1 point [-]

I replied to your point about evolutionary biology here.

Comment author: [deleted] 29 December 2012 03:19:26PM 0 points [-]

We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics.

What overwhelming evidence has there been against the hypothesis that differences in average IQ among ethnic groups are at least partly genetic? Am I missing something? And what about nuclear winter? From a glance at the Wikipedia article I can't see such big differences between 21st-century predictions and 20th-century ones as to call the latter “completely wrong”.

Comment author: Pablo_Stafforini 26 November 2009 09:54:46PM *  0 points [-]

We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ[,]

The science was never wrong in this case. Stephen Jay Gould is certainly a scientist, but differential psychology and psychometrics are not his areas of scientific expertise. Jensen's views today are essentially what they were 40 years ago, and among the relevant community of experts they have remained relatively uncontroversial throughout this period.

Comment author: CarlShulman 28 November 2009 07:16:59AM *  2 points [-]

What are you basing this claim of uncontroversial status on?

http://en.wikipedia.org/wiki/Snyderman_and_Rothman_(study)

Surveys of psychometricians and other relevant psychologists have never shown a consensus in support of Jensen's views on group differences. At most, a more moderate version of his position (some genetic component, perhaps small) has held plurality or bare majority support in the past (but might not anymore, in light of work such as Flynn's) while remaining controversial.

Comment author: Pablo_Stafforini 29 November 2009 04:33:18PM *  3 points [-]

Hi Carl,

I claimed that Jensen's views are relatively uncontroversial, not that they are entirely so. In making that claim, I wasn't thinking only of Jensen's views about the genetic component of the Black-White gap in IQ scores, but also about his views on the existence of such a gap and on the degree to which such scores measure genuine differences in intellectual ability. Perhaps it was confusing on my part to use Jensen's name to refer to the cluster of views I had in mind. The point I wished to make was that the various views about race and IQ that taw might have had in mind in writing the sentence quoted above are not significantly more controversial today than they were in the past, and are shared by a sizeable portion of the relevant community of experts. As Snyderman and Rothman write (quoted by Gottfredson, p. 54),

On the whole, scholars with any expertise in the area of intelligence and intelligence testing ... share a common view of [what constitute] the most important components of intelligence, and are convinced that [intelligence] can be measured with some degree of accuracy. An overwhelming majority also believe that individual genetic inheritance contributes to variations in IQ within the white community, and a smaller majority express the same view about the black-white and SES [socioeconomic] differences in IQ.

Anecdotally, I myself have become an agnostic about the source of the Black-White differences in IQ, after reading Richard Nisbett's Intelligence and How to Get It.

Comment author: toto 29 November 2009 05:07:28PM 0 points [-]

IIRC Jensen's original argument was based on very high estimates for IQ heritability (>.8). When within-group heritability is so high, a simple statistical argument makes it very likely that large between-group differences contain at least a genetic component. The only alternative would be that some unknown environmental factor would depress all blacks equally (a varying effect would reduce within-group heritability), which is not very plausible.

Now that estimates of IQ heritability have been revised down to .5, the argument loses much of its power.
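The statistical logic can be sketched as a back-of-the-envelope calculation (my reconstruction of the standard form of the argument, not from the comment; it assumes the within-group environmental variance also describes between-group environmental differences, which is exactly the contested premise):

```latex
% A between-group gap of $d$ phenotypic SDs, if explained purely
% environmentally when within-group heritability is $h^2$, requires
% an environmental difference between the groups of
\[
  \frac{d}{\sqrt{1 - h^2}} \quad \text{environmental SDs.}
\]
% With $h^2 = 0.8$: $1/\sqrt{0.2} \approx 2.2$ SDs, implausibly large.
% With $h^2 = 0.5$: $1/\sqrt{0.5} \approx 1.4$ SDs, much easier to credit.
```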

Comment author: Pablo_Stafforini 29 November 2009 06:01:11PM *  1 point [-]

Bouchard's recent meta-analysis upholds such high estimates, at least for adulthood. These are the figures listed on Table 1 (p. 150):

Age 5: .22
Age 7: .40
Age 10: .54
Age 12: .85
Age 16: .62
Age 18: .82
Age 26: .88
Age 50: .85
Age >75: .54–.62

Comment author: RobinZ 29 November 2009 06:07:43PM 0 points [-]

Did you type the number for Age 16 correctly? I can think of no sensible reason why there should be a divot there.

Comment author: Pablo_Stafforini 29 November 2009 06:28:07PM 1 point [-]

I uploaded Bouchard's paper here. I also uploaded Snyderman and Rothman's study here.

Comment author: Pablo_Stafforini 29 November 2009 06:21:17PM 1 point [-]

Yes, the figure is correct.

Comment author: Emile 25 November 2009 08:51:51PM 0 points [-]

Good reference classes should be uncontroversial - most people will agree about what constitutes "mainstream scientists", but you'll probably get more disagreement about which parts of science are highly politicized.

Comment author: taw 26 November 2009 12:26:35AM 5 points [-]

Would that not bias the results? A category like "mainstream science" bundles together chemistry, with a virtually impeccable record, and psychology, with a highly dubious one. Using a category like that, we'll greatly underestimate the certainty of chemical predictions, and greatly overestimate the certainty of psychological ones.

What I wanted to say is that we move from supporters and opponents arguing about particulars of a situation to supporters and opponents arguing about proper reference class. Which might be an improvement, but it doesn't solve the issue.

Comment author: SilasBarta 26 November 2009 08:43:48PM *  4 points [-]

Would that not bias the results? A category like "mainstream science" bundles together chemistry, with a virtually impeccable record, and psychology, with a highly dubious one.

You can put something in multiple categories, like I said before, and like cousin_it also said.

The fact that mainstream science covers fields of widely-varying veracity just means that it has a near-unity Bayes factor. The reason that chemistry is so much more credible is that it's also in several other high-Bayes-factor reference classes. (ETA: e.g., "theories on which products in daily use are predicated")
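The point about combining reference classes can be sketched numerically (a toy illustration with made-up Bayes factors, not figures from the comment):

```python
# Toy odds calculation: membership in several reference classes updates
# the odds multiplicatively, assuming (naively) independent evidence.

def posterior_odds(prior_odds, bayes_factors):
    """Multiply prior odds by each reference class's Bayes factor."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds

# "Mainstream science" alone: a near-unity factor barely moves the odds.
weak = posterior_odds(1.0, [1.2])

# Chemistry also sits in additional high-factor classes, e.g.
# "theories on which products in daily use are predicated".
strong = posterior_odds(1.0, [1.2, 8.0, 5.0])

print(weak, strong)  # same base class, very different overall credibility
```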

There seems to be a halo bias going on in some commenters here. You can put something in a low-credibility class and still consider it high credibility -- for example, if it belongs to other classes with a high Bayes factor. So you can consider e.g. evolutionary biology to be politicized, but still credible because its other achievements outweigh the discount from politicization.

Agreeing with something doesn't mean saying only positive things about it.

Comment author: pdf23ds 26 November 2009 01:42:43AM 0 points [-]

psychology with highly dubious one

Clinical psychology, sure. But psychology in general, i.e. cognitive science, umm, no.

Comment author: taw 26 November 2009 02:31:40AM 7 points [-]

I'm pretty sure that if you compare the track record of any field of psychology with the track record of chemistry, it will be highly unflattering to the former. I did not wish to imply that psychology is entirely without results, just that it compares rather poorly with hard science.