Vladimir_M comments on Vote Qualifications, Not Issues - Less Wrong
Comments (185)
jimrandomh:
Trouble is, there are lots of historical examples when the level of smarts on one side of an issue was noticeably higher, but in retrospect, it turned out that the intellectuals' favored position was frightfully deluded. For example, just look at the enormous popularity of Marxism among Western intellectuals two generations ago.
The basic problem is that intellectuals care much more about the status-signaling aspects of their opinions than the common folk, so even if they have more information and higher intellectual abilities, their incentives to bias their views for the sake of appearing enlightened and affiliated with high-status positions and individuals are also greater. (As Orwell commented about some ideological positions that were fashionable in his time: "One has to belong to the intelligentsia to believe things like that: no ordinary man could be such a fool.")
The fact that the educated and intelligent are sometimes in the wrong doesn't mean it isn't a good heuristic. Pretty much any heuristic is going to fail sometimes. The question is whether the heuristic is accurate (in the sense of being more often correct than not) and, if so, how accurate it is. This heuristic seems to be one where the general trend is clear. I can't identify a single example other than Marxism in the last hundred years where the intellectual establishment has been very wrong, and even then, that's an example where the general public in many areas also had a fair bit of support for that view.
I'm curious about your claim that "intellectuals care much more about the status-signaling aspects of their opinions than the common folk." This seems plausible to me, but I'd be curious what substantial evidence there is for the claim.
I'm reading The Rational Optimist at the moment, which has a few examples.
Malthusian ideas about impending starvation or resource exhaustion due to population growth have been popular with intellectuals for a long time but particularly so in the last 100 years. Paul Ehrlich is a well known example. He famously lost his bet with economist Julian Simon on resource scarcity. His prediction in The Population Bomb in 1968 that India would never feed itself was proved wrong that same year. These ideas were generally widely held in intellectual circles (and still are) but there is a long track record of specific predictions relating to these theories that have proved wrong.
Another case that springs to mind: it looks increasingly likely that the mainstream advice on diet as embodied in things like the USDA food guide pyramid was deeply flawed. The dominant theory in the intellectual establishment regarding the relationship between fat, cholesterol and heart disease also looks pretty shaky in light of new research and evidence.
I'd also argue that the intellectual establishment over the latter half of the twentieth century has overemphasized the blank-slate / nurture side of the nature vs. nurture debate and neglected the evidence for a genetic basis to many human differences.
Population/natural resource exhaustion related crises are a bit iffy, because it is plainly obvious that if they remain exponentially growing forever, relative to linearly growing or constant resources (like room to live on), one or the other has got to give. Mispredicting when it will happen is different from knowing that it has to happen eventually, and how could it not? Even expanding into space won't solve the problem, since the number of planets we can reach as time goes on is smaller than exponential population growth rates and demands for resources.
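The exponential-versus-linear argument above can be sketched numerically. This is a toy illustration with made-up numbers (2% annual growth, a large fixed annual resource increment), not a model of any real economy:

```python
# Illustrative sketch (hypothetical numbers): a population growing at a
# fixed percentage per year eventually overtakes a resource base growing
# by a fixed amount per year, no matter how generous that fixed amount is.
population = 1.0    # arbitrary starting units
resources = 1000.0  # start with a thousandfold surplus
year = 0
while population < resources:
    population *= 1.02  # exponential: 2% annual growth
    resources += 50.0   # linear: large fixed annual increment
    year += 1
print(year)  # exponential growth closes the gap within a few centuries
```

The point of the sketch is only the asymptotic one: any exponential eventually dominates any linear function, which is the "has to happen eventually" part of the argument. When it happens depends entirely on the parameters, which is where predictions like Ehrlich's went wrong.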
There are definitely plenty of other scientifically held views that get overturned here and there, though - another example is fever, which for centuries has been considered a negative side effect of an infection, but lately it's been found to have beneficial properties, as certain elements of your immune system function better when the temperature rises (and certain viruses function worse). http://www.newscientist.com/article/mg20727711.400-fever-friend-or-foe.html
Obviously the people disputing the wrong predictions know this. Julian Simon was just as familiar with this trivial mathematical fact as Paul Ehrlich. The fact that this knowledge led Paul Ehrlich to make bad predictions indicates that his analysis was missing something that Julian Simon was considering. Often this missing something is a basic understanding of economics.
JoshuaZ:
Well, on any issue, there will be both intellectuals and non-intellectuals on all sides in some numbers. We can only observe how particular opinions correlate with various measures of intellectual status, and how prevalent they are among people who are in the upper strata by these measures. Marxism is a good example of an unsound belief (or rather a whole complex of beliefs) that was popular among intellectuals because its basic unsoundness is no longer seriously disputable. Other significant examples from the last hundred years are unfortunately a subject of at least some ongoing controversy; most of that period is still within living memory, after all.
Still, some examples that, in my view, should not be controversial given the present state of knowledge are various highbrow economic theories that managed to lead their intellectual fans into fallacies even deeper than those of the naive folk economics, the views of human nature and behavior of the sort criticized in Steven Pinker's The Blank Slate, and a number of foreign policy questions in which the subsequent historical developments falsified the fashionable intellectual opinion so spectacularly that the contemporary troglodyte positions ended up looking good in comparison. There are other examples I have in mind, but those are probably too close to the modern hot-button issues to be worth bringing up.
Frankly, in matters of politics and ideology, I don't find the trend so clear. To establish the existence of such a trend, we would have to define a clear metric for the goodness of outcomes of various policies, and then discuss and evaluate various hypothetical and counterfactual scenarios of policies that have historically found, or presently find, higher or lower favor among the (suitably defined) intellectual class.
This, however, doesn't seem feasible in practice. Neither is it possible to evaluate the overall goodness of policy outcomes in an objective or universally agreed way (except perhaps in very extreme cases), nor is it possible to construct accurate hypotheticals in matters of such immense complexity where the law of unintended consequences lurks behind every corner.
My answer is similar to the earlier comment by Perplexed: given the definition of "intellectual" I assume, the claim is self-evident, in fact almost tautological.
I define "intellectuals" as people who derive a non-negligible part of their social status -- either as public personalities or within their social networks -- from the fact that other people show some esteem and interest for their opinions about issues that are outside the domain of mathematical, technical, or hard-scientific knowledge, and that are a matter of some public disagreement and controversy. This definition corresponds very closely to the normal usage of the term, and it implies directly that intellectuals will have unusually high stakes in the status-signaling implications of their beliefs.
Opposition to nuclear power?
OK, but apart from Marxism, nuclear power, coercive eugenics, Christianity, psychoanalysis, and the respective importance of nature and nurture - when has the intellectual establishment ever been an unreliable guide to finding truth?
Come to think of it, one thing I'm surprised nobody mentioned is the present neglect of technology-related existential risks.
Yeah, that provides some more examples. The elite was very worried about existential risks from nuclear war ("The Fate of the Earth"), resource shortages and mass starvation ("Club of Rome"), and technology-based totalitarianism ("1984"). Now, having been embarrassed by falling for too many cries of wolf (or at least, for worrying prematurely), they are wary of being burned again.
I don't think worrying about nuclear war during the Cold War constituted either "crying wolf" or worrying prematurely. The Cuban Missile Crisis, the Able Archer 83 exercise (a year after "The Fate of the Earth" was published), and various false alert incidents could have resulted in nuclear war, and I'm not sure why anyone who opposed nuclear weapons at the time would be "embarrassed" in the light of what we now know.
I don't think an existential risk has to be a certainty for it to be worth taking seriously.
In the US, concerns about some technology risks like EMP attacks and nuclear terrorism are still taken seriously, even though these are probably unlikely to happen and the damage would be much less severe than a nuclear war.
I agree. And nuclear war was certainly a risk that was worth taking seriously at the time.
However, that doesn't make my last sentence any less true, especially if you replace "embarrassed" with "exhausted". The risk of a nuclear war, somewhere, some time within the next 100 years, is still high - more likely than not, I would guess. It probably won't destroy the human race, or even modern technology, but it could easily cost 400 million human lives. Yet, in part because people have become tired of worrying about such things, having already worried for decades, no one seems to be doing much about this danger.
When you say that no one seems to be doing much, are you sure that's not just because the efforts don't get much publicity?
There is a lot that's being done:
Most nuclear-armed governments have massively reduced their nuclear weapon stockpiles, and try to stop other countries getting nuclear weapons. There's an international effort to track fissile material.
After the Cold War ended, the west set up programmes to employ Soviet nuclear scientists which have run until today (Russia is about to end them).
South Africa had nuclear weapons, then gave them up.
Israel destroyed the Iraqi and Syrian nuclear programmes with airstrikes. OK, self-interested, but if existing nuclear states stop their enemies getting nuclear weapons, then it reduces the risk of a nuclear war.
Somebody wrote the Stuxnet worm to attack Iran's enrichment facilities (probably) and Iran is under massive international pressure not to develop nuclear weapons.
Western leaders are at least talking about the goal of a world without nuclear weapons. OK, probably empty rhetoric.
India and Pakistan have reduced the tension between them, and now keep their nuclear weapons stored disassembled.
The US is developing missile defences to deter 'rogue states' who might have a limited nuclear missile capability (although I'm not sure why the threat of nuclear retaliation isn't a better deterrent than shooting down missiles). The Western world is paranoid about nuclear terrorism, even putting nuclear detectors in its ports to try to detect weapons being smuggled into the country (which a lot of experts think is silly, but I guess it might make it harder to move fissile material around on the black market).
etc. etc.
Sure, in the 100 year timeframe, there is still a risk. It just seems like a world with two ideologically opposed nuclear-armed superpowers, with limited ways to gather information and their arsenals on a hair trigger, was much riskier than today's situation. Even when "rogue states" get hold of nuclear weapons, they seem to want them to deter a US/UN invasion, rather than to actually use offensively.
Plus we invented the internet, greatly strengthening international relations and creating social and economic interdependency.
This doesn't appear to be the case at all. There are a variety of claimed existential risks which the intellectual elite are in general quite worried about. They just don't overlap much with the kind of risks people here talk about. Global warming is an obvious example (and some people here probably think they're right on that one) but the overhyped fears of SARS and H1N1 killing millions of people look like recent examples of lessons about crying wolf not being learned.
I don't know about SARS, but in the case of H1N1 it wasn't "crying wolf" so much as being prepared for a potential pandemic which didn't happen. I mean, very severe global flu pandemics have happened before. Just because H1N1 didn't become as virulent as expected doesn't mean that preparing for that eventuality was a waste of time.
Obviously the crux of the issue is whether the official probability estimates and predictions for these types of threats are accurate or not. It's difficult to judge this in any individual case that fails to develop into a serious problem but if you can observe a consistent ongoing pattern of dire predictions that do not pan out this is evidence of an underlying bias in the estimates of risk. Preparing for an eventuality as if it had a 10% probability of happening when the true risk is 1% will lead to serious mis-allocation of resources.
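The misallocation point can be made concrete with a toy expected-cost calculation. All the numbers here are hypothetical; the sketch just shows how an inflated probability estimate flips the cost-benefit conclusion:

```python
# Toy expected-cost comparison (all figures hypothetical): preparing for
# a disaster as if it had a 10% chance when the true chance is 1%.
DAMAGE_UNPREPARED = 1000.0  # cost of the disaster with no preparation
PREP_COST = 50.0            # up-front cost of preparing
DAMAGE_PREPARED = 200.0     # reduced damage when prepared

def expected_cost(p, prepare):
    """Expected total cost given disaster probability p."""
    if prepare:
        return PREP_COST + p * DAMAGE_PREPARED
    return p * DAMAGE_UNPREPARED

# At the true 1% risk, preparing costs more than it saves:
print(expected_cost(0.01, prepare=True))   # 50 + 0.01*200 = 52.0
print(expected_cost(0.01, prepare=False))  # 0.01*1000 = 10.0

# At the believed 10% risk, preparation looks clearly justified:
print(expected_cost(0.10, prepare=True))   # 50 + 0.10*200 = 70.0
print(expected_cost(0.10, prepare=False))  # 0.10*1000 = 100.0
```

With these numbers, a tenfold overestimate of the probability turns a 42-unit expected loss from preparing into an apparent 30-unit expected saving, which is the mis-allocation being described.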
It looks to me like there is a consistent pattern of overstating the risks of various catastrophes. Rigorously proving this is difficult. I've pointed to some examples of what look like over-confident predictions of disaster (there's lots more in The Rational Optimist). I'm not sure we can easily resolve any remaining disagreement on the extent of risk exaggeration however.
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Since the era of cheap international travel, there have been about 20 new flu subtypes, and one of those killed 50 million people (the Spanish flu, one of the greatest natural disasters ever), with a couple of others killing a few million. Plus, having almost everyone infected with a severe illness tends to disrupt society.
So to me that looks like there is a substantial risk (bigger than 1%) of something quite bad happening when a new subtype appears.
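The "bigger than 1%" claim follows from a naive frequency estimate using the figures above (the counts are rough, and treating historical subtypes as a uniform sample is itself an assumption):

```python
# Naive base-rate sketch from the figures in the comment above:
# of roughly 20 new flu subtypes in the era of cheap travel, 1 was
# catastrophic (the Spanish flu, ~50 million deaths) and a couple
# more killed millions.
new_subtypes = 20
catastrophic = 1  # Spanish flu
severe = 3        # catastrophic plus "a couple" of multi-million-death pandemics

p_catastrophic = catastrophic / new_subtypes
p_severe = severe / new_subtypes
print(p_catastrophic)  # 0.05 -- a 5% base rate per new subtype
print(p_severe)        # 0.15
```

Even this crude estimate puts the per-subtype risk of a severe pandemic well above the 1% figure mentioned earlier.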
Given how difficult it is to predict biological systems, I think it makes sense to treat the arrival of a new flu subtype with concern and for governments to set up contingency programmes. That's not to say that the media didn't hype swine flu and bird flu, but that doesn't mean that the government preparations were an overreaction.
That's not to say that some threats aren't exaggerated, and others (low-probability, global threats like asteroid strikes or big volcanic eruptions) don't get enough attention.
I wouldn't put much trust in Matt Ridley's abilities to estimate risk:
http://news.bbc.co.uk/1/hi/7052828.stm (yes, it's the same Matt Ridley)
Maybe at first, but I clearly recall that the hype was still ongoing even after it was known that this was a milder flu-version than usual.
And the reactions were not well designed to handle the flu either. One example is that my university installed hand sanitizers, well, pretty much everywhere. But the flu is primarily transmitted not from hand-to-hand contact, but by miniature droplets when people cough, sneeze, or just talk and breathe:
http://www.cdc.gov/h1n1flu/qa.htm
Wikipedia takes a more middle-of-the-road view, noting that it's not entirely clear how much transmission happens in which route, but still:
http://en.wikipedia.org/wiki/Influenza
Which really suggests to me that hand-washing (or sanitizing) just isn't going to be terribly effective. The best preventative is making sick people stay home.
Now, regular hand-washing is a great prophylactic for many other disease pathways, of course. But not for what the supposed purpose was.
I interpret what happened with H1N1 a little differently. Before it was known how serious it would be, the media started covering it. Now even given that H1N1 was relatively harmless, it is quite likely that similar but non-harmless diseases will appear in the future, so having containment strategies and knowing what works is important. By making H1N1 sound scary, they gave countries and health organizations an incentive to test their strategies with lower consequences for failure than there would be if they had to test them on something more lethal. The reactions make a lot more sense if you look at it as a large-scale training exercise. If people knew that it was harmless, they would've behaved differently and lowered the validity of the test.
Just because some institutions over-reacted or implemented ineffective measures, doesn't mean that the concern wasn't proportionate or that effective measures weren't also being implemented.
In the UK, the government response was to tell infected people to stay at home and away from their GPs, and provide a phone system for people to get Tamiflu. They also ran advertising telling people to cover their mouths when they sneezed ("Catch it, bin it, kill it").
If anything, the government reaction was insufficient, because the phone system was delayed and the Tamiflu stockpiles were limited (although Tamiflu is apparently pretty marginal anyway, so making infected people stay at home was more important).
The media may have carried on hyping the threat after it turned out not to be so severe. They also ran stories complaining that the threat had been overhyped and the effort wasted. Just because the media or university administrators say stupid things about something, that doesn't mean it's not real.
SARS and H1N1 both looked like media-manufactured scares, rather than actual concern from the intellectual elite.
It wasn't just the media:
So Nabarro explicitly says that he's talking about a possibility and not making a prediction, and ABC News reports it as a prediction. This seems consistent with the media-manufactured scare model.
Haha, ok point taken. I'm clearly wrong on this and there are a lot of examples. (At this point I'm also reminded of this Monty Python sketch although this is sort of the inverse).
I would like to define an "intellectual" as a person who I believe to be well educated and smart. Unfortunately, this definition will be deprecated as too subjective. An objective alternative definition would be to define intellectuals as a class of people who consider each other to be well educated and smart.
If that definition is accepted, then I think the claim is almost self-evident.
Coercive eugenics was very popular in intellectual circles until WWII.
It was actually pretty popular in non-intellectual circles as well, but yes, that example still seems to be a decent one.
(Incidentally, I'm not actually sure what is wrong with coercive eugenics in the general sense. If for example we have the technology to ensure that some very bad alleles are not passed on (such as those for Huntington's disease), I'm not convinced that we shouldn't require screening or mandatory in vitro for people with the alleles. This may be one reason this example didn't occur to me. However, I suspect that discussion of this issue in any detail could be potentially quite mind-killing given unfortunate historical connections and related issues.)
The historical meaning of the term is problematic partly because it wasn't based on actual gene testing - I doubt they even tried to sort out whether someone's low IQ was inheritable or caused by, say, poor nutrition - and partly because it was, and still in some cases would be, very subjective in terms of what traits are considered undesirable. How many of us wouldn't be here if there'd been genetic tests for autism/aspergers or ADD or other neurodifferences developed before we were born?
It gets much harder when you start talking about autism or deafness or any of a whole range of conditions that are abnormal but aren't strictly disadvantageous.
Are there people who, having a deaf newborn child, would refuse to cure the condition based on argument that deafness is not strictly disadvantageous?
Yes, and there's been a fair bit of controversy in the "deaf community" over whether they should engage in selection for deaf children. See for example this article.
I've heard from more than one source that deaf parents of deaf children often take that stance - and that some deaf parents intentionally choose to have deaf children, even to the point of getting a sperm donor involved if the genetics require it.
I rather sympathize - if I ever get serious about procreating, stacking the deck in favor of having an autistic offspring will be something of a priority. (And, as I think about it, it's for pretty much the same reason: Being deaf or autistic isn't necessarily disadvantageous, but having parents that one has difficulty in communicating with is - and deaf people and autistic people both tend to find it easier to communicate with people who are similar.)
What do you mean by necessarily disadvantageous, then? I disagree that a difficulty in communication with parents is a more necessary disadvantage than deafness, but maybe we interpret the words differently. (I have no precise definition yet.)
Being deaf or autistic (or for that matter gay or left-handed or female or male or tall or short) is a disadvantage in some situations, but not all, and it's possible for someone with any of the above traits to arrange their life in such a way that the trait is an advantage rather than a disadvantage, if other aspects of their life are amenable to such rearranging. (In the case of being deaf, a significant portion of the advantage seems to come from being able to be a member of the deaf community, and even then I have a little bit of trouble seeing it, but I'm inclined to believe that the deaf people who make that claim know more about the situation than I do.)
For contrast, consider being diabetic: It's possible to arrange one's life such that diabetes is well-controlled, but there seems to be a pretty good consensus among diabetics that it's bad news, and I've never heard of anyone intentionally trying to have a diabetic child, xkcd jokes aside.
In what situations is being deaf an advantage?
I dunno, I don't agree with deaf parents deliberately selecting for deaf children, but there is definitely a large element of trying to medicalise something that the people with the condition don't consider to be a bad thing.
Anyway, I think Silas nailed deaf community attitudes with the comparison between being deaf and being black, the main difference being that one is considered cultural (and therefore the problem is other people's attitudes towards it) and the other medical.
Edit: After further thought, I think I am using necessarily disadvantageous to mean that the disadvantages massively outweigh any advantages. Since being deaf gets you access to the deaf community, an awesome working memory for visual stuff, and (if you live in urban America) doesn't ruin your life, I don't think it's all disadvantage.
I don't see being black or white as any more cultural than being deaf; in either case you are born that way, and being raised in a different culture doesn't change that a bit. The main difference is that the problem with being black is solely a result of other people's attitudes. It is possible not to be a racist without any inconvenience, and if no people were racists, it wouldn't be easier to be white. On the other hand, being deaf brings many difficulties even if other people lack prejudices against the deaf. Although I can imagine a society where all people voluntarily cease to use spoken language and sound signals and listen to music and whatever else may give them an advantage over the deaf, such a vision is blatantly absurd. On the other hand, a society (almost) without racism is a realistic option.
Interestingly enough, one thing both these examples have in common is that they are cases of intellectuals arguing that intellectuals should have more power.
I'm not convinced this is the case. Rather, I think intellectuals tend to try to signal status to other intellectuals, whereas non-intellectuals tend to try to signal status to other non-intellectuals.