This is less useful than it sounds. Disappointingly, there are not too many ideas that are believed solely by stupid people. As mentioned before, even creationism can muster a list of Ph.Ds who support it. When I was much younger, I was once quite impressed to hear that there were creationist Ph.Ds with a long list of scientific accomplishments in various fields. Since then, I learned about compartmentalization.
I don't think compartmentalization completely explains this phenomenon. I remember a Usenet or Straight Dope debate from a decade and change back where a Ph.D specialized in molecular biology strongly and honestly argued for the young-earth creationist theory (and no, she was not an evangelical Christian; she was Jewish). Now, molecular biologists don't have to be experts in genetics and evolution per se - though they've certainly studied the basics - but they do see all kinds of real-world data every day that supports the idea that evolution has taken place over a very long period of time - specifically in examples of vestigial structures across basically all kingdoms and phyla.
There is no conclusion I could take from all this except that she was incompetent as a molecular biologist. If you can compartmentalize that much it would seem you lack something - not pure intelligence - but some capacity for abstract thought that would seem necessary to do something as basic as devise a series of experiments to test a new hypothesis.
But if we imagine he really is the way his celebrity status makes him seem - the World's Top Expert in the field of genetics - then his opinion carries special weight for two reasons: first of all, it's the only data point we have in the field of "what the World's Top Expert thinks", and second, it suggests that a large percentage of the rest of the scientific community agrees with him
His opinion carries even more weight than that, because we assume that lesser experts (and experts with less reputation for being, shall we say, free with their opinions) are more likely to keep controversial opinions to themselves.
A plausible bias theory is that Watson and Crick tended to apply their Big Idea promiscuously and err in the direction of attributing too much to genetics, with which their status is intertwined. Note that knowledge of molecular genetics doesn't tell you about past selective pressures on human populations, or about the psychology/behavioral genetics work on current human ability distributions.
Another plausible bias theory might be that they are simply old and extremely high status and that high status old people rarely make major updates to their beliefs in the absence of compelling evidence even if they are very intelligent. It could be that there was a consensus in favor of genetic differences in intelligence when they were young, and that there is now stronger but not overwhelming evidence against that old consensus, plus strong social pressure. Who's likely to not move in response to that evidence plus social pressure?
Yet another bias theory is that Watson just likes to be outrageous and see what he can get away with. I've read "The Double Helix" and it encourages that interpretation.
but it's a signal that he's the kind of guy who realizes beliefs should be based on that sort of thing and is probably pretty smart.
Or he's just smart enough to use the label "evidence" to describe his reasoning.
Keep in mind that most communication is done not to convince anyone of anything, but to signal the character of the person arguing (source: I arrived at this conclusion using evidence)
I'm tempted to downvote the post solely for this. I'll hold off until I've had a chance to write more about (first) status, then (possibly) signalling. "Signalling" is the other concept that I'm seeing used a lot around here in a way that manages to consistently confuse me.
Meanwhile, let me just be on record as being very, very skeptical of the above assertion.
I was hoping that would be identifiable as a joke. You know, article about making assertions and then saying you have evidence without giving any, then making a controversial assertion and saying I have evidence without giving any? Sort of ironic, ha ha? No? Okay, sorry, I won't do it again.
At the very least, we're capable of holding multiple motives for any particular action, such as trying to persuade and to signal at the same time.
What both terms convey to me is "a communicative act intended to instill a belief in the interlocutor".
To pick a recent example, if I say "Yesterday I felt faint while waiting in line", it's not clear how one is to tease apart what beliefs I intend you to have: it seems naive to suggest that because this looks superficially like a declarative sentence, all I intended was to convince you that I felt faint yesterday.
Depending on the context, I could be trying to convince you that I'm in poor health and in need of affective support, or (if you're my doctor) that I have a treatable issue, or that waiting in line is hazardous to one's health.
I agree that if we're having a debate, that sets up some context and expectations about my speech acts which are different from what they'd be if we were having a conversation, and different again from what they'd be if we were merely making small talk, which isn't quite the same as a conversation. And there are yet other modes of talking, such as gossiping, holding forth, and so on.
So I'm suspicious of assertions about "most communication" which do not attend to this variety of possible modes of communication.
There was some related discussion on Overcoming Bias:
http://www.overcomingbias.com/2007/01/extraordinary_c.html
http://lesswrong.com/lw/gu/some_claims_are_just_too_extraordinary/
http://www.overcomingbias.com/2007/01/a_model_of_extr.html
Allow me to start by saying I enjoyed this post and think Yvain makes an interesting point. It may help explain why rumor spreads well. I have, however, one difference of opinion, which is that if a person adds "I arrived at this belief through evidence", I would believe their statement more. I would assume they are talking about a non-Bayesian, layman's version of "evidence." (If they're talking about Bayesian evidence, they're probably a Bayesian, and this is also a mark in their favor.)
For instance, in mathematics it's common to state that something is true, and add "And we can prove this." It's often too much work, or beyond the scope of the course, or beyond the mathematical abilities of the class, to actually look at the proof.
I often arrive at opinions by asking others their opinion. I do not want to spend the time to evaluate the evidence on global warming, for example. I simply trust that friends and experts I know to be well-informed know what they are talking about when they say a trend exists. But if you know me to be rational in evaluating arguments and a careful reader, and you do not trust my unknown friends as much, you should trust what I say more if I indicate I arrived at it through firsthand evidence than through the opinions of my friends. You may want to trust firsthand evidence more regardless, because content, and thus authority, is lost at every telling.
This might be one reason urban rumors in particular spread so much better; they're typically told as though they happened to the teller or a close friend of the teller.
I wish you'd used a different example, but your analysis looks good as far as it goes. Nothing groundbreaking, but correct execution of established theory in an interesting case study.
The example being race/intelligence correlation? Assuming any genetic basis for intelligence whatsoever, for there to be absolutely no correlation at all with race (or any distinct subpopulation, rather) would be quite unexpected, and I note Yvain discussed the example only in terms as uselessly general as the trivial case.
Arguments involving the magnitude of differences, singling out specific subpopulations, or comparing genetic effects with other factors seem to quickly end up with people grinding various political axes, but Yvain didn't really go there.
What about casual use of poorly chosen examples reinforcing cultural concepts such as sexism? I'm referencing this paper. Summary: Example sentences in linguistics far more often have males verbing females than females verbing males.
There are a lot of questions which (to the best of my understanding) are still up in the air. Yvain's casual use of the controversial race/intelligence connection as an example at best glosses over these questions, and at worst subtly signals presumed answers to the questions without offering actual evidence. (Just like the males-verbing-females examples subtly signal some sort of cultural sexism.)
Questions like: Is intelligence a stable, innate quality? Is intelligence the same thing as IQ? Is intelligence a sharp, rigid concept, suitable for building theory-structures with? Is intelligence strongly correlated to IQ? Is race a sharp, rigid concept, suitable for building theory-structures with? Is IQ strongly correlated to self-identified race? Is race strongly correlated to genetics? Is the best explanation of these correlations that genetics strongly influences intelligence? Is the state of the scientific evidence settled enough that people ought to be taking the research and applying it to daily lives or policy decisions?
My take on it is that intelligence is a dangerously fuzzy concept, sliding from a general "tendency to win" on the one hand to a simple multiple-choice questionnaire on the other, all the while scattering assumptions that it's innate, culture-free and unchangeable through your mind. Race is a dangerous concept too, with things like the one-drop rule confusing the connection to genetics, and the fact that (according to the IAT) essentially everyone is a little bit racist, which has to affect your thinking about race. Thirdly, there's a very strong tendency for racist/anti-racist political factions to hijack tentative scientific results and use them as weapons, which muddies the waters and makes everything a bit more explosive.
The post has an overt message (regarding assertions) and a covert signalling message, something about the putative race/intelligence connection. My sense of the lesswrong aesthetic has been wrong before, but I think we would prefer one-level to two-level posts (explicitness as a rationalist virtue).
Example sentences in linguistics far more often have males verbing females than females verbing males.
I'm deeply saddened by this, and will do my part to remedy it by inviting the females here to verb me.
I appreciate having been invited, but perhaps it would have been more useful to merely suggest that we could verb you?
I used the race-intelligence connection as an example because it was the example used in Morendil's post to which this was a reply, which itself used it because it was the topic of the comment Morendil noticed was doing it wrong. I probably should have made this clearer; if you weren't following the history, it does look like a really really badly chosen example.
I personally have no strong opinion one way or the other. IQ is very hard to pin down genetically, seems to be distributed across hundreds of genes, and would probably be much harder to change than things like skin pigmentation or lactose tolerance. But generally I classify it in the category of "things whose political flammability is so far out of proportion to its actual importance that it's not worth thinking about unless you're itching for a fight".
Johnicholas makes an excellent point about the fuzziness of both intelligence and race. The complexities involved with defining both of these concepts have prevented (to my knowledge) anything like a scientific study of the OP's assertion.
Does it make sense to assign a higher credibility to a question of fact on the basis of the opinion of a top expert in the field, when there is no way that his opinion is informed by anything resembling a scientific study? I don't think so.
and a covert signalling message, something
Something: what, exactly? Show, don't tell us, that the post has a covert message.
There are two different questions in your post: first, you ask what the covert message is; second, you ask me to show that the post has a covert message. One's own writing always reads clearly to its author, but I thought I offered sufficient evidence that the original post has a covert message, even though I wasn't picking out any particular covert meaning.
I believe the covert message is "Intelligence is: real/important/relevant/effectively measured by IQ/a useful concept for theory-building/probably innate/probably stable/probably genetic. Race is: real/important/relevant/effectively measured by self-identification/strongly correlated to genetics. There is scientific evidence that the genetic component of race is a significant causative influence on intelligence."
So, you seem to be saying that Yvain is using Watson as an example, at least partly (and significantly so) in order to convince others of (not to put too fine a point on it) the racist IQ hypothesis.
I had a different reading of the post, which was that even for someone who disagreed with the racist IQ hypothesis, Watson's pronouncements should carry more weight than a random person's. I actually agree with Yvain - though I also note that looking into Watson's actual argument in even a little more detail was enough for me to dismiss it.
I wonder if you would be willing to bet against a person that held such beliefs. It would be interesting to see what would happen if you bet on the IQ of randomly selected Americans by looking at their picture. Presumably, you would guess close to the mean for each picture since you think a concept like race is too fuzzy to make use of and your opponent would adjust his guess according to the perceived race of the person in the picture. Do you believe you would win such a contest?
Downvoted for what looks like willful misunderstanding of the grandparent. (Will withdraw the downvote if it turns out to be an honest misunderstanding.)
The dispute concerns the causal origins of the so-called "IQ gap". The fact of the "IQ gap" isn't itself in dispute (or if it is, it is a different dispute than the one Yvain refers to), so the bet wouldn't settle anything, besides being in extremely poor taste. Racism and discrimination compete with genetic explanations to explain that fact, and the grandparent provides some detail on why settling the issue isn't trivial.
The dispute concerns the causal origins of the so-called "IQ gap". The fact of the "IQ gap" isn't itself in dispute (or if it is, it is a different dispute than the one Yvain refers to).
Nothing in my post was directed at the grandparent. It was directed at Johnicholas's comment:
Is race a sharp, rigid concept, suitable for building theory-structures with?
If he doesn't think race is a rigid enough concept for coming up with theories, surely he wouldn't mind betting against someone who used it explicitly to make predictions? If it helped people make predictions that were more accurate than his own, how could he maintain the claim that it is too fuzzy for inclusion in theories?
The way that you slipped easily between intelligence and IQ is exactly the dangerous fuzziness that I was referring to.
I won't vote you down, but if you reread my comment you will see that I never used the word "intelligence" at all. I was trying to see how strongly you believe that race is too fuzzy a concept to include in predictive theories (nothing about intelligence per se).
if you reread my comment you will see that I never used the word "intelligence" at all
This is true, but it appears the problem was that you responded to a discussion about intelligence by talking about IQ.
This is true, but it appears the problem was that you responded to a discussion about intelligence by talking about IQ.
No I didn't. The comment I responded to said this:
Is IQ strongly correlated to self-identified race?
This is about IQ and Race, which is exactly what my reply was about.
Response to: The "show, don't tell" nature of argument
Morendil says not to trust simple assertions. He's right, for the particular class of simple assertions he's talking about. But in order to see why, let's look at different types of assertions and see how useful it is to believe them.
Summary:
- Hearing an assertion can be strong evidence if you know nothing else about the proposition in question.
- Hearing an assertion is not useful evidence if you already have a reasonable estimate of how many people do or don't believe the proposition.
- An assertion by a leading authority is stronger than an assertion by someone else.
- An assertion plus an assertion that there is evidence makes no factual difference, but is a valuable signal.
Unsupported assertions about non-controversial topics
Consider my assertion: "The Wikipedia featured article today is on Uriel Sebree". Even if you haven't checked Wikipedia today and have no evidence on this topic, you're likely to believe me. Why would I be lying?
This can be nicely modeled in Bayesian terms - you start with a prior evenly distributed across Wikipedia topics, the probability of me saying this conditional on it being false is pretty low, and the probability of me saying it conditional on it being true is pretty high. So noting that I said it nicely concentrates probability mass in the worlds where it's true. You're totally justified in believing it. The key here is that you have no reason to believe there's a large group of people who go around talking about Uriel Sebree being on Wikipedia regardless of whether or not he really is.
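To make the shape of that update concrete, here's a minimal sketch in Python. All the numbers are made up for illustration (the prior and both likelihoods are assumptions, not anything measured):

```python
# Illustrative Bayes update for "the featured article is on Uriel Sebree".
# All three numbers below are assumptions, chosen only to show the shape of the update.
prior = 1e-4               # prior probability that this particular claim is true
p_say_if_true = 0.9        # I'd probably mention it if it were true
p_say_if_false = 1e-6      # I'd almost never assert this exact thing if it were false

posterior = (p_say_if_true * prior) / (
    p_say_if_true * prior + p_say_if_false * (1 - prior)
)
print(round(posterior, 3))  # ~0.989
```

The tiny prior gets overwhelmed because, for a boring claim nobody has a motive to fabricate, the likelihood of my saying it if it's true dwarfs the likelihood of my saying it if it's false.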
Unsupported assertions about controversial topics
The example given in Morendil's post is that some races are biologically less intelligent than others. Let's say you have no knowledge of this whatsoever. You're so naive you don't even realize it might be controversial. In this case, someone who asserts "some races are biologically less intelligent than others" is no less believable than someone who asserts "some races have slightly different frequencies of pancreatic cancer than others." You'd accept the second as the sort of boring but reliable biological fact that no one is particularly prone to lie about, and you'd do the same with the first.
Now let's say you're familiar with controversies in sociology and genetics, you already know that some people believe some races are biologically more intelligent, and other people don't. Let's say you gauge the people around you and find that about 25% of people agree with the statement and 75% disagree.
This survey could be useful. You have to ask yourself - is this statement about race and genetics more likely to have this level of support in a world where it's true than in a world where it's false? "No" is a perfectly valid answer here - you might think people are so interested in signalling that they're not racist that they'll completely suspend their rational faculties. But "yes" is also a valid answer here if you think that the people around you have reasonably intelligent opinions on the issue. This would be a good time to increase your probability that it's true.
Now I, a perfectly average member of the human race, make the assertion that I believe that statement. But from your survey, you already have information that negates any evidence from my belief - that given that the statement is false and there's a 25% belief rate, there's a 25% chance I would agree with it, and given that the statement is true and there's a 25% belief rate, there's a 25% chance I would agree with it. If you've already updated on your survey, my assertion is equally likely in both conditions and doesn't shift probability one way or the other.
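In odds form this is just a likelihood ratio of one. A minimal sketch, reusing the 25% figure from the example above (the prior odds are an arbitrary placeholder):

```python
# Once you've updated on the survey, my agreement is equally likely whether the
# statement is true or false, so the posterior odds equal the prior odds.
p_agree_if_true = 0.25    # observed belief rate, assumed the same in both worlds
p_agree_if_false = 0.25

likelihood_ratio = p_agree_if_true / p_agree_if_false
prior_odds = 0.5          # placeholder; whatever you believed after the survey
posterior_odds = prior_odds * likelihood_ratio
print(likelihood_ratio, posterior_odds)  # 1.0 0.5 - no shift either way
```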
Unsupported assertions on extremely unusual topics
There is a case, I think, in which a single person asserting ze believes something can increase your probability. Imagine that I say, truthfully, that I believe that a race of otter-people from Neptune secretly controls the World Cup soccer tournament. If you've never heard this particular insane theory before, your estimate of the number of people who believed it was probably either zero, or so low that you wouldn't expect anyone you actually meet (even for values of "meet" including online forums) to endorse it. My endorsing it actually raises your estimate of the percent of the human race who endorse it, and this should raise your probability of it being true. Clearly, it should not raise it very much, and it need not necessarily raise it at all to the degree that you can prove that I have reasons other than truth for making the assertion (in this case, most of the probability mass generated by the assertion would leak off into the proposition that I was insane) but it can raise it a little bit.
Unsupported assertions by important authorities
This effect becomes more important when the person involved has impressive credentials. If someone with a Ph.D in biology says that race plays a part in intelligence, this could shift your estimate. In particular, it would shift it if you previously thought the race-intelligence connection was such a fringe theory that they would be unlikely to get even one good biologist on their side. But if you already knew that this theory was somewhat mainstream and had at least a tiny bit of support from the scientific community, it would be giving no extra information. Consider this the Robin Hanson Effect, because a lot of the good Robin Hanson does comes from being a well-credentialed guy with a Ph.D willing to endorse theories that formerly sounded so crazy that people would not have expected even one Ph.D to endorse them.
In cases of the Hanson Effect, the way you found out about the credentialed supporter is actually pretty important. If you Googled "Ph.D who supports transhumanism" and found Robin's name, then all it tells you is that there is at least one Ph.D who supports transhumanism. But if you were at a bar, and you found out the person next to you was a Ph.D, and you asked zir out of the blue if ze supported transhumanism, and ze said yes, then you know that there are enough Ph.Ds who support transhumanism that randomly running into one at the bar is not that uncommon an event.
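A rough way to see why the discovery method matters, with purely illustrative numbers (the population size and both belief rates are assumptions for the sketch, not estimates):

```python
# Compare two hypotheses about how common a belief is among Ph.Ds:
# "fringe" (1 in 100,000) vs. "fairly mainstream" (1 in 10).
n_phds = 1_000_000
rate_fringe, rate_mainstream = 1e-5, 0.1

# A targeted Google search succeeds if at least one believing Ph.D exists anywhere,
# which is near-certain under both hypotheses - so the hit is weak evidence.
p_hit_fringe = 1 - (1 - rate_fringe) ** n_phds
p_hit_mainstream = 1 - (1 - rate_mainstream) ** n_phds
print(p_hit_mainstream / p_hit_fringe)   # ~1.0: likelihood ratio near 1

# A random Ph.D at the bar says "yes" with probability equal to the belief rate,
# so a "yes" strongly favors the "fairly mainstream" hypothesis.
print(rate_mainstream / rate_fringe)     # 10000.0: a much larger likelihood ratio
```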
An extreme case of the Hanson Effect is hearing that the world's top expert supports something. If there's only one World's Top Expert, then that person's opinion is always meaningful. This is why it was such a big deal when Watson came out in favor of a connection between race and intelligence. Now, I don't know if Watson actually knows anything about human genetic variation. He could have just had one clever insight about biochemistry way back when, and be completely clueless around the rest of the field. But if we imagine he really is the way his celebrity status makes him seem - the World's Top Expert in the field of genetics - then his opinion carries special weight for two reasons: first of all, it's the only data point we have in the field of "what the World's Top Expert thinks", and second, it suggests that a large percentage of the rest of the scientific community agrees with him (his status as World's Top Expert makes him something of a randomly chosen data point, and it would be very odd if we randomly pick the only data point that shares this opinion).
Assertions supported by unsupported claims of "evidence"
So much for completely unsupported assertions. Seeing as most people are pretty good at making up "evidence" that backs their pet beliefs, does it add anything to say "...and I arrived at this conclusion using evidence" if you refuse to say what the evidence is?
Well, it's a good signal for sanity. Instead of telling you only that at least one person believes in this hypothesis, you now know that at least one person who is smart enough to understand that ideas require evidence believes it.
This is less useful than it sounds. Disappointingly, there are not too many ideas that are believed solely by stupid people. As mentioned before, even creationism can muster a list of Ph.Ds who support it. When I was much younger, I was once quite impressed to hear that there were creationist Ph.Ds with a long list of scientific accomplishments in various fields. Since then, I learned about compartmentalization. So all that this "...and I have evidence for this proposition" can do on a factual level is highlight the existence of compartmentalization for people who weren't already aware of it.
But on a nonfactual level...again, it signals sanity. The difference between "I believe some races are less intelligent than others" and "I believe some races are less intelligent than others, and I arrived at this conclusion using evidence" is that the second person is trying to convince you ze's not some random racist with an axe to grind, ze's an amateur geneticist addressing an interesting biological question. I don't evaluate the credibility of the two statements any differently, but I'd much rather hang out with the person who made the second one (assuming ze wasn't lying or trying to hide real racism behind a scientific veneer).
Keep in mind that most communication is done not to convince anyone of anything, but to signal the character of the person arguing (source: I arrived at this conclusion using evidence). One character signal may interfere with other character signals, and "I arrived at this belief through evidence" can be a powerful backup. I have a friend who's a physics Ph.D, an evangelical Christian with a strong interest in theology, and an American living abroad. If he tries to signal that he's an evangelical Christian, he's very likely to get shoved into the "redneck American with ten guns and a Huckabee bumper sticker" box unless he immediately adds something like "and I base this belief on sound reasoning." That is one very useful signal there, and if he hadn't given it, I probably would have never bothered talking to him further. It's not a signal that his beliefs are actually based on sound reasoning, but it's a signal that he's the kind of guy who realizes beliefs should be based on that sort of thing and is probably pretty smart.
You can also take this the opposite way. There's a great Dilbert cartoon where Dilbert's date says something like "I know there's no scientific evidence that crystals can heal people, but it's my point of view that they do." This is a different signal; something along the lines of "I'd like to signal my support for New Agey crystal medicine, but don't dock me points for ignoring the scientific evidence against it." This is more of a status-preserving maneuver than the status-claiming "I have evidence for this" one, but astoundingly it seems to work pretty well (except on Dilbert, who responded, "When did ignorance become a point of view?")