cousin_it comments on A Parable On Obsolete Ideologies - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"Don't matter" sounds to me like a cop-out akin to religion's "retreat to faith". A lot of evidence for absence of systematic variation in ability gets noticed and promoted by SWPL followers, indicating the question's importance to the belief system.
It's more that it really doesn't matter. Noise from individual variation swamps group variation in almost every practical case, and other outward cues of ability (i.e., hard-to-fake signals) are more informative.
The main reasons to make a big deal out of group variation are, in order of how common they are:
We have a lot of epistemic rationalists here in the third category, but in most cases if you talk about the reality of group variation people will assume you are one of the first two types, which probably isn't helpful for anyone.
To the first paragraph: the variables "variation within group" and "difference of means between groups" should be regarded as belonging to different statistical data types, not yielding any significant insight when compared. For example, variation of physical strength among men is greater than the difference between mean strengths of men and women, but that doesn't imply the latter is insignificant. The same holds for many traits of many real-world ensembles, e.g. most behavioral differences within/between dog breeds.
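The point can be made concrete with a toy calculation (all distributions and numbers here are invented for illustration): even when the within-group spread is several times larger than the gap between group means, a modest mean shift produces large differences in the tails of the distributions.

```python
from statistics import NormalDist

# Two hypothetical trait distributions with identical spread and a small
# mean gap: the within-group SD (15) is three times the gap between means (5).
group_a = NormalDist(mu=100, sigma=15)
group_b = NormalDist(mu=105, sigma=15)

threshold = 130  # an arbitrary "two sigma above group_a's mean" cutoff

# Fraction of each group above the cutoff.
tail_a = 1 - group_a.cdf(threshold)
tail_b = 1 - group_b.cdf(threshold)

print(f"P(A > {threshold}) = {tail_a:.4f}")  # ~0.0228
print(f"P(B > {threshold}) = {tail_b:.4f}")  # ~0.0478
print(f"ratio = {tail_b / tail_a:.2f}")      # the tails differ by a factor of ~2
```

A 5-point mean gap against a 15-point spread sounds negligible near the middle of the distributions, yet roughly doubles representation past the two-sigma cutoff, which is why "within-group variation is bigger" does not by itself make mean differences irrelevant at the extremes.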
To the rest: ad hominem and Bulverism fallacy. But if you really insist on knowing my reasons, I'm irritated by hearing popular falsehoods.
Well, yes, we all know this. But you miss the point--differences of means are almost never relevant. If I need someone to help me carry something, I need a strong person, and I'd do better looking at more useful cues ("does this person look fit and healthy?") rather than thinking about group means ("oh, I'll ignore this female athlete and get the scrawny nerd to help me").
This is why it "doesn't matter".
In other words, you assign intrinsic value to truth independent of instrumental value, which is exactly what I said. This is fine! We like truth here. But, outside of LW, this can lead to people making uncharitable assumptions about your motivations, which is all I was saying.
When you have more relevant information you're better off using that. When you don't, e.g. when using a dating site, you're better off following justified stereotypes than ignoring them.
One particularly controversial such case is police stop-and-frisks. I read somewhere that NYC blacks get stop-and-frisked disproportionately more often than whites, but black criminals have a lower chance of getting stop-and-frisked than white criminals due to PC backlash. The different concepts of fairness seem hard to reconcile in low-information situations like that. Or take racial profiling in airports: if you call for using more relevant information in that case, rather than less as the SWPL crowd desires, you'll need an Orwellian level of knowledge about the passengers, which causes new morality problems.
Another example would be having to make decisions concerning large groups of people. Would you or wouldn't you allow unsupervised immigration from a certain country based on the average IQ there? What if it's 50? As you make your decision, keep in mind that people from this country will try harder to get into yours because they're worse off than other potential donor countries, so you might get a self-selection effect on your hands.
Yes, but such low-information situations are fairly rare.
This is possibly a case where group means actually are relevant, yes, modulo a lot of assumptions about how people are selected for the stop-and-frisk.
Again, not objectionable on the surface. However, given the stunningly ineffective nature of airport security (cf. Bruce Schneier and the "security theater" concept), I doubt this actually provides a benefit, for reasons wholly unrelated to race.
As an aside, in adversarial situations you need to be careful that weighted targeting based on superficial cues doesn't merely give the enemy information on how to disproportionately avoid scrutiny.
EDIT: Either I missed this part or you added it while I was replying, but:
Assuming you want to filter applicants based on IQ, testing individuals seems vastly more helpful than assuming based on population mean, especially given that the demographics of applicants will not be the same as the total population. Also, if you admit all immigrants from any country you're likely to get a self-selected group of people who are less successful at home.
Typically, this works to your advantage. If you scrutinize middle eastern travelers more, terrorist groups might decide to recruit white hijackers. But where are they going to find them? Wherever you think they might look, you can have CIA undercover agents trying to be hired.
The primary strength conspiracies have is small, close-knit groups. Anything you can do to force them to become larger or more diverse can help.
This probably works for terrorist groups. And actually explains why they recruit so many children of diplomats. (This suggests the CIA should try to recruit agents among them more, but that's a problem if they're already terrorists.)
But for groups that can be a lot more diverse, here's a note to cops: If you see a group of high school students, most of whom are dark-skinned boys with long hair and marijuana-related T-shirts, and one of whom is a prim-and-proper white girl with her nose constantly buried in a textbook, stop carding and searching the boys all the time. The girl has the weed, and she thinks you're insulting her intelligence.
Yes, your arguments sound pretty convincing. I'll have to reconsider my position on the frequent applicability of stereotypes.
The problem is you can't use IQ tests. You see, IQ tests are clearly racist, and insidiously so, since culturally loaded sub-tests like vocabulary actually seem to show smaller ethnic differences.
No matter. We'll find out how the tests are racist some day!
Your mockery, at least as concerns standardized testing in general, is misplaced.
That is in reference to a firm that produces standardized tests deciding to abandon a particular question because inner city kids did better on it than suburban kids did.
I'm talking about IQ tests constructed by academics not firms.
I haven't heard about the sub-test discrepancies, that sounds interesting. Do you have a link?
The examples I saw were analogies; you would get things like cup :: saucer or yacht :: regatta which upper-class kids were far more likely to be familiar with than lower-class kids (and since class and race are correlated, that meant there would be racial discrepancies).
IQ tests may or may not measure intelligence, but they measure what's usually called abstract thinking pretty well. They don't really measure cultural knowledge; at least, real IQ tests don't.
Cultural bias being behind the observed gaps isn't really something that seems very likely to most psychometricians. From a paper titled "Mainstream Science on Intelligence", published to clear up public misconceptions about expert opinion during the debates surrounding the publication of Murray's book The Bell Curve.
Some experts, of course, disagree with this. And common sense. I can sort of see why Ashkenazi Jews do better on IQ tests than other European Americans: because they are culturally literate, wealthy and more educated, rather than high IQs leading to cultural literacy and education. But why do East Asians? In Asia, even? I suppose it could just happen that the stuff that goes with high status in Western society has the same function in East Asian societies. In basically every developed country ever.
That then brings us dangerously close to considering that some cultures may be objectively better at making a working, prosperous and safe technological society. Maybe we should work to change those aspects among immigrants, perhaps even in their native countries, to improve their quality of life? Oh, but that's a no-no, you nasty, nasty cultural-imperialist utilitarian!
But leaving this last tangent aside: "IQ tests are biased and thus don't suggest actual differences in intelligence" is far from the ultimate argument against such policies, yet we all know it would be used as such against any proposal to limit immigration to any Western country with IQ testing. Funny how formal education as a criterion for immigration isn't ever put down with the same argument.
At this point, let's once again for the sake of argument say that the dissenting experts are right. Even so, the tests still accurately measure how well large groups of people do in Western-type societies, since they are positively correlated with everything from health to educational attainment to low criminality to high income. Any country trying to craft the best policy for its citizens would find such a measure useful, since it measures something as trivial as... you know... how well the people are actually probably going to do, as things stand, not how well society wants to pretend they are likely to do.
Hence my sarcastic comment, aimed at the kind of objections that would be used to oppose such a no-brainer (pardon the pun) policy.
This can get kind of interesting if what is assumed to be true affects what is actually true.
Why was this voted down? It's a good point.
Because many people are sceptical of stereotype threat.
If all of science agreed that members of one race really were slightly smarter on average than members of another race, that could be extremely demoralizing for the people of the slightly-dumber race. See stereotype threat. This seems like a high price to pay to eliminate a popular falsehood--I imagine there are other falsehoods that would be easier to eliminate for a smaller price. And unlike religion, there's no long-term benefit associated with its removal.
Maybe. Maybe not.
It's mentioned below to some extent, but if you start with the assumption that there's no systematic variation between groups, then if you see a statistical difference in how groups are treated, that's evidence of a systematic bias. If it turns out that there is systematic variation between groups, the data can be explained without the bias. It's the difference between "Harvard is racist" and "Harvard accepts bright people regardless of race".
Harvard has two responses to these claims: they can continue getting called racist, or they can unfairly admit minorities who don't deserve to get in. Both of these alternatives are preferable to telling a large group with normal human psychology that they are inferior.
Edit: I tend to think it's OK for Harvard to admit too many minorities because I think having the upper class be made up of people from a variety of backgrounds is valuable. But I could see how this could be a problem in other domains, such as cousin_it's mortgage example.
You do realize the top universities (and especially the Ivy Leagues) systematically discriminate against Asian students, right?
It's really hard to fight or talk about something like that if one can't just point out that East Asians have higher average IQs than, say, Europeans.
Science makes a lot of vastly more demoralizing conclusions like Darwinism or the possibility of nuclear weapons. If you really believe what you say you believe, you should focus on debunking those first.
The first step in solving a problem is to recognise it. If I discovered I have cancer I would be demoralised immensely but I'd prefer that and take a shot at recovering rather than die unknowingly.
Denial is not a path to improvement.
Strongly seconded. I speak from experience: when evidence starts mounting for some horrible, nightmarish proposition that you're scared of, it is tempting to tell yourself that even if it were true, it wouldn't really matter, that there would be no benefit to acknowledging it, that you can just go on acting as you've always done, as if nothing's changed. But on your honor as an aspiring rationalist, you must face the pain directly. When you get a hint that this world is not what you thought it was, that you are not what you thought you were--look! And update!---no matter how much it hurts, no matter how much your heart may cry for the memory of the world you thought this was. Do it selfishly, in the name of the world you thought you knew: because once you have updated, once you see this hellish wasteland for what it really is, then you can start to try to patch up what few things you can.
Suppose you really don't like gender roles, and you're quietly worried about something you read about evolutionary psychology. Brushing it all under the rug won't help. Investigate, learn all you can, and then do something. Maybe something drastic, maybe something trivial, but something. Experiment with hormones! Donate a twenty to GenderPAC! Use your initials in your byline! But something, anything other than defaulting to ignorance and letting things take their natural course.
As soon as you start talking about your "honor as an aspiring rationalist", you're moving from the realm of rationality to ideology.
Like I said, I don't think this question matters and I'm mostly indifferent to what the answer actually is. I'm just trying to protect the people who do care.
Well, sure, but the ideological stance is "You should care about rationality." I should think that that's one of the most general and least objectionable ideologies there is.
But I do care, and I no longer want to be protected from the actual answer. When I say that I speak from experience, it's really true. There's a reason that this issue has me banging out dramatic, gushy, italics-laden paragraphs on the terrible but necessary and righteous burden of relinquishing your cherished beliefs---unlike in the case of, say, theism, in which I'm more inclined to just say, "Yeah, so there's no God; get over it"--although I should probably be more sympathetic.
So, why does it matter? Why can't we just treat the issue with benign neglect, think of ourselves strictly as individuals, and treat other people strictly as individuals? It is such a beautiful ideal--that my works and words should be taken to reflect only on myself alone, and that the words and works of other people born to a similar form should not be taken to reflect on me. It's a beautiful ideal, and it seems like it should be possible to swear our loyalty to the general spirit of this ideal, while still recognizing that---
In this world, it's not that simple. In a state of incomplete information (and it is not at all clear to me what it would even mean to have complete information), you have to make probabilistic inferences based on what evidence you do have, and to the extent that there are systematic patterns of cognitive sex and race differences, people are going to update their opinions of others based on sex and race. You can profess that you're not interested in these questions, that you don't know---but just the same, when you see someone acting against type, you're probably going to notice this as unusual, even if you don't explicitly mention it to yourself.
There are those who argue--as I used to argue--that this business about incomplete information, while technically true, is irrelevant for practical purposes, that it's easy to acquire specific information about an individual, which screens off any prior information based on sex and race. And of course it's true, and a good point, and an important point to bear in mind, especially for someone who comes to this issue with antiegalitarian biases, rather than the egalitarian biases that I did. But for someone with initial egalitarian biases, it's important not to use it---as I think I used to use it---as some kind of point scored for the individualist/egalitarian side. Complex empirical questions do not have sides. And to the extent that this is not an empirical issue; to the extent that it's about morality---then there are no points to score.
It gets worse---you don't even have anywhere near complete information about yourself. People form egregiously false beliefs about themselves all the time. If you're not ridiculously careful, it's easy to spend your entire life believing that you have an immortal soul, or free will, or that the fate of the light cone depends solely on you and your genius AI project. So information about human nature in general can be useful even on a personal level: it can give you information about yourself that you would never have gotten from mere introspection and naive observation.

I know from my readings that if I'm male, I'm more likely to have a heart attack and less likely to get breast cancer than would be the case if I were female, whereas this would not at all be obvious if I didn't read. Why should this be true of physiology, but not psychology? If it turns out that women and men have different brain designs, and I don't have particularly strong evidence that I'm an extreme genetic or developmental anomaly, then I should update my beliefs about myself based on this information, even if it isn't at all obvious from the inside, and even though the fact may offend me and make me want to cry.

For someone with a lot of scientific literacy but not as much rationality skill, the inside view is seductive. It's tempting to cry out, "Sure, maybe ordinary men are such-and-this, and normal women are such-and-that, but not me; I'm different, I'm special, I'm an exception; I'm a gloriously androgynous creature of pure information!" But if you actually want to achieve your ideal (like becoming a gloriously androgynous creature of pure information), rather than just having a human's delusion of it, you need to form accurate beliefs about just how far this world is from the ideal, because only true knowledge can help you actively shape reality.
It could very well be that information about human differences could have all sorts of terrible effects if widely or selectively disseminated. Who knows what the masses will do? I must confess that I am often tempted to say that I have no interest in such political questions--that I don't know, that it doesn't matter to me. This attitude probably is not satisfactory for the same sorts of reasons I've listed above. (How does the line go? "You might not care about politics, but politics cares about you"?) But for now, on a collective or political or institutional level, I really don't know: maybe ignorance is bliss. But for the individual aspiring rationalist, the correct course of action is unambiguous: it's better to know than to not know; it's better to make decisions explicitly and with reason than to let your subconscious decide for you and to let things take their natural course.
I think people may be overapplying the concept of "screening off", though?
If for example RACE -> INTELLIGENCE -> TEST SCORE, then knowing someone's test score does not screen off race for the purpose of predicting intelligence (unless I'm very confused). Knowing test score should still make knowing race less useful, but not because of screening off.
On the other hand, if for example GENDER -> PERSONALITY TRAITS -> RATIONALIST TENDENCIES, then knowing someone's personality traits does screen off gender for the purpose of predicting rationalist tendencies.
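The distinction between the two cases above can be checked mechanically on a toy Bayesian network. In the following sketch (the chain structure and all probabilities are invented purely for illustration), conditioning on the downstream node C leaves A and B dependent, while conditioning on the middle node B renders A and C independent.

```python
from itertools import product

# Toy chain A -> B -> C over binary variables; all numbers are invented.
p_a = {0: 0.5, 1: 0.5}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.3, 1: 0.7}}  # indexed [a][b]
p_c_given_b = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}  # indexed [b][c]

def joint(a, b, c):
    """Full joint of the chain: P(A) * P(B|A) * P(C|B)."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def cond(query, given):
    """P(query | given), computed by brute-force summation over the joint."""
    def total(constraints):
        return sum(joint(a, b, c)
                   for a, b, c in product((0, 1), repeat=3)
                   if all(dict(a=a, b=b, c=c)[k] == v
                          for k, v in constraints.items()))
    return total({**query, **given}) / total(given)

# Conditioning on the downstream node C does NOT screen off A from B:
# the two values below differ, so A still carries information about B.
print(cond({'b': 1}, {'c': 1, 'a': 0}))  # ~0.632
print(cond({'b': 1}, {'c': 1, 'a': 1}))  # ~0.903

# Conditioning on the middle node B DOES screen off A from C:
# the two values below are identical (0.8), so A adds nothing.
print(cond({'c': 1}, {'b': 1, 'a': 0}))
print(cond({'c': 1}, {'b': 1, 'a': 1}))
```

This matches the comment's claim: knowing a test score makes group membership less useful for predicting the intermediate variable but does not screen it off, whereas knowing the intermediate variable itself screens off group membership from anything downstream of it.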
I agree with the overall thrust of this, but I do have some specific reservations:
The temptation is real, and it's a temptation we should be wary of. But it's also important to realize that the more variation there is within the groups, the more reasonable it may be to suppose that we are special (of course, we should still have some evidence for this, but the point is that the more within-group variation there is, the weaker that evidence needs to be). It's difficult to know what to do with information about average differences between groups unless one also knows something about within-group variation.
Again, I'm very sympathetic to this view, but I think you're overselling the case. If (correctly) believing your group performs poorly causes you to perform worse then there's a tradeoff between that and the benefits of accurate knowledge. Maybe accurate knowledge is still best, all things considered, but that's not obvious to me.
Beyond the context of this specific debate, the theory of the second-best shows that incremental movements in the direction of perfect rationality will not always improve decision-making. And there's a lot of evidence that "subconscious" decision-making outperforms more explicit reasoning in some situations. Jonah Lehrer's How We Decide, and pretty much all of Gerd Gigerenzer's books make this argument (in addition to the more anecdotal account in Malcolm Gladwell's Blink). I tend to think they oversell the case for intuition somewhat, but it's still pretty clear that blanket claims about the superiority of more information and conscious deliberation are false. The really important question is how we can best make use of the strengths of different decision-strategies.
One day, I started looking at the people around me, and I began to realize how much they looked like apes. I quickly stopped doing this, because I feared it would cause me to treat them with contempt. And you know what? Doublethink worked. I didn't start treating people with more contempt as a result of purposely avoiding certain knowledge that I knew would cause me to treat people with more contempt.
If you think that my efforts to suppress observations relating the appearance of humans with the appearance of apes were poorly founded, then you have a very instrumentally irrational tendency towards epistemic rationality.
Maybe the lesson to draw is that you don't respect apes enough?
Guilty as charged! I don't want to win by means of doublethink---it sounds like the sort of thing an ape would do. A gloriously androgynous creature of pure information wins cleanly or not at all. (I think I'm joking somewhat, but my probability distribution on to what extent is very wide.)
At some point you will have limited resources, and you will need to decide how much that preference for winning cleanly is worth. How much risk you're willing to take of not winning at all, in exchange for whatever victory you might still get being a clean one.
For example, say there are two people you love dangling off a cliff. You could grab one of them and pull them to safety immediately, but in doing so there's some chance you could destabilize the loose rocks and cause the other to fall to certain death.
The gloriously androgynous creature of pure information (henceforth GACoPI) that you wish you were cannot prioritize one love over the other in a timely fashion, and will simply babble ineffectual encouragement to both until professional rescuers arrive, during which time there is some chance that somebody's arms will get tired or the cliff will shift due to other factors, killing them both. An ape's understanding of factional politics, on the other hand, has no particular difficulty judging one potential ally or mate as more valuable, and then rigging up a quick coin-flip or something to save face.
The grandparent uses "winning cleanly" to mean winning without resorting to doublethink. To go from that to assuming that a GACoPI is unable to enact your proposed ape solution, or to come up with courses of action other than standing around like an idiot, smacks of Straw Vulcanism (WARNING: TVTropes).
Wouldn't your efforts be better directed at clearing up whatever confusion leads you to react with contempt to the similarity to apes?
I can maybe see myself selling out epistemic rationality for an instrumental advantage in some extreme circumstance, but I find abhorrent the idea of selling it so cheaply. It seems to me a rationalist should value their ability to see reality higher, not give it up at the first sign of inconvenience.
Even on instrumental grounds. Just like theoretical mathematics tends to end up having initially unforeseen practical application, giving up on epistemic rationality carries potential of unforeseen instrumental disadvantage.
Good point.
Might be just mind projection on my part, but it seems to me that in those accursed cases, where various aspects of an "egalitarian" way of thought - that includes both values, moral intuitions, ethical rules, actual beliefs about the world and beliefs about one's beliefs - all get conflated, and the entire (silly for an AI but identity-forming for lots of current humans) system perceives some statement of fact as a challenge to its entire existence... well, the LW crowd at least would pride themselves on not being personally offended.
If tomorrow it was revealed with high certainty that us Slavs are genetically predisposed against some habits that happen to be crucial for civilized living and running a state nicely, I'd most definitely try to take it in stride. But when something like this stuff is said, I tend to feel sick and uncertain; I don't see a purely consequentialist way out which would leave me ethically unscathed.
Ah, if only it were so easy as identifying the objective state of the world, then trying to act in accordance with (what you've settled on as) your terminal values.* But this would require both more rationality and more compartmentalization than I've seen in humans so far.
I agree that denial usually seems like a bad idea, but the problem with things like stereotype threat is that they suggest (and more importantly provide evidence) that sometimes it might actually be useful (a path to improvement, even if not necessarily the first-best path).
The trick, presumably, is to distinguish the situations when this will hold from those when it doesn't.
Yes, but there is a long-term benefit associated with the removal of your cancer. On the other hand, if you had a blemish on your shoulder, you'd be better off not noticing it.