A few examples (in approximately increasing order of controversy):
If you proceed anyway...
- Identify knowledge that may be dangerous. Forewarned is forearmed.
- Try to cut dangerous knowledge out of your decision network. Don’t let it influence other beliefs or your actions without your conscious awareness. You can’t succeed completely at this, but it might help.
- Deliberately lower dangerous priors: acknowledge the possibility that your brain is contaminating your reasoning, and then overcompensate, because you know that you're still too overconfident.
- Spend a disproportionate amount of time seeking contradictory evidence. If believing something could have a great cost to your values, make a commensurately great effort to be right.
- Just don’t do it. It’s not worth it. And if I found out, I’d have to figure out where you live, track you down, and kill you.
In the past I have largely agreed with the sentiment that truth and information are mostly good, and when they create problems the solution is even more truth.
But in the interest of knowing more, I sometimes try to seek evidence that supports things I think are false or that I don't want to be true. I also try to notice when something I agree with is asserted without good evidential support. And I don't think you supported your conclusions there with real evidence.
This reads more to me like prescriptive signaling than like evidence. While it is very likely true that "IQ test results" are not the same as "human worth", it doesn't follow that an arbitrary person would not change their behavior towards someone who is "measurably not very smart" in ways that person might not like. And for some specific people (like WrongBot, by his or her own admission of fear) the fear may very well be justified.
When I read Cialdini's book Influence, I was struck by the number of times his chapters took the form: (1) describe a mental shenanigan, (2) offer evidence that people are easily and generally tricked in this way, (3) explain how it functions as a bias when manipulated and as a useful heuristic in non-evil environments, (4) offer laboratory evidence that basic warnings about the trick offer little protective benefit, and (5) exhort the reader to "be careful anyway" with some ad hoc and untested advice.
Advice should be supported with evidence... and sometimes I think a rationalist should know when to shut up and/or bail out of a bad epistemic situation.
Evidence from implicit association tests indicates that people can be biased against other people without even being aware of it. When scientists tried to measure the degree of "cognitive work" it takes to parse racist situations, they found that observing overt racism against black people was mentally taxing to white people, while observing subtle racism against black people was mentally taxing to black people. The whites were oblivious to subtle racism and didn't even try to process it because it happened below their perceptual awareness; overt racism made them stop and momentarily ponder whether maybe (shock!) we don't live in a colorblind world yet. The blacks knew racism was common (but not universal) and factored it into their model of the situation without much trouble when the racism was overt; the tricky part was subtle racism, where they had to think through the details to understand what was going on.
(I feel safe saying that white people are frequently oblivious to racism, and are sometimes active but unaware perpetrators of subtle forms of racism because I myself am white. When talking about group shortcomings, I find it best to stick to the shortcomings of my own group.)
Based on information like this, I can easily imagine that I might learn a true (relatively general) fact, use it to leap to an unjustifiable conclusion with respect to an individual, have that individual be harmed by my action, and never notice unless "called on it".
But when called on it, it's quite plausible that I'd leap to defend myself and engage in a bunch of motivated cognition to deny that I could possibly ever be biased... and I'd dig myself even deeper into a hole, updating the wrong way when presented with "more evidence". So it would seem that more information would just leave me more wrong than I started with, unless something unusual happened.
(Then, to compound my bad luck I might cache defensive views of myself after generating them in the heat of an argument.)
So it seems reasonable to me that if we don't have the time to drink largely, then maybe we should avoid shallow draughts. And even then we should be cautious about any subject that impinges on mind-killer territory, because more evidence really does seem to make you more biased in such areas.
I upvoted the article (from -2 to -1) because the problems I have with it are minor issues of tone, rather than major issues with the content. The general content seems to be a very fundamental rationalist "public safety message", with more familiarity assumed than is justified (like assuming everyone automatically agrees with Paul Graham, and putting in a joke about violence at the end).
I don't, unfortunately, know of any experimentally validated method for predicting whether a specific person at a specific time will be harmed or helped by a specific piece of "true information", and this is part of what makes it hard to talk with people in a casual manner about important issues and feel justifiably responsible about it. In some sense, I see this community as existing, in part, to try to invent such methods and perhaps even to experimentally validate them. Hence the upvote to encourage the conversation :-)
Those are good points.
What I was trying to encourage was a practice of trusting your own strength. I think that morally conscientious people (as I suspect WrongBot is) err too much on the side of thinking they're cognitively fragile, worrying that they'll become something they despise. "The best lack all conviction, while the worst are full of passionate intensity."
Believing in yourself can be a self-fulfilling prophecy; believing in your own ability to resist becoming a racist might also be self-fulfilling. There's plenty of evidence for co...