LW systemic bias: US centrism
Recently, I have noticed a cultural bias toward the United States running through LW threads. It is perhaps to be expected of an English-language website, but for one that is about, among other things, overcoming bias, it is important to recognize one's own.
Aspects of the bias I have observed include:
- Using Imperial (US customary) units instead of SI, the standard for scientific literature and discussion.
- Presuming the US by default when it is assumed that no country name needs to be given.
- Expecting reader familiarity with US-specific cultural concepts.
- A tendency to focus on the US first and foremost when talking about worldwide problems and scenarios.
I'm not the first to raise such concerns, either.
By comparison, the English Wikipedia strikes me as an example of an international English-language project that's relatively successful at recognizing and fighting systemic bias; it even has a whole set of template messages to mark articles with identified problems.
To quote Wikipedia itself:
The average Wikipedian on the English Wikipedia is (1) a male, (2) technically inclined, (3) formally educated, (4) an English speaker (native or non-native), (5) European–descent, (6) aged 15–49, (7) from a majority-Christian country, (8) from a developed nation, (9) from the Northern Hemisphere, and (10) likely employed as a white-collar worker or enrolled as a student rather than employed as a labourer.
The reason I haven't mentioned other obvious biases, such as gender, age, education, or First World biases, is that those (in my experience) tend to be more subtle here on LW, and that I'm myself subject to some of them. However, I might cook something up on them later.
A study in Science on memory conformity
I believe this may be a good addition to the cognitive bias literature:
Following the Crowd: Brain Substrates of Long-Term Memory Conformity
Micah Edelson (1), Tali Sharot (2), Raymond J. Dolan (2), Yadin Dudai (1)
1. Department of Neurobiology, Weizmann Institute of Science, Israel.
2. Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK.
ABSTRACT
Human memory is strikingly susceptible to social influences, yet we know little about the underlying mechanisms. We examined how socially induced memory errors are generated in the brain by studying the memory of individuals exposed to recollections of others. Participants exhibited a strong tendency to conform to erroneous recollections of the group, producing both long-lasting and temporary errors, even when their initial memory was strong and accurate. Functional brain imaging revealed that social influence modified the neuronal representation of memory. Specifically, a particular brain signature of enhanced amygdala activity and enhanced amygdala-hippocampus connectivity predicted long-lasting but not temporary memory alterations. Our findings reveal how social manipulation can alter memory and extend the known functions of the amygdala to encompass socially mediated memory distortions.
Evolution, bias and global risk
Sometimes we make a decision in a way that differs from how we think we should make it. When this happens, we call it a bias.
When put this way, the first thing that springs to mind is that different people might disagree on whether something is actually a bias. Take the bystander effect. If you're of the opinion that other people are way less important than yourself, then the ability to calmly stand around not doing anything while someone else is in danger would be seen as a good thing. You'd instead be confused by the non-bystander effect, whereby people (when separated from the crowd) irrationally put themselves in danger in order to help complete strangers.
The second thing that springs to mind is that the bias may exist for an evolutionary reason, and not just be due to bad brain architecture. Remember that evolution doesn't always produce the behavior that makes the most intuitive sense: creatures, presumably including humans, tend to act so as to maximize their reproductive success, not necessarily in the way that seems most sensible from the outside.
The statement that humans act in a fitness-maximizing way is controversial. Firstly, we are adapted to our ancestral environment, not our current one. It seems very likely that we're not well adapted to the ready availability of high-calorie food, for example. But this argument doesn't apply to everything. A lot of the biases appear to describe situations which would exist in both the ancestral and modern worlds.
A second argument is that a lot of our behavior is governed by memes these days, not genes. It's certain that the memes that survive are the ones which best reproduce themselves; it's also pretty plausible that exposure to memes can tip us from one fitness-maximizing behavioral strategy to another. But memes forcing us to adopt a highly suboptimal strategy? I'm sceptical. It seems like there would be strong selection pressure against that: pressure to pass the memes on without letting them significantly affect our behavior. Memes existed in our ancestral environments too.
And remember that just because you're behaving in a way that maximizes your expected reproductive fitness, there's no reason to expect you to be consciously aware of this fact.
So let's pretend, for the sake of simplicity, that we're all acting to maximize our expected reproductive success (and all the things that we know lead to it, such as status and signalling and stuff). Which of the biases might be explained away?
The bystander effect
Eliezer points out:
We could be cynical and suggest that people are mostly interested in not being blamed for not helping, rather than having any positive desire to help - that they mainly wish to escape antiheroism and possible retribution.
He lists two problems with this hypothesis. Firstly, that the experimental setup appeared to present a selfish threat to the subjects. This I have no convincing answer to. Perhaps people really are just stupid when it comes to fires, not recognising the risk to themselves, or perhaps this is a gaping hole in my theory.
The other criticism is more interesting. Telling people about the bystander effect makes it less likely to happen? Well, under this hypothesis, of course it would. The key to not being blamed is to formulate a plausible explanation; the explanation "I didn't do anything because no-one else did either" suddenly sounds a lot less plausible when you know about the bystander effect. (And if you know about it, the person you're explaining yourself to is more likely to know about it as well; we share memes with our friends.)
The affect heuristic
This one seems quite complicated and subtle, and I think there may be more than one effect going on here. But one class of positive-affect bias can be essentially described as: phrasing an identical decision in more positive language makes people more likely to choose it. The example given is "saving 150 lives" versus "saving 98% of 150 lives". (OK, these aren't quite identical decisions, but the difference in preference is more than 2% and goes in the wrong direction.) Apparently putting in the word 98% makes it sound more positive to most people.
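To spell out the arithmetic: 98% of 150 is 147, so the "98%" framing offers strictly fewer lives saved, yet people rate it more positively. (And since 98% of 153 is almost exactly 150, saving 150 lives can itself be repackaged as "98% of 153 lives", as done below.)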
This also seems to make sense if we view it as trying to make a justifiable decision, rather than a correct one. Remember, the 150(ish) lives we're saving aren't our own; there's no selective pressure to make the correct decision, just one that won't land us in trouble.
The key here is that justifying decisions is hard, especially when we might be faced with an opponent more skilled in rhetoric than ourselves. So we are eager for additional rhetoric to be supplied which will help us justify the decision we want to make. If I had to justify saving 150 lives (at some cost), it would honestly never have occurred to me to phrase it as "98% of 153 lives". Even if it had, I'd feel like I was being sneaky and manipulative, and I might accidentally reveal that. But to have the sneaky rhetoric supplied to me by an outside authority, that makes it a lot easier.
This implies a prediction: when asked to justify their decision, people who have succumbed to positive-affect bias will repeat the positive-affect language they have been supplied, possibly verbatim. I'm sure you've met people who quote talking points verbatim from their favorite political TV show; you might assume the TV is doing their thinking for them. I would argue instead that it's doing their justification for them.
There is a class of people, whom I will call non-pushers, who:
- would flick a switch if it would cause a train to run over (and kill) one person instead of five, yet
- would not push a fat man in front of that train (killing him) if it could save the five lives
So what's going on here? Our feeling of shouldness is presumably how social pressure feels from the inside. What we consider right is (unless we've trained ourselves otherwise) likely to be what will get us into the least trouble. So why do non-pushers get into less trouble than pushers, if pushers are better at saving lives?
It seems pretty obvious to me. The pushers might be more altruistic in some vague sense, but they're not the sort of person you'd want to be around. Stand too close to them on a bridge and they might push you off. Better to steer clear. (The people who are tied to the tracks presumably prefer pushers, but they don't get any choice in the matter). This might be what we mean by near and far in this context.
Another way of putting it is that if you start valuing all lives equally, rather than putting those closest to you first, then you might start defecting in games of reciprocal altruism. Utilitarians appear cold and unfriendly because they're less worried about you and more worried about what's going on in some distant, impoverished nation. They will start to lose the reproductive benefits of reciprocal altruism and socialising.
Global risk
In Cognitive Biases Potentially Affecting Judgment of Global Risks, Eliezer lists a number of biases which could be responsible for people's underestimation of global risks. There seem to be a lot of them. But I think that from an evolutionary perspective, they can all be wrapped up into one.
Group Selection doesn't work. Evolution rewards actions which profit the individual (and its kin) relative to others. Something which benefits the entire group is nice and all that, but it'll increase the frequency of the competitors of your genes as much as it will your own.
It would be all too easy to say that we cannot instinctively understand existential risk because our ancestors have, by definition, never experienced anything like it. But I think that's an over-simplification. Some of our ancestors probably did survive the collapse of societies, but they didn't do it by preventing the society from collapsing. They did it by individually surviving the collapse or by running away.
But if a brave ancestor had saved a society from collapse, wouldn't he (or to some extent, she) become an instant hero with all the reproductive advantage that affords? That would certainly be nice, but I'm not sure the evidence backs it up. Stanislav Petrov was given the cold shoulder. Leading climate scientists are given a rough time, especially when they try to see their beliefs turned into meaningful action. Even Winston Churchill became unpopular after he helped save democratic civilization.
I don't know what the evolutionary reason for hero-indifference would be, but if it's real then it pretty much puts the nail in the coffin for civilization-saving as a reproductive strategy. And that means there's no evolutionary reason to take global risks seriously, or to act on our concerns if we do.
And if we make most of our decisions on instinct - on what feels right - then that's pretty scary.
Should I be afraid of GMOs?
I was raised to believe that genetically-modified foods are unhealthy to eat and bad for the environment, and given a variety of reasons for this, some of which I now recognize as blatantly false (e.g., human genetic code is isomorphic to fundamental physical law), and a few of which still seem sort of plausible.
Because of this history, I need to adjust my credence heavily downward from my raw sense of plausibility.
The major reasons I see to believe that GMOs are safe are:
- I would probably think they were dangerous even if they were safe, due to my upbringing.
- In general, when someone opposes a particular field of engineering on the grounds that it's unnatural and dangerous, they have usually turned out to be wrong.
- It's not quite obvious to me that introducing genetically-engineered organisms to a system is significantly more dangerous than introducing non-native naturally-evolved organisms.
The major reasons I see to believe that GMOs are dangerous are:
- I might believe they were safe even if they were dangerous, due to "yay science" (which was also part of my upbringing).
- We are designing self-replicating things and using them without reliable containment, thereby effectively releasing them into the wild.
So: green goo, yes or no?
"I know I'm biased, but..."
Inspired by: The 5-Second Level, Knowing About Biases Can Hurt People
"I know I'm biased, but..." and its equivalents seem to be relatively common in casual conversation--I've encountered the phrase in classroom discussions, on Internet message boards, and in political arguments. In most cases, "I know I'm biased, but..." is used as a way of feigning humility and deflecting criticism by preemptively responding to accusations of bias. That is, the speaker acknowledges that their argument may be flawed in order to deny their opponent the opportunity point out particular biases. It's a way of signaling to the audience, "Yes, there are errors in this line of reasoning, but I already know that, so you can't accuse me of being biased."
But as we all know by now, it's not enough to just acknowledge biases--you have to actually correct the error before you can move on. Admitting that your argument is based on bias does not absolve you of your error, and it doesn't make your argument any truer.
Therefore, "I know I'm biased, but" is a cached thought that we would be better off without. But how can we get rid of it? Tabooing the phrase "I know I'm biased, but..." is not enough, since your brain will probably end up substituting something similar, such as "I may be wrong, but..." instead of making the appropriate correction. Instead, it is necessary to force your brain to consciously think about the bias instead of instinctively rationalizing the biased argument. This is a skill that takes place on the 5-second level: you have to stop your train of thought mid-sentence and think about the situation more clearly. The following should serve as an anti-pattern for when you notice yourself thinking, "I know I'm biased, but...":
1) Stop. I'm not ready to proceed. If there's a bias in my argument, bulldozing over it is never the correct solution. I need to just cut myself off in mid-sentence and think about this.
2) Identify the bias. What is this bias that my brain is trying to cover up? Does it have a name? Where have I read about it before? What heuristic am I using that is causing the problem? Do I have any emotional attachment to this argument that might cloud my judgment? How would I feel if this argument was wrong? Where is my information coming from? Did I do a thorough job researching this argument?
3) Think about potential solutions. What heuristic should I be using instead of the one I am using? Can I substitute a quantitative analysis or Bayesian update instead of jumping to a particular conclusion? Do I need to do more research to determine if this argument is true? What other sources of evidence can I consult?
4) Re-analyze using a different method. What happens when I use the heuristics I just thought about instead of the ones I originally used? What pieces of evidence really support my argument? What facts would need to be different for it to be false? Can I compare multiple perspectives on this argument?
5) Re-evaluate the argument. Does the argument still look correct? Does approaching the problem with a different method yield the same results? Have I completely explained away the bias?
An abstract explanation isn't always enough, so here is an example:
"...and that's why," Albert concluded, "the iPhone is absolutely terrible!"
"I know I'm biased," Barry replied, "but iPhone is the best smartphone on the market!"
Uh-oh, thought Barry. I said that phrase again. Something's not right here. "Hang on a moment..."
Why would I think that the iPhone is the best smartphone on the market? How would I feel if it wasn't the best phone? Well, I'd be kind of annoyed that I spent all that money to buy one. I'd feel disappointed because the advertisement made it look really awesome, and I've always told everyone that it was worth the price. Am I rationalizing this? Hmm, maybe I am rationalizing and I just don't want to believe that I made a bad purchase.
Ok, so what if it is rationalization? What am I supposed to do now? Didn't I read something on LessWrong about this? This feels like "politics is the mind-killer" territory--I should probably be re-thinking my arguments and checking for bias.
But how should I be evaluating the quality of my iPhone? I guess I should ask myself what features I care about--let's pick three. Well, the most important thing to me is service--I make a lot of calls for work and I don't want any of them to be dropped. I want my phone to be durable, too--I'm pretty clumsy and I drop it from time to time. And the phone bill is important too.
Alright, let's add all of this up: The iPhone is pretty fragile, I've already cracked the screen slightly. And it does drop calls sometimes--there might be a network with better coverage, I'm not sure. And the phone bill--my old phone was definitely a lot cheaper, but it also wasn't a smartphone. I'd have to research other networks' coverage and pricing to be sure.
Wow, I might've been wrong about this. That means I wasted a lot of money. And it also means that the iPhone probably isn't "the best" phone out there. Wait, that's not right--it could be the best, but I don't have the evidence to prove it, so my argument isn't right. I have to gather more evidence.
"Are you still there?" Albert frowned in puzzlement. "You kinda fuzzed out there for a second."
"Nevermind," said Barry. "What I should have said was, the iPhone doesn't really do all of the things I want it to do. Say, where's the electronics store?"
Next time you catch yourself thinking, "I know I'm biased, but...", don't let your brain finish the sentence--stop that train of thought and analyze it!
Edit: Many commenters have suggested that "I know I'm biased, but..." is sometimes used to signal being open to counterarguments. As a result, it is best to double-check what you (or your discussion partners) are really signaling so that you can respond appropriately.
[LINK, TED video] Kathryn Schulz on Being Wrong
http://www.ted.com/talks/kathryn_schulz_on_being_wrong.html
Kathryn Schulz is a self-identified "Wrongologist" (in fact, @wrongologist is her user name on Twitter). She has written a popular book ("Being Wrong: Adventures in the Margin of Error", web site) and also writes the Slate column 'The Wrong Stuff'. Her TED talk covers the problem of disagreement, the nature of belief, overconfidence bias and how to actually change your mind. She maintains that most folks actively avoid the unpleasant feeling of "being wrong", which is an important point I have not seen before (but see The Importance of Saying 'Oops' and Crisis of Faith). Unfortunately, she does not discuss reasoning about uncertainty, so her arguments against 'the feeling of right' end up seeming rather shallow.
Discuss her TED talk here. (Her broader work is also obviously on topic.)
Examine success as much as failure
Harvard Business Review has posted something right up our alley: "Why Leaders Don't Learn From Success"
Also, the HBR essay links to a similar discussion of how Pixar avoids being brainwashed by its own success (something I had always wondered about - they seem too consistently successful): "How Pixar Fosters Collective Creativity".
Is there a name for this bias?
There are certain harmful behaviors people are tricked into engaging in, because whereas the benefits of the behavior are concentrated, the harms are diffuse or insidious. Therefore, when you benefit, P(benefit is due to this behavior) ≈ 1, but when you're harmed, P(harm is due to this behavior) << 1, or in the insidious form, P(you consciously notice the harm) << 1.
An example is when I install handy little add-ons and programs that, in aggregate, cause my computer to slow down significantly. Every time I use one of these programs, I consciously appreciate how useful it is. But when it slows down my computer, I can't easily pinpoint it as the culprit, since there are so many other potential causes. I might not even consciously note the slowdown, since it's so gradual ("frog in hot water" effect).
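To make the asymmetry concrete, here is a minimal toy model in Python. All the numbers (twenty add-ons, a 5% slowdown each, the weight given to other causes) are made up for illustration, and the attribution probabilities are crude stand-ins, not figures from any study:

```python
# A toy model of concentrated benefits vs. diffuse harms (all numbers made up).
n_addons = 20              # handy little add-ons installed over time
slowdown_per_addon = 0.05  # each contributes a small, anonymous slowdown
other_causes = 1.0         # stand-in weight for every other source of slowness

# When an add-on helps you, it is the only plausible cause of that benefit:
p_benefit_attributed = 1.0

# When the machine is slow, the blame is spread over all the add-ons plus
# everything else, so pinning the harm on one specific add-on is unlikely:
p_harm_attributed = slowdown_per_addon / (n_addons * slowdown_per_addon + other_causes)

print(f"P(benefit attributed to the add-on) ~ {p_benefit_attributed:.2f}")  # 1.00
print(f"P(harm attributed to the add-on)    ~ {p_harm_attributed:.3f}")     # 0.025
```

Each use delivers a benefit that gets fully credited, while each unit of harm gets credited at a fortieth of that rate, so the running tally in your head ends up wildly favorable to the add-ons.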
People Neglect Who They Really Are When Predicting Their Own Future Happiness [link]
The scientists who conducted this interesting study...
found that our natural sunny or negative dispositions might be a more powerful predictor of future happiness than any specific event. They also discovered that most of us ignore our own personalities when we think about what lies ahead -- and thus miscalculate our future feelings.
Smart people who are usually wrong
There are several posters on Less Wrong who I
- think are unusually smart, and would probably test within the top 2% in the US on standardized tests, and
- think are usually wrong when they post or comment here.
So I think they are exceptionally smart people whose judgement is consistently worse than if they flipped a coin.
How probable is this?
Some theories:
- This is a statistical anomaly. With enough posters, some will by chance always disagree with me. To test this hypothesis, I can write down my list of smart people who are usually wrong, then make a new list based on the next six months of LessWrong, and see how well the lists agree. I have already done this. They agree above chance. (A sketch of how to quantify "above chance" appears after this list.)
- I remember times when people disagree with me better than times when they agree with me. To test this, I should make a list of smart people, and count the number of times I vote their comments up and down. (It would be really nice if the website could report this to me.)
- This is a bias learned from an initial statistical anomaly. The first time I made my list, I became prejudiced against everything those people wrote. This could be tested using the anti-kibitzer.
- I have poor judgement on who is smart, and they are actually stupid people. I'm not interested in testing this hypothesis.
- What I am actually detecting is smart people who have strong opinions on, and are likely to comment on, areas where I am either wrong, or have a minority opinion.
- These people comment only on difficult, controversial issues which are selected as issues where people perform worse than random.
- Many of these comments are in response to comments or posts I made, which I made only because I thought they were interesting because I already disagreed with smart people about the answers.
- Intelligence does not correlate highly with judgement. Opinions are primarily formed to be compatible with pre-existing opinions, and therefore historical accident is more significant than intelligence in forming opinions. The distribution of historical accidents is probably such that some number of smart people will have opinions based on an entire worldview that is largely wrong.
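For the first theory's test, "agree above chance" can be quantified: if the second list were drawn independently at random from the pool of posters, the overlap between the two lists would follow a hypergeometric distribution. A minimal sketch in Python, with all counts invented for illustration (the post doesn't give the real list sizes):

```python
# Sketch of the "two lists" test from the first theory above.
# All counts are invented; scipy's hypergeometric distribution models
# the overlap of two lists drawn independently at random.
from scipy.stats import hypergeom

n_posters = 200  # pool of active posters considered (assumed)
list1 = 10       # size of the original "smart but usually wrong" list
list2 = 10       # size of the list redone six months later
overlap = 6      # names appearing on both lists

expected = list1 * list2 / n_posters                          # 0.5 by chance
p_value = hypergeom.sf(overlap - 1, n_posters, list1, list2)  # P(overlap >= 6)
print(f"expected overlap by chance: {expected:.2f}")
print(f"P(overlap >= {overlap} by chance): {p_value:.2e}")
```

With numbers in this ballpark, an overlap of six names is astronomically unlikely under the chance hypothesis, which is presumably what "they agree above chance" is gesturing at.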
Does cognitive therapy encourage bias?
"Cognitive behavioral therapy" (CBT) is a catch-all term for a variety of therapeutic practices and theories. Among other things, it aims to teach patients to modify their own beliefs. The rationale seems to be this:
(1) Affect, behavior, and cognition are interrelated such that changes in one of the three will lead to changes in the other two.
(2) Affective problems, such as depression, can thus be addressed in a roundabout fashion: modifying the beliefs from which the undesired feelings stem.
So far, so good. And how does one modify destructive beliefs? CBT offers many techniques.
Alas, included among them seems to be motivated skepticism. For example, consider a depressed college student. She and her therapist decide that one of her bad beliefs is "I'm inadequate." They want to replace that bad one with a more positive one, namely, "I'm adequate in most ways (but I'm only human, too)." Their method is to do a worksheet comparing evidence for and against the old, negative belief. Listen to their dialog:
[Therapist]: What evidence do you have that you're inadequate?
[Patient]: Well, I didn't understand a concept my economics professor presented in class today.
T: Okay, write that down on the right side, then put a big "BUT" next to it...Now, let's see if there could be another explanation for why you might not have understood the concept other than that you're inadequate.
P: Well, it was the first time she talked about it. And it wasn't in the readings.
Thus the bad belief is treated with suspicion. What's wrong with that? Well, see what they do about evidence against her inadequacy:
T: Okay, let's try the left side now. What evidence do you have from today that you are adequate at many things? I'll warn you, this can be hard if your screen is operating.
P: Well, I worked on my literature paper.
T: Good. Write that down. What else?
(pp. 179-180; ellipsis and emphasis both in the original)
When they encounter evidence for the patient's bad belief, they investigate further, looking for ways to avoid inferring that she is inadequate. However, when they find evidence against the bad belief, they simply write it down, no questions asked.
This is not how one should approach evidence...assuming one wants correct beliefs.
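For contrast, here is a minimal sketch of what treating evidence symmetrically would look like, as a Bayesian update. The likelihood ratios are invented for illustration, not taken from the book:

```python
# A minimal sketch of symmetric Bayesian updating (all numbers hypothetical).
# Each piece of evidence shifts the odds by its likelihood ratio,
# regardless of which direction it points.

def update(prior_odds, likelihood_ratio):
    """Multiply the odds of the hypothesis by one piece of evidence's likelihood ratio."""
    return prior_odds * likelihood_ratio

odds = 1.0  # prior odds of "I'm inadequate": 1:1, assumed for illustration

# "Didn't understand a concept in class today": weak evidence FOR inadequacy.
# Weak precisely because alternative explanations exist (first exposure, not
# in the readings) -- scrutiny lowers its weight, but doesn't delete it.
odds = update(odds, 1.2)

# "Worked on my literature paper": evidence AGAINST inadequacy, which deserves
# the same scrutiny (did the work actually go well?) rather than automatic acceptance.
odds = update(odds, 0.8)

probability = odds / (1 + odds)
print(f"P(I'm inadequate) after both updates: {probability:.2f}")  # ~0.49
```

The point is not the particular numbers but the symmetry: both kinds of evidence pass through the same scrutiny and the same update rule.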
So why does Beck, the author of the manual quoted above, advocate this approach? Here are some possible reasons.
A. If beliefs are keeping you depressed, maybe you should fight them even at the cost of a little correctness (and of the increased habituation to motivated cognition).
B. Depressed patients are already predisposed to find the downside of any given event. They don't need help doubting themselves. Therefore, therapists' encouraging them to seek alternative explanations for negative events doesn't skew their beliefs. On the contrary, it helps to bring the depressed patients' beliefs back into correspondence with reality.
C. Strictly speaking, this motivated cognition does not lead to false beliefs, because beliefs of the form "I'm inadequate," along with their more helpful replacements, are not truth-apt. They can't be true or false. After all, what experiences do they lead believers to anticipate? (If this were the rationale, then what would the term "evidence" mean in this context?)
What do you guys think? Is this common to other CBT authors as well? I've only read two other books in this vein (Albert Ellis and Robert A. Harper's A Guide to Rational Living and Jacqueline Persons' Cognitive Therapy in Practice: A Case Formulation Approach) and I can't recall either one explicitly doing this, but I may have missed it. I do remember that Ellis and Harper seemed to conflate instrumental and epistemic rationality.
Edit: Thanks a lot to Vaniver for the help on link formatting.
[LINK] Humans are bad at summing up a bunch of small numbers
"Outsmart your brain by knowing when you are wrong":
http://troysimpson.co/outsmart-your-brain-by-knowing-when-you-are-w
Humans are incredibly bad at summing up a bunch of small numbers. I had recently read a study that looked into why people are so bad at this task, but the important part was that people commonly underestimate the total.
...
Knowing what you are bad at can be incredibly important. Use this trick when estimating the timeline for a lot of small tasks, or when figuring out what your monthly expenses are. It always seems shocking that your credit card bill is so high when it is made up of a bunch of small purchases. Learn what else your brain cannot do well and use that knowledge to your advantage.
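For instance, here is a quick sanity check in Python; the purchase amounts are made up for the example, but the pattern is the familiar one:

```python
# Quick illustration of how a bunch of small numbers adds up
# (the purchase amounts below are invented for the example).
purchases = [4.50, 3.25, 6.80, 2.99, 5.40, 4.10, 3.75, 7.20, 2.50, 5.95]
total = sum(purchases)
print(f"{len(purchases)} purchases, none over $7.20, total = ${total:.2f}")
# -> 10 purchases, none over $7.20, total = $46.44
```

No single item feels significant, yet the total is several times larger than the biggest of them, which is exactly the situation where intuition lowballs the sum.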