The Fundamental Attribution Error

Also known, more accurately, as "Correspondence Bias."

http://lesswrong.com/lw/hz/correspondence_bias/

The "more accurately" part is pretty important; bias -may- result in error, but need not -necessarily- do so, and in some cases may result in reduced error.

A Simple Example

Suppose I write a stupid article that makes no sense and rambles on without any coherent point.  There might be a situational cause of this; maybe I'm tired.  Correcting for correspondence bias means that more weight should be given to the situational explanation than to the dispositional explanation - that I'm the sort of person who writes stupid articles that ramble on.  The question becomes, however, whether or not this increases the accuracy of your assessment of me; does correcting for this bias make you, in fact, less wrong?

In this specific case, no, it doesn't.  A person who belongs to the class of people who write stupid articles is more likely to write stupid articles than a person who doesn't belong to that class - I'd be surprised if I ever saw Gwern write anything that wasn't well-considered, well-structured, and well-cited.  If somebody like Gwern or Eliezer wrote a really stupid article, we have sufficient evidence that he's not a member of that class to make the dispositional conclusion a poor one; the situational explanation is better - he's having some kind of off day.  However, given an arbitrary stupid article by somebody about whom we have no prior information, the distribution is substantially different.  Our prior for the inference from "randomly chosen person X wrote this article" and "this article is bad" to "X is a bad writer of articles" is very different from our prior for the inference from "well-known author Y wrote this article" and "this article is bad" to "Y is a bad writer of articles".
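The two-priors argument can be sketched with Bayes' rule.  All of the numbers below are invented for illustration - the post doesn't specify any likelihoods or priors - but the shape of the result doesn't depend on the exact values:

```python
# A minimal Bayes'-rule sketch of the two-priors argument.
# Every number here is invented for illustration; only the qualitative
# comparison between the two cases matters.

def p_bad_writer_given_bad_article(prior_bad_writer,
                                   p_bad_if_bad_writer=0.8,
                                   p_bad_if_good_writer=0.1):
    """Posterior P(bad writer | bad article) via Bayes' rule."""
    evidence = (p_bad_if_bad_writer * prior_bad_writer
                + p_bad_if_good_writer * (1.0 - prior_bad_writer))
    return p_bad_if_bad_writer * prior_bad_writer / evidence

# Arbitrary unknown author: no prior information, so a weak 50/50 prior.
unknown = p_bad_writer_given_bad_article(prior_bad_writer=0.5)   # ~0.89

# Well-known, consistently good author: strong prior against.
known = p_bad_writer_given_bad_article(prior_bad_writer=0.02)    # ~0.14
```

With the same single piece of evidence, the dispositional conclusion is reasonable for the unknown author and a poor one for the well-known one; the entire difference is carried by the prior, not by how the evidence is weighed.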

Getting to the Point

The FAE is putting emphasis on internal factors rather than external.  It's jumping first to the conclusion that somebody who just swerved is a bad driver, rather than first considering the possibility that there was an object in the road they were avoiding, given only the evidence that they swerved.  Whether or not the FAE is an error - whether it is more wrong - depends on whether or not the conclusion you jumped to was correct, and more importantly, whether, on average, that conclusion would be correct.

It's very easy to produce studies in which the FAE results in people making incorrect judgements.  This is not, however, the same as the FAE resulting in an average of more incorrect judgements in the real world.

Correspondence Bias as Internal Rationalization

I'd suggest the major issue with correspondence bias is not, as commonly presented, incorrectly interpreting the behavior of other people; the error is not in how you interpret other people's behaviors, but in how you interpret your own.

Turning to Eliezer's example in the linked article, if you find yourself kicking vending machines, maybe the answer is that -you- are a naturally angry person, or, as I would prefer to phrase it, you have poor self-control.  The "floating history" Eliezer refers to sounds more to me like rationalizations for poor behavior than anything approaching "good" reasons for expressing your anger through violence directed at inanimate objects.  I noticed -many- of those rationalizations cropping up when I quit smoking - "Oh, I'm having a terrible day, I could just have one cigarette to take the edge off."  I don't walk by a smoker and assume they had a terrible day, however, because those were -excuses- for a behavior that I shouldn't be engaging in.

It's possible, of course, that Eliezer's example was simply a poorly chosen one; the examples in studies certainly seem better, such as assuming the authors of articles held the positions they wrote about.  But the examples used in those studies are also extraordinarily artificial, at least in individualistic countries, where it's assumed - and generally true - that people writing articles have the freedom to write what they agree with, and where infringements of this (say, a newspaper asking a columnist to change a review to be less hostile to an advertiser) are regarded very harshly.

Collectivist versus Individualist Countries

There's been some research done, comparing collectivist societies to individualist societies; collectivist societies don't present the same level of effect from the correspondence bias.  A point to consider, however, is that in collectivist societies, the artificial scenarios used in studies are more "natural" - it's part of their society to adjust themselves to the circumstances, whereas individualist societies see circumstance as something that should be adapted to the individual.  It's -not- an infringement, or unexpected, for the state-owned newspaper to require everything written to be pro-state.

Maybe the differing levels of effect are less a matter of "Collectivist societies are more sensitive to environment" so much as that, in both cultures, the calibration of a heuristic is accurate, but it's simply calibrated to different test cases.

Conclusion

I don't have anything conclusive to say here, merely a position: the Correspondence Bias is a bias that, on the whole, helps people arrive at more accurate, rather than less accurate, conclusions, and correcting for it should be done with care toward improving accuracy and correctness, rather than toward the mere elimination of bias.

28 comments
[anonymous]:

Yeah, I came to the conclusion a while ago that the real fundamental attribution error is the attribution we give to ourselves. We should really be more likely to see something (e.g. being late) as a part of our character that we can work on, instead of something caused by outside circumstance.

[anonymous]:

We are more likely to cheat towards forgiving ourselves than towards being harsh on others; hence the principle of nemo iudex in causa sua (https://en.wikipedia.org/wiki/Nemo_iudex_in_causa_sua).

While I do agree, I think a certain amount of caution should be maintained. When evaluating yourself, internal factors should still be considered as relevant data (albeit with no small degree of caution). If we make the Correspondence Bias about overvaluing external factors during self evaluation (as opposed to undervaluing them for others), attempting to correct for that does allow for the possibility of dismissing relevant internal factors which otherwise might improve the accuracy of our evaluations. This is no different than deciding that people who write bad articles are just having bad days, despite the fact that it should, at least in part, lend some weight to the idea that they are just poor writers.

I think you need to hold fundamental attribution lightly, especially when you have a small sample.

If someone posts a badly thought out article, all you can be sure of is that they don't have reliable inhibitions against posting bad articles. If you see two or three in a row, you can make a stronger judgement of what their writing is likely to be like.

A problem with fundamental attribution is that once you've made a judgment about someone, you're at risk of looking for evidence that you're right. (Sorry, no cite, but this process is blatantly present once someone likes or hates a politician.)

It can be hard to know whether you have a good enough sample-- I'd been assuming that some store staff people were temperamentally grumpy, but the true situation was that I was shopping late in the day. They're much more cheerful if I show up early.

It can be hard to know whether you have a good enough sample-- I'd been assuming that some store staff people were temperamentally grumpy, but the true situation was that I was shopping late in the day. They're much more cheerful if I show up early.

This confuses me. What's the difference between grumpiness and what you updated to after showing up earlier?

I know somebody who is -always-, for lack of a better word, grumpy - what I think you refer to as "temperamentally grumpy". Grumpy doesn't really describe that, though. My internal representation for "grumpiness" as a description of a person isn't "Always a grouch", it's a heavier weight on the rate at which people get grumpy. Same with anger.

I suspect that the (theoretically) correct approach is to form a judgment about the set of (personality, situation) -- estimate a joint probability distribution, so to say. In simpler terms, to come to a conclusion as to how this kind of person behaves in this kind of situation.

Are you annotating the difference between the theoretically correct approach and the pragmatically correct approach?

No, I am just pointing out that you don't necessarily have to form the dichotomy "evaluate the person" vs "evaluate the situation". The joint evaluation of the (person, situation) set bypasses the whole FAE problem but with obvious costs (the number of cases) and limitations (you still want to forecast what person X will do in situation Y).

Granted. A complete consideration - provided you have time to do one - is always going to be more accurate than an off-the-cuff conclusion. I'd call that the "theoretically correct approach".

The pragmatically correct conclusion would be the situation where the result matters little enough that the off-the-cuff conclusion is sufficient, and thus most cost-effective.

Is that the distinction you wished to draw? Or am I reading something into the parenthetical (theoretically) that isn't there to be read?

There isn't really much there. Basically I had a wee little itty bitty tiny epiphany that considering things jointly is not only the theoretically-correct approach, but also successfully dissolves the FAE issue. I agree that like most theoretically-correct approaches its usefulness in practice is limited.
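The joint evaluation discussed in this exchange can be sketched numerically, using the post's swerving-driver example.  All of the probabilities below are invented for illustration; the point is only that conditioning a joint distribution over (disposition, situation) sidesteps the either/or framing:

```python
# A toy joint model over (disposition, situation) for the swerving-driver
# example from the post. Every probability here is invented for illustration.

# P(swerve | disposition, situation)
p_swerve = {
    ("bad_driver",  "obstacle"):   0.95,
    ("bad_driver",  "clear_road"): 0.30,
    ("good_driver", "obstacle"):   0.90,
    ("good_driver", "clear_road"): 0.02,
}

# Independent priors over the two factors (itself an assumption).
prior = {
    "bad_driver": 0.2, "good_driver": 0.8,
    "obstacle": 0.1,   "clear_road": 0.9,
}

# Posterior over (disposition, situation) pairs, given that we saw a swerve.
joint = {(d, s): p * prior[d] * prior[s] for (d, s), p in p_swerve.items()}
total = sum(joint.values())
posterior = {pair: weight / total for pair, weight in joint.items()}

# Marginalizing recovers either single-factor question on demand.
p_bad_driver = sum(v for (d, _), v in posterior.items() if d == "bad_driver")
```

With these made-up numbers, a single swerve moves P(bad driver) from the 0.2 prior to roughly 0.46 - genuine evidence, but nowhere near settled, which fits the earlier point about holding attributions lightly on a small sample.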

Correcting for correspondence bias means that more weight should be given to the situational explanation than the dispositional explanation, that I'm the sort of person who writes stupid articles that ramble on.

I may have misunderstood you here, but I interpret the correspondence bias differently. Correcting for it doesn't mean you should necessarily always put more weight on the situational explanation than the personality, which your example clearly shows would sometimes lead to mistakes. It means that you mostly don't give it as much weight as you should.

The Correspondence Bias is a bias that, on the whole, helps people arrive at more accurate, rather than less accurate, conclusions, and should be corrected with care to improving accuracy and correctness, rather than the mere elimination of bias.

I think it's useful to think of each bias as isolated. Correcting for the correspondence bias should always make you more accurate, because it's defined relatively to what's true. It doesn't talk about comparing people with yourself. However it mostly might not make sense to think of it this way in practice, since interpreting others' actions rarely happens without some sort of comparison with what you would do in the same situation. I wouldn't be surprised if there's a significant correlation between FAE and the opposite bias of underestimating the importance of your own personality in how you react to things.

Does this sound like something you could agree with?

I may have misunderstood you here, but I interpret the correspondence bias differently. Correcting for it doesn't mean you should necessarily always put more weight on the situational explanation than the personality, which your example clearly shows would sometimes lead to mistakes. It means that you mostly don't give it as much weight as you should.

The contexts in which the correspondence bias tends to be assessed, however, are artificial environments where it leads to incorrect conclusions. How do we judge whether we give the correct weight or not?

Does this sound like something you could agree with?

I have no idea where to place my priors on the possibility of a strong correlation; I'd guess that low rationalization is associated both with high and low FAE (owing to virtue ethics on one tail and rationalists on the other), and that the middle is a bit of a wash. My inclination is to look for studies. Know of any?

Yes, people with bad habits blame their circumstances instead of themselves (duh), regardless of whether it is due to the circumstances.

Your key sentence is "This is not, however, the same as the FAE resulting in an average of more incorrect judgements in the real world.", but you provide no evidence that this is in fact not the case. On the whole, do you think that people are ascribing actions to personalities not often enough, as opposed to too often?

On the whole, do you think that people are ascribing actions to personalities not often enough, as opposed to too often?

I would argue people aren't ascribing their own actions to their personalities often enough.

I was under the impression that the FAE is about judging others, not ourselves. Yes, we come up with convenient explanations for ourselves, when really we should be ascribing our actions to our personalities more often. If you lie to yourself it is very hard for others to call you on it, so such lies can be cheap and frequent. I would be surprised if many people here disagreed with this. I don't think this 'defends' the FAE though - the first sentence of the thread introducing the correspondence bias is "We tend to see far too direct a correspondence between others' actions and personalities." (emphasis mine).

So let me repeat/clarify my question: On the whole, do you think that people are ascribing actions by other people to personalities not often enough, as opposed to too often?

Your key sentence is "This is not, however, the same as the FAE resulting in an average of more incorrect judgements in the real world.", but you provide no evidence that this is in fact not the case.

I've encountered no evidence that this is the case, either. All I've encountered in my research is a lot of artificial situations in which the FAE is deliberately manipulated to produce incorrect results - in which case, it produces incorrect results.

On the whole, do you think that people are ascribing actions to personalities not often enough, as opposed to too often?

Null. My position is that people are, on average, calibrated more-or-less correctly for the culture in which they grew up.

I've been teaching part time at a community center for a while now, and it's been interesting for me to see how the first impressions I had of the various students stacked up against the experiences I had knowing them over an extended period.

I can put numbers to it - out of a bit over 50 students, there were three for whom I found my first impressions to be substantial misjudgments of their habitual character, and one who I came to suspect I had misjudged, but for whom it turned out that the evidence that led me to suspect my initial judgment was wrong was actually uncharacteristic of him, whereas the behavior that formed my first impression was not. Of course, there's a likelihood of confirmation bias here, but since I discuss the students' personalities and behavior extensively with the other teachers, our assessments of them tend towards agreement over time.

Of course, error rates are going to depend strongly on context, but it's nice to have some idea of my expected error rate in this particular context.

Excellent post.

Related: It only takes a small extension of the logic to show that the Just World Hypothesis is a useful heuristic.

It only takes a small extension of the logic to show that the Just World Hypothesis is a useful heuristic.

I don't see it, how is it useful?

The Just World Hypothesis holds that people get what they deserve.

Because bad things aren't purely random. The motorcyclist with a helmet and the motorcyclist without one are not courting tragedy equally; one of them is doing a little bit to "earn" their tragedy.

Likewise, Tit-for-Tat means evil people tend to be the recipients of evil in turn.

I think the "Just World Hypothesis", as typically described, is largely incorrect in its use of the concept of deserving, versus the concept of having some responsibility for - but I also think most people who follow a variant of the JWH use the non-moralizing "responsibility" version, and it is largely (but not exclusively) those who oppose the Just World Hypothesis who insert moralizing, to make it seem more reprehensible. Regardless of whether they wear a helmet or not, motorcyclists don't deserve to get hit; rather, whether or not they wear a helmet determines part of their responsibility for what happens when they do.

Those who believe in the Just World Hypothesis tend to analyze their behavior after something bad happens to them, and hold something they've done partially responsible, and try to correct their behavior in the future - and do the same thing to other people who have something bad happen to them. Those who oppose the hypothesis sometimes refer to this tendency as "victim blaming".

Personally, I call it "willingness to accept and learn from mistakes". But then, I tend to upset the sorts of people who use phrases like "victim blaming".

ETA: Retracted, because I failed to actually answer the question, and Salemicus did.

[This comment is no longer endorsed by its author]

To expand on what OrphanWilde wrote:

The Just World Hypothesis can be summarised as "you reap what you sow." If you wish to argue that you don't "deserve" to reap what you sow (perhaps because you didn't have access to better seeds), or that it's not "just" to reap what you sow (because everyone should reap in rough equality, regardless of how they sowed), or similar, that's fine, but you aren't arguing against the Just World Hypothesis.

So when we see the fruit, the Just World Hypothesis tells us: that's probably how the person sowed the seeds. And yes, there is noise, which is why it's a heuristic, not an infallible rule. But the whole reason to sow the seeds in the first place was to cause them to bear fruit. "Ye shall know them by their fruits. Do men gather grapes of thorns, or figs of thistles?" In other words, Coherent Extrapolated Volition.

So to take an example from the original post - smoking. If I meet someone with lung cancer, the overwhelming likelihood is that they are responsible for their own problem, through smoking. But if I smoke and then I get lung cancer, I'll want to make excuses for myself, and will stubbornly refuse to make the connection between my own culpable past behaviour (the sowing) and my present misfortune (the reaping). People who complain about the Just World Hypothesis want me to extend this non-judgemental behaviour to everyone else. But just as with the Fundamental Attribution Error, the problem is not that I am being too harsh on other people, but that I am being too easy on myself. I am right to draw the connection between behaviour and outcomes for everyone else, and I should do the same for myself.

This is mostly true. One example of unjust happenings is the following: Bob was being good, i.e., acting in a way that benefits the community, and he was punished for it even though the community benefited.

[anonymous]:

There is another issue with FAE, but I am not very good at expressing it clearly. Basically, there seems to be a very strong assumption in most minds that behavior which can be judged as bad must be unusual - that what most people do is, by definition, not bad, just "normal". In other words, we tend to abnormalize wrongdoing. Terms like "he is a sick fuck" suggest that wrongdoing comes from abnormality or unusualness.

Now if FAE says people are not actually that different, it suggests either that we should not blame wrongdoers or we should blame everybody.

To me the second makes more sense. If a behavior is unacceptable, even if it turns out that everybody would do it if the circumstances were "right", it is still unacceptable.

Thus, we end up with having to say that we need a secular, atheist version of "we all are sinners".

Seriously, what else is there? Either you accept the unacceptable... or pretend wrongdoers are special, different, "sick"... or you say most of us are, in a way, "fallen".

One alternative I've encountered is to blame the behavior, rather than the person; change the behavior, rather than the person. (I'm not particularly fond of that approach; it sets off a shitload of my personal ethical alarm bells relating to manipulative behaviors and de-agentizing people. But for people who use less explicit ethics, it could work).

Not sure you have much in the way of good alternatives as "changing the person" should set off even louder bells from your manipulative-behaviors alarms.

How would you test that the FAE leads on average to better judgments? And better than what? Eliminating the FAE does not mean only considering external factors either, or you'd have another bias.

I cannot imagine a formal way to test the accuracy of FAE.

From a "better" perspective, I lean towards the perspective that a hypothesis about somebody's personality is almost always more useful than a hypothesis about somebody's situation. Somebody else's personality is something I have to interact with; somebody else's situation is not. I can update inaccurate profiles of personalities based on further information, so I don't see a significant cost to inaccurate profiles.

emr:

Good points. This may be another case where we evolved to have probability-weighted-by-utility intuitions, and where we work backwards from these intuitions when asked for a model of raw probability.