Regarding the "status quo bias" example with the utility company, I think it's fallacious, or at least misleading. For realistic, typical humans with all their intellectual limitations, it is rational to favor the status quo when someone offers to change a deal that has so far worked tolerably well in ways that, for all you know, could have all sorts of unintended consequences. (Not to mention the swindles that might be hiding in the fine print.)
Moreover, if the utility company had actually started selling different deals rather than just conducting a survey about hypotheticals, it's not like typical folks would have stubbornly held to unfavorable deals for years. What happens in such situations is that a clever minority figures out that the new deal is indeed more favorable and switches -- and word about their good experience quickly spreads and soon becomes conventional wisdom, which everyone else then follows.
This is how human society works normally -- what you call "status quo bias" is a highly beneficial heuristic that prevents people from ruining their lives. It makes them stick to what's worked well so far instead of embarking on attractive-looking, ...
I'd say the utility company example is, in an important sense, the mirror image of the Albanian example. In both cases, we have someone approaching the common folk with a certain air of authority and offering some sort of deal that's supposed to sound great. In the first case, people reject a favorable deal (though only in the hypothetical) due to the status quo bias, and in the second case, people enthusiastically embrace what turns out to be a pernicious scam. At least superficially, this seems like the same kind of bias, only pointed in opposite directions.
Now, while I can think of situations where the status quo bias has been disastrous for some people, and even situations where this bias might lead to great disasters and existential risks, I'd say that in the huge majority of situations, the reluctance to embrace changes that are supposed to improve what already works tolerably well is an important force that prevents people from falling for various sorts of potentially disastrous scams like those that happened in Albania. This is probably even more true when it comes to the mass appeal of radical politics. Yes, it would be great if people's intellects were powerful and unbia...
Re. the Roy Meadows/Sally Clark example, you say:
a pediatrician had testified that the odds of two children in the same family dying of sudden infant death syndrome were 73 million to 1. Unfortunately, he had arrived at this figure by squaring the odds of a single death. Squaring the odds of a single event to arrive at the odds of it happening twice only works if the two events are independent. But that assumption is likely to be false in the case of multiple deaths in the same family
More importantly, the 73 million to 1 number was completely irrelevant.
The interesting number is not "the odds of two children in the same family dying of sudden infant death syndrome" but "the odds of two children in the same family having died of sudden infant death syndrome given that two children in that family had died", which are obviously much higher.
Of course, you need to know the first number in order to calculate the second (using Bayes' theorem), but Meadows (and everyone else present at the trial) managed to conflate the two (see the sketch below).
edit 05/07: corrected bizarre thinko at the end of penultimate paragraph.
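To spell out the conflation (illustrative, with no claim about the trial's actual numbers): write $D$ for "both children died", $S$ for "both deaths were SIDS", and $M$ for "both deaths were homicide". Since $S$ and $M$ each entail $D$, Bayes' theorem gives

$$
\frac{P(S \mid D)}{P(M \mid D)} = \frac{P(D \mid S)\,P(S)}{P(D \mid M)\,P(M)} = \frac{P(S)}{P(M)},
$$

so even a minuscule $P(S)$ is exculpatory as long as $P(M)$ - the prior probability of a double infant homicide - is smaller still. Quoting $P(S)$ alone as if it were $P(S \mid D)$ is precisely the error described above.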
The problem with most of these is that they are ways that other people's irrationality hurts you.
It is no good advising someone that it is in their interest to spend time and effort becoming more rational when the actual claim is that their life would be better if everyone else were more rational. I control my rationality, not everyone else's. And that is the problem.
Out of your 10 examples, only two (financial losses due to trying to predict the stock market and driving instead of flying after 9/11) are cases where your investment in rationality pays back to you. And the 9/11 case is hardly a very big return -- if 300 extra people died in the USA as a result, then that's a 1 in 1,000,000 reduction in probability of death for being rational, which translates to a $20 return (statistical value of a life), which is worth far less than the time it would take the average person to learn rationality. Perhaps you could argue the figure of $20 up to $100 or so, but still, that isn't a big enough return. I'm not counting the Albanian Ponzi schemes because if the society you're in collapses, it is no use to you that you avoided the Ponzi Scheme (and I think that leaving Albania was unlikely to be an o...
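Spelling out that arithmetic (taking the US population as roughly $3 \times 10^8$, and a statistical value of life of about \$20 million, which seems to be the figure needed to reproduce the \$20):

$$
\Delta p \approx \frac{300}{3 \times 10^{8}} = 10^{-6}, \qquad 10^{-6} \times \$2 \times 10^{7} = \$20.
$$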
Regarding the "financial harm" example: the only irrational thing Paulos did was keeping all his eggs in one basket, instead of diversifying his portfolio. As a rule almost without exception, unless you have insider information, it's never more rational to buy one single stock instead of another, regardless of which ones are soaring and which ones plunging at the moment. If you're not convinced, consider the following: at each point during Paulos's purchases of Worldcom stock, if it had been certain from public information, or even highly likely, that the stock would keep plunging, then shorting it would have been a killer investment -- to the point where it would be foolish for big investors to invest in anything else at the moment. But of course, that was not the case.
Maybe I'm reading too much into your example, but it seems like you believe that investing in Worldcom stock at that point was somehow especially irrational compared to other single-stock investments. Yet, any non-diversified investment in a single stock is pretty much an equivalent gamble, and Paulos was not more irrational than other people who get into that sort of gambling. (Except arguably for a tiny...
Come to think of it, some of the MWI proponents here should agree that by their criteria, there was nothing irrational about Paulos's investment at all.
Anyone with diminishing returns on the utility of money doesn't like volatility, whether probabilistic or MWI.
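A minimal illustration with invented numbers, using log utility as a stand-in for any concave utility function: a 50/50 gamble between \$50,000 and \$150,000 has the same expected money as a sure \$100,000, but lower expected utility,

$$
\tfrac{1}{2}\ln(50{,}000) + \tfrac{1}{2}\ln(150{,}000) = \ln\sqrt{50{,}000 \times 150{,}000} \approx \ln(86{,}600) < \ln(100{,}000),
$$

so its certainty equivalent is only about \$86,600 - and the calculation reads the same whether the two branches are probabilistic or Everett branches.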
Another interesting point:
After 9/11, people became afraid of flying and started doing so less. Instead, they began driving more. Unfortunately, car travel has a much higher chance of death than air travel.
I have no doubt that there is a widespread and fundamentally irrational bias when it comes to people's fear of flying vs. driving. However, I'm not sure how much the above change was due to irrational fears, and how much due to the newly introduced inconveniences and indignities associated with air travel. Are there actually some reliable estimates about which cause was predominant? I'm sure at least some of the shift was due to entirely rational decisions motivated by these latter developments.
...One group was told that under race-neutral conditions, the probability of a black student being admitted would decline from 42 percent to 13 percent and the probability of a white student being admitted would rise from 25 percent to 27 percent. The other group was told that under race-neutral admissions, the number of black students being admitted would decrease by 725 and the number of white students would increase by 725. These two framings were both saying the same thing, but you can probably guess the outcome: support for affirmative action was much higher in the percentage group.
Overwhelming prior makes my claim more likely to be correct than the majority of claims made by myself or others. ;)
There are a lot of particular studies and events mentioned here - more specific references would be good. If they're all just culled from the one book, page numbers would work.
Page numbers:
Heh... another comment that's just occurred to me:
Here, most people would also say no - they'd want the "bonus" for children to be equal for low- and high-income families, but they do not want the "penalty" for lacking children to be the same for high- and low-income families.
Again, this is by no means necessarily irrational. The effects of government policies are by no means limited to their immediate fiscal implications. People typically care much more -- and often with good reason -- about their status-signaling implications. By deciding t...
An alternate hypothesis: people are loss-averse.
One of the best pieces of evidence for this theory is an incident that occurred during the development of the online role-playing game World of Warcraft. While the game was in beta testing, its developer, Blizzard, added a "rest" system to help casual players develop their characters at a pace slightly closer to that of the game's more serious players, who tended to devote much more time to the game and thus "leveled up" much more quickly.
The rest system grants "rested experience" at a gradual rate to players who are not logged into the game. As initially implemented, characters with available rest experience would earn experience points at a 100% rate, depleting their rest experience in the process. Once out of rest experience, a character was reduced to earning experience at a 50% rate. Because rest experience accumulated slowly, only while offline, and capped out after about a day and a half, players who logged on to the game every day for short periods of time were able to earn experience points most efficiently, lowering the extent to which they were out...
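(For context: the widely reported resolution of this beta controversy was that Blizzard kept the actual rates but relabeled them, presenting normal play as "100%" and rested play as a "200% bonus". A sketch of why the two presentations are the same mechanic - the function names are mine:)

```python
# Two framings of the same rest mechanic. Rates for the first come from the
# comment above; the "200% bonus" relabeling is the widely reported fix.

def xp_penalty_framing(base_xp, rested):
    # "Rested" characters earn full XP; everyone else suffers a 50% penalty.
    return base_xp * (1.0 if rested else 0.5)

def xp_bonus_framing(base_xp, rested):
    # Same rates relabeled: normal play is "100%", rest doubles it. Halving
    # the nominal per-kill XP keeps absolute gains identical.
    return (base_xp / 2) * (2.0 if rested else 1.0)

for rested in (True, False):
    assert xp_penalty_framing(100, rested) == xp_bonus_framing(100, rested)
# Identical outcomes, different labels - yet testers reportedly hated the
# "penalty" version and accepted the "bonus" one, as loss aversion predicts.
```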
Love the post. Just pointing out that it largely does not answer the X-treme rationality question (at least as I define it). Regular rationality will do for most of these cases, except perhaps the autism one - there are extreme emotions involved (speaking from personal experience, I fell for the MMR racket for a while due to personal acquaintance with the main doctors involved, on top of all the emotions of wanting your kid to get better; it was a real lollapalooza effect).
Another group calculated that for driving to be as dangerous as flying, there would have to be an incident on the scale of 9/11 once a month!
I'm assuming "driving" and "flying" should be swapped here?
Why does general "rationality" skill prevent these things, rather than (or better than) situation-specific knowledge? Yes, if you were a good rationalist and could apply it to all parts of your life with ease, you'd dodge the problems listed. But does this generalize to making the development of rationalist skills a good use of your time per unit of disutility avoided?
Rationality, I think, has to be pretty well developed in you before it becomes the source of your resistance to these problems.
So I'm not sure if these are costs of irrationality per se, but rather, of lacking both well-developed rationality, and a specific insight.
Great post; thanks for providing these examples.
One textual complaint: This passage is unclear:
Should the reduction for the family with an income of $100,000 be the same, or should they be given more of a reduction because of their bigger income? Here, most people would say no.
(Either/or question answered with yes/no.)
The example given for status quo bias is not necessarily indicative of impaired rationality. There are such things as hysteresis effects:
Consider the case of the family subject to frequent power outages. They will learn to adjust. This could be as simple as buying an alternative power source (a generator). Or perhaps they adapt their lives to perform activities requiring no power whenever there is an outage. If you have already bought a generator, it might not be worth your while to pay a higher price for a more reliable power supply, whereas the family accustomed to a stable supply faces capital costs in making an adjustment.
Here, most people would also say no - they'd want the "bonus" for children to be equal for low- and high-income families, but they do not want the "penalty" for lacking children to be the high for same and low income.
Note typo.
Great post! I actually started trying to argue against your analysis here in the child tax example, based on my own intuition. Then I realized I was being a sophist. I had good reasons for both preferences, but the reason for the progressive penalty wasn't applied to the flat bonus, nor vice versa.
I might have to be careful about how this 'politics' thing affects my thinking.
My reasoning with the country taxation one would be:
A family earning $0 will be paying what amount of taxes, before any special bonuses or penalties?
I would expect them to be paying $0.
If they have one child, they then either have to pay $1000, or they receive $500. That's a big difference between the two hypothetical worlds.
It's an issue of missing information (what tax do people at other incomes pay) being filled in with reasonable answers.
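A quick sketch of the two hypothetical worlds being compared (the $500 bonus and $1,000 penalty come from the post; the $0 baseline for a $0 income is the expectation stated above, and the function names are mine):

```python
# Tax owed by a family under each framing of the post's child policy.

def tax_bonus_world(childless_baseline, children, bonus_per_child=500):
    """World 1: the childless tax is the baseline; each child earns a reduction."""
    return childless_baseline - bonus_per_child * children

def tax_penalty_world(two_child_baseline, children, penalty_per_missing=1_000):
    """World 2: the two-child tax is the baseline; each missing child adds a penalty."""
    return two_child_baseline + penalty_per_missing * (2 - children)

# A family earning $0 (so, presumably, a $0 baseline in both worlds) with one child:
print(tax_bonus_world(0, 1))    # -500: they receive $500
print(tax_penalty_world(0, 1))  # 1000: they owe $1,000
```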
Least Convenient Possible World time: even with full information on the taxing system, and the cost of living in the...
You use indirect evidence to suggest that rationality increases people's ability to choose good actions and thus improves their lives. However, direct empirical evidence contradicts this: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1692437 https://ideas.repec.org/p/otg/wpaper/1308.html
I think that the valley of bad rationality is much wider than we'd at first suspect.
I read a blog entry about Status Quo Bias the other day that is in a similar vein as this post, though even more cynical: http://youarenotsosmart.com/2010/07/27/anchoring-effect/
I thought the Freakonomics guy debunked that driving was more dangerous than flying, when you correct for the amount of time spent doing either activity.
I have an objection to the first example listed.
...Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept a less reliable service in exchange for a discount. The customers were presented with rate increases and decreases of various percentages and asked which ones they'd be willing to accept; the percentages were the same for both groups, with one group seeing increases where the other saw decreases. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo.
But this is just unfair. You're judging rationality according to rational arguments, and so OF COURSE you end up finding that rationality is sooo much better.
I, on the other hand, judge my irrationality on an irrational basis, and find that actually it's much better to be irrational.
What's the difference? Of course in response to this question you're bound to come up with even more rational arguments to be rational, but I don't see how this gets you any further forward.
I, on the other hand, being irrational, don't have to argue about this if I don't want t...
This is the first part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.
People who care a lot about rationality may frequently be asked why they do so. There are various answers, but I think that many of the ones discussed here won't be very persuasive to people who don't already have an interest in the issue. But in real life, most people don't try to stay healthy because of various far-mode arguments for the virtue of health: instead, they try to stay healthy in order to avoid various forms of illness. In the same spirit, I present you with a list of real-world events that have been caused by failures of rationality.
What happens if you, or the people around you, are not rational? Well, in roughly ascending order of seriousness, you may...
Have a worse quality of living. Status quo bias is a general human tendency to prefer the default state, regardless of whether the default is actually good or not. In the 1980s, Pacific Gas and Electric conducted a survey of their customers. Because the company was serving a lot of people in a variety of regions, some of their customers suffered from more outages than others. Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept a less reliable service in exchange for a discount. The customers were presented with rate increases and decreases of various percentages and asked which ones they'd be willing to accept; the percentages were the same for both groups, with one group seeing increases where the other saw decreases. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo. Yet the service difference between the groups was large: the unreliable-service group suffered 15 outages per year of 4 hours' average duration, while the reliable-service group suffered 3 outages per year of 2 hours' average duration - about 60 hours versus 6 hours without power each year! (Though note caveats.)
A study by Philips Electronics found that one half of their returned products had nothing wrong with them; the consumers simply couldn't figure out how to use the devices. This can be partially explained by egocentric bias on the part of the engineers. Cognitive scientist Chip Heath notes that he has "a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too. People who design products are experts... and they can't imagine what it's like to be as ignorant as the rest of us."
Suffer financial harm. John Allen Paulos is a professor of mathematics at Temple University. Yet he fell prey to serious irrationality, which began when he purchased WorldCom stock at $47 per share in early 2000. As bad news about the industry began mounting, WorldCom's stock price started falling - and as it did, Paulos kept buying, regardless of accumulating evidence that he should be selling. Later on, he admitted that his "purchases were not completely rational" and that "I bought shares even though I knew better". He was still buying - partially with borrowed money - when the stock price was $5. When it momentarily rose to $7, he finally decided to sell. Unfortunately, he couldn't get off work before the market closed, and by the next market day the stock had lost a third of its value. Paulos finally sold everything, at a huge loss.
Stock market losses due to irrationality are not atypical. From the beginning of 1998 to the end of 2001, the Firsthand Technology Value mutual fund had an average gain of 16 percent per year. Yet the average investor who invested in the fund lost 31.6 percent of her money over the same period. Investors actually lost a total of $1.9 billion by investing in a fund that was producing a 16 percent profit per year. That happened because the fund was very volatile, causing people to invest and cash out at exactly the wrong times. When it gained, it gained a lot, and when it lost, it lost a lot. When people saw that it had been making losses, they sold, and when they saw it had been making gains, they bought. In other words, they bought high and sold low - exactly the opposite of what you're supposed to do if you want to make a profit. Reporting on a study of 700 mutual funds during 1998-2001, financial reporter Jason Zweig noted that "to a remarkable degree, investors underperformed their funds' reported returns - sometimes by as much as 75 percentage points per year."
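A toy sketch (all numbers invented) of how this buy-high, sell-low pattern opens a gap between a fund's reported return and what its investors actually take home:

```python
fund_returns = [0.80, -0.40, 0.80, -0.40]  # a volatile fund: up big, down big

# Time-weighted return: what the fund reports.
growth = 1.0
for r in fund_returns:
    growth *= 1 + r
print(f"fund's cumulative return: {growth - 1:+.1%}")   # +16.6% over the period

# A performance-chaser: puts in $1,000 only after seeing an up year,
# then rides whatever the next year brings.
invested = 0.0
holdings = 0.0
for prev, nxt in zip(fund_returns, fund_returns[1:]):
    if prev > 0:          # buys after seeing a gain...
        holdings += 1_000
        invested += 1_000
    holdings *= 1 + nxt   # ...and takes the following year's return
print(f"put in ${invested:,.0f}, ended with ${holdings:,.0f}")  # $2,000 -> $1,248
```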
Be manipulated and robbed of personal autonomy. Subjects were asked to divide 100 usable livers among 200 children awaiting a transplant. With two groups of 100 children each, group A and group B, the overwhelming response was to allocate 50 livers to each, which seems reasonable. But when each child in group A had an 80 percent chance of surviving a transplant, and each child in group B a 20 percent chance, people still chose the equal allocation even though it caused the unnecessary deaths of 30 children: an even split saves 50 × 0.8 + 50 × 0.2 = 50 children, while giving all 100 livers to group A saves 80. Well, that's just a question of values and not rationality, right? It turns out that if the patients were ranked from 1 to 200 in terms of prognosis, people were relatively comfortable with distributing organs to the top 100 patients. It was only when the question was framed as "group A versus group B" that people suddenly felt they didn't want to abandon group B entirely. Of course, these are exactly the same dilemma. One could almost say that the person who got to choose which framing to use was getting to decide on behalf of the people being asked the question.
Two groups of subjects were given information about eliminating affirmative action and adopting a race-neutral policy at several universities. One group was told that under race-neutral conditions, the probability of a black student being admitted would decline from 42 percent to 13 percent and the probability of a white student being admitted would rise from 25 percent to 27 percent. The other group was told that under race-neutral admissions, the number of black students being admitted would decrease by 725 and the number of white students would increase by 725. These two framings were both saying the same thing, but you can probably guess the outcome: support for affirmative action was much higher in the percentage group.
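As a sanity check, the two framings can describe exactly the same facts: with, say, 2,500 black and 36,250 white applicants (hypothetical figures chosen to fit the stated numbers), we get

$$
2{,}500 \times (0.42 - 0.13) = 725, \qquad 36{,}250 \times (0.27 - 0.25) = 725.
$$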
In a hypothetical country, a family with no children and an income of $35,000 pays $4,000 in tax, while a family with no children and an income of $100,000 pays $26,000 in tax. Now suppose that there's a $500 tax reduction for having a child for a family with an income of $35,000. Should the family with an income of $100,000 be given a larger reduction because of their higher income? Here, most people would say no. But suppose that instead, the baseline is that a family with two children and an income of $35,000 pays $3,000 in tax, and a family with two children and an income of $100,000 pays $25,000 in tax. We propose to make the families with no children pay more tax - that is, impose a "childless penalty". Say that the family with the income of $100,000 and one child has their taxes set at $26,000 and the same family with no children has their taxes set at $27,000 - a childless penalty of $1,000 per missing child. Should the poorer family which makes $35,000 and has no children also pay the same $2,000 childless penalty as the richer family? Here, most people would also say no - they'd want the "bonus" for children to be equal for low- and high-income families, but they do not want the "penalty" for lacking children to be the same for high- and low-income families.
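To make the equivalence between the two baselines concrete, here is a sketch using the $100,000 family's figures (a $1,000 per-child rate makes the two descriptions line up exactly; the function names are mine):

```python
# One tax schedule, described two ways: as a per-child "bonus" off the
# childless baseline, or as a per-missing-child "penalty" on the two-child
# baseline. Figures follow the post's $100,000 family.

def tax_via_bonus(childless_tax, children, bonus=1_000):
    return childless_tax - bonus * children

def tax_via_penalty(two_child_tax, children, penalty=1_000):
    return two_child_tax + penalty * (2 - children)

for kids in (0, 1, 2):
    assert tax_via_bonus(27_000, kids) == tax_via_penalty(25_000, kids)
    print(kids, tax_via_bonus(27_000, kids))
# 0 -> 27000, 1 -> 26000, 2 -> 25000: identical schedule, different framing.
```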
End up falsely accused or imprisoned. In 2003, an attorney was released from prison in England when her conviction for murdering her two infants was overturned. Five months later, another person was released from prison when her conviction for having murdered her children was also overturned. In both cases, the evidence presented against them had been ambiguous. What had convinced the jury was that in both cases, a pediatrician had testified that the odds of two children in the same family dying of sudden infant death syndrome were 73 million to 1. Unfortunately, he had arrived at this figure by squaring the odds of a single death. Squaring the odds of a single event to arrive at the odds of it happening twice only works if the two events are independent. But that assumption is likely to be false in the case of multiple deaths in the same family, where numerous environmental and genetic factors may have affected both deaths.
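To see the size of the error: let $p$ be the probability that one child dies of SIDS. The single-death figure widely reported from the trial was about $1/8{,}543$, and squaring it does give the quoted number,

$$
p^{2} \approx \left(\frac{1}{8{,}543}\right)^{2} \approx \frac{1}{7.3 \times 10^{7}},
$$

but without independence the right quantity is $P(\text{both}) = p \cdot P(\text{second} \mid \text{first})$, and shared genetic or environmental risk factors can make $P(\text{second} \mid \text{first})$ far larger than $p$, so $p^{2}$ badly understates the true probability.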
In the late 1980s and early 1990s, many parents were excited and overjoyed to hear of a technique coming out of Australia that enabled previously totally non-verbal autistic children to communicate. It was uncritically promoted in highly visible media such as 60 Minutes, Parade magazine and the Washington Post. The claim was that autistic individuals and other children with developmental disabilities who'd previously been nonverbal had typed highly literate messages on a keyboard while their hands and arms were supported over the keyboard by a sympathetic "facilitator". As Stanovich describes: "Throughout the early 1990s, behavioral science researchers the world over watched in horrified anticipation, almost as if observing cars crash in slow motion, while a predictable tragedy unfolded before their eyes." The hopes of countless parents were dashed when it was shown that the "facilitators" had been - consciously or unconsciously - directing the children's hands onto the right keys. It should have been obvious that spreading such news before the technique had been properly scientifically examined was dangerously irresponsible - and it gets worse. During some "facilitation" sessions, children "reported" having been sexually abused by their parents, and were removed from their homes as a result. (Though they were eventually returned.)
End up dead. After 9/11, people became afraid of flying and started doing it less; instead, they began driving more. Unfortunately, per mile traveled, car travel carries a much higher risk of death than air travel. Researchers have estimated that over 300 more people died in the last months of 2001 because they drove instead of flying. Another group calculated that for flying to be as dangerous as driving, there would have to be an incident on the scale of 9/11 once a month!
Have your society collapse. Possibly even more horrifying is the tale of Albania, which had previously been a communist dictatorship but had made considerable economic progress from 1992 to 1997. By 1997, however, one half of the adult population had fallen victim to Ponzi schemes. In a Ponzi scheme, the investment itself isn't actually making any money; rather, early investors are paid off with the money from late investors, and the system has to collapse once no new investors can be recruited (the sketch below works through the arithmetic). When schemes offering a 30 percent monthly return became popular in Albania, competitors offering a 50-60 or even a 100 percent monthly return soon showed up, and people couldn't resist the temptation. Eventually both the government and the economy of Albania collapsed. Stanovich describes:
The estimated death toll was between 1,700 and 2,000.
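A toy sketch of the underlying arithmetic (starting figure invented): a scheme promising 30 percent per month must raise exponentially more new money every month, and defaults as soon as recruitment stalls:

```python
# Why a 30%-per-month Ponzi scheme must collapse: every payout is funded by
# new deposits, which immediately become new obligations themselves.

obligations = 1_000_000   # hypothetical money owed to existing investors
monthly_rate = 0.30

for month in range(1, 13):
    payout = obligations * monthly_rate   # promised returns due this month
    obligations += payout                 # paid with fresh deposits, now owed too
    print(f"month {month:2d}: needs ${payout:,.0f} in new deposits "
          f"(total owed ${obligations:,.0f})")

# After one year obligations have grown by 1.3**12, roughly 23x; once new
# investors can no longer be found at that pace, nearly everyone is defaulted on.
```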