Jiro comments on What Cost for Irrationality? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Regarding the "status quo bias" example with the utility company, I think it's fallacious, or at least misleading. For realistic typical humans with all their intellectual limitations, it is rational to favor the status quo when someone offers to change a deal that has so far worked tolerably well in ways that, for all you know, could have all sorts of unintended consequences -- not to mention the swindles that might be hiding in the fine print.
Moreover, if the utility company had actually started selling different deals rather than just conducting a survey about hypotheticals, it's not like typical folks would have stubbornly held to unfavorable deals for years. What happens in such situations is that a clever minority figures out that the new deal is indeed more favorable and switches -- and word about their good experience quickly spreads and soon becomes conventional wisdom, which everyone else then follows.
This is how human society works normally -- what you call "status quo bias" is a highly beneficial heuristic that prevents people from ruining their lives. It makes them stick to what has worked well so far instead of embarking on attractive-looking but potentially dangerous innovations. When this mechanism breaks down, all kinds of collective madness can follow (speculative bubbles and Ponzi schemes being the prime examples). Generally, it is completely rational to favor a tolerably good status quo even if some calculation tells you that an unconventional change might be beneficial, unless you're very confident in your competence to do that calculation, or you know of other people's experiences that have confirmed it.
Replying to old post...
I would suggest something even stronger: the people exhibiting the "status quo bias" in the utility example are correct. The fact that a deal has worked out tolerably well in the real world is information -- it indicates that the deal lacks the hidden gotchas that the alternative might have. Bayesianism demands taking this information into account.
Where this gets confusing is the comparison between the two groups of customers, each starting out with the opposite plan. But the customers don't have the same information -- one group knows that one plan is tolerable, and the other group knows that the other plan is tolerable. Given this difference in information, it is rational for each group to stick with the plan it already has. It is true, of course, that both groups cannot each be better off than the other, but all that means is that a decision that is probabilistically best for you can still turn out unlucky. Each customer rationally concluded that the unfamiliar plan had a higher chance of hiding a gotcha than the plan they knew from experience, and that conclusion does not become irrational just because the other plan turns out to have no gotcha after all.
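To make the Bayesian point concrete, here is a minimal sketch in Python. All the numbers are made up purely for illustration (a 0.2 prior that any given plan hides a gotcha, and assumed likelihoods of a plan "working tolerably well so far" with and without one); the point is only the shape of the update, not the particular values:

```python
# Hypothetical prior: any given plan hides a gotcha with probability 0.2.
p_gotcha = 0.2

# Assumed likelihoods of the evidence "this plan has worked tolerably
# well so far" -- lower if there is a gotcha, higher if there isn't.
p_ok_given_gotcha = 0.3
p_ok_given_clean = 0.9

# Bayes' rule: posterior probability that YOUR plan hides a gotcha,
# given that it has worked tolerably well so far.
p_ok = p_gotcha * p_ok_given_gotcha + (1 - p_gotcha) * p_ok_given_clean
p_gotcha_given_ok = p_gotcha * p_ok_given_gotcha / p_ok

# The unfamiliar plan offers no such evidence, so it carries the raw prior.
p_gotcha_other = p_gotcha

print(f"your plan:  {p_gotcha_given_ok:.3f}")   # about 0.077
print(f"other plan: {p_gotcha_other:.3f}")      # 0.200
```

Each group runs this same update on its own plan, so each group rationally assigns a lower gotcha probability to the plan it already has -- which is exactly the asymmetry described above.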