A couple years ago, Aaron Swartz blogged about what he called the "percentage fallacy":

There’s one bit of irrationality that seems like it ought to be in behavioral economics introductions but mysteriously isn’t. For lack of a better term, let’s call it the percentage fallacy. The idea is simple:

One day I find I need a blender. I see a particularly nice one at the store for $40, so I purchase it and head home. But on the way home, I see the exact same blender on sale at a different store for $20. Now I feel ripped off, so I drive back to the first store, return the blender, drive back to the second store, and buy it for $20.

The next day I find I need a laptop. I see a particularly nice one at the store for $2500, so I purchase it and head home. But on the way home, I see the exact same laptop for $2480. “Pff, well, it’s only $20,” I say, and continue home with the original laptop.

I’m sure all of you have done something similar — maybe the issue wasn’t having to return something, but spending more time looking for a cheaper model, or fiddling with coupons and rebates, or buying something of inferior quality. But the basic point is consistent: we’ll do things to save 50% that we’d never do to save 1%.

He recently followed up with a speculation that this may explain some irrational behaviour normally attributed to hyperbolic discounting:

In a famous experiment, some people are asked to choose between $100 today or $120 tomorrow. Many choose the first. Meanwhile, some people are asked to choose between $100 sixty days from now or $120 sixty-one days from now. Almost everyone chooses the latter. The puzzle is this: why are people willing to sacrifice $20 to avoid waiting a day right now but not in the future?

The standard explanation is hyperbolic discounting: humans tend to weigh immediate effects much more strongly than distant ones. But I think the actual psychological effect at work here is just the percentage fallacy. If I ask for the money now, I may have to wait 60 seconds. But if I get it tomorrow I have to wait 143900% more. By contrast, waiting 61 days is only about 1.7% worse than waiting 60 days. Why not wait an extra 2% when you get 20% more money for it?

Has anyone done a test confirming the percentage fallacy? A good test would be to see whether people treat the $100 vs. $120 tradeoff as equivalent to the $1000 vs. $1200 tradeoff.
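To make the quoted arithmetic concrete, here's a minimal Python sketch (the 60-second baseline for "now" is Swartz's assumption):

```python
def pct_increase(old, new):
    """Percentage by which `new` exceeds `old`."""
    return (new - old) / old * 100

seconds_now = 60            # "money now" still means ~60 seconds of waiting
seconds_tomorrow = 86_400   # one day, in seconds

print(pct_increase(seconds_now, seconds_tomorrow))  # 143900.0 -> "143900% more"
print(pct_increase(60, 61))                         # ~1.67    -> 61 days vs. 60 days
print(pct_increase(100, 120))                       # 20.0     -> $120 is 20% more money
```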

Is this a real thing? Is there any such research? Is there existing evidence that specifically supports the usual hyperbolic discounting explanation over this one?


The "percentage fallacy" is a real thing, and it does get covered in standard JDM/behavioral economics. See the classic Kahneman & Tversky jacket-calculator study, for instance. K&T also cite a study which shows that if you compare the prices of a consumer good at various stores, the standard deviation in price is roughly proportional to the price of the good.

I don't think it has a handy name, but the money version ($20 out of $2500 or $40) usually gets covered under mental accounting, as reflecting the shape of the value function (diminishing marginal sensitivity). It also gets connected more broadly to Weber's Law in psychophysics, which basically says that people's perceptions of a continuous quantity (including weight, brightness, etc.) follow a log scale - e.g., if someone is holding a weight and you add more weight to it, the smallest change that they'll notice is proportional to the total amount of weight that they're holding.
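To make the Weber's Law analogy concrete, here is a minimal Python sketch; the 2% Weber fraction is an illustrative assumption, not a measured constant:

```python
# Weber's Law: the just-noticeable difference (JND) in a stimulus is
# roughly proportional to its current magnitude, i.e. perception is
# approximately logarithmic.
WEBER_FRACTION = 0.02  # illustrative; real fractions vary by sense modality

def jnd(magnitude):
    """Smallest change likely to be noticed, under Weber's Law."""
    return WEBER_FRACTION * magnitude

print(jnd(1.0))   # holding 1 kg: ~0.02 kg added is noticeable
print(jnd(50.0))  # holding 50 kg: ~1 kg must be added to notice

# The same framing applied to the blender/laptop prices:
print(20 / 40)    # 0.5   -> $20 off $40 is huge on this scale
print(20 / 2500)  # 0.008 -> $20 off $2500 barely registers
```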

There has been some research applying this to temporal discounting, including a recent paper by Zauberman and colleagues:

Zauberman, G., Kim, B. K., Malkoc, S. A., & Bettman, J. R. (2009). Discounting time and time discounting: Subjective time perception and intertemporal preferences. Journal of Marketing Research, 46(4), 543–556.

I don't think it has a handy name

I usually refer to this as "thinking logarithmically".

It seems to me it's more important to save a larger percentage on $40 items than a much smaller percentage on $2500 items, since in the long run you'll buy many more of the first type of item than the second. So it looks like a useful heuristic.

This is a really excellent point, and ties into the recent post about how some alleged fallacies are actually just necessary heuristics ... which I can't seem to find now. I was reading about that on LW, wasn't I?

Vladimir_M made that point in the comments here.

Yeah, and it's a good one there too, but not what I was thinking of--I hadn't seen that post.


I was reading about that on LW, wasn't I?

It sounds like something one would read on LW, but I don't recall any particular post on that subject. It's not a terribly non-obvious point, though; it's not like evolution is stupid enough to invent fallacies and biases for no reason at all. We wouldn't have the original heuristics in the first place if they didn't work often enough to be more useful than detrimental in the ancestral environment.

Hunh. I wonder where that was, then. And yeah, the reasons for it are clear enough, but I think it's a helpful reminder just the same. Evolution has had a lot more time and evidence to work with than rationalists have; it behooves us to be cautious when denouncing its conclusions.

It's not as if going back to the store to save $20 on a blender will save you $20 on every $40 purchase. You'd have to do it for each purchase. If it's worth doing a hundred times, it's worth doing once.

This has been discussed before.

My first thought is that you buy $40 items much more often than $2500 items; saving $20 on each $40 purchase will save more money than the same savings on $2500 purchases, especially given the costs of driving around town and negotiating on high-value purchases. I'll agree that if you include these hidden costs and convert to a utility calculation, people are probably not perfectly efficient, but doing the calculations themselves can be expensive.

In the example, the costs of making the savings are the same. Saving $20 on each $40 purchase will save you more money, but saving $20 on every purchase will save more still. If it's worth going back to the store to save $20, it's worth doing no matter how many times you do it.

Sure; on the other hand, I have one type of algorithm for dealing with $40 purchases; I do less than a day's worth of research and think about whether I really want it or not. I make $20-$40 purchases at least once a week, maybe more. Changing my algorithm to save money on them will clearly give me a great increase in utility.

I have a separate algorithm for judging $2500 purchases. I think about them for days or weeks or maybe more, and don't make the purchases until I'm certain that they're sound. Some of the things I'll need to decide on will be "will I use this" and "does it have the features I want," which will be factors worth far more than $20. If I extend the set of things that I'm looking for to be "any factor which is worth at least $20" then I may end up spending too much time on analysis and end up not making a purchase when I could be using the laptop already, or missing out on a sale and having the laptop end up costing $500 more. That last part has happened to me multiple times, quibbling over much more important factors than $20 in comparative cost.

Yes, if you can go back to the store, return your laptop, and buy it again for $20 less, it's probably worth it to do so. But this situation is much rarer and much less important than saving money on a $40 purchase.

One way in which I've observed some very smart and numerate people falling for this fallacy is when they run out of cash in bars and are forced to take money out of those rip-off ATMs that charge $3 or so per transaction. People will often take a larger amount of money than necessary, rationalizing that the rip-off isn't that bad if it's only a small percentage of the amount (I've heard this "reasoning" expressed loudly several times). This despite the fact that tomorrow they'll walk by their own bank's ATM from which they can take money without any fees, so there's absolutely no benefit from taking more money than necessary from the expensive one.

I am myself not immune to this feeling, even though I'm perfectly aware it's completely irrational. I would feel awful if I paid $3 to take out a twenty and then took $100 without a fee next day, but paying $3 to take $120 feels much less bad.
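Put in numbers (a trivial sketch; the point is that the absolute cost is identical either way):

```python
# A flat $3 ATM fee costs the same $3 regardless of the withdrawal,
# but the percentage framing makes the larger withdrawal feel cheaper.
fee = 3.0
print(fee / 20)    # 0.15  -> 15% of a $20 withdrawal: feels like a rip-off
print(fee / 120)   # 0.025 -> 2.5% of $120: feels fine, yet costs the same $3
```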

Well, if you aren't sure exactly how much money you need, you'd want to err on the side of taking too much so you don't risk making two transactions.

Yes, but on several occasions, I have heard people explicitly say that they would take more cash just because it's supposedly less of a rip-off to pay the same fee on a much larger amount. So it's not a hypothesis about their observed behavior, but their clearly expressed reasoning.

(Plus, you probably don't want to take and carry around a significant amount of extra cash when you're half-drunk in a bar, and it's just a short time before last call, so you can't possibly need more than the cost of one or two more drinks and perhaps a cab ride home.)


So it's not a hypothesis about their observed behavior, but their clearly expressed reasoning.

But expressed reasons are often not true confessions of real reasons. It is risky to put a great deal of store in people's explanations of their actions. Edit: to clarify, by "real reasons" I don't even necessarily mean conscious or mentally present reasons. Our real reasons may be "reasons" in the sense that the "reason" we have eyes is to see. This is of course an evolutionary "reason". Similarly, our behavior may have evolutionary "reasons" which we have no access to whatsoever, forcing us to make something up in order to fill in the gaps. We do seem to have a strong tendency to do that - to fill in the gaps, to explain ourselves, both to others and to ourselves.

Taking out the larger amount of money saves the small amount of effort/time needed to get money from the free ATM the next day.

The question is whether it's worth $3 to you at that moment to avoid a) starting a tab, b) walking to the nearest no-fee ATM, or c) not drinking for the rest of the night.

Sometimes it is and sometimes it isn't, but you're bang on that the amount of cash you get out doesn't make a difference.

And the sad thing is that people doing this is probably part of the reason it costs $3 per transaction, though the effect can't be that large if they charge a flat fee rather than a percentage.

Thinking about some of the examples people have posted, I think people might be making decisions based on an assumption of "what if I did this many times?". For example, if you spent all your money on blenders, you'd waste half of it if you didn't go out of your way to get the cheaper ones, whereas if you spent all your money on laptops, you'd only waste a small percentage. Likewise with the ripoff ATMs - if you withdrew all your money from them, you'd be much better off withdrawing as much as possible each time.

There was an article in Scientific American a few years ago about the Traveler's Dilemma and how human beings make more money than the Nash equilibrium says they should. Edit: Wikipedia summary

It occurred to me that the percentage fallacy might explain why people give high numbers in this version -- the Nash equilibrium is pocket change compared to the max payoff. The same is true for the reward for undercutting; you might not be so motivated to low-ball if your reward for doing so is 2% of the max payoff.

It would be interesting to see an experiment where the payoff for giving the low estimate varied. If you were playing the game with a $10 bonus for lowballing, would you give the Nash equilibrium of $10? Or would you maybe go for the $40s or $50s, hoping the other person would go even higher? My guess would be that as the reward for undercutting as a percentage of the max reward increases, people get more and more vicious, and at some percentage people will default to the Nash equilibrium.
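For reference, here is a minimal sketch of the Traveler's Dilemma in the Basu/Scientific American version (claims from $2 to $100, bonus/penalty of $2); the best-response loop shows the unraveling that makes the minimum claim the Nash equilibrium:

```python
# Traveler's Dilemma: both players claim an amount; both are paid the
# lower claim, with a bonus R to the lower claimant and a penalty R to
# the higher one (Basu's version: claims in [2, 100], R = 2).
LOW, HIGH, R = 2, 100, 2

def payoff(mine, theirs):
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low + R if mine < theirs else low - R

def best_response(theirs):
    """My claim that maximizes my payoff against a fixed opposing claim."""
    return max(range(LOW, HIGH + 1), key=lambda c: payoff(c, theirs))

# Best responses unravel: against 100 play 99, against 99 play 98, ...
claim = HIGH
while best_response(claim) != claim:
    claim = best_response(claim)
print(claim)  # 2 -> the Nash equilibrium is the minimum claim
```

Note that under these standard rules the equilibrium is the minimum allowed claim no matter how large the bonus R is (as long as R > 1), so a version where the equilibrium sits at $10 would presumably need the minimum claim itself raised to $10.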

Random thoughts:

If you make a bad investment and lose 50% of your net worth, you now need to increase your net worth by 100%, not 50%, in order to get back to where you started from.
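The arithmetic behind that first thought, as a quick sketch:

```python
# After losing fraction L of your net worth, the gain needed to break
# even is L / (1 - L), which grows much faster than L itself.
def required_gain(loss):
    return loss / (1 - loss)

print(required_gain(0.50))  # 1.0  -> a 50% loss needs a 100% gain
print(required_gain(0.20))  # 0.25 -> a 20% loss needs only a 25% gain
print(required_gain(0.90))  # 9.0  -> a 90% loss needs a 900% gain
```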

Expressing probabilities as percents gets a bit weird, because subtraction doesn't really work like it should. The difference between a 1% chance and a 2% chance is bigger than the difference between a 2% chance and a 3% chance. (1% is 1 in 100, 2% is 1 in 50, and 3% is about 1 in 33.)
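And a sketch of the "1 in N" framing behind the second thought:

```python
# Converting percentage chances to the "1 in N" framing the comment uses.
for p in (0.01, 0.02, 0.03):
    print(f"{p:.0%} = 1 in {1 / p:.0f}")
# 1% = 1 in 100, 2% = 1 in 50, 3% = 1 in 33:
# the gap in N shrinks (50, then 17) even though each step is "one percent".
```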

Expressing probabilities as percents gets a bit weird, because subtraction doesn't really work like it should.

I don't understand. The expected value is what you'd get by straight subtraction.

Are you implying that the difference between 2% and 3% should be 1 in 50-33=17, or 5.9%? In that case, the difference between 100% and 99.99999% would be 100%, when they're really almost exactly the same.

On the waiting thing, my intuitions say to take the $120 tomorrow, so I might be unrepresentative - but couldn't this be explained just as well in terms of whether you have to wait at all, not necessarily just the specific amount of a day? Would you get similar results if you offered $100 tomorrow or $120 the day after? That way you have to wait either way.

Not waiting at all is just the extreme form of this, as waiting would increase the time by infinity percent.

I assume that's the point of the sixty-days-out version of the experiment--imposing a delay on either amount. Or were you wondering if it comes into play even for so small a delay? That is, where's the cutoff at which people will switch between the two behaviors?

I would also take $120 tomorrow, but I'm also poor enough that an extra $20 is a big deal. There's another way the percentage thing comes into play: the ratio of the potential gain to one's current funds.

Or were you wondering if it comes into play even for so small a delay? That is, where's the cutoff at which people will switch between the two behaviors?

Yes, this is about what I had in mind.

I'd guess that any delay that gives the other party a chance to back out would be sufficient. When determining the expected utility of each offer, there should be a term for the probability of the deal actually going through. That's very close to 1 when you take the $100 now and less if you have to wait a day for $120, which might tip the balance toward the $100. But the probabilities are nearly identical for 60 and 61 days, so $120 is the better choice there.
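A sketch of that reasoning with made-up completion probabilities (all four probabilities below are purely illustrative assumptions):

```python
# Expected value of each offer, discounted by the (assumed) probability
# that the deal actually goes through after the delay.
def expected_value(amount, p_completion):
    return amount * p_completion

# Immediate vs. one-day delay: even a modest risk of the deal falling
# through can flip the choice toward the smaller, surer amount.
print(expected_value(100, 1.00))  # 100.0 -> $100 right now
print(expected_value(120, 0.80))  #  96.0 -> $120 tomorrow, if you doubt it

# At 60 vs. 61 days the completion probabilities are nearly equal,
# so the larger amount wins.
print(expected_value(100, 0.90))  #  90.0 -> $100 in 60 days
print(expected_value(120, 0.89))  # 106.8 -> $120 in 61 days
```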

Good point. It might be interesting to try to find a money delta (i.e., $100 vs. $200 or whatever) where someone would take the earlier one at 30 vs. 31 days, but the larger one at 90 vs. 91 days. But I'm not sure how much that would prove.