
Is Sunk Cost Fallacy a Fallacy?

18 Post author: gwern 04 February 2012 04:33AM

I just finished the first draft of my essay, "Are Sunk Costs Fallacies?"; there is still material I need to go through, but the bulk of the material is now there. The formatting is too gnarly to post here, so I ask everyone's forgiveness in clicking through.

To summarize:

  1. sunk costs are probably issues in big organizations
    • but maybe not ones that can be helped
  2. sunk costs are not issues in animals
  3. they appear to be in children & adults
    • but many apparent problems can be explained as part of a learning strategy
  4. there are few clear indications sunk costs are genuine problems
  5. much of what we call 'sunk cost' looks like simple carelessness & thoughtlessness

(If any of that seems unlikely or absurd to you, click through. I've worked very hard to provide multiple citations where possible, and fulltext for practically everything.)

I started this a while ago; but Luke/SIAI paid for much of the work, and that motivation plus academic library access made this essay more comprehensive than it would have been and finished months in advance.

 

Comments (79)

Comment author: Morendil 04 February 2012 11:03:01AM 9 points [-]

There are interesting examples of this in Go, where pro play commentary often discusses tensions between "cutting your losses" and "being strategically consistent".

If things in Go aren't as clear-cut as the classic utilitarian example of "teleporting into the present situation" (which is typically the way Go programs are written, and they nevertheless lose to top human players), then maybe we can expect that they aren't clear-cut in complex life situations either.

This doesn't detract from the value of teaching people the sunk-cost fallacy: novice Go players do things such as adding stones to an already dead group which are clearly identifiable as instances of the sunk cost fallacy, and improvement reliably follows from helping them identify this as thinking that leads to lost games. Similarly, improvement at life reliably results from improving your ability to tell it's time to cut your losses.

Comment author: Prismattic 04 February 2012 05:34:29PM 3 points [-]

novice Go players do things such as adding stones to an already dead group which are clearly identifiable as instances of the sunk cost fallacy,

I don't think this is correct. Novice players keep adding stones because they don't realize the group is dead, not because they can't give up on it.

Comment author: Morendil 04 February 2012 06:12:10PM 6 points [-]

That's probably right at higher kyu levels, when you really have no good grasp of group status.

When you ask a novice "what is the status of this group", though, there is typically a time when they can correctly answer "dead" in exercise settings, but fail to draw the appropriate conclusion in a game by cutting their losses, and that's where I want to draw a parallel with the sunk cost fallacy.

This is similar to life situations where if you'd just ask yourself the question "is this a sunk cost, and should I abandon it" you'd answer yes in the abstract, but you fail to ask that question.

In high-pressure or blitz games this even happens to higher level novice players - you strongly suspect the group is dead, but you keep adding stones to it, playing the situation out: the underlying reasoning is that your opponent has to respond to any move that might save the group, so you're no worse off, you've played one more move and they've played one more.

This is in fact wrong - by making the situation more settled you're in fact wasting the potential to use these plays later as ko threats.

Comment author: AnnaSalamon 06 February 2012 01:27:40AM 6 points [-]

Any idea whether Go beginners' tendency to "throw good stones after bad" results from sunk cost fallacy in particular, or from wishful thinking in general?

Like, is the thought "I don't want my stones to have been wasted" or "I really want to have that corner of the board"?

Comment author: Morendil 06 February 2012 09:52:28AM 6 points [-]

I'd have to look at actual evidence to answer that question with any degree of authority, and that would take more time than I have right now, but I can sketch an answer...

My source of empirical evidence would be the Go Teaching Ladder, where you get a chance to see higher level players commenting on the inferred thought processes of more novice players. (And more rarely, novice players providing direct evidence of their own thought processes.)

Higher level players tend to recommend "light" play over "heavy" play: a typical expression is "treat this stone lightly".

Unpacked, this means something like "don't treat this stone as an investment that you must then protect by playing further moves reinforcing your conception of this stone as a living group that must be defended; instead, treat this stone as bait that you gladly abandon to your opponent while you consolidate your strength elsewhere".

"Heavy" play sounds a lot like treating a sunk cost as a commitment to a less valuable course of action. It is play that overlooks the strategic value of sacrifice. See here for some discussion.

However, this is usually expressed from an outside perspective - a better player commenting on the style of a more novice player. I don't know for sure what goes on in the mind of a novice player when making a heavy play - it might well be a mixture of defending sunk costs, wishful thinking, heuristic-inspired play, etc.

Comment author: gwern 06 February 2012 01:47:44AM *  1 point [-]

It may be an example of a different bias at play, specifically confirmation bias: they don't realize that the stones are being wasted and can't be retrieved. For example, chess masters commit confirmation bias less than weaker players.

(It's not that the players explicitly realize that there are better moves elsewhere but decide to keep playing the suboptimal moves anyway, because of sunk costs which would be sunk cost bias; it's that they don't think of what the opponent might do - which is closer to 'thoughtlessness'.)

Comment author: gwern 04 February 2012 08:01:07PM 1 point [-]

If things in Go aren't as clear-cut as the classic utilitarian example of "teleporting into the present situation" (which is typically the way Go programs are written, and they nevertheless lose to top human players), then maybe we can expect that they aren't clear-cut in complex life situations either.

That's more a fact about Go programs, I think; reading the Riis material recently on the Rybka case, I had the strong impression that modern top-tier chess programs do not do anything at all like building a model or examining the game history, but instead do very fine-tuned evaluations of individual board positions as they evaluate plys deep into the game tree. So you could teleport a copy of Shredder into a game against Kramnik played up to that point by Shredder, and expect the performance to be identical.

(If there were any research on sunk cost in Go, I'd expect it to follow the learning pattern: high initially followed by steady decline with feedback. I looked in Google Scholar for '("wei qi" OR "weiqi" OR "wei-chi" OR "igo" OR "baduk" OR "baeduk") "sunk cost" game' but didn't turn up anything. GS doesn't respect capitalization so "Go" is useless to search for.)

Comment author: kilobug 04 February 2012 10:25:54AM 5 points [-]

Two remarks :

  1. Be careful with the Concorde example. As a French citizen, I was told that the goal of the Concorde was never to be profitable as a passenger service; it served two goals: public relations/advertising, to demonstrate to the world the technical ability of French engineering and thereby sell French-made technology (civilian and military planes, for example, but also, through a halo effect, trains or cars or nuclear power plants), and stimulating research and development that could then lead to other benefits (a bit like military research or the space program leading to civilian technology later on). Maybe it was just rationalization and not admitting they fell for the sunk cost fallacy, but as far as I remember, that was the official stance on the Concorde - and on that side, I don't really think it was sunk cost.

  2. I agree with your analysis that sunk cost is useful to counter other biases. I hadn't thought about young children not committing it, but now that you've pointed to studies showing it, it makes perfect sense (and is compatible with my own personal observation of young relatives). So, yes, the sunk cost fallacy is useful because it helps us lower the damage done by the planning fallacy and our tendency to be too optimistic. But I wouldn't go as far as saying it's not a bias. It's a bias; a "perfect rationalist" shouldn't have it. A bug that partially negates the effects of another bug, but sometimes creates problems of its own, is still a bug. So I wouldn't say "sunk cost is not a fallacy" but "sunk cost is a fallacy, but it does help us overcome other fallacies, so be careful".

Comment author: gwern 04 February 2012 07:51:12PM 1 point [-]

IMO, the Concorde justifications are transparent rationalizations - if you want research, buy research. It'd be pretty odd if you could buy more research by not buying research but commercial products... In any case, I mention Concorde because it's such a famous example and because a bunch of papers call it the Concorde effect.

I agree with your analysis that sunk cost is useful to counter other biases.

I'm not terribly confident in that claim; it might be that one suffers them both simultaneously. I had to resort to anecdotes and speculation for that section; it's intuitively appealing, but we all know that means little without hard data.

I didn't think about the part of young children not committing it, but now that you pointed to studies showing it, it makes perfect sense (and is compatible with my own personal observation of young relatives).

Yeah. I was quite surprised when I ran into Arkes's claim - it certainly didn't match my memories of being a kid! - and kept a close eye out thenceforth for studies which might bear on it.

Comment author: Strange7 26 October 2014 02:03:14AM 2 points [-]

if you want research, buy research

Focusing money too closely on the research itself runs the risk that you'll end up paying for a lot of hot air dressed up to look like research. Cool-but-useless real-world applications are the costly signalling mechanism which demonstrates an underlying theory's validity to nonspecialists. You can't fly to the moon by tacking more and more epicycles onto the crystalline-sphere theory of celestial mechanics.

Comment author: gwern 26 October 2014 04:21:51PM 0 points [-]

If you want to fly to the moon, buy flying to the moon. X-prizes etc. You still haven't shown that indirect mechanisms which happen to coincide with the status quo are the optimal way of achieving goals.

Comment author: Strange7 28 October 2014 01:11:05PM *  0 points [-]

"Modern-day best-practices industrial engineering works pretty well at its stated goals, and motivates theoretical progress as a result of subgoals" is not a particularly controversial claim. If you think there's a way to do more with less, or somehow immunize the market for pure research against adverse selection due to frauds and crackpots, feel free to prove it.

Comment author: gwern 28 October 2014 03:58:26PM 1 point [-]

is not a particularly controversial claim.

I disagree. I don't think there's any consensus on this. The success of prizes/contests for motivating research shows that grand follies like the Concorde or Apollo project are far from the only effective funding mechanism, and most of the arguments for grand follies come from those with highly vested interests in them or conflicts of interest - the US government and affiliated academics are certainly happy to make 'the Tang argument' but I don't see why one would trust them.

Comment author: Strange7 01 November 2014 11:50:01AM 0 points [-]

I didn't say it was the only effective funding mechanism. I didn't say it was the best. Please respond to the argument I actually made.

Comment author: gwern 01 November 2014 04:36:52PM 1 point [-]

You haven't made an argument that indirect funding is the best way to go, and you've made baseless claims. There's nothing to respond to: the burden of proof is on anyone who claims that bizarrely indirect mechanisms, run through flawed actors with considerable incentive to overstate their efficacy and to keep doing said indirect mechanisms (suppose funding the Apollo Project was an almost complete waste of money compared to the normal grant process; would NASA ever, under any circumstances, admit this?), are the best or even a good way to go compared to directly incentivizing the goal through contests or grants.

Comment author: Strange7 02 November 2014 05:11:57PM 0 points [-]

You haven't made an argument that indirect funding is the best way to go

On this point we are in agreement. I'm not making any assertions about what the absolute best way is to fund research.

and you've made baseless claims.

Please be more specific.

There's nothing to respond to: the burden of proof is on anyone who claims that bizarrely indirect mechanisms through flawed actors

All humans are flawed. Were you perhaps under the impression that research grant applications get approved or denied by a gleaming crystalline logic-engine handed down to us by the Precursors?

Here is the 'bizarrely indirect' mechanism by which I am claiming industrial engineering motivates basic research. First, somebody approaches some engineers with a set of requirements that, at a glance, to someone familiar with the current state of the art, seems impossible or at least unreasonably difficult. Money is piled up, made available to the engineers conditional on them solving the problem, until they grudgingly admit that it might be possible after all.

The problem is broken down into smaller pieces: for example, to put a man on the moon, we need some machinery to keep him alive, and a big rocket to get him and the machinery back to Earth, and an even bigger rocket to send the man and the machinery and the return rocket out there in the first place. The Tsiolkovsky rocket equation puts some heavy constraints on the design in terms of mass ratios, so minimizing the mass of the life-support machinery is important.
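The mass-ratio constraint from the rocket equation can be sketched numerically. This is a minimal illustration only; the delta-v and exhaust-velocity figures below are illustrative assumptions (rough textbook values for reaching low Earth orbit with a hydrogen/oxygen engine), not mission data:

```python
import math

def mass_ratio_required(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation, solved for the wet/dry mass ratio:
    delta_v = v_e * ln(m0 / m1)  =>  m0 / m1 = exp(delta_v / v_e)."""
    return math.exp(delta_v / exhaust_velocity)

# Assumed figures: ~9,400 m/s delta-v to low Earth orbit,
# ~4,400 m/s effective exhaust velocity.
ratio = mass_ratio_required(9400, 4400)
# The ratio is well above 8: most of the rocket's launch mass must be
# propellant, which is why every kilogram of life-support machinery is costly.
```

The exponential form is the point of the comment above: required launch mass grows exponentially with payload demands, so shaving mass off the life-support machinery pays off disproportionately.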

To minimize life-support mass while fulfilling the original requirement of actually keeping the man alive, the engineers need to understand what exactly the man might otherwise die of. No previous studies on the subject have been done, so they take a batch of laboratory-grade hamsters, pay someone to expose the hamsters to cosmic radiation in a systematic and controlled way, and carefully observe how sick or dead the hamsters become as a result. Basic research, in other words, but focused on a specific goal.

would NASA ever under any circumstances admit this?

They seem to be capable of acknowledging errors, yes. Are you?

"It turns out what we did in Apollo was probably the worst way we could have handled it operationally," says Kriss Kennedy, project leader for architecture, habitability and integration at NASA's Johnson Space Center in Houston, Texas, US.

http://www.newscientist.com/article/dn11326

Comment author: Jiro 01 November 2014 06:14:40PM 0 points [-]

That's like asking "If homeopathy worked and all the doctors were wrong, would they admit it?" You can't just flip a bit in the world setting Homeopathy_Works to TRUE and keep everything else the same. If homeopathy worked and yet doctors still didn't accept it, that would imply that doctors are very different than they are now, and that difference would manifest itself in lots of other ways than just doctors' opinion on homeopathy.

If funding the Apollo Project was a complete waste of money compared to the normal grant process, the world would be a different place, because that would require levels of incompetency on NASA's part so great that it would get noticed.

Or for another example: if psi was real, would James Randi believe it?

Comment author: gwern 02 November 2014 02:55:20PM *  2 points [-]

That's like asking "If homeopathy worked and all the doctors were wrong, would they admit it?"

No; it's like asking "If homeopathy didn't work and all the homeopaths were wrong, would they admit it?" You can find plenty of critics of Big Science and/or government spending on prestige projects, just like you can find plenty of critics of homeopathy.

If funding the Apollo Project was a complete waste of money compared to the normal grant process, the world would be a different place, because that would require levels of incompetency on NASA's part so great that it would get noticed.

If homeopathy was a complete waste of money compared to normal medicine implying 'great' levels of incompetency on homeopaths, how would the world look different than it does?

Comment author: ChristianKl 01 November 2014 07:12:42PM -1 points [-]

That's like asking "If homeopathy worked and all the doctors were wrong, would they admit it?" You can't just flip a bit in the world setting Homeopathy_Works to TRUE and keep everything else the same.

You can look at cases like chiropractors. For a long time there was a general belief that chiropractors didn't provide any good to patients, because the theory on which chiropractors base their practice is in substantial conflict with the theories used by Western medicine.

Suddenly in 2008 Cochrane came out with the claim that chiropractors actually do provide health benefits for patients with back pain comparable to conventional treatment for back pain.

A lot of the opposition to homeopathy is based on the fact that the theoretical basis of homeopathy is in conflict with standard Western knowledge about how things are supposed to work.

People often fail to notice things for bad reasons.

Comment author: Lumifer 28 October 2014 04:18:09PM -1 points [-]

"works pretty well" is not a controversial claim, but "motivates theoretical progress" is more iffy.

Offhand, I would say that it motivates incremental progress and applied aspects. I don't think it motivates attempts at breakthroughs and basic science.

Comment author: Strange7 01 November 2014 12:00:31PM 1 point [-]

'Breakthroughs and basic science' seem to be running in to diminishing returns lately. As a policy matter, I think we (human civilization) should focus more on applying what we already know about the basics, to do what we're already doing more efficiently.

Comment author: ChristianKl 26 October 2014 06:15:54PM 0 points [-]

IMO, the Concorde justifications are transparent rationalizations - if you want research, buy research. It'd be pretty odd if you could buy more research by not buying research but commercial products... In any case, I mention Concorde because it's such a famous example and because a bunch of papers call it the Concorde effect.

It really depends on your view of academics. If you think that, handed a pile of money, they'll just invest it in playing status games with each other, then giving them a clear, measurable outcome that provides feedback, around which they have to structure their research, could be helpful.

Comment author: Unnamed 08 February 2012 03:31:44AM 3 points [-]

A few brief comments:

The study in footnote 6 seems to show the opposite of what you say about it. The study found that diffusion of responsibility reduced the effect of sunk costs while you say "responsibility is diffused, which encourages sunk cost."

In the "subtleties" section, it's unclear what is meant by saying that "trying to still prove themselves right" is "an understandable and rational choice." After someone has made a decision and it is either right or wrong, it does not seem rational to try to prove it right (unless you just mean that it can be instrumentally rational to try to persuade others that you made the right decision).

The study quoted in fn 26 doesn't seem to match your description of it ("sunk costs were supported more when subjects were given justifications about learning to make better decisions"). The studies did not vary whether or not participants were given the learn-a-lesson justification. All participants were given that justification, and the DV was how highly they rated it.

There are a few places where you downplay evidence of sunk cost effects by saying that the effects were small, but it's not clear what standard you're using for whether an effect is large or small. If an NBA player plays an extra 10-20 minutes per game based on sunk cost thinking, that seems to me like an enormous effect (superstars only play about 25 minutes per game more than backups).

Comment author: orthonormal 06 February 2012 05:42:43PM 5 points [-]

Wait a second:

Arkes & Ayton cite 2 studies finding that committing sunk cost bias increases with age - as in, children do not commit it.

Information is worth most to those who have the least: as we previously saw, the young commit sunk cost more than the old

These are in direct contradiction with each other. What gives?

Comment author: gwern 06 February 2012 11:15:30PM *  0 points [-]

They are in contradiction, but the latter claim is supported by the large second paragraph in the children's section (the section that 'previously saw' links to), where I quote the criticism of the 2 studies and then list 5 studies which find either that children do commit it on questions or that avoidance increases over a lifetime, which to me seem to override the 2 studies.

Comment author: orthonormal 07 February 2012 01:56:31AM *  0 points [-]

Ah. Can I suggest you re-write that section to make it clearer? I admit I wasn't reading closely, but I assumed that a two-line statement before a quote from a paper was going to be the conclusion of the section.

Also, given that the evidence there is far from unidirectional, I'd rather you didn't cite it as the first piece of supporting evidence for the "gaining information" hypothesis. I expect an argument to start with its strongest pieces of evidence first.

P.S. I'm not sure I agree with your argument, but thanks for putting this together!

Comment author: gwern 07 February 2012 04:20:10AM *  0 points [-]

I already modified it; hopefully the new version is clearer.

Also, given that the evidence there is far from unidirectional, I'd rather you didn't cite it as the first piece of supporting evidence for the "gaining information" hypothesis. I expect an argument to start with its strongest pieces of evidence first.

I was going in what I thought was logical implication order of the learning hypothesis.

Comment author: Morendil 04 February 2012 11:21:42AM *  5 points [-]

when one engages in spring-cleaning, one may wind up throwing or giving away a great many things which one has owned for months or years but had not disposed of before; is this an instance of sunk cost where you over-valued them simply because you had held onto them for X months, or is this an instance of you simply never before devoting a few seconds to pondering whether you genuinely liked that checkered scarf?

If, during spring cleaning, you balk at throwing away something simply because it's sat so long in your basement, and you are tempted to justify holding on to it a little longer, then that's an instance of SCF.

If you balk at even doing spring cleaning (as I personally know some of my friends do) because the outcome is going to be reconsideration of your ownership of some items that you don't really value, but that you have "invested" in by keeping them in storage - then that is again an instance of SCF.

Spring cleaning itself, when it involves throwing things away, is an instance of cutting your losses. Storing items in anticipation of future use is not SCF (though it may be an instance of fooling yourself). Ergo, that you allow months or years to pass between storing items and spring cleaning is not per se an instance of SCF, even though past use of your storage space does represent a sunk cost.

ETA: on the other hand, balking at throwing something away because of emotional attachment does not necessarily qualify as SCF. For instance, you might balk at throwing away kids' toys that you know your now-grown kids are never going to use again, and that putative grandchildren are unlikely to use, because you would like to retain the option of using these items later to bring back happy memories.

Comment author: Prismattic 04 February 2012 05:32:47PM 2 points [-]

Balking at getting rid of things you own may sometimes be more about the endowment effect than the sunk cost fallacy.

Comment author: Unnamed 08 February 2012 02:41:33AM 2 points [-]

About the “Learning” section:

I think I understand the basic argument here: sometimes an escalation of commitment can be rational as a way to learn more from a project by continuing it for longer. But it seems like this only applies to some cases of sunk cost thinking and not others. Take Thaler's example: I don't see why a desire to learn would motivate someone to go to a football game in a blizzard (or, more specifically, how you'd learn more if you had paid for your ticket than if you hadn't).

And in some cases it seems like an escalation of commitment can hinder learning. Learning from a failed project often requires admitting that you made a mistake. One of the motivations for continuing a failing project is to avoid admitting you made a mistake (I believe that's called the “self-justification” explanation of the sunk cost fallacy). If you finish the project you can pretend that there was no mistake, but if you stop it prematurely you have to admit your error which can allow you to learn more. For example, if you throw out food from your plate then it's clear that you cooked too much (learning!), but if you eat everything on your plate to avoid wasting it then the error's less clear.

At the end of that section there's a list of findings that "snap into focus" if escalation of commitment is for learning, but with many of them I don't see a clear connection to the learning hypothesis. For instance, "in situations where participants can learn and update, we should expect sunk cost to be attenuated or disappear" seems consistent with many different theories of sunk costs, including the theory that it's a bias which people can learn to avoid as they gain more experience with a type of decision. Is there something specific about the cited studies that points to the hypothesis that escalation of commitment is for learning?

Comment author: gwern 09 February 2012 01:26:11AM 1 point [-]

Take Thaler's example: I don't see why a desire to learn would motivate someone to go to a football game in a blizzard

You'd learn more what it's like to go in a blizzard - maybe it's not so bad. (Personally, I've gone to football games in non-blizzards and learned that it is bad.) If you knew in this specific instance, drawn from all the incidents in your life, that you wouldn't learn anything, then you've already learned what you can and sunk cost oughtn't enter into it. It's hard to conclude very much from answers to hypothetical questions.

seems consistent with many different theories of sunk costs, including the theory that it's a bias which people can learn to avoid as they gain more experience with a type of decision.

Any result is consistent with an indefinite number of theories, as we all know. The results fit very neatly with a learning theory, and much more uncomfortably with things like self-justification.

Comment author: malthrin 07 February 2012 04:06:28PM 2 points [-]

Good point. My interpretation of what you're saying is that the error is actually failure to re-plan at all, not bad math while re-planning.

Comment author: Douglas_Knight 08 February 2012 02:28:24AM 0 points [-]

I find that a very helpful formulation. I could not tell where Gwern was drawing distinctions.

Comment author: JoachimSchipper 04 February 2012 10:06:27AM 2 points [-]

I had serious trouble understanding the paragraph "COUNTERING HYPERBOLIC DISCOUNTING?" beyond "sunk costs probably counter other biases".

Also, I'd like to point out that, if sunk costs are indeed a significant problem in large organizations, they are indeed a significant problem; large organizations are (unfortunately?) rather important to modern life.

Comment author: gwern 04 February 2012 07:47:02PM 0 points [-]

What's not clear about it? That's the idea.

they are indeed a significant problem

Only if there are better equilibria which can be moved to by attacking sunk cost - otherwise they are simply the price of doing business.

(I only found two studies bearing on it, neither of which were optimistic: the study finding sunk costs encouraged coordination and the bank study finding attacking sunk cost resulted in deception and falsification of internal metrics.)

Comment author: wedrifid 04 February 2012 09:12:26AM *  6 points [-]

Is Sunk Cost Fallacy a Fallacy?

Yes, it is. Roughly speaking it is when you reason that you should persist in following a choice of actions that doesn't give the best expected payoff because you (mistakenly) treat already spent resources as if they are a future cost of abandoning the path. If your essay is about "Is the sunk cost fallacy a problem in humans?" then the answer is not so trivial.
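That reasoning error can be sketched as a toy model (the action names and payoff numbers below are hypothetical, purely for illustration): the rational rule compares only expected future payoffs, while the fallacious rule counts already-spent resources as a cost of abandoning the current path.

```python
def rational_choice(options):
    """options: dict mapping action -> expected future payoff.
    Sunk costs are identical across actions, so they never appear here."""
    return max(options, key=options.get)

def sunk_cost_choice(options, current_action, sunk):
    """Fallacious rule: penalize every action other than current_action
    by the resources already spent on it - resources which are
    unrecoverable either way, and so should not enter the comparison."""
    adjusted = {a: (v if a == current_action else v - sunk)
                for a, v in options.items()}
    return max(adjusted, key=adjusted.get)

options = {"continue project": 10, "switch to new project": 30}
rational = rational_choice(options)
fallacious = sunk_cost_choice(options, "continue project", sunk=25)
# The rational rule switches (30 > 10); the fallacious rule persists,
# because subtracting the 25 already spent makes switching look worth only 5.
```

The two rules diverge exactly when the sunk amount exceeds the gap between the future payoffs, which is the sense in which the fallacy is a mistake in a simple model regardless of how often real humans commit it.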

It is not clever or deep to title things as though you are overturning a basic principle when you are not. As far as I am concerned a (connotatively) false title - and the implicit conclusion conveyed thereby - significantly undermines any potential benefit the details of the essay may provide. I strongly suggest renaming it.

Comment author: gwern 04 February 2012 07:40:57PM 16 points [-]

If your essay is about "Is the sunk cost fallacy a problem in humans?" then the answer is not so trivial.

And if it isn't, as I conclude (after an introduction discussing the difference between being valid in a simplified artificial model and the real world!), then it's perfectly legitimate to ask whether accusations of sunk cost fallacy - which are endemic and received wisdom - are themselves fallacious. Sheesh. I feel as if I were discussing someone's credibility and someone said 'but that's an ad hominem!'. Yes. Yes, it is.

(Notice your Wikipedia link is full of hypotheticals and description, and not real world evidence.)

It is not clever or deep to title things as though you are overturning a basic principle when you are not.

People do not discuss sunk cost because it is a theorem in some mathematical model or a theoretical way possible agents might fail to maximize utility; they discuss it because they think it is real and serious. If I conclude that it isn't serious, then in what sense am I not trying to overturn a basic principle?

Finally, your criticism of the title or what overreaching you perceive in it aside, did you have any actual criticism like missing refs or anything?

Comment author: Sniffnoy 05 February 2012 12:53:10AM 4 points [-]

And if it isn't, as I conclude (after an introduction discussing the difference between being valid in a simplified artificial model and the real world!), then it's perfectly legitimate to ask whether accusations of sunk cost fallacy - which are endemic and received wisdom - are themselves fallacious.

But none of this changes the fact that the title is still misleading. Even if accusations of sunk cost fallacy are themselves often fallacious, this doesn't change the fact that you are arguing that the sunk cost fallacy is a mode of reasoning which doesn't often occur, rather than one that is actually valid. Claiming that it is not serious may indeed be overturning a basic principle, but it is not the basic principle the title claims you may be overturning. Sensationalize if you like, but there's no need to be unclear.

Comment author: paper-machine 05 February 2012 02:25:31AM 1 point [-]

Even if accusations of sunk cost fallacy are themselves often fallacious, this doesn't change the fact that you are arguing that the sunk cost fallacy is a mode of reasoning which doesn't often occur, rather than one that is actually valid.

I don't know how you got that from the essay. To quote, with added emphasis:

We can and must do the same thing in economics. In simple models, sunk cost is clearly a valid fallacy to be avoided. But is the real world compliant enough to make the fallacy sound? Notice the assumptions we had to make: we wish away issues of risk (and risk aversion), long-delayed consequences, changes in options as a result of past investment, and so on.

Comment author: wedrifid 05 February 2012 06:05:18AM 0 points [-]

I don't know how you got that from the essay.

I believe Sniffnoy, like myself, gave the author the benefit of the doubt and assumed that he was not actually trying to argue against a fundamental principle of logic and decision theory but rather claiming that the principle applies to humans far less than often assumed. If this interpretation is not valid then it would suggest that the body of the post is outright false (and logically incoherent) rather than merely a non sequitur with respect to the title and implied conclusion.

Comment author: paper-machine 05 February 2012 06:23:58AM *  1 point [-]

Sniffnoy claims that gwern has argued "that the sunk cost fallacy is a mode of reasoning which doesn't often occur, rather than one that is actually valid."

Actually, what gwern has argued is that while the sunk cost fallacy is often used as a heuristic, there is little evidence that it is sound to do so in real-world situations. This also seems to be what you've said, but it is not what Sniffnoy has said.

Hence my confusion.

On a side note, I don't really understand your qualms with the title, but that's less important to me.

Comment author: wedrifid 05 February 2012 06:26:04AM *  1 point [-]

(Notice your Wikipedia link is full of hypotheticals and description, and not real world evidence.)

Precisely. The Wikipedia article set out to explain what the Sunk Cost Fallacy is and did it. It did not set out to answer any of the dozens of questions which would make sense as titles to your post (such as "Is the sunk cost fallacy a problem in humans?") and so real-world 'evidence' wouldn't make much sense. Just like filling up the article on No True Scotsman with evidence about whether true Scotsmen actually do like haggis would be rather missing the point! (The hypothetical is built right into the name of the informal fallacy!)

then it's perfectly legitimate to ask whether accusations of sunk cost fallacy - which are endemic and received wisdom - are themselves fallacious.

And with a slight tweak that is another thing that you could make your post about that wouldn't necessitate dismissing it out of hand. Please consider renaming along these lines.

  • Are most accusations of the Sunk Cost Fallacy fallacious?
  • Fallacious thinking about Sunk Costs
  • Sunk Costs - not a big deal
  • Accusations of Sunk Cost Fallacy Often Fallacious?
  • Fallacious thinking about Sunk Costs - a problem in the real world?

Finally, your criticism of the title or what overreaching you perceive in it aside, did you have any actual criticism like missing refs or anything?

Without implicitly accepting the connotations here by responding - No, your article seems to be quite thorough with making references. In particular all the dot points in the summary seem to be supported by at least one academic source.

Comment author: Eugine_Nier 06 February 2012 02:24:42AM 1 point [-]
Comment author: gwern 06 February 2012 02:27:11AM 0 points [-]

Linked in a footnote, BTW.

Comment author: Grognor 11 February 2012 07:35:06PM -1 points [-]

Also related: Sunk Cost Fallacy by Zachary M. Davis

Comment author: Psychohistorian 04 February 2012 04:36:29PM *  0 points [-]

Content aside, you should generally avoid both the first person and qualifiers, and you should definitely avoid combining them, e.g. "I think it is interesting." Where some qualifiers are appropriate, you often phrase them too informally; e.g. "perhaps it is more like" would read much better as "It is possible that" or "a possible explanation is." Some first-person pronouns are acceptable, but they should really only be used when the only alternative is an awkward or passive sentence.

The beginning paragraph of each subsection should give the reader a clear idea of the ultimate point of that subsection, and you would do well to include a roadmap of everything you plan to cover at the beginning.

I don't know if this is the feedback you're searching for or if the writing style is purposeful, just my two cents.

Comment author: grouchymusicologist 04 February 2012 08:09:02PM 4 points [-]

I think how important these criticisms are depends on who the intended audience of the essay is -- which Gwern doesn't really make clear. If it's basically for SIAI's internal research use (as you might think, since they paid for it), tone probably hardly matters at all. The same is largely the case if the intended audience is LW users -- our preference for accessibly, informally written scholarly essays is revealed by our being LW readers. If it's meant as a more outward-facing thing, and meant to impress academics who aren't familiar with SIAI or LW and who judge writings based on their adherence to their own disciplinary norms, then sure. (Incidentally, I do think this would be a worthwhile thing to do, so I'm not disagreeing.) Perhaps Gwern or Luke would care to say who the intended beneficiaries of this article are.

For myself, I prefer scholarly writing that's as full of first-person statements as the writer cares to make it. I feel like this tends to provide the clearest picture of the writer's actual thought process, and makes it easier to spot where any errors in thinking actually occurred. I rarely think the accuracy of an article would be improved if the writer went back after writing it and edited out all the first-person statements to make them sound more neutral or universal.

Comment author: gwern 04 February 2012 08:13:13PM 0 points [-]

Well, style wasn't really what I had in mind since it's already so non-academic in style, but your points are well taken. I've fixed some of that.

Comment author: Dmytry 13 February 2012 02:37:12PM *  0 points [-]

I came up with an example of how the sunk cost fallacy could help increase income for two competing agents.

Consider two corporations that each sank a considerable sum of money into two interchangeable, competing IP-heavy products - digital cameras, for example. They need to recover that cost, which they cannot do if they start price-cutting each other while ignoring the sunk costs. If both refuse to cut prices below the point where the sunk costs would go unrecovered, they settle at a price that permits them to recover the software development costs. If they ignore sunk costs, they can price-cut to the point where they don't recover development expenses. Effectively, the fallacy results in price-fixing behaviour.

Note: on second thought, digital cameras, being a luxury item, may be a poor choice for that example. Corporate goods, such as network hardware, may be a better choice. Luxury goods keep selling OK even if someone is price-cutting you, as luxuries derive some of their value from the price itself.
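(To make the mechanism concrete, here is a toy simulation of the price war - all the numbers are invented and this is only a sketch of the commitment effect, not a real oligopoly model. A firm that ignores sunk costs will undercut all the way down to marginal cost; a firm that insists on covering its sunk investment stops cutting at average total cost, and both firms end up recovering their development spend.)

```python
# Toy Bertrand-style price war between two firms selling interchangeable
# goods. Each firm paid a sunk development cost S and has marginal cost c
# per unit. Hypothetical numbers throughout.

def undercut_equilibrium(price_floor, start_price, step=1.0):
    """Firms alternately undercut each other by `step` until neither
    is willing to price below its floor; return the resting price."""
    price = start_price
    while price - step >= price_floor:
        price -= step  # one firm undercuts; the other matches next round
    return price

S = 1000.0   # sunk development cost per firm
c = 10.0     # marginal cost per unit
units = 100  # units each firm expects to sell at the shared price

def profit(price):
    return units * (price - c) - S

# Firms that ignore sunk costs: the only floor is marginal cost.
p_rational = undercut_equilibrium(price_floor=c, start_price=30.0)

# Firms that refuse to price below average total cost (c + S/units).
p_sunk = undercut_equilibrium(price_floor=c + S / units, start_price=30.0)

print(p_rational, profit(p_rational))  # price collapses to c; S never recovered
print(p_sunk, profit(p_sunk))          # price covers the development cost
```

The "fallacious" firms behave exactly like colluding price-fixers, which is Dmytry's point: the individually irrational floor functions as a credible commitment.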

Comment author: Vladimir_Nesov 13 February 2012 02:48:03PM 1 point [-]

There are better ways of making credible commitments than having a tendency to commit sunk cost fallacy.

Comment author: gwern 13 February 2012 06:42:08PM *  3 points [-]

For ideal agents, absolutely. For things like humans... Have you looked at the models in "Do Sunk Costs Matter?", McAfee et al 2007?

EDIT: I've incorporated all the relevant bits of McAfee now, and there are one or two other papers looking at sunk cost-like models where the behavior is useful or leads to better equilibria.

Comment author: Dmytry 13 February 2012 02:54:06PM 0 points [-]

Of course. But what works, works; you'd cripple an agent by dispelling its fallacies without providing alternatives.

Comment author: PhilGoetz 27 February 2012 11:05:01PM 0 points [-]

While that may be true, I don't see how it has any consequences.

Comment author: NCoppedge 13 February 2012 07:00:18PM 0 points [-]

I would like to argue that it is less important to determine IF it is a fallacy, than what kind it is.

One view is that this is a "deliberation" fallacy, along the lines of a failed thought experiment; e.g. 'something went wrong because conditions weren't met.' Another view is that this fallacy, which relates, if I am correct, to "resource shortages" or "debt crises", is in fact a more serious 'systems error', such as a method fallacy involving recursivity or logic gates.

To some extent at this point I am prone to take the view that the extent of the problem is proportionistic, leading to a kind of quantitative rather than qualitative perspective, which makes me think in my own reasoning that it is not true logic, and therefore not a true logical problem.

For example, it can be argued modal-realistically that in some contingent or arbitrarily divergent context or world, debt might be a functional or conducive phenomenon that is incorporated in a functional framework.

I would be interested to know if this kind of reasoning is or is not actually helpful in determining about a debt crisis. Perhaps as might be expected, the solution lies in some kind of "technologism," and not a traditional philosophical debate per se.

Comment author: adamisom 04 February 2012 08:21:06PM *  0 points [-]

Well, I always thought it was obvious that "sunk cost" has one advantage going for it.

Placing a single incident of a "sunk cost" in a larger context, "sunk costs" can serve as a deterrent against abandoning projects. I wonder if the virtue of persistence isn't maligned. After all, as limited rationality machines, 1) we hardly ever can look at the full space of possible alternatives, and 2) probably underestimate the virtue of persistence. Pretty much every success story I've ever read is of someone who persisted beyond what you might call "the frustration barrier".

As I think about the error in forecasting expected payoff, it seems to me that unless we have a lot of experience with pushing projects through to the end, we're likely to underestimate the value of persistence, due to compounding effects and comparative advantage (if few people gain some skill).

Comment author: gwern 04 February 2012 08:30:10PM 0 points [-]

Placing a single incident of a "sunk cost" in a larger context, "sunk costs" can serve as a deterrent against abandoning projects.

Sure, but why do you expect people to systematically err in judging when it is time to abandon a project? Unless you have a reason for this, this is buck-passing. ('Why do people need sunk cost as a deterrent? Well, it's because they abandon projects too easily.' 'But why do they abandon projects too easily?' 'Heck I dunno. Same way opium produces sleep maybe, by virtue of a dormitive fallacy.')

This line of thought is why I was looking into hyperbolic discounting, which seems like a perfect candidate for causing that sort of easy-abandonment behavior.
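(To sketch why hyperbolic discounting is such a good candidate: it produces preference reversals that exponential discounting does not. In the toy comparison below - rewards, delays, and discount parameters all invented - both agents weigh a small immediate relief from quitting against a larger payoff from finishing. The exponential discounter prefers finishing at every distance; the hyperbolic discounter prefers quitting while the payoff is still far away, i.e. abandons too easily.)

```python
# Toy comparison of exponential vs. hyperbolic discounting of a project
# payoff. All parameters are hypothetical.

def exponential(value, delay, r=0.05):
    # standard geometric discounting per unit of delay
    return value * (1 - r) ** delay

def hyperbolic(value, delay, k=0.5):
    # classic hyperbolic form: steep near-term devaluation
    return value / (1 + k * delay)

quit_now = 10   # small immediate relief of abandoning the project
finish = 100    # larger payoff for pushing the project through

for days_left in (30, 1):
    prefers_finishing_exp = exponential(finish, days_left) > exponential(quit_now, 0)
    prefers_finishing_hyp = hyperbolic(finish, days_left) > hyperbolic(quit_now, 0)
    print(days_left, prefers_finishing_exp, prefers_finishing_hyp)
```

With 30 days left the hyperbolic agent quits (the distant payoff is discounted below the immediate relief) even though, one day from completion, it would have preferred to finish - the inconsistent abandonment pattern a sunk-cost disposition might be compensating for.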

Pretty much every success story I've ever read is of someone who persisted beyond what you might call "the frustration barrier".

Which doesn't necessarily prove anything; we could just be seeing the winner's curse writ large. To win any auction is easy: you just need to be willing to bid more than anyone else... Persistence beyond 'the frustration barrier' may lead to outcomes like 'I am the Japanese Pog-collecting champion of the world.' Well, OK, but don't tell me that's something I should aspire to as a model of rationality!

Comment author: adamisom 05 February 2012 12:58:08AM 1 point [-]

"Sure, but why do you expect people to systematically err in judging when it is time to abandon a project? Unless you have a reason for this, this is buck-passing."

Because we aren't psychic and can only guess expected payoffs. Why would I hypothesize that we underestimate expected payoffs for persistence rather than the reverse? Two reasons--or assumptions, I suppose. 1. Most skills compound--the better we get, the faster we can get better. And humans are bad at estimating compounded effects, which is why Americans on the whole find themselves surprised at how much their debt has grown. 2. The better you get, the fewer competitors you have, and thus the more valuable your skill is, disproportionate to absolute skill level (a separate compounding effect).

"Persistence beyond 'the frustration barrier' may lead to outcomes like 'I am the Japanese Pog-collecting champion of the world.'" Yes, but the activity one persists in/with is a completely separate issue, so I feel you can just assume 'for activities that reasonably seem likely to yield large benefit'.

On a separate note, the sunk cost fallacy may not be a fallacy because it fails to take into account the social stigma of leaving projects incomplete versus completing them.

Oh, sure, if you're extra careful, you would take that into account in your utility function. You can always define your utility function to include everything relevant, but in real life estimations of utility, some things just don't occur to us.

I mean, consider morality. It's so easy to say that moral rules have plenty of exceptions and so arrive at a decision that breaks one or more of these rules (and not for simple reason of internal inconsistency). But this may be bad overall for society. You might arrive at a local maximum of overall good, but a global maximum would require strict adherence to moral rules. I believe this is the common "objection" to utilitarianism and why hardly anyone (other than a LWer) professes to be utilitarian. Because how we actually think of utility functions doesn't include the nuances that a complete function would.

Comment author: gwern 05 February 2012 01:37:04AM 0 points [-]
  1. Most skills compound--the better we get, the faster we can get better. And humans are bad at estimating compounded effects, which is why Americans on the whole find themselves surprised at how much their debt has grown. 2. The better you get, the fewer competitors you have, and thus the more valuable your skill is, disproportionate to absolute skill level (a separate compounding effect).

The first is not true at all; graphs of expertise follow what looks like logarithmic curves, because it's a lot easier to master the basics than to become an expert. (Question: did Kasparov's chess skill increase faster from novice to master status, or from grandmaster to world champion?) #2 may be true, but everyone can see that effect so I don't see how that could possibly cause systematic underestimation and compensating sunk cost bias.

On a separate note, the sunk cost fallacy may not be a fallacy because it fails to take into account the social stigma of leaving projects incomplete versus completing them.

Mentioned in essay.

I believe this is the common "objection" to utilitarianism and why hardly anyone (other than a LWer) professes to be utilitarian. Because how we actually think of utility functions doesn't include the nuances that a complete function would.

One objection, and why variants like rule utilitarianism exist and act utilitarians emphasize prudence since we are bounded rational agents and not logical omniscient utility maximizers.

Comment author: adamisom 05 February 2012 05:30:11PM 0 points [-]

Thanks

Comment author: PhilGoetz 27 February 2012 11:07:08PM 0 points [-]

I'm impressed with the thoroughness that went into this review, and with its objectivity and lack of premature commitment to an answer.

Comment author: lessdazed 11 February 2012 12:34:06AM -2 points [-]

Is the sunk cost fallacy a fallacy?

I ask myself about many statements: would this have the same meaning if the word "really" were inserted? As far as my imagination can project, any sentence that can have "really" inserted into it without changing the sentence's meaning is at least somewhat a wrong question, one based on an unnatural category or an argument by definition.

If a tree falls in the forest, does it make a sound? --> If a tree falls in the forest, does it really make a sound?

Is Terry Schiavo alive? --> Is Terry Schiavo really alive?

Is the sunk cost fallacy a fallacy? --> Is the sunk cost fallacy really a fallacy?

Comment author: DSimon 15 February 2012 07:40:04PM 3 points [-]

As far as I can tell you can do that with any sentence.

Comment author: gwern 15 February 2012 08:01:53PM 4 points [-]

Can you really do that with any sentence?

Comment author: Jiro 11 November 2013 09:23:33PM *  -1 points [-]

"Really" in this context means that an answer has already been provided by someone but you object to the rationale given for this provided answer, particularly because it's too shallow. In other words, it's not a description of the problem the question asks you to solve, it's a description of the context in which the problem is to be solved. So the fact that it can be done with any sentence doesn't mean that it provides no information, just like "Like I was discussing with Joe last week, is the sunk cost fallacy a fallacy?" doesn't provide no information.

Comment author: [deleted] 15 February 2012 08:37:40PM 2 points [-]

Did you really mean “that can have” rather than “that can't have”?

Comment author: thomblake 15 February 2012 08:13:22PM 0 points [-]

Do you really ask yourself that about many statements?

Would this really have the same meaning if the word "really" were inserted?

Is any sentence that can have "really" inserted into it without changing the sentence's meaning really at least somewhat a wrong question, one based on an unnatural category or an argument by definition?