Summary

How should a rational agent handle the Sunk Cost Dilemma?

Introduction

You have a goal, and set out to achieve it.  Step by step, iteration by iteration, you make steady progress towards completion - but never actually get any closer.  You're deliberately not engaging in the sunk cost fallacy - at no point does the perceived cost of completion get higher.  But at each step, you discover another step you didn't originally anticipate, and had no priors for anticipating.

You're rational.  You know you shouldn't count sunk costs in the total cost of the project.  But you're now in for twice as much effort as you would have originally invested; you've done everything you originally thought you'd need to do, yet you have just as much work ahead of you as when you started.

Worse, each additional step is novel; the additional five steps you discovered after completing step 6 didn't add anything to predict the additional twelve steps you added after completing step 19.  And after step 35, when you discovered another step, you updated your priors to account for how far off your original estimate was - and the project is still worth completing.  Over and over.  All you can conclude is that your original priors were unreliable.  No update to your priors, however, changes the fact that the remaining cost always looks worth paying to complete the project.

You are starting to feel like you are caught in a penny auction for your time.

When do you give up your original goal as a mirage?  At what point do you give up entirely?
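
To see the trap mechanically, here is a minimal sketch (the payoff of 5 and the never-shrinking remaining estimate of 4 are invented for illustration): the forward-looking test correctly ignores sunk costs and passes at every step, yet the project never ends.

```python
# Hypothetical model of the dilemma (all numbers invented): the apparent
# remaining cost never rises above the payoff, so an agent that correctly
# ignores sunk costs never stops.

PROJECT_VALUE = 5       # payoff for finishing
sunk = 0                # effort already spent; correctly ignored below

for step in range(1, 11):
    remaining = 4       # each step reveals new work; the estimate never shrinks
    if remaining < PROJECT_VALUE:   # forward-looking test: passes every time
        sunk += 1                   # ...so we do one more unit of work
        print(f"step {step}: remaining={remaining}, total sunk={sunk}")
    else:
        break
# Ten steps in, we've spent double the original estimate and the finish
# line hasn't moved; nothing inside the loop will ever make it move.
```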

Solutions

The trivial option is to just keep going.  Sometimes this is the only viable strategy: your goal is mandatory, and there are no alternative solutions to consider.  There's no guarantee you'll finish in any finite amount of time, however.

One option is to precommit: set a specific level of effort you're willing to expend before stopping progress, and possibly starting over from scratch if relevant.  When bugfixing someone else's code on a deadline, my personal policy is to set aside enough time at the end of the deadline to write the code from scratch and debug that (the code I write is not nearly as buggy as the code I'm usually working on).  Commitment of this sort can work in situations where there are alternative solutions, or where the goal is disposable.

Another option is to include sunk costs, but discounted; updating your priors is one way of doing this, though it isn't guaranteed to navigate you through the dilemma successfully.  Both policies are sketched below.
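
A minimal sketch of both options, under invented assumptions (the effort budget of 12, the 20% discount on sunk costs, and the cost figures are all made up for illustration): the agent stops when either its precommitted budget is exhausted or the discounted evaluation turns negative.

```python
# Hypothetical sketch of both policies above (all numbers invented):
# (1) precommit to an effort budget; (2) count sunk costs, but discounted.

PROJECT_VALUE = 5
EFFORT_BUDGET = 12      # precommitment: chosen before any work is done
DISCOUNT = 0.2          # fraction of sunk costs still counted against the project

spent = 0
while True:
    remaining_estimate = 4                      # the estimate that never shrinks
    value = PROJECT_VALUE - remaining_estimate - DISCOUNT * spent
    if spent >= EFFORT_BUDGET:
        print(f"effort budget hit at {spent} units: stop and re-plan")
        break
    if value <= 0:
        print(f"discounted evaluation went negative at {spent} units: stop")
        break
    spent += 1
# With these numbers the discounting rule fires first, at 5 units of work;
# the budget is a backstop in case it never does.
```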

Unfortunately, there isn't a general solution.  If there were, IT would be a very different industry.

Summary

The Sunk Cost Fallacy is best described as a frequently faulty heuristic.  There are game-theoretic ways of extracting value from those who follow a strict policy of never counting sunk costs, and they happen all the time in IT - frequent requirement changes to fixed-cost projects are a good example (one which can cut both ways, depending on how the contract and requirements are structured).  It is best to always have an exit policy prepared.

Related Less Wrong Post Links

http://lesswrong.com/lw/at/sunk_cost_fallacy/ - A description of the Sunk Cost Fallacy

http://lesswrong.com/lw/9si/is_sunk_cost_fallacy_a_fallacy/ - Arguments that the Sunk Cost Fallacy may be misrepresented

http://lesswrong.com/lw/9jy/sunk_costs_fallacy_fallacy/ - The Sunk Cost Fallacy can be easily used to rationalize giving up

ETA: Post Mortem

Since somebody has figured out the game now, an explanation: Everybody who spent time writing a comment insisting you -could- get the calculations correct, and the imaginary calculations were simply incorrect?  I mugged you.  The problem is in doing the calculations -instead of- trying to figure out what was actually going on.  You forgot there was another agent in the system with different objectives from your own.  Here, I mugged you for a few seconds or maybe minutes of your time; in real life, that would be hours, weeks, months, or your money, as you keep assuming that it's your own mistake.

Maybe it's a buggy open-source library that has a bug-free proprietary version you pay for - it gets you in the door, then charges you money once it's more expensive to back out than to continue.  Maybe it's somebody who silently and continually moves work to your side of the fence on a collaborative project, once it's more expensive to back out than to continue.  Not counting all your costs opens you up to exploitative behaviors which add costs at the back end.

In this case I was able to mug you in part because you didn't like the hypothetical, and fought it.  Fighting the hypothetical will always reveal something about yourself - in this case, fighting the hypothetical revealed that you were exploitable.

In real life I'd be able to mug you because you'd assume someone had fallen prone to the Planning Fallacy, as you assumed must have happened in the hypothetical.  In the case of the hypothetical, an evil god - me - was deliberately manipulating events so that the project would never be completed (Notice what role the -author- of that hypothetical played in that hypothetical, and what role -you- played?).  In real life, you don't need evil gods - just other people who see you as an exploitable resource, and will keep mugging you until you catch on to what they're doing.

gjm

I think the objections raised by (e.g.) Unknowns, Lumifer and shminux are basically correct but they aren't (I think) phrased so that they exactly match the scenario OrphanWilde is proposing. Let me try to reduce the impedance mismatch a little.

OrphanWilde's scenario -- where your schedule keeps slipping but even with perfectly rational updating continuing always looks like a win -- is possible. But: it's really weird and I don't think it actually occurs in real life; that is, in reality, the scenarios that most resemble OrphanWilde's are ones in which the updating isn't perfectly rational and you would do well to cut your losses and reflect on your cognitive errors.

What would a real OrphanWilde scenario look like? Something like this.

  • You begin a project to (let's say) build a bridge. You think it should be done in six months.
  • After four months of work, it's clear that you underestimated and it's now going to take longer. Your new estimate is another six months.
  • After another four months, it's now looking like it will take only three months more -- so you're still going to be late, but not very. You no longer trust your prediction abilities, though (you were wrong the last two times), so you adjust your estimate: another six months.
  • After another four months, you've slipped further. Your error bars are getting large now, but you get a message from God telling you it'll definitely be done in another six months.
  • After another four months, you've lost your faith and now there's probably nothing that could (rationally) convince you to be confident of completion in 6 months. But now you get information indicating that completing the bridge is more valuable than you'd thought. So even though it's likely to be 9 months now, it's still worth it because extra traffic from the new stadium being built on the other side makes the bridge more important.
  • After another six months, you're wearily conceding that you've got very little idea how long the bridge is going to take to complete. Maybe a year? But now they're planning a whole new town on the other side of the bridge and you really need it.
  • After another nine months, it seems like it might be better just to tell the townspeople to swim if they want to get across. But now you're receiving credible terrorist threats saying that if you cancel the bridge project the Bad Guys are going to blow up half the city. Better carry on, I guess...

What we need here is constant escalation of evidence for timely completion (despite the contrary evidence of the slippage so far) and/or of expected value of completing the project even if it's really late -- perhaps, after enough slippage, this needs to be escalating evidence of the value of pursuing the project even if it's never finished. One can keep that up for a while, but you can see how the escalation had to get more and more extreme.

OrphanWilde, do you envisage any scenario in which a project keeps (rationally) looking worthwhile despite lots of repeated slippages without this sort of drastic escalation? If so, how? If not, isn't this going to be rare enough that we can safely ignore it in favour of the much commoner scenarios where the project keeps looking worthwhile because we're not looking at it rationally?

[anonymous]

The 'even if never finished' part resembles childrearing :)

gjm

A nice example of a task whose value (1) is partly attached to the work rather than its goal and (2) doesn't depend on completing anything.

But at each step, you discover another step you didn't originally anticipate

That is the core of your problem.  Since it's happening repeatedly, you should stop assuming that you know the distance to completion and assign a probability distribution to the number of steps (or time) needed to get the project done, likely with a long right tail.

If you constantly encounter the unexpected, you should acknowledge that your expectations are faulty and start to expect the unexpected.
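
One way to cash that out, under invented assumptions (a uniform Beta(1,1) prior and a made-up observation sequence): instead of estimating steps-to-completion directly, estimate the probability p that completing a step reveals a new one.  If each step spawns p new steps on average, one apparent remaining step costs 1/(1-p) steps in expectation - a quantity with exactly the long right tail described above as p drifts toward 1.

```python
# Sketch of "expecting the unexpected" with a Beta-Bernoulli update
# (prior and observations invented). p = chance a step reveals a new step.

alpha, beta = 1.0, 1.0             # uniform Beta(1, 1) prior on p
observations = [1, 1, 0, 1, 1, 1]  # 1 = completed step revealed a new step

for revealed in observations:
    alpha += revealed
    beta += 1 - revealed
    p = alpha / (alpha + beta)       # posterior mean discovery rate
    expected_per_step = 1 / (1 - p)  # geometric series: 1 + p + p^2 + ...
    print(f"p≈{p:.2f}: one 'remaining step' costs ≈{expected_per_step:.1f} steps")
```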

gjm

I don't understand the point of this.

I mean, I get that OrphanWilde is feeling very smug at having been able to "mug" some other people in the discussion here, and that this mugging is meant to be analogous both to the situation (deliberately incoherently) described in the article and to things that happen in real life.

But ... so what? Are we meant to be startled by the revelation that sometimes people exploit other people? Hardly.

And what seems to be one of the points you say you're trying to make -- that when this happens we are liable to assume it's our own fault rather than the other person's malice -- seems to me to be very ill supported by anything that's happened here. (1) I don't see other people assuming that the confusion here is their own fault, I see them trying to be tactful about the fact that it's yours. (2) I would expect posters here to be more willing to give benefit of the doubt than, e.g., in a business situation where they and the other party are literally competing for money. (3) You say "Here, I mugged you for a few seconds or maybe minutes [...] in real life, that would be hours, weeks, months" -- but I see no reason to expect people to be orders of magnitude slower in "real life" than here.

Further, you didn't in fact exploit anyone because (unless you're really malicious and actually enjoy seeing people waste time to no purpose, in which case fuck you) you didn't gain anything. You (at most) just wasted some people's time. Congratulations, but it's not like that's terribly hard to do. And, perhaps, you just made a bunch of people that little bit less inclined to be helpful and understanding to some confused-seeming guy on Less Wrong in the future.

I'm downvoting your post here and your replies in the comments, and would encourage other readers to do likewise. Making Less Wrong incrementally less useful in order to be able to preen about how you exploited people is not behaviour I wish to encourage here, and I see no actual insight (either overtly expressed or implicit) that counterbalances your act of defection.

[EDITED to add: OH HA HA I JUST MUGGED YOU AREN'T I CLEVER]

at having been able to "mug" some other people in the discussion here

The usual verb is "to troll".

gjm

I know, but OrphanWilde chose "mug" and I played along.

...and I played along.

Clearly, the lesson didn't take :-P

Also, in practice this only happens when someone is procrastinating and the supposedly additional steps are just ways of avoiding the more difficult steps, so a reasonable estimate of the remaining time to completion is that the person is simply not going to complete the task, ever.

This is the Planning Fallacy, for which the remedy is the Outside View: ask how similar projects have turned out in the past. That will likely be more accurate than imagining how this particular project will go.

I have heard of (but have no personal experience with) a rule of thumb for software developers quoting a number of days' work for a project: imagine the longest it could possibly take, then multiply by three.

But perhaps you did not take an outside view at the start, and got into a project that is multiplying like a hydra?  Then take the outside view now, avoid the Sunk Cost fallacy, and ask: is the difference in payoff between completing and abandoning the project worth the difference in costs, now realistically estimated, that will be incurred from here on?
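
As a calculation, the outside view might look like this (the reference-class figures are invented): discard your mental model of this particular project and scale the inside-view estimate by the average overrun of similar past projects.

```python
# Outside-view sketch (reference class invented): correct the inside-view
# estimate by the historical actual/estimated overrun of similar projects.

past_projects = [(10, 25), (20, 60), (5, 12), (30, 95)]  # (estimated, actual) days

overruns = [actual / estimated for estimated, actual in past_projects]
factor = sum(overruns) / len(overruns)       # mean overrun, ≈2.8x here

inside_view_days = 15                        # from imagining the work itself
outside_view_days = inside_view_days * factor
print(f"overrun factor ≈{factor:.1f}x -> plan for ≈{outside_view_days:.0f} days")
# Compare the rule of thumb above: longest possible time, multiplied by three.
```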

This was an interesting article. I've been involved in software consulting in the past, and this sort of mugging does sometimes occur in fixed-price projects. I think that there are several take-aways from this:

  • fixed-price projects are a lot higher risk (to the service provider) than time-and-materials projects.  This is true even if the service provider is good at estimation and applies an appropriate buffer in the schedule/pricing.

  • fixed-price projects require a skilled project manager who can recognize and manage scope creep (intentional and otherwise)

  • fixed-price projects require diligence up-front in crafting an unambiguous statement of work or contract

  • one-person or small project teams without a dedicated project manager should think twice before accepting fixed-price assignments

The last bullet is worth emphasizing; some technical people, wishing to stay focused on the work, will acquiesce to scope creep (particularly if introduced incrementally) to avoid getting involved in time-consuming and potentially adversarial discussions with the client. This can make manager-less teams particularly vulnerable to this type of mugging. An experienced project manager can often mitigate this danger.

I've seen the mugging go the other direction as well on fixed-cost projects, particularly in extremely large contracts; companies put low bids in, then charge exorbitant rates for even the smallest changes to the requirements (and there are always changes).  And with non-fixed-price projects, the mugging in the other direction is even easier.  People in IT don't pay nearly enough attention to reputation.

But yeah. It's very easy for individuals and small companies to get caught in this, especially if, say, your mortgage payment is due soon.

Yes, it can happen in the other direction too.

Shmi

But at each step, you discover another step you didn't originally anticipate, and had no priors for anticipating.

If this is the case, your issue is unrelated to sunk cost; it is the Planning Fallacy.  You've failed to perform the proper risk analysis and mitigation.  The excuse "had no priors for anticipating" is only valid for black swan events, not for the run-of-the-mill problems every project has.

So, when faced with the situation you describe, one should stop barking up the wrong tree and do a pre-mortem.

Hopefully, I'm not just feeding the troll, but: just what exactly do you think "the sunk cost fallacy" is?  Because it appears to me that you believe that it refers to the practice of adding expenses already paid to future expected expenses in a cost-benefit analysis, when in fact it refers to the opposite: subtracting expenses already paid from future expected expenses.

The Sunk Cost Fallacy is the fallacy of considering sunk costs (expenses already paid) when calculating expected returns.  I/e, if I've already spent $100, and my expected returns are $50, then it would be the sunk cost fallacy to say it is no longer worth continuing, since my expected return is negative - I should instead, to avoid the fallacy, only consider the -remaining- expenses to get that return.

Which is to say, to avoid the fallacy, sunk costs must be ignored.
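
Spelled out with the figures above (the $30 remaining cost is an invented assumption):

```python
# The $100/$50 example above; the $30 remaining cost is invented.
sunk = 100       # already spent, unrecoverable
remaining = 30   # hypothetical cost still required to finish
payoff = 50      # expected return on completion

with_fallacy = payoff - (sunk + remaining)   # -80: says "abandon" (wrong test)
without_fallacy = payoff - remaining         # +20: says "finish" (correct test)
print(with_fallacy, without_fallacy)
```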

The post is about the scenario in which prior-cost insensitivity (avoiding the sunk cost fallacy) opens you up to getting "mugged" - a situation referred to as the Sunk Cost Dilemma, about which surprisingly little has been written; one hostile agent can extract additional value from another, sunk-cost-insensitive agent by adding additional costs at the back end.

(There was no "trolling".  Indeed, I wasn't even tricking anybody - my "mugging" of other people was conceptual, referring to the fact that any "victim" agent who continued to reason the way the people here were reasoning would keep getting mugged in a real-life analogue, again and again, for as long as they refused to update their approach or understanding of the problem.)

As I said, that is not what the sunk cost fallacy is. If you've spent $100, and your expected net returns are -$50, then the sunk cost fallacy would be to say "If I stop now, that $100 will be wasted. Therefore, I should keep going so that my $100 won't be wasted."

While it is a fallacy to just add sunk costs to future costs, it's not a fallacy to take them into account, as your scenario illustrates. I don't know of anyone who recommends completely ignoring sunk costs; as far as I can tell you are arguing against a straw man in that sense.

Also, it's "i.e.", rather than "i/e".

Taking them into account is exactly what the sunk cost fallacy is: including sunk costs with prospective costs for the purposes of making decisions.

I think you confuse the most commonly used examples of the sunk cost fallacy with the sunk cost fallacy itself.

(And it would be e.g. there, strictly speaking.)

ETA: So if I'm arguing against a straw man, it's because everybody is silently ignoring what the fallacy actually refers to in favor of something related to the fallacy but not the fallacy entire.

If you think that everyone is using a term for something other than what it refers to, then you don't understand how language works. And a discussion of labels isn't really relevant to the question of whether it's a straw man. Also, your example shows that what you're referring to as a sunk cost fallacy is not, in fact, a fallacy.

Wait. You paid a karma toll to comment on one of my most unpopular posts yet to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"? Well, I guess we'd better redefine evolution to mean "Spontaneous order arising out of chaos", because apparently that's how we're doing things now.

Let's pull up the definition you offered.

in fact it refers to the opposite: subtracting expenses already paid from future expected expenses.

You're not even getting the -populist- definition of the fallacy right. Your version, as-written, implies that the cost for a movie ticket to a movie I later decide I don't want to see is -negative- the cost of that ticket. See, I paid $5, and I'm not paying anything else later, so 0 - 5 = -5, a negative cost is a positive inlay, which means: Yay, free money?

Why didn't I bring that up before? Because I'm not here to score points in an argument. Why do I bring it up now? Because I'm a firm believer in tit-for-tat - and you -do- seem to be here to score points in an argument, a trait which I think is overemphasized and over-rewarded on Less Wrong. I can't fix that, but I can express my disdain for the behavior: Your games of trivial social dominance bore me.

I believe it's your turn. You're slated to deny that you're playing any such games. Since I've called your turn, I've changed it, of course; it's a chaotic system, after all. I believe the next standard response is to insult me. Once I've called that, usually -my- turn is to reiterate that it's a game of social dominance, and that this entire thing is what monkeys do, and then to say that by calling attention to it, I've left you in confusion as to what game you're even supposed to be playing against me.

We could, of course, skip -all- of that, straight to: What exactly do you actually want out of this conversation? To impart knowledge? To receive knowledge? Or do you merely seek dominance?

You paid a karma toll to comment on one of my most unpopular posts yet

My understanding is that the karma toll is charged only when responding to downvoted posts within a thread, not when responding to the OP.

to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"?

I didn't say that the only correct definition is the most popular one; you are shading my position to make it more vulnerable to attack. My position is merely that if, as you yourself said, "everybody" uses a different definition, then that is the definition. You said "everybody is silently ignoring what the fallacy actually refers to". But what a term "refers to" is, by definition, what people mean when they say it. The literal meaning (and I don't take kindly to people engaging in wild hyperbole and then accusing me of being hyperliteral when I take them at their word, in case you're thinking of trying that gambit) of your post is that in the entire world, you are the only person who knows the "true meaning" of the phrase. That's absurd. At the very least, your use is nonstandard, and you should acknowledge that.

Now, as to "moving the goalposts", the thing that I suspected you of not knowing what you were talking about was knowing the standard meaning of the phrase "sunk cost fallacy", so the goalposts are pretty much where they were in the beginning, with the only difference being that I have gone from strongly suspecting that you don't know what you're talking about to being pretty much certain.

Well, I guess we'd better redefine evolution to mean "Spontaneous order arising out of chaos", because apparently that's how we're doing things now.

I don't know of any mainstream references defining evolution that way. If you see a parallel between these two cases, you should explain what it is.

You're not even getting the -populist- definition of the fallacy right.

Ideally, if you are going to make claims, you would actually explain what basis you see for those claims.

Your version, as-written, implies that the cost for a movie ticket to a movie I later decide I don't want to see is -negative- the cost of that ticket. See, I paid $5, and I'm not paying anything else later, so 0 - 5 = -5, a negative cost is a positive inlay, which means: Yay, free money?

Presumably, your line of thought is that what you just presented is absurd, and therefore it must be wrong.  I have two issues with that.  The first is that you didn't actually present what your thinking was.  That shows a lack of rigorous thought, as you failed to make your argument explicit.  This leaves me to articulate both your argument and mine, which is rather rude.  The second problem is that your syllogism "This is absurd, therefore it is false" is severely flawed.  It's called the Sunk Cost Fallacy.  The fact that it is illogical doesn't disqualify it from being a fallacy; being illogical is what makes it a fallacy.

Typical thinking is, indeed, that if one has a ticket for X that is priced at $5, then doing X is worth $5. For the typical mind, failing to do X would mean immediately realizing a $5 loss, while doing X would avoid realizing that loss (at least, not immediately). Therefore, when contemplating X, the $5 is considered as being positive, with respect to not doing X (that is, doing X is valued higher than not doing X, and the sunk cost is the cause of the differential).

Why didn't I bring that up before? Because I'm not here to score points in an argument.

And if you were here to score points, you would think that "You just described X as being a fallacy, and yet X doesn't make sense. Hah! Got you there!" would be a good way of doing so? I am quite befuddled.

Why do I bring it up now? Because I'm a firm believer in tit-for-tat - and you -do- seem to be here to score points in an argument

I sincerely believe that you are using the phrase "sunk cost fallacy" in a way that is contrary to the standard usage, and that your usage impedes communication.  I attempted to inform you of my concerns, and you responded by accusing me of simply trying to "score points".  I do not think that I have been particularly rude, and absent prioritizing your feelings over clear communication, I don't see how I could avoid you accusing me of playing "games of trivial social dominance".

"Once I've called that, usually -my- turn is to reiterate that it's a game of social dominance, and that this entire thing is what monkeys do"

Perceiving an assertion of error as being a dominance display is indeed something that the primate brain engages in. Such discussions cannot help but activate our social brains, but I don't think that means that we should avoid ever expressing disagreement.

We could, of course, skip -all- of that, straight to: What exactly do you actually want out of this conversation? To impart knowledge? To receive knowledge? Or do you merely seek dominance?

My immediate motive is to impart knowledge. I suppose if one follows the causal chain down, it's quite possible that humans' desire to impart knowledge stems from our evolution as social beings, but that strikes me as overly reductionist.

My understanding is that the karma toll is charged only when responding to downvoted posts within a thread, not when responding to the OP.

You could be correct there.

I didn't say that the only correct definition is the most popular one; you are shading my position to make it more vulnerable to attack. My position is merely that if, as you yourself said, "everybody" uses a different definition, then that is the definition. You said "everybody is silently ignoring what the fallacy actually refers to". But what a term "refers to" is, by definition, what people mean when they say it. The literal meaning (and I don't take kindly to people engaging in wild hyperbole and then accusing me of being hyperliteral when I take them at their word, in case you're thinking of trying that gambit) of your post is that in the entire world, you are the only person who knows the "true meaning" of the phrase. That's absurd. At the very least, your use is nonstandard, and you should acknowledge that.

There's a conditional in the sentence that specifies "everybody". "So if I'm arguing against a straw man..."

I don't think I -am- arguing against a straw man. As I wrote directly above that, I think your understanding is drawn entirely from the examples you've seen, rather than the definition, as written on various sites - you could try Wikipedia, if you like, but it's what I checked to verify that the definition I used was correct when you suggested it wasn't. I will note that the "Sunk Cost Dilemma" is not my own invention, and was noted as a potential issue with the fallacy as it pertains to game theory long before I wrote this post - and, indeed, shows up in the aforementioned Wikipedia. I can't actually hunt down the referenced paper, granted, so whether or not the author did a good job elaborating the problem is a matter I'm uninformed about.

Presumably, your line of thought is that what you just presented is absurd, and therefore it must be wrong.  I have two issues with that.  The first is that you didn't actually present what your thinking was.  That shows a lack of rigorous thought, as you failed to make your argument explicit.  This leaves me to articulate both your argument and mine, which is rather rude.  The second problem is that your syllogism "This is absurd, therefore it is false" is severely flawed.  It's called the Sunk Cost Fallacy.  The fact that it is illogical doesn't disqualify it from being a fallacy; being illogical is what makes it a fallacy.

"Illogical" and "Absurd" are distinct, which is what permits common fallacies in the first place.

I sincerely believe that you are using the phrase "sunk cost fallacy" in a way that is contrary to the standard usage, and that your usage impedes communication.  I attempted to inform you of my concerns, and you responded by accusing me of simply trying to "score points".  I do not think that I have been particularly rude, and absent prioritizing your feelings over clear communication, I don't see how I could avoid you accusing me of playing "games of trivial social dominance".

Are you attempting to dissect what went wrong with this post?

Well, initially, the fact that everybody fought the hypothetical. That was not unexpected. Indeed, if I include a hypothetical, odds are it anticipates being fought.

It was still positive karma at that point, albeit modest.

The negative karma came about because I built the post in such a way as to utilize the tendency on Less Wrong to fight hypotheticals, and then I called them out on it in a very rude and condescending way, and also because at least one individual came to the conclusion that I was actively attempting to make people less rational.  Shrug.  It's not something I'm terribly concerned with, on account that, in spite of the way it went, I'm willing to bet those who participated learned more from this post than they otherwise would have.

Perceiving an assertion of error as being a dominance display is indeed something that the primate brain engages in. Such discussions cannot help but activate our social brains, but I don't think that means that we should avoid ever expressing disagreement.

I'll merely note that your behavior changed. You shifted from a hit-and-run style of implication to over-specific elaboration and in-depth responses. This post appears designed to prove to yourself that your disagreement has a rational basis. Does it?

My immediate motive is to impart knowledge. I suppose if one follows the causal chain down, it's quite possible that humans' desire to impart knowledge stems from our evolution as social beings, but that strikes me as overly reductionist.

Case in point.

Let's suppose that is your motive. What knowledge have you imparted? Given that you're concerned that I don't know what it is, where's the correct definition of the Sunk Cost Fallacy, and how does my usage deviate from it? I'd expect to find that somewhere in here in your quest to impart knowledge on me.

Your stated motive doesn't align with your behavior. It still doesn't; you've dressed the same behavior up in nicer clothes, but you're still just scoring points in an argument.

So - and this time I want you to answer to -yourself-, not to me, because I don't matter in this respect - what exactly do you actually want out of this conversation?

Jiro

The negative karma came about because I built the post in such a way as to utilize the tendency on Less Wrong to fight hypotheticals, and then I called them out on it in a very rude and condescending way, and also because at least one individual came to the conclusion that I was actively attempting to make people less rational.  Shrug.  It's not something I'm terribly concerned with, on account that, in spite of the way it went, I'm willing to bet those who participated learned more from this post than they otherwise would have.

Is that "the end justifies the means"?

The means, in this case, don't violate any of my ethics checks, so I don't see any need to justify them, and nobody suggested my ethics in this case were off. The sole accusation of defection was on a misinterpretation of my behavior, that I was trying to make people less rational.

It's more a statement that I think the post was effective for its intended purposes, so I'm not too concerned about re-evaluating my methodology.

I should have separated that out into two paragraphs for clarity, I suppose.

It seems that you are expecting a situation somewhat like this:

Day 1: I expect to be done in 5 days.
Day 2: I expect to be done in 5 days.
Day 10: I expect to be done in 7 days.
Day 20: I expect to be done in 4 days.
Day 30: I expect to be done in 5 days.

Basically, this cannot happen if I am updating rationally. You say, "Worse, each additional step is novel; the additional five steps you discovered after completing step 6 didn't add anything to predict the additional twelve steps you added after completing step 19." But in fact, it does add something: namely that this task that I am trying to accomplish is very long and unpredictable, and the more such steps are added, the longer and more unpredictable I should assume it to be, even in the remaining portion of the task. So by day 30, I should be expecting about another month, not another 5 days. And if I do this, at some point it will become clear that it is not worth finishing the task, at least assuming that it is not simply the process itself that is worth doing.
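
A sketch of the updating described here, with invented numbers (the 1.8x per-slip multiplier is an assumption): the naive inside estimate stays pinned at "five more days", but each observed slip widens the factor a rational updater applies to it, so the rational estimate of remaining time grows.

```python
# Invented numbers throughout: the naive estimate never moves, but each
# slip should widen the multiplier a rational updater applies to it.

naive_remaining = 5     # "5 more days", forever
multiplier = 1.0        # distrust factor, grows with each observed slip

for day, slipped in [(1, False), (10, True), (20, True), (30, True)]:
    if slipped:
        multiplier *= 1.8   # invented per-slip update strength
    rational = naive_remaining * multiplier
    print(f"day {day}: naive {naive_remaining}d, rational ≈{rational:.0f}d")
# By day 30 the rational estimate is ≈29 days - "about another month" -
# and at some point the realistic remaining cost exceeds the payoff.
```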