
Readiness Heuristics

Post author: jimrandomh 15 June 2009 01:53AM

Followup to: How Much Thought

A trolley is hurtling towards three people. It will kill them, unless you pull a lever that diverts it onto a different track. However, if you do this, it will hit a small child and kill her. Do you pull the lever, and kill a child, or do nothing, and let three adults die? This question is used to test moral systems and theories; an answer reveals how you value lives and culpability. Or at least, it's supposed to. It's hard to get a straight answer, because everyone wants to take a third option. Why waste time thinking about whose life is more valuable, when you could be looking for a way to save everyone?

In philosophy, decisions are hardened by saying that there are no other options. The real world doesn't work that way. Every decision has an implied extra option: don't decide. Instead, put it off, gather more information, ask a friend, or think more. You might come up with new information that affects the decision or a new option that's better than the old ones. It could be that there is nothing to find, but it takes a lot of thought and investigation to be sure. Or you could find the perfect solution, if only you wait a few more seconds before deciding.

We can't think about both a decision and a meta-decision at the same time, so we have a set of readiness heuristics to tell us whether we're ready to call our current-best option a final decision. Normal heuristics determine what we decide; if they go awry, we choose poorly. The readiness heuristics determine when and whether we decide. If they go awry, we choose hastily or not at all. Broken readiness heuristics cause decision paralysis, writer's block, and procrastination.


Given a set of known options, decision theory tells us that we can crunch some numbers to get an expected outcome value for each choice, and choose whichever gives the best expectation. We can treat putting off the decision as an extra option, and estimate its utility like we do for the other known options. However, it is so different from the other options that it makes more sense to treat it separately. Instead, we split every decision into two: Decide now? And decide what? Formally, you should put off making a decision if the probability of changing your mind times the expected improvement from doing so is greater than the cost of indecision; that is, if

    P(change) * E(improvement) > E(indecision cost)

To decide how long to run our decision-making processes, we need to consider both the decision itself and the state of our decision-making process. We have almost as many built-in heuristics for this meta-decision as we do for actual decisions. However, note that this is a slight oversimplification; by talking about the probability of changing your mind to a different decision and the expected improvement from doing so, we presuppose that a best candidate has already been selected, which means putting the main decision strictly before the meta-decision of whether to finalize it. It's not possible to finalize a decision without selecting a best candidate, but we may decide a decision is urgent or inconsequential enough to choose a random option.
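The threshold above can be made concrete. What follows is a minimal sketch, not anything from the original post: the function name, the numbers, and the per-round cost model are all hypothetical, chosen only to illustrate how the inequality separates "keep deliberating" from "finalize now".

```python
def should_keep_deliberating(p_change, e_improvement, e_indecision_cost):
    """Return True if the decision should be put off.

    Implements the post's rule: delay finalizing exactly when
    P(change) * E(improvement) > E(indecision cost).
    """
    return p_change * e_improvement > e_indecision_cost

# Hypothetical example: a 20% chance of finding an option worth 10 more
# units of utility, weighed against a waiting cost of 1 unit per round.
print(should_keep_deliberating(0.2, 10.0, 1.0))   # expected gain 2.0 > cost 1.0: keep thinking
print(should_keep_deliberating(0.05, 10.0, 1.0))  # expected gain 0.5 < cost 1.0: decide now
```

Note that as deliberation continues, P(change) typically falls (the easy insights get found) while the accumulated indecision cost rises, so the inequality eventually flips and the decision gets finalized; a broken readiness heuristic corresponds to misestimating one of these three quantities.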


Indecision cost is normally equal to the time spent, so it can be treated as a constant, except when something is likely to happen soon. Our heuristic representation of 'something is likely to happen soon' is urgency. Failing to finalize a decision normally means doing nothing, which would be very bad if the decision is what to do about an approaching tiger, or what to write in a paper that's coming due. The urgency heuristic is good at dealing with tigers, but bad at dealing with papers. Fortunately, we can hijack the urgency heuristic with thoughts and stimuli that we associate with urgency, such as counting down.

Our brain's estimate of P(change), the probability that we'll think of or find something that makes us change our mind, comes out as bewilderment. We feel bewildered when a model doesn't fit in our working memory, or leaves important questions unanswered, indicating a high probability of available simplifications, confusion, and missed distinctions, all of which suggest that it's not yet time to decide. However, this heuristic keeps us from reaching any decision when we're given too many options. For example, in a study by Sheena Iyengar and Mark Lepper, shoppers were presented with samples of 6 or 24 flavors of jam; of the shoppers who saw 6 flavors, 30% later bought one, while only 3% of the shoppers who saw 24 flavors did. Many people take an extremely long time to order in restaurants, for much the same reason; and if you ever start considering the pros, cons and relative priorities of items on a lengthy todo list, then this heuristic will keep you from actually doing any of them for some time.

To estimate E(improvement), the amount there is to gain by finding a new option or angle, we look at how good the currently available options are. The more readily we can generate objections to the current best option, the more room there is for improvement, and the more likely it is that we have something to gain by resolving those objections. The affect heuristic stands in for the amount of room for improvement; negative affect yields indecision, positive affect yields decisiveness. Unlike bewilderment, however, negative affect doesn't go away with time, and this can lead to hangups. First, when all available options are genuinely bad, it becomes hard to finalize a decision. Even when the options are okay, the halo effect can contaminate our intuitive judgment; negative feelings towards the decision itself, or anything related to it, can hijack the mechanisms meant to make us keep looking for better options.


So far, we've considered readiness heuristics in the context of decisions with multiple options, where there is some chance of finding a hidden option or decision-changing insight. For many decisions, like whether to start working on an assignment, there are only two options: the apparent best option (start now) and the default option (procrastinate). In this case, it is tempting to say that the readiness heuristics ought not to apply, since there's no possible benefit to waiting. But this is not quite true; it is possible, for example, that later events might render the assignment moot, or change its parameters. In any case, regardless of whether or not the readiness heuristics ought to apply, we can pretty easily observe that they do apply.

One consequence of this is that negative affect towards a task, or anything related to it, not only induces procrastination but ought to induce procrastination. (This connection seemed so counterintuitive and so often harmful that I had trouble accepting that it could evolve without a deep theory to explain it. The connection between affect and procrastination, and how to modify the affect heuristic's output, is the central theme of PJ Eby's writings.)

Heuristics don't just affect what we decide, but which decisions we make at all, and how long it takes us to make them. There are, almost certainly, not only more heuristics, but more heuristic types, specialized to particular kinds of decisions and meta-decisions, waiting to be discovered.

Comments (7)

Comment author: RichardKennaway 15 June 2009 03:17:49PM 9 points

In philosophy, decisions are hardened by saying that there are no other options.

Also in war. In Japanese prisoner of war camps in WWII (I have heard) they would take an officer captured along with his men, line the men up, and make the officer choose one. That one, they would shoot. If he refused to choose, they would shoot all of them.

It takes overwhelming force to cut off all other options from a hard problem, but not unattainable force.

Comment author: Yvain 15 June 2009 09:36:50AM *  8 points

Annoyance didn't mention this explicitly in his post about frontal syndrome, but these readiness heuristics are related to the frontal lobe of the brain in some way, and can be damaged with interesting results. Damasio describes a patient with a frontal stroke who lost the ability to terminate the decision-making process. He tells the story (I think I remember this right) of trying to schedule an appointment with this patient: he told the patient that either Tuesday or Thursday would work, and the patient spent the next several minutes listing all the possible reasons why one day might be better than the other and balancing them out. When it became overwhelming, Damasio finally just said "What about Thursday?" and the patient immediately agreed.

Comment author: hrishimittal 15 June 2009 11:35:18AM *  0 points

The True Trolley Dilemma would be where the child is Eliezer Yudkowsky.

Then what would you do?

EDIT: Sorry if that sounds trollish, but I meant it as a serious question.

Comment author: AndrewKemendo 16 June 2009 02:44:27AM *  6 points

The Yudkowsky worship is getting pretty thick around here:


Let's not turn this into a fandom.

Comment author: SoullessAutomaton 15 June 2009 09:23:25PM 3 points

Perhaps you should clarify what angle you're trying to get at with this question.

I expect you're raising some version of the "do you value some lives more than others" issue. There are likely at least some people here who would pick Yudkowsky over three unknown people, based on a rational evaluation of expected utility of continued existence. The same issue could be presented by replacing the child with any other person who is expected to have a large positive contribution to the world, such as a promising young surgeon who could potentially save many more than three lives over the course of his career.

Or did you have something else in mind?

Comment author: hrishimittal 15 June 2009 09:32:52PM 0 points

Yes that's how I meant it.

Comment author: robzahra 16 June 2009 04:42:15PM 1 point

Shutting up and multiplying, the answer is clearly to save Eliezer... and to do so versus a lot more people than just three... the question is more interesting if you ask people what n (probably greater than 3) is their cutoff point.