
The availability heuristic is judging the frequency or probability of an event by the ease with which examples of the event come to mind.

A famous 1978 study by Lichtenstein, Slovic, Fischhoff, Layman, and Combs, "Judged Frequency of Lethal Events", studied errors in quantifying the severity of risks, or judging which of two dangers occurred more frequently.  Subjects thought that accidents caused about as many deaths as disease, and that homicide was a more frequent cause of death than suicide.  Actually, diseases cause about 16 times as many deaths as accidents, and suicide is twice as frequent as homicide.

An obvious hypothesis to account for these skewed beliefs is that murders are more likely to be talked about than suicides - thus, someone is more likely to recall hearing about a murder than hearing about a suicide.  Accidents are more dramatic than diseases - perhaps this makes people more likely to remember, or more likely to recall, an accident.  In 1979, a followup study by Combs and Slovic showed that the skewed probability judgments correlated strongly (.85 and .89) with skewed reporting frequencies in two newspapers.  This doesn't disentangle whether murders are more available to memory because they are more reported-on, or whether newspapers report more on murders because murders are more vivid (hence also more remembered).  But either way, an availability bias is at work.
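To make the statistic concrete, here is a minimal sketch of computing a Pearson correlation between judged death frequencies and newspaper report counts. The numbers are invented placeholders, not data from Combs and Slovic, and the helper function is just the standard formula written out.

```python
import math

# Invented placeholder numbers, NOT data from Combs and Slovic (1979):
# judged deaths per year for several causes, and newspaper story counts
# for the same causes over the same period.
judged_frequency = [500, 12000, 300, 45000, 900]
newspaper_reports = [40, 800, 15, 2500, 120]

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

print(f"r = {pearson_r(judged_frequency, newspaper_reports):.2f}")
```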

Selective reporting is one major source of availability biases.  In the ancestral environment, much of what you knew, you experienced yourself; or you heard it directly from a fellow tribe-member who had seen it.  There was usually at most one layer of selective reporting between you and the event itself.  With today's Internet, you may see reports that have passed through the hands of six bloggers on the way to you - six successive filters.  Compared to our ancestors, we live in a larger world, in which far more happens, and far less of it reaches us - a much stronger selection effect, which can create much larger availability biases.

In real life, you're unlikely to ever meet Bill Gates.  But thanks to selective reporting by the media, you may be tempted to compare your life success to his - and suffer hedonic penalties accordingly.  The objective frequency of Bill Gates is 0.00000000015, but you hear about him much more often.  Conversely, 19% of the planet lives on less than $1/day, and I doubt that one fifth of the blog posts you read are written by them.
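Where that tiny number comes from: one person out of the world's population. A quick sanity check, assuming a world population of roughly 6.7 billion (an approximation for the time of writing):

```python
# Rough sanity check -- assumes a world population of about 6.7 billion,
# the approximate figure around the time this post was written.
world_population = 6.7e9
objective_frequency = 1 / world_population
print(f"{objective_frequency:.1e}")  # ~1.5e-10, i.e. 0.00000000015
```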

Using availability seems to give rise to an absurdity bias; events that have never happened are not recalled, and hence are deemed to have probability zero.  When no flooding has recently occurred (and yet the probabilities are still fairly calculable), people refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value.  Kunreuther et al. (1993) suggest that underreaction to threats of flooding may arise from "the inability of individuals to conceptualize floods that have never occurred... Men on flood plains appear to be very much prisoners of their experience... Recently experienced floods appear to set an upward bound to the size of loss with which managers believe they ought to be concerned."

Burton et al. (1978) report that when dams and levees are built, they reduce the frequency of floods, and thus apparently create a false sense of security, leading to reduced precautions. While building dams decreases the frequency of floods, damage per flood is afterward so much greater that average yearly damage increases.
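That claim is just an expected-value calculation: multiply flood frequency by damage per flood. A toy sketch, with numbers invented purely for illustration rather than taken from Burton et al.:

```python
# Toy expected-value calculation; the numbers are invented for illustration,
# not taken from Burton et al. (1978).
# Before the dam: frequent but small floods. After: rare floods that do far
# more damage, partly because more gets built on the "protected" flood plain.
before = {"floods_per_year": 0.5, "damage_per_flood": 1_000_000}
after = {"floods_per_year": 0.05, "damage_per_flood": 20_000_000}

for label, d in (("before dam", before), ("after dam", after)):
    expected = d["floods_per_year"] * d["damage_per_flood"]
    print(f"{label}: expected yearly damage = ${expected:,.0f}")
# before dam: expected yearly damage = $500,000
# after dam: expected yearly damage = $1,000,000
```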

The wise would extrapolate from a memory of small hazards to the possibility of large hazards.  Instead, past experience of small hazards seems to set a perceived upper bound on risk.  A society well-protected against minor hazards takes no action against major risks, building on flood plains once the regular minor floods are eliminated.  A society subject to regular minor hazards treats those minor hazards as an upper bound on the size of the risks, guarding against regular minor floods but not occasional major floods.

Memory is not always a good guide to probabilities in the past, let alone the future.


Burton, I., Kates, R. and White, G. 1978. Environment as Hazard. New York: Oxford University Press.

Combs, B. and Slovic, P. 1979. Causes of death: Biased newspaper coverage and biased judgments. Journalism Quarterly, 56: 837-843.

Kunreuther, H., Hogarth, R. and Meszaros, J. 1993. Insurer ambiguity and market failure. Journal of Risk and Uncertainty, 7: 71-87.

Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M. and Combs, B. 1978. Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4(6): 551-578.


Hmm. Usually you can get a strong indicator of the probability of future hazards of a given size by using frequentist statistics, e.g. by finding a statistical distribution that seems to constitute a good, simple, and logically reasonable fit (matching the causal structure of the underlying phenomenon). You can, for instance, estimate as I do that the distribution of historical flu risks in particular or epidemic risks in general is heavily weighted towards a few large events, and that the probabilities of events many times larger than the largest historical events can be calculated with useful precision. Much more controversially, I see the distribution of technological innovations as a function of complexity as evidence that China and India are not good candidates for developing molecular nanotech. OTOH, the flooding example with dams gives a counter-example where the useful data from which the distribution could be inferred has been removed.
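A minimal sketch of one way to do this kind of calculation: fit a heavy-tailed distribution to historical event sizes and read off the probability of exceeding a size larger than anything yet observed. The event sizes below are invented, and a careful analysis would fit only the tail (exceedances over a threshold) rather than the whole sample.

```python
# Minimal sketch: fit a heavy-tailed (generalized Pareto) distribution to
# historical event sizes and estimate the chance of an event larger than
# anything yet observed. The sizes below are invented, not real data, and a
# careful analysis would fit exceedances over a threshold, not the raw sample.
import numpy as np
from scipy.stats import genpareto

event_sizes = np.array([0.5, 0.8, 0.9, 1.2, 1.7, 2.1, 3.5, 6.3, 12.0, 45.0])

shape, loc, scale = genpareto.fit(event_sizes, floc=0)  # fix location at 0

threshold = 2 * event_sizes.max()  # twice the largest observed event
p_exceed = genpareto.sf(threshold, shape, loc=loc, scale=scale)
print(f"P(next event > {threshold:.0f}) = {p_exceed:.4f}")
```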

The following does not invalidate the argument in the posting, but:

Subjects thought that accidents caused about as many deaths as disease

I want to eliminate aging and death as much as anyone, but I would say that many deaths from disease in old age should be filed under "old age" rather than "disease." I wonder how the statistics work out if we look at it that way. (Or maybe Lichtenstein et al did so already.)

I'm surprised to hear that dams increase average annual damages. Does the Burton book explain how that works? Is it reduced preparation increasing the effect of the largest events?

Concerning age and death, the more recent links are not working for me right now, but here is the CDC with 2003 numbers: ftp://ftp.cdc.gov/pub/ncipc/10LC-2003/PDF/10lc-2003.pdf

Until age 34, accidents are winning, with intentional injury (suicide and homicide) taking second and third. 35-44, accidents are still #1, but cancer and heart disease are each close, so disease wins. Cancer wins through 64, then heart disease takes over. Because disease reigns supreme at 55+, unintentional injuries fall to #5 overall, and intentional injuries fall off the chart entirely.

If you are talking about young people, yes, accidents win. The main component of that is traffic crashes; in older adults, falls start to come in. Suicide beats homicide in every age category except 15-24 (and the very small 1-9 age group).

On a side note, it looks like the majority of deaths in the first year are things that might be classified as "stillborn" in another country or century. Those deaths in the <1 category rival all deaths from all other causes through age 14.

I imagine it is a lot easier to avoid an accident than to avoid most modern diseases, e.g. cancer. So it makes sense to concentrate on the risks you can do the most about.

people refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value.

How do they know it is heavily subsidized and priced far below an actuarially fair value?

Is it worth going to all the trouble of finding out?

How do they know it is heavily subsidized and priced far below an actuarially fair value?

I don't think they realize this. Recently in my area some flood maps were updated to take account of new data suggesting increased risks, with subsequent increases in the subsidized flood insurance rates. The affected households raised a hue and cry, enlisting senators in their cause, and many got the increases reversed. Nothing in the rhetoric I read suggested that they realized they were already getting a great deal; instead, here and in the Florida articles I read (where the government objected to the remaining insurer increasing rates after recent hurricanes), the unstated assumption seemed to be that the rates were 'unfair' and profitable.

Is it worth going to all the trouble of finding out?

A house is hundreds of thousands of dollars, and the disruption to your life if it and its contents are destroyed is profound. In some place like Florida, it may be more likely than not that in your lifetime your house will be damaged or destroyed, especially given the suggestion that global warming will increase the variance of storms (and hence the occurrence of super-hurricanes). I think it is worthwhile!

Whilst it is true that it is a lot easier to avoid an accident than a disease, you can probably do more about your risk of dying of disease. For example, you could familiarize yourself with the symptoms of various diseases; then you would be more likely to know if you caught one. With a lot of diseases, catching it early will vastly increase your odds of surviving. Combine that with the fact that accidents aren't a common cause of death in the first place, and even if you were able to cut your risk of disease death by only one-fifteenth, that would reduce your overall risk by more than completely eliminating the risk of accidents would.
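The arithmetic behind that last claim, using the roughly 16:1 ratio of disease deaths to accident deaths quoted in the post (the baseline figure is invented for illustration):

```python
# Illustrative figures only, using the post's roughly 16:1 ratio of
# disease deaths to accident deaths.
accident_deaths = 100_000                        # arbitrary baseline
disease_deaths = 16 * accident_deaths

saved_by_cutting_disease = disease_deaths / 15   # cut disease risk by 1/15
saved_by_no_accidents = accident_deaths          # eliminate accidents entirely
print(saved_by_cutting_disease > saved_by_no_accidents)  # True
```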

Good question, Pseudonymous. I'm interested in the people that do buy insurance when it's in their rational self-interest, and what's different about them.

In the Bahamas the homicide rate is about 15 times greater than the suicide rate: http://en.wikipedia.org/wiki/List_of_countries_by_homicide_rate http://en.wikipedia.org/wiki/List_of_countries_by_suicide_rate

Some of this may stem from cultural reluctance to identify suicide as such... but I think the majority of it is simply the mark of a violent society.

BTW, I love the Bahamas, I spend 9 months a year sailing there. It may be a troubled paradise, but nonetheless it remains a paradise.


The bias probably results because risks that people have less control over (homicide) would be more important to remember than the ones that are primarily due to one's own life decisions (suicide, health practices). The former risks seem unjust and avoidance practices still need to be learned as a means of living adaptively in an uncertain environment, vs. the latter risks, which already seem to be under our control.

You can properly use fiction as a shared language. Complex scenarios that would take long to explain can be referenced conveniently by way of a movie or book name. For example, two key classes of future, which are seriously discussed, are, one, the AIs subjugate us, and two, we enslave the AIs. These are not exhaustive but they are of particular interest to us, as is the more general topic of rights in a future with AIs, both human rights and AI rights. I have seen serious discussions of this, not based on movies. Science fiction, and fiction generally, responds to serious concerns, so whatever our concern, we can often find a fiction that we can use as a reference to help us efficiently convey our concern to someone else. Here the fiction is not being used as evidence but as a common language. Like that Star Trek episode in which a race communicates by talking about legends. "Darmok and Jalad at Tanagra."

Subjects thought that accidents caused about as many deaths as disease.

Lichtenstein et aliōrum research subjects were 1) college students and 2) members of a chapter of the League of Women Voters. Students thought that accidents are 1.62 times more likely than diseases, and league members thought they were 11.6 times more likely (geometric mean). Sadly, no standard deviation was given. The true value is 15.4. Note that only 57% and 79% of students and league members respectively got the direction right, which further biased the geometric average down.
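For anyone unfamiliar with pooling ratio judgments this way, here is a small sketch of a geometric mean and of how wrong-direction answers (ratios below 1) drag it down; the judgments are invented, not the study's.

```python
import math

# Invented ratio judgments for "how many times as likely is A as B?".
# Values below 1 mean the subject picked the wrong direction.
ratios = [5.0, 20.0, 10.0, 8.0, 0.5, 0.2]

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

print(f"{geometric_mean(ratios):.2f}")
# Far lower than the typical magnitude of the correct-direction answers,
# because the wrong-direction answers pull the average of the logs down.
```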

There were some messed up answers. For example, students thought that tornadoes killed more people than asthma, when in fact asthma kills 20x more people than tornadoes. All accidents are about as likely as stomach cancer (well, 1.19x more likely), but they were judged to be 29 times more likely. Pairs like these represent a minority, and subjects were generally only bad at guessing which cause of death was more frequent when the ratio was less than 2:1. The paper includes graphs of these judgments.

The following excerpt is from Judged Frequency Of Lethal Events by Lichtenstein, Slovic, Fischhoff, Layman and Combs.

Instructions. The subjects' instructions read as follows:

Each item in part one consists of two different possible causes of death. The question you are to answer is: Which cause of death is more likely? We do not mean more likely for you, we mean more likely in general, in the United States.

Consider all the people now living in the United States—children, adults, everyone. Now supposing we randomly picked just one of those people. Will that person more likely die next year from cause A or cause B? For example: Dying in a bicycle accident versus dying from an overdose of heroin. Death from each cause is remotely possible. Our question is, which of these two is the more likely cause of death?

For each pair of possible causes of death, A and B, we want you to mark on your answer sheet which cause you think is MORE LIKELY. Next, we want you to decide how many times more likely this cause of death is, as compared with the other cause of death given in the same item. The pairs we use vary widely in their relative likelihood. For one pair, you may think that the two causes are equally likely. If so, you should write the number 1 in the space provided for that pair. Or, you may think that one cause of death is 10 times, or 100 times, or even a million times as likely as the other cause of death. You have to decide: How many times as likely is the more likely cause of death? Write the number in the space provided. If you think it's twice as likely, write 2. If it's 10 thousand times as likely, write 10,000, and so forth.

There were more instructions about relative likelihoods and scales. And there was a glossary to help the people understand some categories.

All accidents: includes any kind of accidental event; excludes diseases and natural disasters (floods, tornadoes, etc.).

All cancer: includes leukemia.

Cancer of the digestive system: includes cancer of stomach, alimentary tract, esophagus, and intestines.

Excess cold: freezing to death or death by exposure.

Nonvenomous animal: dogs, bears, etc.

Venomous bite or sting: caused by snakes, bees, wasps, etc.

Note that there was nothing about “old age” anywhere. There is no such thing as “death by old age,” but I’ll risk generalizing from my own example to say that some people think there is. And even those who know there isn’t might think, despite the instructions, “Oh, darnit, I forgot that old people count, too.”

I wish I’d tested myself BEFORE reading the correct answer. As near as I could tell, I would’ve been correct about homicide vs. suicide, but wrong about diseases vs. accidents (“Old people count, too!” facepalm). I wouldn’t even bother guessing the relative frequency. I didn’t have a clue.

When I need to know the number of square feet in an acre, or the world population, it takes me seconds to get from the question to the answer. I dutifully spent ~20 minutes googling the CDC website, looking for this. It wasn’t even some heroic effort, but it’s not something I, or most other people, would casually expend on every question that starts with, “Huh, I wonder….” (we should, but we don’t).

As for what I found: I dare you, click on my link and see table 9. (http://www.cdc.gov/NCHS/data/nvsr/nvsr58/nvsr58_19.pdf). Did you? If you did, you would’ve seen that Zubon was right in the comment above. Accidents win by quite a margin in the 15-44 demographic. I couldn’t find 1978 data, but I’d expect it to be similar (Lichtenstein et al.’s tables are no help because they pool all age groups).

I spent the last two hours looking at these tables. Ask me anything! … I won’t be able to answer. Unless I have the CDC tables in front of me, I might not even do much better on Lichtenstein et aliōrum questionnaire than a typical subject (well, at least, I know tornadoes have frequency; measles doesn’t—I’ll get that question right). I suppose that people who haven’t looked at the CDC table are getting all of their information from fragmented reports like “Drive safely! Traffic accidents are the leading cause of death among teenagers who…!” or “Buy our drug! … is the leading cause of death in people over 55!” or “5-star exhaust pipe crash safety rating!” Humans aren’t good at integrating these fragments.

Memory is a bad guide to probability estimates. But what’s the alternative? Should we carry tables around with us?

Personally, I hope that someday data that is already out there in the public domain will be made easily accessible. I hope that finding the relative frequencies of measles-related deaths and tornado-related deaths will be as quick as finding the number of square feet in an acre or the world population, and that political squabble will focus on whether or not certain data should be in the public domain (“You can’t force hospitals to put their data online! That violates the patients’ right to privacy!” “Well, but….”)

Note: repost from SEQ RERUN.

I might be 12 years late, but I am only now reading Rationality and taking the time to properly address these issues.

What I have really found to be missing is the reason why availability is, in most cases, a bias; why generalizing from a limited set of personal experiences and memories is statistically wrong.

That reason, of course, lies in the fact that the available examples we rely on when this heuristic comes into play do not form a valid statistical sample. Not in sample size (we would need dozens, if not hundreds, of examples to reach a proper confidence level and interval, where we usually rely on only a few [<10]), and not in sampling frame (our observations are highly subjective and do not equally cover all sub-populations; in fact, they most likely cover only a very specific subset of the population that revolves around our neighborhood and social group).
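As a rough sketch of how little a handful of observations pins down a frequency, here is a normal-approximation 95% confidence interval for a proportion at n = 10 versus n = 100 (a crude approximation; small samples are better handled with exact methods):

```python
import math

def approx_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion.
    Crude for small n; used here only to show how interval width scales."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# The same observed 30% frequency, from 10 samples vs. 100 samples:
print(approx_ci(3, 10))    # roughly (0.02, 0.58) -- nearly uninformative
print(approx_ci(30, 100))  # roughly (0.21, 0.39)
```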

Additionally, I found an action plan for fighting this to be missing (both here and in "We Change Our Minds Less Often Than We Think"). My personal advice is to use our motivation to combat it in the following way: notice whenever we form a belief, and ask ourselves: am I generalizing from a limited set of examples that come to mind from memories and past experiences? Am I falling prey to the availability heuristic?

When you catch yourself, as I now do daily, rate how important the conclusion is. If it is important, avoid reaching it through this heuristic (and choose deliberate, rational analysis instead); if not, you may reach it using this generalization, as long as you label that belief as non-trustworthy.

I believe that labeling your beliefs with trust levels could be a very productive approach; when, in the future, you rely on a previous belief, you can take the trust level you have in that belief into account and consider whether you can trust it for your current goal.

I would love to hear from you guys about all of this. For more, you can read what I've written in my Psychology OneNote notebook, in the page about this very bias.

Reminds me of the following quote that is attributed to J. Paul Getty:

"In times of rapid change, experience could be your worst enemy."

" While building dams decreases the frequency of floods, damage per flood is afterward so much greater that average yearly damage increases. "

This is fascinating. Should we not be building dams? Could we say the same thing about fighting bushfires, since fighting them increases the amount of fuel they have available for next time?

The increased damage is due to building more on the flood plains, which brings economic gains. It is very possible that they outweigh the increased damage. Within standard economics, they should, unless strongly subsidized insurance (or expectation of state help for the uninsured after a predictable disaster) is messing up the incentives. Then again, standard economics assumes rational agents, which is kind of the opposite of what is discussed in this post...

The straightforward way to force irrational homeowners/business owners/developers to internalize the risk would be compulsory but not subsidized insurance. That's not politically feasible, I think. That's why most governments would use some clunky and probably sub-optimal combination of regulation, subsidized insurance, and other policies (such as getting the same community to pay for part of the insurance subsidies through local taxes).

Short answer: Yes. Forest fires are a natural and necessary part of forest development, and controlled burns are a long-standing indigenous practice. There are trees that will not start new generations without a fire; the seeds are dropped into the ashes, which let them crack open from heat, and they need the new sunlight access and nutrient access from the fire to get established. Fires also keep on top of pest populations and diseases, which can otherwise reach astronomical numbers and completely wipe out populations. And if fires are frequent, each fire will stay small, as it will soon run into the area affected by the last fire where there is no fuel, and stop. The lack of fuel means they do not flicker high, and they do not run hot, so the inside and top of the larger trees remain fine. The contained area means they can be fled. So most mature trees and large animals will survive entirely.

The build-up of fuel due to fire suppression, on the other hand, leads to eventual extreme fires that are uncontainable, and can even wipe out trees previously considered immune to fire, such as sequoias, and reach speeds and sizes that become death traps for all animal life, as we saw in Australia. 

Going back to indigenous fire management is all easier said than done, though; nowadays, human habitats often encroach so closely on wildlands that a forest fire would endanger human homes. And many forests are already so saturated with fuel that attempting a controlled burn can get out of hand.

But the fire management policies that got us to this point are one of many examples where trying to control a natural system and limit its destructive tendencies is more destructive in the long run, because the entire ecosystem is already adapted to destruction, and many aspects of it that seem untidy or inefficient or horrible at a glance end up serving another purpose. 

E.g., you might think, on the basis of high underbrush promoting forest fires, that we should cut down underbrush and remove dead trees from forests to limit fires; many humans thought that. This turned out to be a terrible idea, as it effectively devastated habitat for insects and small animals that burrow into or hide under dead wood, pulled nutrients out of an ecosystem that was previously a closed loop, removed food sources for fungi, which in turn were crucial for the tree networks that facilitate water trades during droughts and warnings about insect infestations, and removed perches to which animals could flee during floods. Historically, the healthiest forests were the ones we just left the fuck alone, and many interesting natural sites are the result of destruction, but then having humans pull out.

The current Chernobyl site is a startling illustration of this; humans fucked up the area, but then, we stopped messing with it, and it turned into a stable biodiversity hotspot despite the radiation; animals migrated there to flee humans, thrived and multiplied. We'll have to see how it gets through the war. 

We also have nature reserves in Germany that are former military testing sites, that essentially got exploded to bits. The resulting habitat (lots of open ground with holes and shards and sand) was incredibly interesting for reptiles and insects, who also profited immensely from the fact that humans did not enter the area out of fear of being blown up by remaining grenades. Realising that having it grow back into a forest would ruin it for these animals, we decided to release natural grazers on there, which are wild and which humans cannot interact with. We got some leftover grazers which zoos said were hopeless and not reproducing no matter what they tried, so they would not be too sad if they got blown up. They did not get blown up. They are doing great. They are reproducing. The fact that nature thrives in areas which humans contaminated radioactively or littered with explosives, simply because we stop going there and messing with it, is simultaneously hopeful and depressing to me. 

Forests doing fine if just left alone might change with climate change though; assisted tree migration will likely be needed here, as trees do not migrate the distances fast enough naturally to keep up with the rapid changes. This is currently being extensively trialed in Europe.

Dams are also bad for other reasons: they tend to wipe out the varied habitat of shallow water, shoreline, and marshland that floods and then dries, which is so crucial for biodiversity and for the survival of many endangered birds, amphibians, and insects, and even fish; young animals often hide in dense vegetation in shallow water to avoid predation and stay warm, and a deep river with standing, deoxygenating water or fast currents is a completely hostile habitat for them. Dams also reduce migration options for animals, and hence genetic diversity, as populations become practically isolated from each other, and they interrupt nutrient exchanges.

Which is very unfortunate, because dams are one of the leading options we have for storing renewable energy for winter, which is a massive hurdle, and for getting renewable energy in winter without the insect and bird deaths that current wind energy causes.

Sorry for the long rant this late. I really care about wild lands. They are incredible systems.


Realising that having it grow back into a forest would ruin it for these animals, we decided to release natural grazers on there, which are wild and which humans cannot interact with.

Odd that your cautionary tale about humans accidentally ruining wilderness includes a story about humans successfully releasing animals into a new environment to keep it safe.

Not a new environment. These animals were native in this environment, and humans had hunted them to regional extinction. We first hunted the wolves to regional extinction, seeing them as evil predators eating our livestock. Then the grazers' population exploded, and they ate all our food, so we hunted them to extinction. It turns out they had kept the forest at bay, and the whole ecosystem was wrecked, and we lost the reptiles and insects too. Bombing it ironically restored the lack of forest, and the insects and reptiles came back, but as the forest regrew, they were threatened again. And after that point, we basically just reversed our steps to how it had been before we messed with it. Put the grazers back, and a fence around. Monitored from a distance. Saw it had returned to a stable state. Stopped messing with it.

Allowing the large grazers and apex predators back is essential for rewilding. We had a project in the Netherlands where they decided to skip the wolves, and the necessary land for balance. The grazers massively multiplied, and then mass starved, and humans completely lost it. This is beginning to fix itself - the huge amounts of dead grazers seem to be attracting the wolves. They have crossed the border and are reestablishing. The whole return of wolves in Europe was unplanned, just a result of us having fixed the ecosystem so it could support them again, and of them crossing back in from a reservoir in the East. But for many of these animals, they have been pushed incredibly far out of their original range, and in that scenario, assisted migration speeds things up a lot. Similar with trees.

Putting the original apex predators and original grazers back is very, very different from "hey, you know what Australia needs? Rabbits!"

And it is not so much a story about humans ruining nature in general. But about the fact that stable natural systems include destruction, and that what looks like optimising from a human's standpoint often fucks the balance up. This is a valuable lesson to learn for bio-hacking, too.

Maybe the number of people living on less than $1/day should be updated. 19% is not really close to reality anymore, luckily!

Our lack of proper preparedness and response for COVID-19 appears to be a prime example of absurdity bias in the modern day on a larger scale. I wonder if there exist cycles in history by which, after some number of generations, an event is deemed absurd by the masses even when, from a bird's-eye view, its occurrence appears to be relatively consistent.