Edit: Following mixed reception, I decided to split this part out of the latest post in my sequence on reinforcement learning. It wasn't clear enough, and anyway didn't belong there.

I'm posting this hopefully better version to Discussion, and welcome further comments on content and style.

 


 

The availability heuristic seems to be a mechanism inside our brains which uses the ease with which images, events and concepts come to mind as evidence for their prevalence or probability of occurrence. For this heuristic to be worth the trouble it causes, there needs to be a counterpart, a second mechanism which actually makes things available to the first one in correlation with their likelihood. In this post I discuss why having such an internal availability mechanism can be a good idea, and outline some of the ways it can fail.

 


 

You're playing Texas Hold'em poker against another player, and she has just bet all her chips on the flop (the 2nd of 4 betting rounds, when there are 2 more shared cards to draw). You estimate that with high probability she has a low pair (say, under 9) with a high kicker (A or K, hoping to hit a second pair). You hold Q-J off-suit. Do you call?

One question this depends on is: what's the probability p that you will win this hand? An experienced player will know that your best hope is to hit a pair, without the other player hitting anything better than her low pair. This has a probability of slightly less than 25%.
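For anyone curious where that figure comes from, here's a minimal back-of-the-envelope sketch (my own illustration, not part of the original argument). It counts only the six outs to pair your queen or jack over the two remaining cards, and it ignores the chance that the opponent improves, which would shave the number down a little further.

```python
from math import comb

# Back-of-the-envelope sketch of the ~25% figure (illustrative only):
# after the flop you can see 5 cards, so 47 are unknown, and Q-J has 6 outs
# (three queens, three jacks) to make a pair on the turn or river.
unknown_cards = 47
outs = 6

p_miss_both_streets = comb(unknown_cards - outs, 2) / comb(unknown_cards, 2)
p_pair_by_river = 1 - p_miss_both_streets
print(f"P(pairing a Q or J by the river) ~ {p_pair_by_river:.1%}")  # ~24.1%
```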

We could compute or remember a better estimate if we accounted for runner-runner outs, but is it worth it? It won't help us pin down p with amazing accuracy - we could be wrong about the opponent's hand to begin with. And anyway, the decision of whether to actually call depends on many other factors: the sizes of your stack, her stack, the blinds, and so on. A 1% error in the estimate of the win probability is unlikely to change your decision.

So instead of pointlessly trying to predict the future to an impossible and useless degree of accuracy, what we did was tell ourselves a bunch of likely stories about what might happen, combine these scenarios into a simple probabilistic prediction of the future, and plan as best we could given that prediction.

This may be the mechanism that makes the availability heuristic a smart choice. The main observed effect of this heuristic is that past (subjective) prevalence seems to be strongly linked to future predictions. Patches of stories we've heard may even work their way into the stories we tell of the future. The missing link is an internal availability mechanism which chooses which patches to make available for retelling. We seem to use such a mechanism to identify likely outcomes, before forwarding them to the more commonly discussed process which integrates these stories of the future into a usable prediction.

What events would be good candidates for becoming available? One thing to notice is that evaluation of the expected value of our actions depends both on the probability and on the impact of their results; but for each specific future we don't need both these numbers, only their product. If the main function of the internal availability mechanism is to predict value, rather than probability, it stands to reason that high-impact but improbable outcomes will become as available as mundane probable ones. Yes, concepts which were encountered most often in the past, in a context similar to the current one, come to mind easily. But one-in-a-hundred or -thousand outcomes should also become available if they are very important. One-in-a-million ones, on the other hand, are almost never worth the trouble.

If something similar is indeed going on in our brains, then it seems to be working pretty well, usually. When I walk down the street, I give no thought to the possibility that there are invisible obstacles in my way. That possibility is so remote that even if I took it into account with an appropriately small probability, my actions would probably be roughly the same. It is therefore wise not to occupy my precious processing power with such nonsense.
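To make the product-not-the-parts point concrete, here is a deliberately crude toy sketch (my own illustration; the scenarios, numbers, and threshold are all invented, and nothing here claims the brain literally runs such a filter). A scenario becomes available only when its probability-weighted impact clears a minimal threshold, so a rare-but-important outcome can surface while one-in-a-million trivia never come to mind.

```python
# Toy "internal availability" filter (illustrative only; all values invented).
# A scenario is made available when probability * impact clears a threshold,
# so what surfaces tracks expected impact rather than probability alone.
scenarios = [
    # (description, probability, impact)
    ("opponent holds the low pair she is representing", 0.70, 1.0),
    ("opponent is bluffing with nothing",               0.25, 1.5),
    ("opponent mistook a 4 for an ace",                 0.001, 1.5),
    ("invisible obstacle blocks the sidewalk",          1e-6, 100.0),
]

THRESHOLD = 0.01  # arbitrary cutoff below which a scenario never comes to mind

for description, probability, impact in scenarios:
    weight = probability * impact
    status = "available" if weight >= THRESHOLD else "never considered"
    print(f"{description}: weight {weight:.4f} -> {status}")
```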

Even when the internal availability mechanism is working properly, it generates unavoidable errors in prediction. Strictly speaking, ignoring some unlikely and unimportant possibilities is wrong, however practical it may be. And since the availability heuristic treats the ease with which we notice things as evidence of their higher probability, it can still fail, particularly when the internal availability mechanism is built for utility but consulted for probability.

The mechanism itself can also fail. Availability doesn't seem to be binary, so one type of failure is to make certain scenarios over- or under-available, marking them as more or less likely and important than they really are. There also appears to be a threshold, some minimal value required for non-zero availability; another type of failure is when an important outcome fails to meet this threshold and never becomes available at all.

Or perhaps an unlikely future becomes available even though it shouldn't. This may explain why people are unable to estimate low probabilities. In their minds, the prospect of winning the lottery and becoming millionaires creates a vivid image of an exciting future. It's so immersive that it feels like a real possibility - it could actually happen!

9 comments

Consider writing a summary. I have no idea what the point of this article is, after trying and failing to read it a couple of times.

I hope you'll take a crack at the extent to which people manipulate each other's availability bias (propaganda, advertising, religion, applause and boo lights, negative and positive emotional influence between individuals). Much (perhaps most) of what's most available in people's minds these days isn't from individual experience.

You're playing Texas Hold'em poker against another player, and she has just bet all her chips on the flop (the 2nd of 4 betting rounds, when there are 2 more shared cards to draw). You estimate that with high probability she has a low pair (say, under 9) with a high kicker (A or K, hoping to hit a second pair). You hold Q-J off-suit. Do you call?

It really depends on what the flop was. Going all in implies either a desperate player, who is bluffing (~30% of the time, in my experience) or actually has a good hand (the remaining ~70%); or, if the player still has a good-sized stack or this is a cash game, a player who is bluffing only a small fraction of the time (not often enough to justify calling with Q-J). In fact, after missing the flop with Q-J off, I can't think of any situation where I'd be likely to call such a bet, short of a huge pot.

Poker analysis aside, I'm not quite sure what the point of this article was. If it was just saying that the availability heuristic is there for a reason and we should be careful about adjusting for it, I fully agree. If you were trying to say something more, I'm afraid I at least didn't get that point.

I was trying to give a specific reason that the availability heuristic is there: it's coupled with another mechanism that actually generates the availability; and then to say a few things about this other mechanism.

Does anyone have specific advice on how I could convey this better?

I was trying to give a specific reason that the availability heuristic is there: it's coupled with another mechanism that actually generates the availability; and then to say a few things about this other mechanism.

It seems obvious why the availability heuristic is there. The ease with which images, events and concepts come to mind is correlated with how frequently they have been observed, which in turn is correlated with how likely they are to happen again. So, the heuristic is a reasonably-good one which just happens to have some associated false positives.

The ease with which images, events and concepts come to mind is correlated with how frequently they have been observed, which in turn is correlated with how likely they are to happen again.

Yes, and I was trying to make this description one level more concrete.

Things never happen the exact same way twice. The way that past observations are correlated with what may happen again is complicated - in a way, that's exactly what "concepts" capture.

So we don't just recall something that happened and predict that it will happen again. Rather, we compose a prediction based on an integration of bits and patches from past experiences. Recalling these bits and patches as relevant for the context of the prediction - and of each other - is a complicated task, and I propose that an "internal availability" mechanism is needed to perform it.

I'm still unsure of what you're actually saying. Perhaps you're talking about some sort of a "plausibility heuristic", where we look for instances of something in our model of the world, not just our experiences. That seems trivial, but that's not necessarily a bad thing (I would prefer to see more stuff here that seems really obvious to people, because those few times it's not obvious to everyone tend to be very valuable). If you're saying something else, I'm still not getting it.

Take for example your analysis of the poker hand I partially described. You give 3 possibilities for what the truth of it may be. Are there any other possibilities? Maybe the player is bluffing to gain the reputation of a bluffer? Maybe she mistook a 4 for an ace (it happened to me once...)? Maybe aliens hijacked her brain?

It would be impossible to enumerate or notice all the possibilities, but fortunately we don't have to. We make only the most likely and important ones available.

I have no idea why people don't love this post.