From Kahneman and Tversky:

"A person is said to employ the availability heuristic whenever he estimates frequency or probability by the ease with which instances or associations could be brought to mind"
I doubt that's news to any LessWrong readers - the availability heuristic isn't exactly cutting-edge psychology. But the degree to which our minds rely on mental availability goes far beyond estimating probabilities. In a sense, every conscious thought is determined by how available it is - by whether it pops into our heads or not. And it's not something we even have the illusion of control over - if we knew where we were going, we'd already be there. (If you spend time actually looking directly at how thoughts proceed through your head, the idea of free will becomes more and more unrealistic. But I digress.) What does and doesn't come to mind has an enormous impact on our mental functioning - our higher brain functions can only process what enters our working memory.
Whether it's called salience, availability, or vividness, marking certain things important relative to other things is key to the proper functioning of our brains. Schizophrenia (specifically psychosis) has been described as "a state of aberrant salience", where the brain incorrectly assigns importance to what it processes. And a quick perusal of the list of cognitive biases reveals a large number directly tied to mental availability - there's the obvious availability heuristic, but there's also the simulation heuristic (a special case of the availability heuristic), base-rate neglect (abstract probabilities aren't salient, so aren't taken into account), hyperbolic discounting (the present is more salient than the future), the conjunction fallacy (ornate, specific descriptions make something less likely but more salient), the primacy/recency bias, the false consensus effect, the halo effect, projection bias, etc etc etc. Even consciousness seems to be based on things hitting a certain availability threshold and entering working memory. And it's not particularly surprising that A) we tend to process only what we mark as important and B) our marking system is flawed. Our minds are efficient, not perfect - a "good enough" solution to the problem of finding food and avoiding lions.
That doesn't mean that availability isn't a complex system. It doesn't seem to be a fixed number that gets assigned when a memory is written - it's highly dependent on the context of the situation. A perfect example of this is priming. Simply seeing a picture is enough to make certain things more available, and that small change in availability is all that's needed to change how you vote. In state-dependent memory, information that's been absorbed while under a certain state can only be retrieved under that same state - the context of the situation is needed for activation. It's why students are told to study under the same conditions that the test will be taken, and why musicians are told not to always practice sitting in the same position, to avoid inadvertently setting up a context-dependent state. And anecdotally, I notice that my mind tends to slide between memories that make me happy when I'm happy, and memories that make me upset when I'm angry (moods are thought to be important context cues). In general, the more available something is, the less context is needed to activate it, and the less available, the more context-dependent it becomes. Frequency, prototypicality, and abstractness also contribute to availability. Some things are so available that they're activated in improper contexts - this is how figurative language is thought to work. But some context is always required, or our minds would be nothing but a greatest hits of our most salient thoughts, on a continuous loop.
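To make the interplay between availability and context concrete, here's a toy sketch (my own illustration, not a model from the psychology literature): give each memory a base availability, let context cues add activation, and say a memory comes to mind only if the total crosses a threshold. Highly available memories need almost no cue; weakly available ones surface only under a strong matching context.

```python
# Toy illustration (not a model from the literature): a memory surfaces
# when its base availability plus contextual activation crosses a threshold.

THRESHOLD = 1.0

def comes_to_mind(base_availability, context_match):
    """base_availability: salience of the memory on its own (0..1).
    context_match: overlap between current cues and the memory's context (0..1)."""
    return base_availability + context_match >= THRESHOLD

# A highly available memory needs almost no cue...
print(comes_to_mind(0.9, 0.2))  # True
# ...while a weakly available one needs a strong matching context.
print(comes_to_mind(0.3, 0.4))  # False
print(comes_to_mind(0.3, 0.8))  # True
```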
The problem with this approach is that availability isn't always assigned the way we'd prefer. If I'm at a bar and want to tell a hilarious story, I can't just think of "funny stories" and activate all my great bar stories - they have to be triggered by some memory. More perniciously, it's possible (and in my experience, all too likely) to have a thought or take an action without having access to the beliefs that produced it. If, for example, I'm playing a videogame, I find it almost impossible to tell someone the sequence of buttons for a move unless I'm holding the controller in my hands. Or I might avoid seeing a movie because I think it's awful, but I won't be able to recall why I think it's awful. Or I'll get into an argument with someone because he disagrees with something I think is obvious, but I won't immediately be able to summon the reasons that generated that obviousness. And this lack of availability can go beyond simple bad memory.
From Block 2008:
There is a type of brain injury which causes a syndrome known as 'visuo-spatial extinction'. If the patient sees a single object on either side, the patient can identify it, but if there are objects on both sides, the patient can identify only the one on the right and claims not to see the one on the left. However, as Geraint Rees has shown in two fMRI studies of one patient (known as 'GK'), when GK claims not to see a face on the left, his fusiform face area (on the right - which is fed by the left side of space) lights up almost as much as - and in overlapping areas involving the fusiform face area - when he reports seeing the face.
The brain can detect a face without passing that information to our working memory. What's more, when subjects with visuo-spatial extinction are asked to compare objects on both sides - as either 'the same' or 'different' - they're more than 88% accurate, despite not being able to 'see' the object on the left. Judgements can be made based on something that we have no cognitive access to.
In Landman et al. (2003), subjects were shown a circle of eight rectangles for a short period, then a blank screen, then the same circle with a line pointing to one of the rectangles, which may or may not have rotated 90 degrees. The number of correct answers suggests subjects could track about four different rectangles, in line with data suggesting that our working memory for visual perceptions holds about four items. Subjects were then given the same test, except that the line pointing to the rectangle appeared on the blank screen, after the first circle of rectangles had disappeared but before the second was shown. On this test, subjects were able to track between six and seven rectangles, by keeping the first circle of rectangles in memory and comparing the second circle to it (according to the subjects). They're able to do this despite being unable to access the shape of each individual rectangle. The suggested reason is that our memory for visual perceptions exceeds what we're capable of fitting into working memory - that we process the information without it entering our conscious minds[1]. It seems perfectly possible for us to know something without knowing we know it, and to believe something without having access to why we believe it.
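For readers who want the arithmetic behind "about four": change-detection studies usually convert accuracy into a capacity estimate, commonly Cowan's K = N * (hit rate - false alarm rate). Whether Landman et al. used exactly this estimator is my assumption, and the accuracy numbers below are invented purely to show the calculation:

```python
# Cowan's K, a standard capacity estimate for change-detection tasks.
# K = N * (hit_rate - false_alarm_rate). The accuracy numbers are made up
# to illustrate the arithmetic; they are not taken from Landman et al.

def cowan_k(n_items, hit_rate, false_alarm_rate):
    return n_items * (hit_rate - false_alarm_rate)

# Cue appearing with the second display of eight rectangles: ~4 items tracked.
print(cowan_k(8, hit_rate=0.75, false_alarm_rate=0.25))  # 4.0
# Cue appearing during the blank screen: performance implies 6-7 items.
print(cowan_k(8, hit_rate=0.90, false_alarm_rate=0.10))  # 6.4
```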
This, of course, isn't good. If you're building complex interconnected structures of beliefs (and you are), you need to be sure the ground below you is sturdy. And there's a strong possibility that if you can't recall the reason behind something, your brain will invent one for you. The good news is that memories don't seem to be deleted - we can lose access, but the memory itself doesn't fade[2]. The problem is one of keeping access open. One way is to simply keep your memory sharp - and the web is full of tips on that. A better way might be to leverage your mind's propensity for habituation - force yourself to trace your chain of belief down to the base, and eventually it will start to become something you do automatically. This isn't perfect either - it's not something you can do at the fast pace of day-to-day life, and it is itself probably subject to a whole series of biases. It might even be worth it to write your beliefs down - this method has the dual benefits of creating a hard copy for you to reference later, and increasing the availability of each belief through the act of writing and recalling. There's no ideal solution - we're limited in what new mental structures we can create, and we're forced to rely on the same basic (and imperfect) set of cognitive tools. Since availability seems to be such an integral part of the brain, forcing availability on those things we want to come to mind might be our best bet.

Notes
1: If it's hard to understand this experiment, look at the linked Block paper - it provides diagrams.
2: It can, however, be re-written. Memory seems to work like a save-as function: a memory is re-saved (and slightly distorted) every time it's recalled.
Comments

The problem with this approach is that availability isn't always assigned the way we'd prefer. If I'm at a bar and want to tell a hilarious story, I can't just think of "funny stories" and activate all my great bar stories - they have to be triggered by some memory.

Thinking "funny stories" doesn't work because the information isn't filed that way. However, those stories likely are filed with connections to states of fun and humor, so if you get into that emotional state first, you'll likely find contextually-funny stories coming to mind... or simply make situational humor in the first place.

A better way might be to leverage your mind's propensity for habituation - force yourself to trace your chain of belief down to the base, and eventually it will start to become something you do automatically. This isn't perfect either - it's not something you can do at the fast pace of day-to-day life, and it is itself probably subject to a whole series of biases.

All of your strategies have the problem that they're focused on the wrong part of the brain: your abstract intellectual beliefs don't mean anything anyway, as they're stored in the part of your brain that was designed for bullshitting, rather than actually doing anything.

The belief systems that actually run your behavior have to be parsed out by engaging concrete imagination, not intellectual abstractions... in precisely the same way that your funny stories aren't going to come out when you think the words "funny stories", instead of the feeling of being funny.

taw:

Hyperbolic discounting is not a bias; it's how the real world operates. Exponential discounting is based on models oversimplified to the point of being broken.

Hyperbolic discounting is an example of intransitive preferences. Yes, real people have intransitive preferences. You shouldn't model people as utility maximizers. But it's still incoherent to the point that it can only be called an error.

taw:

We might have a paradigm issue here, but I'd say bite the bullet and accept hyperbolic discounting. Lack of transitivity is just an artifact, and not a problem in the real world. There is an essential difference between cases where you can change your mind and cases where you cannot.

Here's a simple example. I'm claiming this is extremely typical, and scenarios under which exponential discounting arises are very much not.

When you lend some money to someone for 30 days, you should charge interest at rate P, due to some combination of opportunity cost and the risk of the borrower running away with your money, or dying and not being able to pay it back, etc.

When you lend for 60 days instead, the risk of bad things happening between days 31 and 60 is much lower than between days 1 and 30. If the borrower wanted to run away, he already would have. If he has survived the first 30 days, that's evidence that his lifestyle is probably not that dangerous, so he's more likely to survive the next 30. This decreases the rate to P' < P (the same applied to opportunity costs before the modern economy). Exactly as hyperbolic discounting predicts.

When you lend to one person for 30 days, and to another for the next 30 days, your proper interest rate due to risk is back to P. But this is a completely different situation, and much less likely to be relevant. And in any case, systemic risks of staying in business get lower with time, which should gradually reduce your P.

Usually the points of time where you can "change your mind" correspond to events which introduce all kinds of new risks and transaction costs, and are not neutral.
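A numerical sketch of the argument above (my own illustration, with made-up parameters): a constant hazard of the borrower defaulting gives an exponential discount curve, while a hazard that declines over time - risky first 30 days, safer afterwards - gives exactly the hyperbolic curve.

```python
import math

K = 0.01  # illustrative per-day hazard / discount rate

def constant_hazard_survival(t):
    # Hazard K at every moment => survival exp(-K*t): exponential discounting.
    return math.exp(-K * t)

def declining_hazard_survival(t):
    # Hazard K/(1+K*s) at time s => its integral is ln(1+K*t),
    # so survival is 1/(1+K*t): exactly the hyperbolic discount curve.
    return 1.0 / (1.0 + K * t)

for t in (30, 60, 365):
    print(f"day {t}: exponential {constant_hazard_survival(t):.3f}, "
          f"hyperbolic {declining_hazard_survival(t):.3f}")
```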

I'll understand if, due to paradigm mismatch, you have a hostile reaction to this.

Your example doesn't involve discounting at all. Your nonlinearities are in the probabilities, not the payoffs.

Hyperbolic discounting says that you're willing to plan to give someone a loan at rate P starting a month from now, but when the time arrives you change your mind about what the fair rate is. Not change your mind in response to new evidence about the probability of defaulting, but predictably change your mind in the absence of any new arguments, just because event X a month from now has different utility to you than event X now.
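To see the reversal with concrete (invented) numbers: under hyperbolic discounting value / (1 + k*t) with k = 1 per day, $110 at day 31 beats $100 at day 30 when both are a month away, but the ordering flips once day 30 arrives, with no new evidence. An exponential discounter never flips, because delaying both payoffs by the same amount multiplies both values by the same factor.

```python
# Invented numbers to illustrate the preference reversal; k = 1.0 per day
# is deliberately steep to make the flip obvious.

def hyperbolic_value(amount, days_away, k=1.0):
    return amount / (1.0 + k * days_away)

# A month out, the larger-later option wins:
print(hyperbolic_value(100, 30), hyperbolic_value(110, 31))  # ~3.23 < ~3.44
# Thirty days later, with no new information, the ordering flips:
print(hyperbolic_value(100, 0), hyperbolic_value(110, 1))    # 100.0 > 55.0
```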

Of course hyperbolic discounting is a useful heuristic. The paradigm I subscribe to is not 'just bias.' We have these heuristics because they're the best that evolution or our developing minds could do. That is, they're pretty good in some other environment (ancestral or childhood), which might be very different. You singled out hyperbolic discounting, among all the biases, but it seems to me much more likely to be maladapted to the present than the other standard biases.

Most of your comment argues that it's a good heuristic, but your first paragraph ("bite the bullet and accept hyperbolic discounting") seems to make a stronger claim.

Usually the points of time where you can "change your mind" correspond to events which introduce all kinds of new risks and transaction costs, and are not neutral.

That is a different heuristic than what I would call hyperbolic discounting. You can certainly produce situations in the lab where people apply a worse heuristic than that. I expect the two heuristics were more similar in the EEA (the environment of evolutionary adaptedness) than they are today.

If you want to make a quick decision, go with your gut and trust hyperbolic discounting. But trust it for a decision on an action, not its intermediate output of utility. It mixes up probability and utility. "If you're building complex interconnected structures of beliefs", then you have to separate the two, and you can't trust your gut model of yourself because of hyperbolic discounting. People screw up long-term planning all the time because of hyperbolic discounting.

taw:

By biting the bullet I meant using hyperbolic discounting as the default first approximation, instead of exponential discounting. I think exponential discounting is usually much more wrong in practice than hyperbolic.

Can you give a concrete example of someone screwing up due to hyperbolic discounting in a case where there's an objective measure of utility to compare the person's estimates against?

tut:

There are no objective measures of utility. But just about everyone who has failed a diet or exercise schedule could be seen as failing because of hyperbolic discounting.

Without objective measures of utility, what could it even mean to speak of someone's utility judgements as being biased or wrong?

tut:

I don't know. What I was referring to was that people's estimates of their future utility of some course of action are not constant. And they often vary in such a way that one choice (dieting, exercising, saving...) appears rational when you are planning for it, and when you evaluate it in retrospect, but is unappealing at the time that you actually do it.

Think about our evolutionary history. Presumably, life was less stable and deals less predictable than they are today. In that case it would have been better to have a strong hyperbolic discount rate; now, when outcomes are increasingly reliable, that rate should drop, but it (presumably) hasn't.

Of course, our intuitive discount rate should never reach the exponential that a model would predict, because there are always new unforeseen factors, but I would contend that the uncertainties have dropped substantially. This would make the particular hyperbolic rate that we intuitively discount payoffs at today biased, while in our evolutionary past it presumably would have been a better approximation of a suitable discount rate.