Partial re-interpretation of: The Curse of Identity
Also related to: Humans Are Not Automatically Strategic, The Affect Heuristic, The Planning Fallacy, The Availability Heuristic, The Conjunction Fallacy, Urges vs. Goals, Your Inner Google, signaling, etc.
What are the best careers for making a lot of money?
Maybe you've thought about this question a lot, and have researched it enough to have a well-formed opinion. But the chances are that even if you hadn't, some sort of an answer popped into your mind right away. Doctors make a lot of money, maybe, or lawyers, or bankers. Rock stars, perhaps.
You probably realize that this is a difficult question. First, there's the question of who we're talking about: one person's strengths and weaknesses might suit them to a particular career path, while another person would do better elsewhere. Second, the question isn't clearly defined. Is a career with a small chance of making it rich and a large chance of remaining poor a better option than one with a large chance of becoming somewhat wealthy but no chance of becoming rich? Third, whoever is asking probably does so because they're deciding what to do with their life. So you probably don't want to answer on the basis of which careers pay well today, but on the basis of which ones will pay well in the near future, and that requires tricky technological and social forecasting. And so on.
Yet, despite all of these uncertainties, some sort of an answer probably came to your mind as soon as you heard the question. And if you hadn't considered the question before, your answer probably didn't take any of the above complications into account. It's as if your brain, while generating an answer, never even considered them.
The thing is, it probably didn't.
Daniel Kahneman, in Thinking, Fast and Slow, extensively discusses what I call the Substitution Principle:
If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it. (Kahneman, p. 97)
System 1, if you recall, is the quick, dirty and parallel part of our brains that renders instant judgements without thinking about them in too much detail. In this case, the question that was asked was "what are the best careers for making a lot of money?" The question that was actually answered was "what careers have I come to associate with wealth?"
Here are some other examples of substitution that Kahneman gives:
- How much would you contribute to save an endangered species? becomes How much emotion do I feel when I think of dying dolphins?
- How happy are you with your life these days? becomes What is my mood right now?
- How popular will the president be six months from now? becomes How popular is the president right now?
- How should financial advisors who prey on the elderly be punished? becomes How much anger do I feel when I think of financial predators?
All things considered, this heuristic probably works pretty well most of the time. The easier questions are not meaningless: while not completely accurate, their answers are still generally correlated with the correct answer. And a lot of the time, that's good enough.
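The mechanism can be sketched as a toy lookup: a hard question with no cached answer gets silently swapped for an associated easier one. This is only an illustrative caricature (all the question/answer pairs here are made up, not from Kahneman), but it captures the key property that the caller never learns a swap happened:

```python
# Toy sketch of the Substitution Principle (illustrative only).
# System 1 has no real answer to the hard question, but it does have
# fast answers to easier, associated questions -- and it returns one
# of those without flagging the swap.

EASY_PROXIES = {
    "What are the best careers for making a lot of money?":
        "Which careers do I associate with wealth?",
    "How happy are you with your life these days?":
        "What is my mood right now?",
}

CACHED_ANSWERS = {
    "Which careers do I associate with wealth?": "doctor, lawyer, banker",
    "What is my mood right now?": "pretty good",
}

def system_1(question):
    """Return a fast answer, substituting an easier question if needed."""
    if question in CACHED_ANSWERS:
        return CACHED_ANSWERS[question]      # easy question: answer directly
    proxy = EASY_PROXIES.get(question)       # hard question: look for a proxy
    if proxy is not None:
        return CACHED_ANSWERS[proxy]         # answer the proxy instead; the
                                             # caller never learns a different
                                             # question was answered
    return None                              # System 2 has to take over

print(system_1("What are the best careers for making a lot of money?"))
# -> doctor, lawyer, banker  (an answer to a different question)
```

The crucial bit is that `system_1` returns the same type of thing either way, so nothing in the returned answer signals whether the original question or a substitute was answered.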
But I think that the Substitution Principle is also the mechanism by which most of our biases work. In The Curse of Identity, I wrote:
In each case, I thought I was working for a particular goal (become capable of doing useful Singularity work, advance the cause of a political party, do useful Singularity work). But as soon as I set that goal, my brain automatically and invisibly re-interpreted it as the goal of doing something that gave the impression of doing prestigious work for a cause (spending all my waking time working, being the spokesman of a political party, writing papers or doing something else few others could do).
As Anna correctly pointed out, I resorted to a signaling explanation here, but a signaling explanation may not be necessary. Let me reword that previous generalization: as soon as I set a goal, my brain asked itself how that goal might be achieved, realized that this was a difficult question, and substituted it with an easier one. So "how could I advance X?" became "what kinds of behaviors are commonly associated with advancing X?" That my brain happened to pick the most prestigious ways of advancing X might simply be because prestige is often correlated with achieving a lot.
Does this exclude the signaling explanation? Of course not. My behavior is probably still driven by signaling and status concerns; one mechanism by which this works might be that such considerations get disproportionately taken into account when choosing a heuristic question. And a lot of the examples I gave in The Curse of Identity seem hard to justify without a signaling explanation. But signaling need not be the sole explanation. Our brains may just resort to poor heuristics a lot.
Some other biases and how the Substitution Principle is related to them (many of these are again borrowed from Thinking, Fast and Slow):
The Planning Fallacy: "How much time will this take?" becomes something like "How much time did it take for me to get this far, and how many times should that be multiplied to get to completion?" (This doesn't take into account unexpected delays and interruptions, waning interest, etc.)
The Availability Heuristic: "How common is this thing?" or "how frequently does this happen?" becomes "how easily do instances of this come to mind?"
Over-estimating your own share of household chores: "What fraction of the chores have I done?" becomes "how many chores do I remember doing, compared to the number I remember my partner doing?" (You will naturally remember more of the things you've done than of the things somebody else did, possibly while you weren't even around.)
Being in an emotionally "cool" state and over-estimating your degree of control in an emotionally "hot" state (angry, hungry, sexually aroused, etc.): "How well could I resist doing X in that state?" becomes "how easy does resisting X feel right now?"
The Conjunction Fallacy: "What's the probability that Linda is a feminist?" becomes "how representative is Linda of my conception of feminists?"
People voting for politicians for seemingly irrelevant reasons: "How well would this person do his job as a politician?" becomes "how much do I like this person?" (A better heuristic than you might think, considering that we like people who like us, owe us favors, resemble us, etc. - in the ancestral environment, supporting the leader you liked the most was probably a pretty good proxy for supporting the leader who was most likely to aid you in return.)
And so on.
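The planning-fallacy substitution above amounts to naive linear extrapolation, which is easy to see with made-up numbers (all figures here are illustrative, not from any study):

```python
# Illustrative numbers only: the "scale up progress so far" heuristic
# systematically underestimates, because it models none of the terms
# that only show up later in a project.

days_so_far = 10        # time spent to date
fraction_done = 0.5     # how far along the work seems

# Heuristic answer: extrapolate linearly from progress so far.
heuristic_estimate = days_so_far / fraction_done    # 20 days total

# Reality adds terms the heuristic never considers:
unexpected_delays = 6   # interruptions, blockers
waning_interest = 4     # slower progress as enthusiasm fades

actual_total = heuristic_estimate + unexpected_delays + waning_interest

print(heuristic_estimate)   # 20.0
print(actual_total)         # 30.0
```

The point isn't the specific numbers but the structure: the easy question ("how fast have I gone so far?") is answerable, while the hard question ("what will go wrong that hasn't yet?") simply never gets asked.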
The important point is to learn to recognize the situations where you're confronting a difficult problem, and your mind gives you an answer right away. If you don't have extensive expertise with the problem – or even if you do – it's likely that the answer you got wasn't actually the answer to the question you asked. So before you act, stop to consider what heuristic question your brain might actually have used, and whether it makes sense given the situation that you're thinking about.
This involves three skills: first recognizing a problem as a difficult one, then figuring out what heuristic you might have used, and finally coming up with a better solution. I intend to write more on how to taskify those skills, but if you have any ideas for how that might be achieved, I'd love to hear them.
Well... in principle, yes. But that seems a little uncharitable - do you think I actually committed that mistake in any of the examples I gave? That is, do you actually think it seems like a wrong explanation for any of those cases? If yes, you should just point them out. If not, well, obviously any explanation can be rationalized and overextended beyond its domain. But the fact that something can be misapplied doesn't make the thing itself bad.
You are right in the sense that the substitution principle provides, by itself, rather little information. It only says that an exact analysis is being replaced with an easier, heuristic analysis; it doesn't say anything about what that heuristic is. Figuring that out requires separate study.
But the thing that distinguishes a fake explanation from a real explanation is that a fake explanation isn't actually useful for anything, for it doesn't constrain your anticipations. That isn't the case here. The substitution principle provides us with the (obvious in retrospect) anticipation that "if your brain instantly returns an answer to a difficult problem, the answer is most likely a simplified one", which provides a rule of thumb which is useful for noticing when you might be mistaken. (This anticipation could be disproven if it turned out that all such answers were actually answers to the questions being asked.) While learning about specific biases/heuristics might be more effective in teaching you to notice them, such learning is also more narrow in scope. The substitution principle, in contrast, requires more thought but also makes it easier to spot heuristics you haven't learned about before.
I'm claiming that the substitution principle isn't so much a model as a (non-information-adding) rephrasing of the existence of heuristics, so there isn't much of a "mistake" to be made - the rephrasing isn't actually wrong, just unhelpful.
Unless of course you actually think the new question is explicitly represented in the brain in a similar way that a question read or heard would be, in which case I think you've made that mistake in every single one of your examples, unless you have data to back...