Partial re-interpretation of: The Curse of Identity
Also related to: Humans Are Not Automatically Strategic, The Affect Heuristic, The Planning Fallacy, The Availability Heuristic, The Conjunction Fallacy, Urges vs. Goals, Your Inner Google, signaling, etc.
What are the best careers for making a lot of money?
Maybe you've thought about this question a lot, and have researched it enough to have a well-formed opinion. But chances are that even if you haven't, some sort of answer popped into your mind right away. Doctors make a lot of money, maybe, or lawyers, or bankers. Rock stars, perhaps.
You probably realize that this is a difficult question. For one, there's the question of who we're talking about: one person's strengths and weaknesses might suit them for one career path, while for another person a different career is better. Second, the question is not clearly defined. Is a career with a small chance of making you rich and a large chance of leaving you poor a better option than a career with a large chance of making you moderately wealthy but no chance of making you rich? Third, whoever is asking this question probably does so because they are thinking about what to do with their life. So you don't want to answer on the basis of which careers pay well today, but on the basis of which ones will pay well in the near future, and that requires technological and social forecasting, which is quite difficult. And so on.
Yet, despite all of these uncertainties, some sort of answer probably came to your mind as soon as you heard the question. And if you hadn't considered the question before, your answer probably didn't take any of the above complications into account. It's as if your brain, while generating an answer, never even considered them.
The thing is, it probably didn't.
Daniel Kahneman, in Thinking, Fast and Slow, extensively discusses what I call the Substitution Principle:
If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it. (Kahneman, p. 97)
System 1, if you recall, is the quick, dirty and parallel part of our brains that renders instant judgments without thinking about them in too much detail. In this case, the question that was asked was “what are the best careers for making a lot of money?” The question that actually got answered was “what careers have I come to associate with wealth?”
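The mechanism is simple enough to caricature in code. Here's a minimal sketch, assuming a toy two-system model; the function names and the substitution table are my own invention for illustration, not anything from Kahneman, and no claim about actual cognitive architecture is intended:

```python
# Toy model of the Substitution Principle. All names here
# (SUBSTITUTIONS, system_1_answer, etc.) are hypothetical.

# Map from a hard target question to an easier, associated question.
SUBSTITUTIONS = {
    "what are the best careers for making a lot of money?":
        "what careers have I come to associate with wealth?",
    "how happy are you with your life these days?":
        "what is my mood right now?",
}

def system_2_answer(question):
    """Slow, effortful reasoning; in this toy model it never finishes in time."""
    return None

def system_1_answer(question):
    """Fast associative lookup; always returns *something*."""
    return f"first association for {question!r}"

def answer(question):
    # Try to answer the hard question properly; if no satisfactory
    # answer arrives quickly, silently swap in the easier question
    # and answer that instead -- without ever flagging the swap.
    result = system_2_answer(question)
    if result is None:
        easier = SUBSTITUTIONS.get(question, question)
        result = system_1_answer(easier)
    return result

print(answer("what are the best careers for making a lot of money?"))
```

The important feature of the sketch is that the caller never learns a substitution happened: the return value looks like an answer to the original question.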
Here are some other examples of substitution that Kahneman gives:
- “How much would you contribute to save an endangered species?” becomes “How much emotion do I feel when I think of dying dolphins?”
- “How happy are you with your life these days?” becomes “What is my mood right now?”
- “How popular will the president be six months from now?” becomes “How popular is the president right now?”
- “How should financial advisors who prey on the elderly be punished?” becomes “How much anger do I feel when I think of financial predators?”
All things considered, this heuristic probably works pretty well most of the time. The easier questions are not meaningless: while not completely accurate, their answers are still generally correlated with the correct answer. And a lot of the time, that's good enough.
But I think that the Substitution Principle is also the mechanism by which most of our biases work. In The Curse of Identity, I wrote:
In each case, I thought I was working for a particular goal (become capable of doing useful Singularity work, advance the cause of a political party, do useful Singularity work). But as soon as I set that goal, my brain automatically and invisibly re-interpreted it as the goal of doing something that gave the impression of doing prestigious work for a cause (spending all my waking time working, being the spokesman of a political party, writing papers or doing something else few others could do).
As Anna correctly pointed out, I resorted to a signaling explanation here, but a signaling explanation may not be necessary. Let me reword that previous generalization: as soon as I set a goal, my brain asked itself how that goal might be achieved, realized that this was a difficult question, and substituted an easier one. So “how could I advance X?” became “what kinds of behaviors are commonly associated with advancing X?” That my brain happened to pick the most prestigious ways of advancing X might simply be because prestige is often correlated with achieving a lot.
Does this exclude the signaling explanation? Of course not. My behavior is probably still driven by signaling and status concerns; one mechanism by which this works might be that such considerations get disproportionate weight when the heuristic question is chosen. And a lot of the examples I gave in The Curse of Identity seem hard to justify without a signaling explanation. But signaling need not be the sole explanation. Our brains may just resort to poor heuristics a lot.
Some other biases and how the Substitution Principle is related to them (many of these are again borrowed from Thinking, Fast and Slow):
The Planning Fallacy: “How much time will this take?” becomes something like “How much time did it take me to get this far, and how many times should that be multiplied to reach completion?” (This doesn't take into account unexpected delays and interruptions, waning interest, etc.; the numeric sketch after this list makes the extrapolation explicit.)
The Availability Heuristic: “How common is this thing?” or “how frequently does this happen?” becomes “how easily do instances of this come to mind?”
Over-estimating your own share of household chores: “What fraction of the chores have I done?” becomes “how many chores do I remember doing, compared to the number I remember my partner doing?” (You will naturally remember more of the things you've done yourself than of the things somebody else did, possibly while you weren't even around.)
Being in an emotionally “cool” state and over-estimating your degree of control in an emotionally “hot” state (angry, hungry, sexually aroused, etc.): “How well could I resist doing X in that state?” becomes “how easy does resisting X feel right now?”
The Conjunction Fallacy: “What's the probability that Linda is a feminist bank teller?” becomes “how representative is Linda of my conception of feminists?”
People voting for politicians for seemingly irrelevant reasons: “How well would this person do his job as a politician?” becomes “how much do I like this person?” (A better heuristic than you might think, considering that we like people who like us, owe us favors, resemble us, and so on; in the ancestral environment, supporting the leader you liked the most was probably a pretty good proxy for supporting the leader who was most likely to aid you in return.)
And so on.
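To make one of these substitutions concrete, here is the numeric sketch promised under the Planning Fallacy item. The numbers, and the 1.5x correction factor in particular, are made-up assumptions standing in for data from similar past projects; the point is only that the substituted question amounts to a linear extrapolation that leaves out delays and interruptions:

```python
# Hypothetical numbers, purely illustrative.
days_elapsed = 10      # time spent so far
fraction_done = 0.25   # estimated share of the project completed

# The substituted question: "how far did this much time get me,
# and by how much should that be multiplied?"
naive_estimate = days_elapsed / fraction_done  # 40 days

# A reference-class correction (the 1.5x factor is an assumption,
# standing in for outcomes of similar past projects) makes room for
# unexpected delays, interruptions, and waning interest.
reference_class_factor = 1.5
adjusted_estimate = naive_estimate * reference_class_factor  # 60 days

print(naive_estimate, adjusted_estimate)
```

The naive answer is not meaningless (it is correlated with the true duration, as with all these substitutions), but it systematically answers an easier question than the one that was asked.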
The important point is to learn to recognize the situations where you're confronting a difficult problem, and your mind gives you an answer right away. If you don't have extensive expertise with the problem – or even if you do – it's likely that the answer you got wasn't actually the answer to the question you asked. So before you act, stop to consider what heuristic question your brain might actually have used, and whether it makes sense given the situation that you're thinking about.
This involves three skills: first recognizing a problem as a difficult one, then figuring out what heuristic you might have used, and finally coming up with a better solution. I intend to write more on how to turn those skills into trainable tasks, but if you have any ideas for how that might be achieved, let's hear them.
This post gives what could be called an "epistemic Hansonian explanation". A normal ("instrumental") Hansonian explanation treats humans as agents that possess hidden goals, whose actions follow closely from those goals, and explains their actual actions in terms of these hypothetical goals. People don't respond to easily available information about quality of healthcare, but (hypothetically) do respond to information about how prestigious a hospital is. Which goal does this behavior optimize for? Affiliation with prestigious institutions, apparently. Therefore, humans don't really care about health, they care about prestige instead. As Anna's recent post discusses, the problem with this explanation is that human behavior doesn't closely follow any coherent goals at all, so even if we posit that humans have goals, these goals can't be found by asking "What goals does the behavior optimize?"
Similarly in this instance, when you ask humans a question, you get an answer. Answers to the question "How happy are you with your life these days?" are (hypothetically) best explained by respondents' current mood. Which question are the responses good answers for? The question about the current mood. Therefore, the respondents don't really answer the question about their average happiness, they answer the question about their current mood instead.
The problem with these explanations seems to be the same: we try to fit the behavior (both actions and responses to questions) to the idea of humans as agents whose behavior closely optimizes the goals they really pursue, and whose answers closely address the questions they really consider. But there seems to be no reality to the (coherent) goals and beliefs (or the questions one actually considers) that fall out of such a descriptive model, even if coherent goals and beliefs do exist somewhere, too loosely connected to actions and anticipations to be apparent in them.
I am probably not qualified to make good guesses about this, but as an avid reader of O.B., I think Hanson would be among the first to agree with you that humans aren't subconsciously enacting coherent goals. The agent-with-hidden-goals model, like the Markov-model-style formalisms adopted in many situations, is just an expedient tool that may offer some correlation with what an agent will do in a given future situation. Affiliation with prestigious institutions, while probably not a coherent goal held over time by many people, does seem to...