The most commonly used introduction to signaling, promoted both by Robin Hanson and in The Art of Strategy, starts with college degrees. Suppose there are two kinds of people, smart people and stupid people; and suppose, with wild starry-eyed optimism, that the populace is split 50-50 between them. Smart people would add enough value to a company to be worth a $100,000 salary each year, but stupid people would only be worth $40,000. And employers, no matter how hard they try to come up with silly lateral-thinking interview questions like “How many ping-pong balls could fit in the Sistine Chapel?”, can't tell the difference between them.
Now suppose a certain college course, which costs $50,000, passes all smart people but flunks half the stupid people. A strategic employer might declare a policy of hiring (for a one-year job; let's keep this model simple) graduates at $100,000 and non-graduates at $40,000.
Why? Consider the thought process of a smart person when deciding whether or not to take the course. She thinks “I am smart, so if I take the course, I will certainly pass. Then I will make an extra $60,000 at this job. So my costs are $50,000, and my benefits are $60,000. Sounds like a good deal.”
The stupid person, on the other hand, thinks: “As a stupid person, if I take the course, I have a 50% chance of passing and making $60,000 extra, and a 50% chance of failing and making $0 extra. My expected benefit is $30,000, but my expected cost is $50,000. I'll stay out of school and take the $40,000 salary for non-graduates.”
...assuming that stupid people all know they're stupid, and that they're all perfectly rational experts at game theory, to name two of several dubious premises here. Yet despite its flaws, this model does give some interesting results. For example, it suggests that rational employers will base decisions upon - and rational employees will enroll in - college courses, even if those courses teach nothing of any value. So an investment bank might reject someone who had no college education, even while hiring someone who studied Art History, a subject not known for its relevance to derivative trading.
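The decision calculus in this toy model is simple enough to sketch in code. This is a hypothetical illustration using only the figures from the example above (a $50,000 course and a $60,000 salary premium); the function name is invented for the sketch:

```python
# Toy version of the education-signaling model above.
# All figures come from the example: $50,000 course, $60,000 salary premium.
COURSE_COST = 50_000
SALARY_PREMIUM = 60_000  # graduate pay ($100,000) minus non-graduate pay ($40,000)

def expected_gain_from_course(p_pass):
    """Expected net benefit of taking the course, given a pass probability."""
    return p_pass * SALARY_PREMIUM - COURSE_COST

smart = expected_gain_from_course(1.0)   # smart people always pass
stupid = expected_gain_from_course(0.5)  # stupid people pass half the time

print(smart)   # positive: the course is worth taking
print(stupid)  # negative: better to stay out and take the $40,000 job
```

The signs of the two results are what drive the separating equilibrium: only the smart find the course worth its cost.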
We'll return to the specific example of education later, but for now it is more important to focus on the general definition that X signals Y if X is more likely to be true when Y is true than when Y is false. Amoral self-interested agents after the $60,000 salary bonus for intelligence, whether they are smart or stupid, will always say “Yes, I'm smart” if you ask them. So saying “I am smart” is not a signal of intelligence. Having a college degree is a signal of intelligence, because a smart person is more likely to get one than a stupid person.
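The likelihood-ratio definition can be made concrete with a quick Bayesian calculation using the model's numbers. For illustration this assumes stupid people enroll at all; in equilibrium they stay out, which makes the signal even stronger:

```python
# "Degree" signals "smart" because P(degree | smart) > P(degree | stupid).
# Numbers are taken from the toy model above (50-50 population split).
p_smart = 0.5                 # prior probability a random applicant is smart
p_degree_given_smart = 1.0    # smart people always pass the course
p_degree_given_stupid = 0.5   # stupid people pass half the time (if they enroll)

def posterior_smart_given_degree():
    """Bayes' rule: how much seeing a degree shifts belief toward 'smart'."""
    joint_smart = p_smart * p_degree_given_smart
    joint_stupid = (1 - p_smart) * p_degree_given_stupid
    return joint_smart / (joint_smart + joint_stupid)

print(posterior_smart_given_degree())  # about 0.667, up from the 0.5 prior
```

By contrast, "I am smart" has the same probability of being uttered by both types, so the posterior after hearing it equals the prior: no information is conveyed.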
Life frequently throws us into situations where we want to convince other people of something. If we are employees, we want to convince bosses we are skillful, honest, and hard-working. If we run the company, we want to convince customers we have superior products. If we are on the dating scene, we want to show potential mates that we are charming, funny, wealthy, interesting, you name it.
In some of these cases, mere assertion goes a long way. If I tell my employer at a job interview that I speak fluent Spanish, I'll probably get asked to talk to a Spanish-speaker at my job, will either succeed or fail, and if I fail will have a lot of questions to answer and probably get fired - or at the very least be in more trouble than if I'd just admitted I didn't speak Spanish to begin with. Here society and its system of reputational penalties help turn mere assertion into a credible signal: asserting I speak Spanish is costlier if I don't speak Spanish than if I do, and so is believable.
In other cases, mere assertion doesn't work. If I'm at a seedy bar looking for a one-night stand, I can tell a girl I'm totally a multimillionaire and feel relatively sure I won't be found out until after that one night - and so in this case she would be naive to believe me, unless I did something only a real multimillionaire could, like give her an expensive diamond necklace.
How expensive a diamond necklace, exactly? To absolutely prove I am a millionaire, only a million dollars worth of diamonds will do; $10,000 worth of diamonds could in theory come from anyone with at least $10,000. But in practice, people only care so much about impressing a girl at a seedy bar; if everyone cares about the same amount, the amount they'll spend on the signal depends mostly on their marginal utility of money, which in turn depends mostly on how much they have. Both a millionaire and a tenthousandaire can afford to buy $10,000 worth of diamonds, but only the millionaire can afford to buy $10,000 worth of diamonds on a whim. If in general people are only willing to spend 1% of their money on an impulse gift, then $10,000 is sufficient evidence that I am a millionaire.
But when the stakes are high, signals can get prohibitively costly. If a dozen millionaires are wooing Helen of Troy, the most beautiful woman in the world, and willing to spend arbitrarily much money on her - and if they all believe Helen will choose the richest among them - then if I only spend $10,000 on her I'll be outshone by a millionaire who spends the full million. Thus, if I want any chance with her at all, then even if I am genuinely the richest man around I might have to squander my entire fortune on diamonds.
This raises an important point: signaling can be really horrible. What if none of us are entirely sure how much Helen's other suitors have? It might be rational for all of us to spend everything we have on diamonds for her. Then twelve millionaires lose their fortunes, eleven of them for nothing. And this isn't some kind of wealth transfer - for all we know, Helen might not even like diamonds; maybe she locks them in her jewelry box after the wedding and never thinks about them again. It's about as economically productive as digging a big hole and throwing money into it.
If all twelve millionaires could get together beforehand and compare their wealth, and agree that only the wealthiest one would woo Helen, then they could all save their fortunes and the result would be exactly the same: Helen marries the wealthiest. If all twelve millionaires are remarkably trustworthy, maybe they can pull it off. But if any of them believe the others might lie about their wealth, or that one of the poorer men might covertly break their pact and woo Helen with gifts, then they've got to go through with the whole awful “everyone wastes everything they have on shiny rocks” ordeal.
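The waste in this scenario is easy to tally. Here is a toy accounting with twelve hypothetical fortunes (the figures are invented for illustration): the outcome, Helen marrying the richest suitor, is identical with or without the pact; only the total destroyed differs.

```python
# Toy accounting for the Helen of Troy scenario above.
# Twelve suitors with invented fortunes from $1.0M to $1.55M.
fortunes = [1_000_000 + 50_000 * i for i in range(12)]

# Without a trusted pact, every suitor spends his entire fortune on diamonds.
waste_without_pact = sum(fortunes)

# With a trusted pact, the suitors simply compare wealth; nothing is spent.
waste_with_pact = 0

# Either way, the richest suitor wins Helen.
winner = max(fortunes)

print(winner)                                # same in both worlds
print(waste_without_pact - waste_with_pact)  # total destroyed without the pact
```

The pact changes nothing about who wins; its entire value is the $15.3 million of diamonds nobody has to buy.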
Examples of destructive signaling are not limited to hypotheticals. Even if one does not believe Jared Diamond's hypothesis that Easter Island civilization collapsed after chieftains expended all of their resources trying to out-signal each other by building larger and larger stone heads, one can look at Nikolai Roussanov's study on how the dynamics of signaling games in US minority communities encourage conspicuous consumption and prevent members of those communities from investing in education and other important goods.
The Art of Strategy even advances the surprising hypothesis that corporate advertising can be a form of signaling. When a company advertises during the Super Bowl or some other high-visibility event, it costs a lot of money. To be able to afford the commercial, the company must be pretty wealthy; which in turn means it probably sells popular products and isn't going to collapse and leave its customers in the lurch. And to want to afford the commercial, the company must be pretty confident in its product: advertising that you should shop at Wal-Mart is more profitable if you shop at Wal-Mart, love it, and keep coming back than if you're likely to go to Wal-Mart, hate it, and leave without buying anything. This signaling, too, can become destructive: if every other company in your industry is buying Super Bowl commercials, then none of them have a comparative advantage and they're in exactly the same relative position as if none of them bought Super Bowl commercials - throwing money away just as in the diamond example.
Most of us cannot afford a Super Bowl commercial or a diamond necklace, and less people may build giant stone heads than during Easter Island's golden age, but a surprising amount of everyday life can be explained by signaling. For example, why did about 50% of readers get a mental flinch and an overpowering urge to correct me when I used “less” instead of “fewer” in the sentence above? According to Paul Fussell's “Guide Through The American Class System” (ht SIAI mailing list), nitpicky attention to good grammar, even when a sentence is perfectly clear without it, can be a way to signal education, and hence intelligence and probably social class. I would not dare to summarize Fussell's guide here, but it shattered my illusion that I mostly avoid thinking about class signals, and instead convinced me that pretty much everything I do from waking up in the morning to going to bed at night is a class signal. On flowers:
Anyone imagining that just any sort of flowers can be presented in the front of a house without status jeopardy would be wrong. Upper-middle-class flowers are rhododendrons, tiger lilies, amaryllis, columbine, clematis, and roses, except for bright-red ones. One way to learn which flowers are vulgar is to notice the varieties favored on Sunday-morning TV religious programs like Rex Humbard's or Robert Schuller's. There you will see primarily geraniums (red are lower than pink), poinsettias, and chrysanthemums, and you will know instantly, without even attending to the quality of the discourse, that you are looking at a high-prole setup. Other prole flowers include anything too vividly red, like red tulips. Declassed also are phlox, zinnias, salvia, gladioli, begonias, dahlias, fuchsias, and petunias. Members of the middle class will sometimes hope to mitigate the vulgarity of bright-red flowers by planting them in a rotting wheelbarrow or rowboat displayed on the front lawn, but seldom with success.
Seriously, read the essay.
In conclusion, a signal is a method of conveying information among not-necessarily-trustworthy parties by performing an action which is more likely or less costly if the information is true than if it is not true. Because signals are often costly, they can sometimes lead to a depressing waste of resources, but in other cases they may be the only way to believably convey important information.
Partial re-interpretation of: The Curse of Identity
Also related to: Humans Are Not Automatically Strategic, The Affect Heuristic, The Planning Fallacy, The Availability Heuristic, The Conjunction Fallacy, Urges vs. Goals, Your Inner Google, signaling, etc...
What are the best careers for making a lot of money?
Maybe you've thought about this question a lot, and have researched it enough to have a well-formed opinion. But the chances are that even if you hadn't, some sort of an answer popped into your mind right away. Doctors make a lot of money, maybe, or lawyers, or bankers. Rock stars, perhaps.
You probably realize that this is a difficult question. For one, there's the question of who we're talking about. One person's strengths and weaknesses might make them more suited for a particular career path, while for another person, another career is better. Second, the question is not clearly defined. Is a career with a small chance of making it rich and a large chance of remaining poor a better option than a career with a large chance of becoming wealthy but no chance of becoming rich? Third, whoever is asking this question probably does so because they are thinking about what to do with their lives. So you probably don't want to answer on the basis of what career lets you make a lot of money today, but on the basis of which one will do so in the near future. That requires tricky technological and social forecasting, which is quite difficult. And so on.
Yet, despite all of these uncertainties, some sort of an answer probably came to your mind as soon as you heard the question. And if you hadn't considered the question before, your answer probably didn't take any of the above complications into account. It's as if your brain, while generating an answer, never even considered them.
The thing is, it probably didn't.
Daniel Kahneman, in Thinking, Fast and Slow, extensively discusses what I call the Substitution Principle:
If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it. (Kahneman, p. 97)
System 1, if you recall, is the quick, dirty and parallel part of our brains that renders instant judgements, without thinking about them in too much detail. In this case, the actual question that was asked was “what are the best careers for making a lot of money”. The question that was actually answered was “what careers have I come to associate with wealth”.
Here are some other examples of substitution that Kahneman gives:
So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market?
I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work. However, by persistently trying to do so, and presenting yourself with enough suffering at your failure to do it, you get to feel as if you are that sort of person without having to actually do the work. This is actually a pretty optimal solution to the problem, if you think about it. (Or rather, if you DON'T think about it!) -- PJ Eby
I have become convinced that problems of this kind are the number one problem humanity has. I'm also pretty sure that most people here, no matter how much they've been reading about signaling, still fail to appreciate the magnitude of the problem.
Here are two major screw-ups and one narrowly averted screw-up that I've been guilty of. See if you can find the pattern.
- When I began my university studies back in 2006, I felt strongly motivated to do something about Singularity matters. I genuinely believed that this was the most important thing facing humanity, and that it needed to be urgently taken care of. So in order to become able to contribute, I tried to study as much as possible. I had had trouble with procrastination, and so, in what has to be one of the most idiotic and ill-thought-out acts of self-sabotage possible, I taught myself to feel guilty whenever I was relaxing and not working. Combine an inability to properly relax with an attempted course load that was twice the university's recommended pace, and you can guess the results: after a year or two, I had an extended burnout that I still haven't fully recovered from. I ended up completing my Bachelor's degree in five years, which is the official target time for doing both your Bachelor's and your Master's.
- A few years later, I became one of the founding members of the Finnish Pirate Party, and on the basis of some writings the others thought were pretty good, got myself elected as the spokesman. Unfortunately – and as I should have known before taking up the post – I was a pretty bad choice for this job. I'm good at expressing myself in writing, and when I have the time to think. I hate talking with strangers on the phone, find it distracting to look people in the eyes when I'm talking with them, and have a tendency to start a sentence over two or three times before hitting on a formulation I like. I'm also bad at thinking quickly on my feet and coming up with snappy answers in live conversation. The spokesman task involved things like giving quick statements to reporters ten seconds after I'd been woken up by their phone call, and live interviews where I had to reply to criticisms so foreign to my thinking that they would never have occurred to me naturally. I was pretty terrible at the job, and finally delegated most of it to other people until my term ran out – though not before I'd already done noticeable damage to our cause.
- Last year, I was a Visiting Fellow at the Singularity Institute. At one point, I ended up helping Eliezer in writing his book. Mostly this involved me just sitting next to him and making sure he got writing done while I surfed the Internet or played a computer game. Occasionally I would offer a suggestion if asked. Although I did not actually do much, the multitasking required still made me unable to spend this time productively myself, and for some reason it always left me tired the next day. I was somewhat unhappy with this, feeling that I was doing something anyone could do. Eventually Anna Salamon pointed out to me that maybe this was something I was more capable of doing than others, exactly because so many people would feel that “anyone” could do this and thus would prefer to do something else.
It may not be immediately obvious, but all three examples have something in common. In each case, I thought I was working for a particular goal (become capable of doing useful Singularity work, advance the cause of a political party, do useful Singularity work). But as soon as I set that goal, my brain automatically and invisibly re-interpreted it as the goal of doing something that gave the impression of doing prestigious work for a cause (spending all my waking time working, being the spokesman of a political party, writing papers or doing something else few others could do). "Prestigious work" could also be translated as "work that really convinces others that you are doing something valuable for a cause".
This is the fourth part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.
In the previous post, Strategic ignorance and plausible deniability, we discussed some ways by which people might have modules designed to keep them away from certain kinds of information. These arguments were relatively straightforward.
The next step up is the hypothesis that our "press secretary module" might be designed to contain information that is useful for certain purposes, even if other modules have information that not only conflicts with this information, but is also more likely to be accurate. That is, some modules are designed to acquire systematically biased - i.e. false - information, including information that other modules "know" is wrong.
ETA: As stated below, criticizing beliefs is trivial in principle: either they were arrived at with an approximation to Bayes' rule, starting with a reasonable prior and then updated with actual observations, or they weren't. Subsequent conversation made it clear that criticizing behavior is also trivial in principle, since someone is either taking the action that they believe will best suit their preferences, or not. Finally, criticizing preferences became trivial too: the relevant question is "Does/will agent X behave as though they have preferences Y", and that's a belief, so go back to Bayes' rule and a reasonable prior. So the entire issue that this post was meant to solve has evaporated, in my opinion. Here's the original article, in case anyone is still interested:
Pancritical rationalism is a fundamental value in Extropianism that has only been mentioned in passing on LessWrong. I think it deserves more attention here. It's an approach to epistemology (the question of "How do we know what we know?") that avoids the contradictions inherent in some of the alternative approaches.
The fundamental source document for it is William Bartley's Retreat to Commitment. He describes three approaches to epistemology, along with the dissatisfying aspects of the other two:
- Nihilism. Nothing matters, so it doesn't matter what you believe. This path is self-consistent, but it gives no guidance.
- Justificationalism. Your belief is justified because it is a consequence of other beliefs. This path is self-contradictory. Eventually you'll go in circles trying to justify the other beliefs, or you'll find beliefs you can't justify. Justificationalism itself cannot be justified.
- Pancritical rationalism. You have taken the available criticisms for the belief into account and still feel comfortable with the belief. This path gives guidance about what to believe, although it does not uniquely determine one's beliefs. Pancritical rationalism can be criticized, so it is self-consistent in that sense.
Read on for a discussion about emotional consequences and extending this to include preferences and behaviors as well as beliefs.
Fifteen thousand years ago, our ancestors bred dogs to serve man. In merely 150 centuries, we shaped collies to herd our sheep and Pekingese to sit in our emperors' sleeves. Wild wolves can't understand us, but we teach their domesticated counterparts tricks for fun. And, most importantly of all, dogs get emotional pleasure out of serving their master. When my family's terrier runs to the kennel, she does so with blissful, self-reinforcing obedience.
When I hear amateur philosophers ponder the meaning of life, I worry humans suffer from the same embarrassing shortcoming.
It's not enough to find a meaningful cause. These monkeys want to look in the stars and see their lives' purpose described in explicit detail. They expect to comb through ancient writings and suddenly discover an edict reading "the meaning of life is to collect as many paperclips as possible" and then happily go about their lives as imperfect, yet fulfilled paperclip maximizers.
I'd expect us to shout "life is without mandated meaning!" with lungs full of joy. There are no rules we have to follow, only the consequences we choose for ourselves and our fellow humans. Huzzah!
But most humans want nothing more than to surrender to a powerful force. See Augustine's conception of freedom, the definition of the word Islam, or Popper's "The Open Society and Its Enemies." When they can't find one overwhelming enough, they furrow their brow and declare with frustration that life has no meaning.
This is part denunciation and part confession. At times, I've felt the same way. I worry man is a domesticated species.
WARNING: Beware of things that are fun to argue -- Eliezer Yudkowsky
Science has inexplicably failed to come up with a precise definition of "hipster", but from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream. But why would being deliberately uncool be cooler than being cool?
As previously discussed, in certain situations refusing to signal can be a sign of high status. Thorstein Veblen invented the term "conspicuous consumption" to refer to the showy spending habits of the nouveau riche, who unlike the established money of his day took great pains to signal their wealth by buying fast cars, expensive clothes, and shiny jewelry. Why was such flashiness common among new money but not old? Because the old money was so secure in their position that it never even occurred to them that they might be confused with poor people, whereas new money, with their lack of aristocratic breeding, worried they might be mistaken for poor people if they didn't make it blatantly obvious that they had expensive things.
The old money might have started off not buying flashy things for pragmatic reasons - they didn't need to, so why waste the money? But if F. Scott Fitzgerald is to be believed, the old money actively cultivated an air of superiority to the nouveau riche and their conspicuous consumption; not buying flashy objects became a matter of principle. This makes sense: the nouveau riche need to differentiate themselves from the poor, but the old money need to differentiate themselves from the nouveau riche.
This process is called countersignaling, and one can find its telltale patterns in many walks of life. Those who study human romantic attraction warn men not to "come on too strong", and this has similarities to the nouveau riche example. A total loser might come up to a woman without a hint of romance, promise her nothing, and demand sex. A more sophisticated man might buy roses for a woman, write her love poetry, wait on her every wish, et cetera; this signifies that he is not a total loser. But the most desirable men may deliberately avoid doing nice things for women in an attempt to signal they are so high status that they don't need to. The average man tries to differentiate himself from the total loser by being nice; the extremely attractive man tries to differentiate himself from the average man by not being especially nice.
In all three examples, people at the top of the pyramid end up displaying characteristics similar to those at the bottom. Hipsters deliberately wear the same clothes uncool people wear. Families with old money don't wear much more jewelry than the middle class. And very attractive men approach women with the same lack of subtlety a total loser would use.1
If politics, philosophy, and religion are really about signaling, we should expect to find countersignaling there as well.
In Alien Parasite Technical Guy, Phil Goetz argues that mental conflicts can be explained as a conscious mind (the "alien parasite") trying to take over from an unsuspecting unconscious.
Last year, Wei Dai presented a model (the master-slave model) with some major points of departure from Phil's: in particular, the conscious mind was a special-purpose subroutine and the unconscious had a pretty good idea what it was doing1. But Wei said at the beginning that his model ignored akrasia.
I want to propose an expansion and slight amendment of Wei's model so it includes akrasia and some other features of human behavior. Starting with the signaling theory implicit in Wei's writing, I'll move on to show why optimizing for signaling ability would produce behaviors like self-signaling and akrasia, speculate on why the same model would also promote some of the cognitive biases discussed here, and finish with even more speculative links between a wide range of conscious-unconscious conflicts.
The Signaling Theory of Consciousness
This model begins with the signaling theory of consciousness. In the signaling theory, the conscious mind is the psychological equivalent of a public relations agency. The mind-at-large (hereafter called U for “unconscious” and similar to Wei's “master”) has the socially unacceptable primate drives you would expect of a fitness-maximizing agent: sex, status, and survival. These are unsuitable for polite society, where only socially admirable values like true love, compassion, and honor are likely to win you friends and supporters. U could lie and claim to support the admirable values, but most people are terrible liars and society would probably notice.
So you wall off a little area of your mind (hereafter called C for “conscious” and similar to Wei's “slave”) and convince it that it has only admirable goals. C is allowed access to the speech centers. Now if anyone asks you what you value, C answers "Only admirable things like compassion and honor, of course!" and no one detects a lie because the part of the mind that's moving your mouth isn't lying.
This is a useful model because it replicates three observed features of the real world: people say they have admirable goals, they honestly believe on introspection that they have admirable goals, but they tend to pursue more selfish goals. But so far, it doesn't explain the most important question: why do people sometimes pursue their admirable goals and sometimes not?
Summary: see title
Much effort is spent (arguably wasted) by humans in a zero-sum game of signaling that they hold good attributes. Because humans have strong incentive to fake these attributes, they cannot simply inform each other that:
I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.
Or, even better:
I would cooperate with you if and only if (you would cooperate with me if and only if I would cooperate with you).
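That biconditional can be checked mechanically. A short truth-table sketch (an illustration, treating the conditionals as plain logical biconditionals rather than counterfactuals) shows the statement reduces to "the other party cooperates":

```python
# "I cooperate iff (you cooperate iff I cooperate)", checked over all cases.
from itertools import product

for me, you in product([True, False], repeat=2):
    # Inner biconditional: you cooperate iff I cooperate.
    inner = (you == me)
    # Full statement: I cooperate iff the inner biconditional holds.
    statement = (me == inner)
    print(me, you, statement)

# The statement comes out True exactly when `you` is True, regardless of `me`:
# as a plain truth table, it collapses to "you cooperate".
```

This is part of why the statement is interesting: read as a commitment about hypothetical behavior rather than a truth table, it is meant to enforce mutual cooperation, which the purely propositional reading cannot capture.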
The "status" hypothesis simply claims that we associate one another with a one-dimensional quantity: the perceived degree to which others' behavior can affect our well-being. And each of us behaves toward our peers according to our internally represented status mapping.
Imagine that, within your group, you're in a position where everyone wants to please you and no one can afford to challenge you. What does this mean for your behavior? It means you get to act selfish -- focusing on what makes you most pleased, and becoming less sensitive to lower-grade pleasure stimuli.
Now let's say you meet an outsider. They want to estimate your status, because it's a useful and efficient value to remember. And when they see you acting selfishly in front of others in your group, they will infer the lopsided balance of power.
In your own life, when you interact with someone who could affect your well-being, you do your best to act in a way that is valuable to them, hoping they will be motivated to reciprocate. The thing is, if an observer witnesses your unselfish behavior, it's a telltale sign of your lower status. And this scenario is so general, and so common, that most people learn to be very observant of others' deviations from selfishness.