[Link] Offense 101
From Julian Sanchez, a brilliant idea unlikely to be implemented:
American politics sometimes seems like a contest to see which group of partisans can take greater umbrage at the most recent outrageous remark from a member of the opposing tribe. As a mild countermeasure, I offer a modest proposal for American universities. All freshmen should be required to take a course called “Offense 101,” where the readings will consist of arguments from across the political and philosophical spectrum that some substantial proportion of the student body is likely to find offensive. Selections from The Bell Curve. Essays from one of the New Atheists and one of their opponents, and from hardcore pro-lifers and pro-choicers. Ward Churchill’s “little Eichmanns” monograph. Defenses of eugenics, torture, violent revolution, authoritarianism, aggressive censorship, and absolute free speech. Positive reviews of the Star Wars prequels. Assemble your own curriculum—there’s no shortage of material.
For each reading, students will have to make a good faith, unironic effort to reconstruct the offensive argument in its most persuasive form, marshaling additional supporting evidence and amending weak arguments to better support the author’s conclusion. Points deducted if an observer can tell the student doesn’t really agree with the position they’re defending.
Only after this phase is complete will students be allowed to begin rebutting the arguments. Anyone who thinks it’s relevant to point out that the argument is offensive (or bigoted, sexist, unpatriotic, fascistic, communistic, whatever) will receive a patronizing look from the professor that says: “Yes, obviously, did you not read the course title? Let’s move on.” Insofar as these labels are shorthand for an argument that certain categories of views are wrong and can be rejected as a class, the actual argument will have to be presented.
How To Have Things Correctly
I think people who are not made happier by having things either have the wrong things, or have them incorrectly. Here is how I get the most out of my stuff.
Money doesn't buy happiness. If you want to try throwing money at the problem anyway, you should buy experiences like vacations or services, rather than purchasing objects. If you have to buy objects, they should be absolute and not positional goods; positional goods just put you on a treadmill and you're never going to catch up.
Supposedly.
I think getting value out of spending money, owning objects, and having positional goods are, all three of them, skills that people often don't have naturally but can develop. I'm going to focus mostly on the middle skill: how to have things correctly.1
Raising the forecasting waterline (part 1)
Previously: Raising the waterline, see also: 1001 PredictionBook Nights (LW copy), Techniques for probability estimates
Low waterlines imply that it's relatively easy for a novice to outperform the competition. (In poker, as discussed in Nate Silver's book, the "fish" are those who can't master basic techniques such as folding when they have a poor hand, or calculating even roughly the expected value of a pot.) Does this apply to the domain of making predictions? It's early days, but it looks as if a smallish set of tools - a conscious status quo bias, respecting probability axioms when considering alternatives, considering reference classes, leaving yourself a line of retreat, detaching from sunk costs, and a few more - can at least place you in a good position.
New study on choice blindness in moral positions
Change blindness is the phenomenon whereby people fail to notice changes in scenery and whatnot if they're not directed to pay attention to them. There are countless videos online demonstrating this effect (one of my favorites here, by Richard Wiseman).
One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions, but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person who replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).
Subsequently a pair of Swedish researchers familiar with some sleight-of-hand magic conceived a new twist on this line of research, arguably even more audacious: have participants make a choice and quietly swap that choice with something else. People not only fail to notice the change, but confabulate reasons why they had preferred the counterfeit choice (video here). They called their new paradigm "Choice Blindness".
Just recently the same Swedish researchers published a new study that is even more shocking. Rather than demonstrating choice blindness by having participants choose between two photographs, they demonstrated the same effect with moral propositions. Participants completed a survey asking them to agree or disagree with statements such as "large scale governmental surveillance of e-mail and Internet traffic ought to be forbidden as a means to combat international crime and terrorism". When they reviewed their copy of the survey their responses had been covertly changed, but 69% failed to notice at least one of two changes, and when asked to explain their answers 53% argued in favor of what they falsely believed was their original choice, when they had previously indicated the opposite moral position (study here, video here).
Beyond the Reach of God
Followup to: The Magnitude of His Own Folly
Today's post is a tad gloomier than usual, as I measure such things. It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me. Those readers sympathetic to arguments like, "It's important to keep our biases because they help us stay happy," should consider not reading. (Unless they have something to protect, including their own life.)
So! Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future's vulnerability—a reluctance to accept that things could really turn out wrong. Not as the result of any explicit propositional verbal belief. More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.
Some would account this a virtue (zettai daijobu da yo -- "everything is definitely going to be all right"), and others would say that it's a thing necessary for mental health.
But we don't live in that world. We live in the world beyond the reach of God.
Why Don't People Help Others More?
As Peter Singer writes in his book The Life You Can Save: "[t]he world would be a much simpler place if one could bring about social change merely by making a logically consistent moral argument". Many people one encounters might agree that a social change movement is noble yet not want to do anything to promote it, or want to give more money to a charity yet refrain from doing so. Additional moralizing doesn't seem to do the trick. ...So what does?
Motivating people to altruism is relevant for the optimal philanthropy movement. For a start on the answer, like many things, I turn to psychology. Specifically, the psychology Peter Singer catalogues in his book.
A Single, Identifiable Victim
One of the most well-known motivations for helping others is a personal connection, which triggers empathy. When psychologists researching generosity paid participants to join a psychological experiment and then gave them the opportunity to donate to Save the Children, a global poverty-fighting organization, different groups were given different kinds of information.
One random group of participants was told "Food shortages in Malawi are affecting more than three million children", along with additional information about how the need for donations was very strong and how those donations could help stop the food shortages.
Another random group of participants were instead shown the photo of Rokia, a seven-year-old Malawian girl who is desperately poor. The participants were told that "her life will be changed for the better by your gift".
A third random group of participants was shown the photo of Rokia, told who she is and that "her life will be changed for the better", but ALSO given the general information about the famine, including that "food shortages [...] are affecting more than three million" -- a combination of both the previous groups.
Lastly, a fourth random group was shown the photo of Rokia, informed about her just as the other groups were, and then given information about another child, identified by name, and told that their donation would change this child's life for the better too.
It's All About the Person
Interestingly, the group that was told ONLY about Rokia gave the most money. The group that was told about both children reported feeling less overall emotion than those who only saw Rokia, and gave less money. The group that was told about both Rokia and the general famine information gave even less than that, followed by the group that only got the general famine information.1,2 It turns out that information about a single person was the most salient for creating an empathetic response and triggering a willingness to donate.1,2
This continues through additional studies. In another generosity experiment, one group of people was told that a single child needed a lifesaving medical treatment that costs $300K, and was given the opportunity to contribute towards this fund. A second random group of people was told that eight children needed a lifesaving treatment, and all of them would die unless $300K could be provided, and was given an opportunity to contribute. More people opted to donate toward the single child.3,4
This is the basis for why we're so willing to chase after lost miners or Baby Jessica no matter the monetary cost, but turn a blind eye to the mass unknown starving in the developing world. Indeed, the person doesn't even need to be particularly identified, though it does help. In another experiment, people asked by researchers to make a donation to Habitat for Humanity were more likely to do so if they were told that the family "has been selected" rather than that they "will be selected" -- even though all other parts of the pitch were the same, and the participants got no information about who the families actually were5.
The Deliberative and The Affective
Why is this the case? Researcher Paul Slovic thinks that humans have two different processes for deciding what to do. The first is an affective system that responds to emotion, rapidly processing images and stories and generating an intuitive feeling that leads to immediate action. The second is a deliberative system that draws on reasoning, and operates on words, numbers, and abstractions, which is much slower to generate action.6
To follow up, the Rokia experiment was done again, except yet another twist was added -- there were two groups, one told only about Rokia exactly as before, and one told only the generic famine information exactly as before. Within each group, half the group took a survey designed to arouse their emotions by asking them things like "When you hear the word 'baby' how do you feel?" The other half of both groups was given emotionally neutral questions, like math puzzles.
This time, the Rokia group again gave far more, and those within that group whose emotions had been aroused gave even more than those who heard about Rokia but had done the math puzzles. On the other side, those who heard the generic famine information showed no increase in donations regardless of how heightened their emotions were.1
Futility and Making a Difference
Imagine you're told that there are 3000 refugees at risk in a camp in Rwanda, and you could donate towards aid that would save 1500 of them. Would you do it? And how much would you donate?
Now this time imagine that you can still save 1500 refugees with the same amount of money, but the camp has 10000 refugees. In an experiment where these two scenarios were presented not as a thought experiment but as realities to two separate random groups, the group that heard of only 3000 refugees was more likely to donate, and donated larger amounts.7,8
Enter another quirk of our giving psychology, right or wrong: futility thinking. We think that if we're not making a sizable difference, it's not worth making any difference at all -- it will only be a drop in the ocean while the problem keeps raging on.
Am I Responsible?
People are also far less likely to help if they're with other people. In this experiment, students were invited to participate in a market research survey. When the researcher gave the students their questionnaire to fill out, she went into a back room separated from the office only by a curtain. A few minutes later, noises strongly suggested that she had climbed on a chair to get something from a high shelf and then fallen off it, loudly complaining that she couldn't feel or move her foot.
With only one student taking the survey, 70% stopped what they were doing and offered assistance. When there were two students taking the survey, however, this number dropped dramatically. Most strikingly, when the group was two students but one of them was a stooge who was in on the experiment and would never respond, the response rate of the real participant was only 7%.9
This one is known as diffusion of responsibility, better known as the bystander effect -- we help more often when we think it is our responsibility to do so, and -- again for right or for wrong -- we naturally look to others to see if they're helping before doing so ourselves.
What's Fair In Help?
It's clear that people value fairness, even to their own detriment. In a game called "the Ultimatum Game", one participant is given a sum of money by the researcher, say $10, and told they can split this money with an anonymous second player in any proportion they choose -- give them $10, give them $7, give them $5, give them nothing; everything is fair game. The catch is that the second player, after hearing of the split anonymously, gets to accept it or reject it. Should the split be accepted, both players walk away with the agreed amounts. But should the split be rejected, both players walk away with nothing.
A Fair Split
The economist, expecting ideally rational and perfectly self-interested players, predicts that the second player would accept any split that gets them money, since anything is better than nothing. And the first player, understanding this, would naturally offer $1 and keep $9 for himself. At no point are identities revealed, so reputation and retribution are no issue.
But the results turn out to be quite different -- the vast majority offer an equal split. Yet when an offer of $2 or less comes around, it is almost always rejected, even though $2 is better than nothing.10 This effect persists even when the game is played for thousands of dollars, and it persists across nearly all cultures.
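For concreteness, here's a minimal sketch in Python of the game's payoff structure (the function name and dollar figures are just for illustration):

```python
def ultimatum_payoffs(pot, offer, accepted):
    """Payoffs (proposer, responder) in a one-shot Ultimatum Game."""
    if not accepted:
        return (0, 0)            # a rejection destroys the whole pot
    return (pot - offer, offer)

# The economist's prediction: a purely self-interested responder accepts
# any positive offer, so the proposer keeps $9 of $10 and offers $1.
print(ultimatum_payoffs(10, 1, accepted=True))    # (9, 1)

# What actually happens: lowball offers get rejected, and both players
# walk away with nothing -- the responder pays $2 to punish unfairness.
print(ultimatum_payoffs(10, 2, accepted=False))   # (0, 0)
```

Rejection is costly for the responder too, which is exactly what makes the observed rejections look like a genuine taste for fairness rather than self-interest.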
Splitting and Anchoring in Charity
This sense of fairness persists into helping as well -- people generally have a strong tendency not to want to help more than the other people around them, and if they find themselves the only ones helping on a frequent basis, they start to feel like a "sucker". On the flipside, if others are doing more, they will follow suit.11,12,13
Those told the average donation to a charity nearly always give that amount, even when the "average" told to them is a lie, having secretly been increased or decreased. The effect can be replicated without lying -- those told about an above-average gift were far more likely to donate more, even attempting to match that gift.14,15 Overall, we tend to match the behavior of our reference class -- the people we identify with -- and this includes how much we help. We donate more when we believe others are donating more, and less when we believe others are donating less.
Challenging the Self-Interest Norm
But there's a way to break this cycle of futility, responsibility, and fairness -- challenge the norm by openly communicating about helping others. While many religious and secular traditions insist that the best giving is anonymous giving, this turns out to not always be the case. While there may be other reasons to give anonymously, don't forget the benefits of giving openly -- being open about helping inspires others to help, and can help challenge the norms of the culture.
Indeed, many organizations now exist to challenge the norms around donating and to create a culture where people give more. GivingWhatWeCan is a community of 230 people (including me!) who have all pledged to donate at least 10% of their income to organizations working on ending extreme poverty, and who submit statements proving so. BolderGiving has inspiring stories of over 100 people who all give at least 20% of their income, with a dozen giving over 90%! And these aren't all rich people; some of them are ordinary students.
Who's Willing to Be Altruistic?
While people are not saints, experiments have shown that people tend to grossly overestimate how self-interested other people are -- in one example, people estimated that males would overwhelmingly favor a piece of legislation to "slash research funding to a disease that affects only women", even though -- being male -- they themselves did not support such legislation.16
This also manifests itself in an expectation that people be "self-interested" in their philanthropic causes -- people expressed much stronger support for volunteers in Students Against Drunk Driving who themselves knew someone killed in a drunk-driving accident than for volunteers who had no such personal experience but simply thought it "a very important cause".17
Alexis de Tocqueville, echoing the early economists who expected $9/$1 splits in the Ultimatum Game, wrote in 1835 that "Americans enjoy explaining almost every act of their lives on the principle of self-interest".18 But this isn't always the case, and in challenging the norm, people make it more acceptable to be altruistic. It's not just for "goody two-shoes", and it's praiseworthy to be "too charitable".
A Bit of a Nudge
A pressing problem in getting people to help is organ donation -- surely no one is inconvenienced by having their organs donated after they have died. So why don't people sign up? And how could we get more people to sign up?
In Germany, only 12% of the population are registered organ donors. In nearby Austria, that number is 99.98%. Are people in Austria just less worried about what will happen to them after they die, or just that much more altruistic? It turns out the answer is far simpler -- in Germany you must put yourself on the register to become a potential donor (opt-in), whereas in Austria you are a potential donor unless you object (opt-out). While people may be, for right or for wrong, worried about the fate of their body after it is dead, they appear less likely to express these reservations in opt-out systems.19
Richard Thaler and Cass Sunstein argue in their book Nudge: Improving Decisions About Health, Wealth, and Happiness that we sometimes suck at making decisions in our own interest and could all do better with more favorable "defaults" -- and such defaults matter just as much for getting people to help others.
While opt-out organ donation is a huge deal, there's another similar idea -- opt-out philanthropy. Back before 2008, when the investment bank Bear Stearns still existed, it listed philanthropy as one of its guiding principles, fostering good citizenship and well-rounded individuals. To this end, it required its 1,000 highest-paid employees to donate 4% of their salaries and bonuses to non-profits, and to prove it with their tax returns. This resulted in more than $45 million in donations during 2006. Many employees described the requirement as "getting themselves to do what they wanted to do anyway".
Conclusions
So, according to this bit of psychology, what could we do to get other people to help more, besides moralize? Well, we have five key take-aways:
(1) present these people with a single and highly identifiable victim that they can help
(2) nudge them with a default of opt-out philanthropy
(3) be more open about our willingness to be altruistic and encourage other people to help
(4) make sure people understand the average level of helping around them, and
(5) instill a responsibility to help and an understanding that doing so is not futile.
Hopefully, with these tips and more, helping people can become just one of those things we do.
References
(Note: Links are to PDF files.)
1: D. A. Small, G. Loewenstein, and P. Slovic. 2007. "Sympathy and Callousness: The Impact of Deliberative Thought on Donations to Identifiable and Statistical Victims". Organizational Behavior and Human Decision Processes 102: p143-53
2: Paul Slovic. 2007. "If I Look at the Mass I Will Never Act: Psychic Numbing and Genocide". Judgment and Decision Making 2(2): p79-95.
3: T. Kogut and I. Ritov. 2005. "The 'Identified Victim' Effect: An Identified Group, or Just a Single Individual?". Journal of Behavioral Decision Making 18: p157-67.
4: T. Kogut and I. Ritov. 2005. "The Singularity of Identified Victims in Separate and Joint Evaluations". Organizational Behavior and Human Decision Processes 97: p106-116.
5: D. A. Small and G. Loewenstein. 2003. "Helping the Victim or Helping a Victim: Altruism and Identifiability". Journal of Risk and Uncertainty 26(1): p5-16.
6: Singer cites this from Paul Slovic, who in turn cites it from: Seymour Epstein. 1994. "Integration of the Cognitive and the Psychodynamic Unconscious". American Psychologist 49: p709-24. Slovic refers to the affective system as "experiential" and the deliberative system as "analytic". This is also related to Daniel Kahneman's popular book Thinking, Fast and Slow.
7: D. Fetherstonhaugh, P. Slovic, S. M. Johnson, and J. Friedrich. 1997. "Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing". Journal of Risk and Uncertainty 14: p283-300.
8: Daniel Kahneman and Amos Tversky. 1979. "Prospect Theory: An Analysis of Decision Under Risk." Econometrica 47: p263-91.
9: Bibb Latané and John Darley. 1970. The Unresponsive Bystander: Why Doesn't He Help?. New York: Appleton-Century-Crofts, p58.
10: Martin Nowak, Karen Page, and Karl Sigmund. 2000. "Fairness Versus Reason in the Ultimatum Game". Science 289: p1773-75.
11: Lee Ross and Richard E. Nisbett. 1991. The Person and the Situation: Perspectives of Social Psychology. Philadelphia: Temple University Press, p27-46.
12: Robert Cialdini. 2001. Influence: Science and Practice, 4th Edition. Boston: Allyn and Bacon.
13: Judith Lichtenberg. 2004. "Absence and the Unfond Heart: Why People Are Less Giving Than They Might Be". in Deen Chatterjee, ed. The Ethics of Assistance: Morality and the Distant Needy. Cambridge, UK: Cambridge University Press.
14: Jen Shang and Rachel Croson. Forthcoming. "Field Experiments in Charitable Contribution: The Impact of Social Influence on the Voluntary Provision of Public Goods". The Economic Journal.
15: Rachel Croson and Jen Shang. 2008. "The Impact of Downward Social Information on Contribution Decision". Experimental Economics 11: p221-33.
16: Dale Miller. 1999. "The Norm of Self-Interest". American Psychologist 54: p1053-60.
17: Rebecca Ratner and Jennifer Clarke. Unpublished. "Negativity Conveyed to Social Actors Who Lack a Personal Connection to the Cause".
18: Alexis de Tocqueville in J.P. Mayer ed., G. Lawrence, trans. 1969. Democracy in America. Garden City, N.Y.: Anchor, p546.
19: Eric Johnson and Daniel Goldstein. 2003. "Do Defaults Save Lives?". Science 302: p1338-39.
(This is an updated version of an earlier draft from my blog.)
The limits of introspection
Related to: Inferring Our Desires
The last post in this series suggested that we make up goals and preferences for other people as we go along, but ended with the suggestion that we do the same for ourselves. This deserves some evidence.
One of the most famous sets of investigations into this issue was Nisbett and Wilson's Verbal Reports on Mental Processes, the discovery of which I owe to another Less Wronger even though I can't remember who. The abstract says it all:
When people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes.
In short, people guess, and sometimes they get lucky. But where's the evidence?
Nisbett & Schachter, 1966. People were asked to get electric shocks to see how much shock they could stand (I myself would have waited to see if one of those see-how-much-free-candy-you'll-eat studies from the post last week was still open). Half the subjects were also given a placebo pill which they were told would cause heart palpitations, tremors, and breathing irregularities - the main problems people report when they get shocked. The hypothesis: people who took the pill would attribute much of the unpleasantness of the shock to the pill instead, and so tolerate more shock. This occurred right on schedule: people who took the pill tolerated four times as strong a shock as controls. When asked why they did so well, the twelve subjects in the experimental group came up with fabricated reasons; one example given was "I played with radios as a child, so I'm used to electricity." Only three of twelve subjects made a connection between the pill and their shock tolerance; when the researchers revealed the deception and their hypothesis, most subjects said it was an interesting idea and probably explained the other subjects, but it hadn't affected them personally.
Zimbardo et al, 1965. Participants in this experiment were probably pleased to learn there were no electric shocks involved, right up until the point where the researchers told them they had to eat bugs. In one condition, a friendly and polite researcher made the request; in another, a surly and arrogant researcher asked. Everyone ate the bug (experimenters can be pretty convincing), but only the group accosted by the unpleasant researcher claimed to have liked it. This confirmed the team's hypothesis: the nice-researcher group would know why they ate the bug - to please their new best friend - but the mean-researcher group would either have to admit it was because they're pushovers, or explain it by saying they liked eating bugs. When asked after the experiment why they were so willing to eat the bug, they said things like "Oh, it's just one bug, it's no big deal." When presented with the idea of cognitive dissonance, they once again agreed it was an interesting idea that probably affected some of the other subjects but of course not them.
Maier, 1931. Subjects were placed in a room with several interesting tools and asked to come up with as many solutions as possible to a puzzle about tying two cords together. One end of each cord was tied to the ceiling, and when the subject was holding on to one cord they couldn't reach the other. A few solutions were obvious, such as tying an extension cord to each, but the experiment involved a more complicated solution - tying a weight to a cord and using it as a pendulum to bring it into reach of the other. Subjects were generally unable to come up with this idea on their own in any reasonable amount of time, but when the experimenter, supposedly in the process of observing the subject, "accidentally" brushed up against one cord and set it swinging, most subjects were able to develop the solution within 45 seconds. However, when the experimenter asked immediately afterwards how they came up with the pendulum idea, the subjects were completely unable to recognize the experimenter's movement as the cue, and instead came up with completely unrelated ideas and invented thought processes, some rather complicated. After what the study calls "persistent probing", less than a third of the subjects mentioned the role of the experimenter.
Latane & Darley, 1970. This is the famous "bystander effect", where people are less likely to help when there are others present. The researchers asked subjects in bystander effect studies what factors influenced their decision not to help; the subjects gave many, but didn't mention the presence of other people.
Nisbett & Wilson, 1977. Subjects were primed with lists of words all relating to an unlisted word (eg "ocean" and "moon" to elicit "tide"), and then asked the name of a question, one possible answer to which involved the unlisted word (eg "What's your favorite detergent?" "Tide!"). The experimenters confirmed that many more people who had been primed with the lists gave the unlisted answer than control subjects (eg more people who had memorized "ocean" and "moon" gave Tide as their favorite detergent). Then they asked subjects why they had chosen their answer, and the subjects generally gave totally unrelated responses (eg "I love the color of the Tide box" or "My mother uses Tide"). When the experiment was explained to subjects, only a third admitted that the words might have affected their answer; the rest kept insisting that Tide was really their favorite. Then they repeated the process with several other words and questions, continuing to ask if the word lists influenced answer choice. The subjects' answers were effectively random - sometimes they believed the words didn't affect them when statistically they probably did, other times they believed the words did affect them when statistically they probably didn't.
Nisbett & Wilson, 1977. Subjects in a department store were asked to evaluate different articles of clothing in a line. As usually happens in this sort of task, people disproportionately chose the rightmost object (four times as often as the leftmost), no matter which object was on the right; this is technically referred to as a "position effect". The customers were asked to justify their choices and were happy to do so based on different qualities of the fabric et cetera; none said their choice had anything to do with position, and the experimenters dryly mention that when they asked the subjects if this was a possibility, "virtually all subjects denied it, usually with a worried glance at the interviewer suggesting they felt that they...were dealing with a madman".
Nisbett & Wilson, 1977. Subjects watched a video of a teacher with a foreign accent. In one group, the video showed the teacher acting kindly toward his students; in the other, it showed the teacher being strict and unfair. Subjects were asked to rate how much they liked the teacher, and also how much they liked his appearance and accent, which were the same across both groups. Because of the halo effect, students who saw the teacher acting nice thought he was attractive with a charming accent; people who saw the teacher acting mean thought he was ugly with a harsh accent. Then subjects were asked whether how much they liked the teacher had affected how much they liked the appearance and accent. They generally denied any halo effect, and in fact often insisted that part of the reason they hated the teacher so much was his awful clothes and annoying accent - the same clothes and accent which the nice-teacher group said were part of the reason they liked him so much!
There are about twice as many studies listed in the review article itself, but the trend is probably getting pretty clear. In some studies, like the bug-eating experiment, people perform behaviors and, when asked why they performed the behavior, guess wrong. Their true reasons for the behavior are unclear to them. In others, like the clothes position study, people make a choice, and when asked what preferences caused the choice, guess wrong. Again, their true reasons are unclear to them.
Nisbett and Wilson add that when they ask people to predict how they would react to the situations in their experiments, people "make predictions that in every case were similar to the erroneous reports given by the actual subjects." In the bystander effect experiment, outsiders predict the presence or absence of others wouldn't affect their ability to help, and subjects claim (wrongly) that the presence or absence of others didn't affect their ability to help.
In fact, it goes further than this. In the word-priming study (remember? The one with Tide detergent?) Nisbett and Wilson asked outsiders to predict which sets of words would change answers to which questions (would hearing "ocean" and "moon" make you pick Tide as your favorite detergent? Would hearing "Thanksgiving" make you pick Turkey as a vacation destination?). The outsiders' guesses correlated not at all with which words genuinely changed answers, but very much with which words the subjects guessed had changed their answers. Perhaps the subjects' answers looked a lot like the outsiders' answers because both were engaged in the same process: guessing blindly.
These studies suggest that people do not have introspective access to the processes that generate their behavior. They guess their preferences, justifications, and beliefs by inferring the most plausible rationale for their observed behavior, but are unable to make these guesses qualitatively better than outside observers can. This supports the view presented in the last few posts: that mental processes are the results of opaque preferences, and that our own "introspected" goals and preferences are a product of the same machinery that infers goals and preferences in others in order to predict their behavior.
Bayes for Schizophrenics: Reasoning in Delusional Disorders
Related to: The Apologist and the Revolutionary, Dreams with Damaged Priors
Several years ago, I posted about V.S. Ramachandran's 1996 theory explaining anosognosia through an "apologist" and a "revolutionary".
Anosognosia, a condition in which extremely sick patients mysteriously deny their sickness, occurs with right-sided brain injury but not left-sided brain injury. It can be extraordinarily strange: for example, in one case, a woman whose left arm was paralyzed insisted she could move her left arm just fine, and when her doctor pointed out her immobile arm, she claimed that was her daughter's arm even though it was obviously attached to her own shoulder. Anosognosia can be temporarily alleviated by squirting cold water into the patient's left ear canal, after which the patient suddenly realizes her condition but later loses awareness again and reverts to the bizarre excuses and confabulations.
Ramachandran suggested that the left brain is an "apologist", trying to justify existing theories, and the right brain is a "revolutionary" which changes existing theories when conditions warrant. If the right brain is damaged, patients are unable to change their beliefs; so when a patient's arm works fine until a right-brain stroke, the patient cannot discard the hypothesis that their arm is functional, and can only use the left brain to try to fit the facts to their belief.
In the almost twenty years since Ramachandran's theory was published, new research has kept some of the general outline while changing many of the specifics in the hopes of explaining a wider range of delusions in neurological and psychiatric patients. The newer model acknowledges the left-brain/right-brain divide, but adds some new twists based on the Mind Projection Fallacy and the brain as a Bayesian reasoner.
Bargaining and Auctions
Some people have things. Other people want them. Economists agree that the eventual price will be set by supply and demand, but both parties have tragically misplaced their copies of the Big Book Of Levels Of Supply And Demand For All Goods. They're going to have to decide on a price by themselves.
When the transaction can be modeled by the interaction of one seller and one buyer, this kind of decision usually looks like bargaining. When it's best modeled as one seller and multiple buyers (or vice versa), the decision usually looks like an auction. Many buyers and many sellers produce a marketplace, but this is complicated and we'll stick to bargains and auctions for now.
Simple bargains bear some similarity to the Ultimatum Game. Suppose an antique dealer has a table she values at $50, and I go to the antique store and fall in love with it, believing it will add $400 worth of classiness to my room. The dealer should never sell for less than $50, and I should never buy for more than $400, but any value in between would benefit both of us. More specifically, it would give us a combined $350 profit. The remaining question is how to divide that $350 pot.
If I make an offer to buy at $60, I'm proposing to split the pot "$10 for you, $340 for me". If the dealer makes a counter-offer of $225, she's offering "$175 for you, $175 for me" - or an even split.
Each round of bargaining resembles the Ultimatum Game because one player proposes to split a pot, and the other player accepts or rejects. If the other player rejects the offer (for example, the dealer refuses to sell it for $60) then the deal falls through and neither of us gets any money.
But bargaining is unlike the Ultimatum Game for several reasons. First, neither player is the designated "offer-maker"; either player may begin by making an offer. Second, the game doesn't end after one round; if the dealer rejects my offer, she can make a counter-offer of her own. Third, and maybe most important, neither player is exactly sure about the size of the pot: I don't walk in knowing that the dealer bought the table for $50, and I may not really be sure I value the table at $400.
Our intuition tells us that the fairest method is to split the profits evenly at a price of $225. This number forms a useful Schelling point (remember those?) that prevents the hassle of further bargaining.
The Art of Strategy (see the beginning of Ch. 11) includes a proof that an even split is the rational choice under certain artificial assumptions. Imagine a store selling souvenirs for the 2012 Olympics. It makes $1000/day on each of the sixteen days the Olympics are going on. Unfortunately, the day before the Olympics, the workers decide to strike; the store will make no money without workers, and there isn't enough time to hire scabs.
Suppose Britain has some very strange labor laws that mandate the following negotiation procedure: on each odd numbered day of the Olympics, the labor union representative will approach the boss and make an offer; the boss can either accept it or reject it. On each even numbered day, the boss makes the offer to the labor union.
So if the negotiations were to drag on to the sixteenth and last day of the Olympics, on that even-numbered day the boss would approach the labor union rep. They're both the sort of straw man rationalists who would take 99-1 splits on the Ultimatum Game, so she offers the labor union rep $1 of the $1000. Since it's the last day of the Olympics and she's a straw man rationalist, the rep accepts.
But on the fifteenth day of the Olympics, the labor union rep will approach the boss. She knows that if no deal is struck today, she'll end up with $1 and the boss will end up with $999. She has to convince the boss to accept a deal on the fifteenth day instead of waiting until the sixteenth. So she offers $1 of the profits from the fifteenth day to the boss, with the labor union keeping the rest; now their totals are $1000 for the workers, $1000 for the boss. Since $1000 is better than $999, the boss agrees to these terms and the strike is ended on the fifteenth day.
We can see by this logic that on odd numbered days the boss and workers get the same amount, and on even numbered days the boss gets more than the workers, but the ratio converges to 1:1 as the length of the negotiations increases. If they were negotiating an indefinite contract, then even if the boss made the first move we might expect her to offer an even split.
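If you'd rather check the backward induction than trust it, here's a minimal sketch in Python (the function name and the accept-anything-that-beats-waiting-by-$1 rule are my reading of the book's assumptions, not anything from the book itself):

```python
def strike_split(total_days=16, daily_profit=1000):
    """Backward-induct the Olympics strike negotiation (a toy model).

    On odd days the union proposes, on even days the boss; each proposer
    offers the other side $1 more than what waiting would get them and
    keeps the rest of the profit still at stake.
    Returns (union_total, boss_total) for a deal struck on day 1.
    """
    union, boss = 0, 0                    # payoffs if talks never end
    for day in range(total_days, 0, -1):  # work backwards from the last day
        pot = (total_days - day + 1) * daily_profit  # profit still at stake
        if day % 2 == 1:                  # union proposes...
            boss += 1                     # ...just beating the boss's continuation value
            union = pot - boss
        else:                             # boss proposes
            union += 1
            boss = pot - union
    return union, boss

print(strike_split(16))  # (8000, 8000): an even split of the $16,000
print(strike_split(2))   # (1000, 1000): the last-two-days example from the text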
So both some intuitive and some mathematical arguments lead us to converge on this idea of an even split of the sort that gives us the table for $225. But if I want to be a “hard bargainer” - the kind of person who manages to get the table for less than $225 - I have a couple of things I could try.
I could deceive the seller as to how much I valued the table. This is a pretty traditional bargaining tactic: “That old piece of junk? I'd be doing you a favor for taking it off your hands.” Here I'm implicitly claiming that the dealer must have paid less than $50, and that I would get less than $400 worth of value. If the dealer paid $20 and I'd only value it to the tune of $300, then splitting the profit evenly would mean a final price of $160. The dealer could then be expected to counter my move with her own claim as to the table's value: “$160? Do I look like I was born yesterday? This table was old in the time of the Norman Conquest! Its wood comes from a tree that grows on an enchanted island in the Freptane Sea which appears for only one day every seven years!” The final price might be determined by how plausible we each considered the other's claims.
Or I could rig the Ultimatum Game. Used car dealerships are notorious for adding on “extras” after you've agreed on a price over the phone (“Well yes, we agreed the car was $5999, but if you want a steering wheel, that costs another $200.”) Somebody (possibly an LWer?) proposed showing up to the car dealership without any cash or credit cards, just a check made out for the agreed-upon amount; the dealer now has no choice but to either take the money or forget about the whole deal. In theory, I could go to the antique dealer with a check made out for $60 and she wouldn't have a lot of options (though do remember that people usually reject ultimata of below about 70-30). The classic bargaining tactic of “I am but a poor chimney sweep with only a few dollars to my name and seven small children to feed and I could never afford a price above $60” seems closely related to this strategy.
And although we're still technically talking about transactions with only one buyer and seller, the mere threat of another seller can change the balance of power drastically. Suppose I tell the dealer I know of another dealer who sells modern art for a fixed price of $300, and that the modern art would add exactly as much classiness to my room as this antique table - that is, I only want one of the two and I'm indifferent between them. Now we're no longer talking about coming up with a price between $50 and $400 - anything over $300 and I'll reject it and go to the other guy. Now we're talking about splitting the $250 profit between $50 and $300, and if we split it evenly I should expect to pay $175.
(Why not $299? After all, the dealer knows $299 is better than my other offer. Because we're still playing the Ultimatum Game, that's why. And if it were $299, then having a second option - art that I like as much as the table - would actually make my bargaining position worse - after all, I was getting the table for $225 before.)
Negotiation gurus call this backup option the BATNA (“Best Alternative To Negotiated Agreement”) and consider it a useful thing to have. If only one participant in the negotiation has a BATNA greater than zero, that person is less desperate, needs the agreement less, and can hold out for a better deal - just as my $300 art allowed me to lower the price of the table from $225 to $175.
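The whole model so far fits in a few lines. A sketch (assuming, as above, that both sides agree to split the surplus evenly over whatever bargaining range their BATNAs leave open; the function name is mine):

```python
def even_split_price(seller_floor, buyer_ceiling):
    """Price that splits the bargaining surplus evenly (a toy model).

    seller_floor: the least the seller will accept -- her value for the
        item, or her best alternative deal if that pays more.
    buyer_ceiling: the most the buyer will pay -- his value for the item,
        or the cost of his best alternative if that costs less.
    """
    assert seller_floor <= buyer_ceiling, "no mutually beneficial price exists"
    return (seller_floor + buyer_ceiling) / 2

print(even_split_price(50, 400))  # 225.0: the table with no outside options
print(even_split_price(50, 300))  # 175.0: my $300 art BATNA caps the range
```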
This “one buyer, one seller” model is artificial, but from here we can start to see how the real-world existence of other buyers and sellers serves as a BATNA for each party, and how such negotiations eventually create the supply and demand of the marketplace.
The remaining case is one seller and multiple buyers (or vice versa). Here the seller's BATNA is “sell it to the other guy”, and so a successful buyer must beat the other guy's price. In practice, this takes the form of an auction (why is this different than the previous example? Partly because in the previous example, we were comparing a negotiable commodity - the table - to a fixed price commodity - the art.)
How much should you bid at an auction? In the so-called English auction (the classic auction where a crazy man stands at the front shouting “Eighty!!! Eighty!!! We have eighty!!! Do I hear eighty-five?!? Eighty-five?!? Eighty-five to the man in the straw hat!!! Do I hear ninety?!?”) the answer should be pretty obvious: keep bidding infinitesimally more than the last guy until you reach your value for the product, then stop. For example, with the $400 table, keep bidding until the price approaches $400.
But what about a sealed-bid auction, where everyone hands the auctioneer their bid and the auctioneer gives the product to the highest bidder? Or what about the so-called “Dutch auction”, where the auctioneer starts high and goes lower until someone bites (“A hundred?!? Anyone for a hundred?!? No?!? Ninety-five?!? Anyone for...yes?!? Sold for ninety-five to the man in the straw hat!!!”)?
The rookie mistake is to bid the amount you value the product. Remember, economists define “the amount you value the product” as “the price at which you would be indifferent between having the product and just keeping the money”. If you go to an auction planning to bid your true value, you should expect to get absolutely zero benefit out of the experience. Instead, you should bid infinitesimally more than what you predict the next highest bidder will pay, as long as this is below your value.
Thus, the auction beloved by economists as perhaps the purest of auction forms is the Vickrey auction, in which everyone submits a sealed bid, the highest bidder wins, and she pays the amount of the second-highest bid. This auction has a very elegant property: the dominant strategy is to bid your true value. Why?
Suppose you value a table at $400. If you try to game the system by bidding $350 instead of $400, you may lose out and can at best break even. Why? Because if the highest other bid was above $400, you wouldn't win the table in either case, and your ploy profits you nothing. And if the highest other bid was between $350 and $400 (let's say $375), now you lose the table and make $0 profit, as opposed to the $25 profit you would have made if you had bid your true value of $400, won, and paid the second-highest bid of $375. And if everyone else is below $350 (let's say $300) then you would have paid $300 in either case, and again your ploy profits you nothing. Bid above your true valuation (let's say $450) and you face similar consequences: either you wouldn't have gotten the table anyway, you get the table for the same amount as before, or you get the table for a value between $400 and $450 and now you're taking a loss.
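That case analysis is mechanical enough to check by brute force. A minimal sketch (the helper name and the grid of rival bids are mine):

```python
def vickrey_profit(my_bid, my_value, rival_bid):
    """My profit in a second-price auction; ties assumed to go against me."""
    if my_bid <= rival_bid:
        return 0                     # I lose: no table, no payment
    return my_value - rival_bid      # I win and pay the second-highest bid

# Sweep the rival's bid across a range and compare bidding my true value
# ($400) against shading down ($350) or padding up ($450).
MY_VALUE = 400
for rival_bid in range(250, 551, 25):
    truthful = vickrey_profit(400, MY_VALUE, rival_bid)
    shaded = vickrey_profit(350, MY_VALUE, rival_bid)
    padded = vickrey_profit(450, MY_VALUE, rival_bid)
    assert truthful >= shaded and truthful >= padded
print("Bidding your true value weakly dominated every alternative tried.")
```

Shading can only ever lose you profitable wins, and padding can only ever win you unprofitable ones; truthful bidding is never worse in any case.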
In the real world, English, Dutch, sealed-bid and Vickrey auctions all differ a little in ways like how much information they give the bidders about each other, or whether people get caught up in the excitement of bidding, or what to do when you don't really know your true valuation. But in simplified rational models, they all end at an identical price: the true valuation of the second-highest bidder.
In conclusion, the gentlemanly way to bargain is to split the difference in profits between your and your partner's best alternative to an agreement, and gentlemanly auctions tend to end at the value of the second-highest participant. Some less gentlemanly alternatives are also available and will be discussed later.
Game Theory As A Dark Art
One of the most charming features of game theory is the almost limitless depths of evil to which it can sink.
Your garden-variety evils act against your values. Your better class of evil, like Voldemort and the folk-tale version of Satan, use your greed to trick you into acting against your own values, then grab away the promised reward at the last moment. But even demons and dark wizards can only do this once or twice before most victims wise up and decide that taking their advice is a bad idea. Game theory can force you to betray your deepest principles for no lasting benefit again and again, and still leave you convinced that your behavior was rational.
Some of the examples in this post probably wouldn't work in reality; they're more of a reductio ad absurdum of the so-called homo economicus who acts free from any feelings of altruism or trust. But others are lifted directly from real life where seemingly intelligent people genuinely fall for them. And even the ones that don't work with real people might be valuable in modeling institutions or governments.
Of the following examples, the first three are from The Art of Strategy; the second three are relatively classic problems taken from around the Internet. A few have been mentioned in the comments here already and are reposted for people who didn't catch them the first time.