Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Against picking up pennies

-1 [deleted] 13 December 2009 06:07AM

The eternally curious Tailsteak has written about how he always picks up pennies off the sidewalk. He's run a cost-benefit analysis and determined that it's better on average to pick up a penny than to pass it by. His mistake lies nowhere in the analysis itself; it's pretty much correct. His mistake is performing the analysis in the first place.

Pennies, you see, are easily the subject of scope insensitivity. When we come across a penny, we don't think, "Hey, that's something worth 0.05% of what I wish I had come across. I could buy a 25th of a gumball, a mouthful of an unhealthy carbonated beverage, a couple of seconds of footage on DVD, or enough gasoline to go a tenth of a mile." We think, "Hey, that's money," and we grab it.

The thing is, it's difficult to comprehend how little a penny is worth—we don't really have a separate concept for "mild happiness for a couple of seconds"—and we're likely to take risks that far outweigh the benefits. We don't think of bending over to pick up a penny as being a risky endeavour, but it's a penny. How much risk does it take to outweigh a penny? Surely the risk of "something unforeseen" easily does the job. Are you 99.999999% sure that picking up that penny won't kill you? You need a reason for every 9 (if you're indifferent between using seven 9s and using nine 9s, you should use seven; the number of 9s is never arbitrary), and by the time you come up with eight reasons to pick up the penny, you'll have wasted several cents' worth of time. If you can reduce the probability of harm that far, I applaud you.
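As a rough sanity check on the "how many 9s" claim—using my own assumed numbers, not the post's—price a life at a conventional statistical-value-of-life figure and ask how improbable death must be before the penny's expected value goes positive:

```python
# Back-of-the-envelope check: when does picking up a penny pay off?
# The $7,000,000 value-of-life figure is a hypothetical assumption.
PENNY = 0.01
VALUE_OF_LIFE = 7_000_000  # assumed, roughly a standard VSL figure

# Picking up the penny has positive expected value only if
#   PENNY - p_death * VALUE_OF_LIFE > 0
break_even_p = PENNY / VALUE_OF_LIFE

print(break_even_p)  # ~1.4e-9: survival needs roughly eight 9s of confidence
```

Any chance of death above about one in seven hundred million already swamps the penny, which is the post's point about needing a reason for every 9.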

Of course, penny-grabbing doesn't have to involve actual pennies. Suppose that President Kodos of the Unified States of Somewhere (population 300 million) uses the word "idiot" in an important speech, causing the average citizen to scowl and ponder for one minute. Now, if a penny can buy you five seconds of happiness, and scowling and pondering brings the same amount of unhappiness, then that's twelve cents for every citizen, or 36 million dollars, of damage that Kodos just caused. Arguably, that's the value of a couple of human lives. As you can see, Kodos' decisions are extremely important. In this case, penny-grabbing would consist of anything less than trading precious seconds for precious human lives—if Kodos finds that he can save one life simply by going a few minutes out of his way, he should ignore it. (Photo ops and personal apologies are out of the question.) But keep in mind, of course, that avoiding saving someone's life because you have something better to do isn't rational unless you actually plan to do something better.
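The paragraph's arithmetic can be checked in a few lines, using only the figures the post itself supplies:

```python
# Reproducing the Kodos calculation: one minute of mild unhappiness per
# citizen, priced at the post's rate of one penny per five seconds.
population = 300_000_000
seconds_of_scowling = 60
seconds_per_penny = 5  # one cent buys five seconds of happiness

cents_per_citizen = seconds_of_scowling / seconds_per_penny  # 12 cents
total_dollars = population * cents_per_citizen / 100

print(total_dollars)  # 36000000.0 — the post's 36 million dollars
```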

Zwicky's Trifecta of Illusions

18 thomblake 17 July 2009 04:59PM

Linguist Arnold Zwicky has named three linguistic 'illusions' which seem relevant to cognitive bias. They are:

  1. Frequency Illusion - Once you've noticed a phenomenon, it seems to happen a lot.
  2. Recency Illusion - The belief that something is a recent phenomenon, when it has actually existed a long time.
  3. Adolescent Illusion - The belief that adolescents are the cause of undesirable language trends.

Zwicky talks about them here, and in not so many words links them to the standard bias of selective perception.

As an example, here is an excerpt via Jerz's Literacy Weblog (originally via David Crystal), regarding text messages:

  • Text messages aren't full of abbreviations - typically less than ten percent of the words use them. [Frequency Illusion]
  • These abbreviations aren't a new language - they've been around for decades. [Recency Illusion]
  • They aren't just used by kids - adults of all ages and institutions are the leading texters these days. [Adolescent Illusion]

It is my conjecture that these illusions are notable in areas other than linguistics. For example, history is rife with claims that the younger generation is corrupt, and such speakers are not merely referring to their use of language. Could this be the adolescent illusion in action?

So, are these notable biases to watch out for, or are they merely obvious instances of standard biases?

Bystander Apathy

25 Eliezer_Yudkowsky 13 April 2009 01:26AM

The bystander effect, also known as bystander apathy, is that larger groups are less likely to act in emergencies - not just individually, but collectively.  Put an experimental subject alone in a room and let smoke start coming up from under the door.  75% of the subjects will leave to report it.  Now put three subjects in the room - real subjects, none of whom know what's going on.  On only 38% of the occasions will anyone report the smoke.  Put the subject with two confederates who ignore the smoke, and they'll only report it 10% of the time - even staying in the room until it becomes hazy.  (Latane and Darley 1969.)

On the standard model, the two primary drivers of bystander apathy are:

  • Diffusion of responsibility - everyone hopes that someone else will be first to step up and incur any costs of acting.  When no one does act, being part of a crowd provides an excuse and reduces the chance of being held personally responsible for the results.
  • Pluralistic ignorance - people try to appear calm while looking for cues, and see... that the others appear calm.

Cialdini (2001):

Very often an emergency is not obviously an emergency.  Is the man lying in the alley a heart-attack victim or a drunk sleeping one off?  ...  In times of such uncertainty, the natural tendency is to look around at the actions of others for clues.  We can learn from the way the other witnesses are reacting whether the event is or is not an emergency.  What is easy to forget, though, is that everybody else observing the event is likely to be looking for social evidence, too.  Because we all prefer to appear poised and unflustered among others, we are likely to search for that evidence placidly, with brief, camouflaged glances at those around us.  Therefore everyone is likely to see everyone else looking unruffled and failing to act.

Cialdini suggests that if you're ever in emergency need of help, you point to one single bystander and ask them for help - making it very clear to whom you're referring.  Remember that the total group, combined, may have less chance of helping than one individual.

continue reading »

Sunk Cost Fallacy

30 Z_M_Davis 12 April 2009 05:30PM

Related to: Just Lose Hope Already, The Allais Paradox, Cached Selves

In economics we have this concept of sunk costs, referring to costs that have already been incurred, but which cannot be recouped. Sunk cost fallacy refers to the fallacy of honoring sunk costs, which decision-theoretically should just be ignored. The canonical example goes something like this: you have purchased a nonrefundable movie ticket in advance. (For the nitpickers in the audience, I will also specify that the ticket is nontransferable and that you weren't planning on meeting anyone.) When the night of the show comes, you notice that you don't actually feel like going out, and would actually enjoy yourself more at home. Do you go to the movie anyway?

A lot of people say yes, to avoid wasting the ticket. But on further consideration, it would seem that these people are simply getting it wrong. The ticket is a sunk cost: it's already paid for, and you can't do anything with it but go to the movie. But we've stipulated that you don't want to go to the movie. The theater owners don't care whether you go; they already have their money. The other theater-goers, insofar as they can be said to have a preference, would actually rather you stayed home, making the theater marginally less crowded. If you go to the movie to satisfy your intuition about not wasting the ticket, you're not actually helping anyone. Of course, you're entitled to your values, if not your belief. If you really do place terminal value on using something because you've paid for it, well, fine, I guess. But we should all try to notice exactly what it is we're doing, in case it turns out to not be what we want. Please, think it through.
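The decision logic of the parable can be sketched in a few lines, with made-up utility numbers standing in for how much you'd enjoy each option:

```python
# Sunk cost sketch: all utility numbers are invented for illustration.
ticket_price = 10   # already paid; cannot be recouped either way
enjoy_movie = 4     # how much you'd enjoy going out tonight
enjoy_home = 7      # how much you'd enjoy staying in

# Correct comparison: only future consequences count.
best = "movie" if enjoy_movie > enjoy_home else "home"

# Fallacious comparison: counting the sunk ticket as a cost of staying
# home ("wasting it") flips the answer, though the $10 is gone regardless.
fallacious = "movie" if enjoy_movie > enjoy_home - ticket_price else "home"

print(best, fallacious)  # home movie
```

The sunk cost appears on neither side of the correct comparison, which is exactly what "decision-theoretically should just be ignored" means.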

Dearest reader, if you're now about to scrap your intuition against wasting things, I implore you: don't! The moral of the parable of the movie ticket is not that waste is okay; it's that you should implement your waste-reduction interventions at a time when they can actually help. If you can anticipate your enthusiasm waning on the night of the show, don't purchase the nonrefundable ticket in the first place!

continue reading »

Cached Selves

174 AnnaSalamon 22 March 2009 07:34PM

by Anna Salamon and Steve Rayhawk (joint authorship)

Related to: Beware identity

A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."

Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure, can hijack your self-concept for the medium- to long-term future.

To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing.  So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards.  Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.

For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself.  If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends.  If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.

All familiar phenomena, right?  You probably already discount other peoples’ views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas.  But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena.  And so you might not realize how much arbitrary influence consistency and commitment is having on your own beliefs, or how you can reduce that influence.  (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)

continue reading »

The Complete Idiot's Guide to Ad Hominem

8 Eliezer_Yudkowsky 25 November 2008 09:47PM

Stephen Bond writes the definitive word on ad hominem in "the ad hominem fallacy fallacy":

In reality, ad hominem is unrelated to sarcasm or personal abuse.  Argumentum ad hominem is the logical fallacy of attempting to undermine a speaker's argument by attacking the speaker instead of addressing the argument.  The mere presence of a personal attack does not indicate ad hominem: the attack must be used for the purpose of undermining the argument, or otherwise the logical fallacy isn't there.

[...]

A: "All rodents are mammals, but a weasel isn't a rodent, so it can't be a mammal."
B: "You evidently know nothing about logic. This does not logically follow."

B's argument is still not ad hominem.  B does not imply that A's sentence does not logically follow because A knows nothing about logic.  B is still addressing the substance of A's argument...

This is too beautiful, thorough, and precise to not post.  HT to sfk on HN.

Lawful Uncertainty

26 Eliezer_Yudkowsky 10 November 2008 09:06PM

Previously in series: Lawful Creativity

From Robyn Dawes, Rational Choice in an Uncertain World:

"Many psychological experiments were conducted in the late 1950s and early 1960s in which subjects were asked to predict the outcome of an event that had a random component but yet had base-rate predictability - for example, subjects were asked to predict whether the next card the experimenter turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random.

In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event.  For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate.

What subjects tended to do instead, however, was match probabilities - that is, predict the more probable event with the relative frequency with which it occurred.  For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur.  Such a strategy yields a 58% success rate, because the subjects are correct 70% of the time when the blue card occurs (which happens with probability .70) and 30% of the time when the red card occurs (which happens with probability .30); .70 * .70 + .30 * .30 = .58.

In fact, subjects predict the more frequent event with a slightly higher probability than that with which it occurs, but do not come close to predicting its occurrence 100% of the time, even when they are paid for the accuracy of their predictions...  For example, subjects who were paid a nickel for each correct prediction over a thousand trials... predicted [the more common event] 76% of the time."

(Dawes cites:  Tversky, A. and Edwards, W.  1966.  Information versus reward in binary choice.  Journal of Experimental Psychology, 71, 680-683.)
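A quick simulation (mine, not Dawes's) reproduces the two success rates the excerpt computes:

```python
import random

random.seed(0)
p_blue = 0.70
trials = 100_000

# A random sequence in which 70% of cards are blue.
cards = ["blue" if random.random() < p_blue else "red" for _ in range(trials)]

# Maximizing: always predict the more common color.
max_hits = sum(c == "blue" for c in cards) / trials

# Probability matching: predict blue 70% of the time, red 30%.
match_hits = sum(
    c == ("blue" if random.random() < p_blue else "red") for c in cards
) / trials

print(round(max_hits, 2), round(match_hits, 2))  # ≈ 0.70 vs ≈ 0.58
```

The matching strategy lands near .70 × .70 + .30 × .30 = .58, confirming that guessing blue every time strictly dominates it.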

Do not think that this experiment is about a minor flaw in gambling strategies.  It compactly illustrates the most important idea in all of rationality.

continue reading »

Anthropomorphic Optimism

26 Eliezer_Yudkowsky 04 August 2008 08:17PM

Followup to: Humans in Funny Suits, The Tragedy of Group Selectionism

The core fallacy of anthropomorphism is expecting something to be predicted by the black box of your brain, when its causal structure is so different from that of a human brain as to give you no license to expect any such thing.

The Tragedy of Group Selectionism (as previously covered in the evolution sequence) was a rather extreme error by a group of early (pre-1966) biologists, including Wynne-Edwards, Allee, and Brereton among others, who believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat and exhausting the prey population.

The proffered theory was that if there were multiple, geographically separated groups of e.g. foxes, then the groups of foxes that best restrained their breeding would send out colonists to replace crashed populations.  And so, over time, group selection would promote restrained-breeding genes in foxes.

I'm not going to repeat all the problems that developed with this scenario. Suffice it to say that there was no empirical evidence to start with; that no empirical evidence was ever uncovered; that, in fact, predator populations crash all the time; and that it turned out to be very nearly mathematically impossible for group selection pressure to overcome a countervailing individual selection pressure.

The theory having turned out to be completely incorrect, we may ask if, perhaps, the originators of the theory were doing something wrong.

continue reading »

The Genetic Fallacy

22 Eliezer_Yudkowsky 11 July 2008 05:47AM

In lists of logical fallacies, you will find included "the genetic fallacy"—the fallacy attacking a belief, based on someone's causes for believing it.

This is, at first sight, a very strange idea—if the causes of a belief do not determine its systematic reliability, what does?  If Deep Blue advises us of a chess move, we trust it based on our understanding of the code that searches the game tree, being unable to evaluate the actual game tree ourselves.  What could license any probability assignment as "rational", except that it was produced by some systematically reliable process?

Articles on the genetic fallacy will tell you that genetic reasoning is not always a fallacy—that the origin of evidence can be relevant to its evaluation, as in the case of a trusted expert.  But other times, say the articles, it is a fallacy; the chemist Kekulé first saw the ring structure of benzene in a dream, but this doesn't mean we can never trust this belief.

So sometimes the genetic fallacy is a fallacy, and sometimes it's not?

The genetic fallacy is formally a fallacy, because the original cause of a belief is not the same as its current justificational status, the sum of all the support and antisupport currently known.

Yet we change our minds less often than we think.  Genetic accusations have a force among humans that they would not have among ideal Bayesians.

continue reading »

The Moral Void

31 Eliezer_Yudkowsky 30 June 2008 08:52AM

Followup to: What Would You Do Without Morality?, Something to Protect

Once, discussing "horrible job interview questions" to ask candidates for a Friendly AI project, I suggested the following:

Would you kill babies if it was inherently the right thing to do?  Yes [] No []

If "no", under what circumstances would you not do the right thing to do?   ___________

If "yes", how inherently right would it have to be, for how many babies?     ___________

continue reading »
