People neglect small probability events

11 XiXiDu 02 July 2011 10:54AM

Over at overcomingbias Robin Hanson wrote:

On September 9, 1713, so the story goes, Nicholas Bernoulli proposed the following problem in the theory of games of chance, after 1768 known as the St Petersburg paradox …:

Peter tosses a coin and continues to do so until it should land heads when it comes to the ground. He agrees to give Paul one ducat if he gets heads on the very first throw, two ducats if he gets it on the second, four if on the third, eight if on the fourth, and so on, so that with each additional throw the number of ducats he must pay is doubled.

Nicholas Bernoulli … suggested that more than five tosses of heads are morally impossible [and so ignored]. This proposition is experimentally tested through the elicitation of subjects' willingness-to-pay for various truncated versions of the Petersburg gamble that differ in the maximum payoff. … All gambles that involved probability levels smaller than 1/16 and maximum payoffs greater than 16 Euro elicited the same distribution of valuations. … The payoffs were as described …. but in Euros rather than in ducats. … The more senior students seemed to have a higher willingness-to-pay. … Offers increase significantly with income. (more)

This isn’t plausibly explained by risk aversion, nor by a general neglect of possibilities with a <5% chance. I suspect this is more about analysis complexity, about limiting the number of possibilities we’ll consider at any one time.  I also suspect this bodes ill for existential risk mitigation.
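To see numerically why the truncation matters: each additional permitted toss adds exactly half a ducat to the expected payoff, so the full gamble has infinite expected value while any truncated version stays modest. Here is a minimal Python sketch of that arithmetic (purely illustrative; the function name and the convention that a run exceeding the cutoff pays nothing are my assumptions, not the paper's):

```python
# Illustrative sketch, not from the paper: expected value of a St Petersburg
# gamble truncated after max_tosses rounds. The payoff doubles while the
# probability halves, so each allowed round contributes exactly 0.5 ducats.

def truncated_expected_value(max_tosses: int) -> float:
    """Expected payoff when the gamble is cut off after max_tosses rounds
    (assumed convention: a run that exceeds the cutoff pays nothing)."""
    ev = 0.0
    for k in range(1, max_tosses + 1):
        prob = 0.5 ** k        # first head appears on toss k
        payoff = 2 ** (k - 1)  # 1, 2, 4, 8, ... ducats
        ev += prob * payoff
    return ev

if __name__ == "__main__":
    for n in (1, 4, 8, 16, 32):
        print(n, truncated_expected_value(n))  # 0.5, 2.0, 4.0, 8.0, 16.0
```

The expected value keeps growing without bound as the cutoff rises, yet the experiment found that willingness-to-pay stops responding once the small probabilities fall below roughly 1/16.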

The title of the paper is 'Moral Impossibility in the Petersburg Paradox: A Literature Survey and Experimental Evidence' (PDF):

The Petersburg paradox has led to much thought for three centuries. This paper describes the paradox, discusses its resolutions advanced in the literature while alluding to the historical context, and presents experimental data. In particular, Bernoulli's search for the level of moral impossibility in the Petersburg problem is stressed; beyond this level small probabilities are considered too unlikely to be relevant for judgment and decision making. In the experiment, the level of moral impossibility is elicited through variations of the gamble-length in the Petersburg gamble. Bernoulli's conjecture that people neglect small probability events is supported by a statistical power analysis.

I think that people who want to raise awareness of risks from AI need to focus more strongly on this problem. Most discussions about how likely risks from AI are, or how seriously they should be taken, won't lead anywhere if the underlying reason for most of the superficial disagreement is that people discount anything below a certain probability threshold. There seems to be a point at which things become vague enough that they get discounted completely.

The problem often doesn't seem to be that people doubt the possibility of artificial general intelligence. But most people would sooner question their grasp of “rationality” than give five dollars to a charity that tries to mitigate risks from AI because their calculations claim it is “rational” (those who have read Eliezer Yudkowsky's article on 'Pascal's Mugging' will recognize that I borrowed and slightly rephrased a statement from that post). The disagreement comes down to a general aversion to options that have only a low probability of being true, even given that the stakes are high.

Nobody has so far been able to defeat arguments that resemble Pascal’s Mugging, at least not by showing that it is irrational for a utility maximizer to give in. One can only reject them based on a strong gut feeling that something is wrong. And I think that is what many people are unknowingly doing when they argue against the SIAI or against risks from AI: they are signaling that they are unable to take such risks into account. When people doubt the reputation of those who claim that risks from AI need to be taken seriously, or say that AGI might be far off, what they usually mean is that risks from AI are too vague to be taken into account at this point, that nobody knows enough to make predictions about the topic right now.

When GiveWell, a charity evaluation service, interviewed the SIAI (PDF), they hinted at the possibility that one could consider the SIAI to be a sort of Pascal’s Mugging:

GiveWell: OK. Well that’s where I stand – I accept a lot of the controversial premises of your mission, but I’m a pretty long way from sold that you have the right team or the right approach. Now some have argued to me that I don’t need to be sold – that even at an infinitesimal probability of success, your project is worthwhile. I see that as a Pascal’s Mugging and don’t accept it; I wouldn’t endorse your project unless it passed the basic hurdles of credibility and workable approach as well as potentially astronomically beneficial goal.

This shows that a lot of people do not doubt the possibility of risks from AI but are simply not sure whether they should really concentrate their efforts on such vague possibilities.

Technically, from the standpoint of maximizing expected utility, and given the absence of other existential risks, the answer might very well be yes. But even though we believe we understand this technical notion of rationality very well in principle, it also leads to problems such as Pascal’s Mugging. And it doesn't take a true Pascal’s Mugging scenario to make people feel deeply uncomfortable with what Bayes’ Theorem, the expected utility formula, and Solomonoff induction seem to suggest one should do.
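To make the discomfort concrete, here is the naive expected-utility arithmetic that a Pascal's Mugging exploits, sketched in Python. All the numbers are made up for illustration; the payoff merely stands in for the mugger's vastly larger 3^^^^3 claim and none of the figures come from the original post:

```python
# Illustrative sketch of the naive expected-utility arithmetic that a
# Pascal's Mugging exploits. All numbers are made up; 3.0 ** 100 merely
# stands in for the mugger's far larger 3^^^^3 claim.

def expected_utility(prob: float, payoff: float, cost: float) -> float:
    """Naive expected utility of paying `cost` on a claim with probability `prob`."""
    return prob * payoff - cost

pay_the_mugger = expected_utility(prob=1e-20, payoff=3.0 ** 100, cost=5.0)
keep_the_five_dollars = 0.0

# The astronomical payoff swamps the astronomically small probability, so a
# naive expected-utility maximizer "should" pay up.
print(pay_the_mugger > keep_the_five_dollars)  # True
```

Unless the assigned probability shrinks at least as fast as the claimed payoff grows, the product dominates every comparison, and that is precisely what people's gut feelings rebel against.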

Again, we currently have no rational way to reject arguments framed as worst-case predictions that demand to be taken seriously despite a low probability of occurring, because of the scale of the negative consequences involved. Many people are nonetheless reluctant to accept this line of reasoning without further evidence supporting the strong claims, and requests for money, made by organisations such as the SIAI.

Here, for example, is what the mathematician and climate activist John Baez has to say:

Of course, anyone associated with Less Wrong would ask if I’m really maximizing expected utility. Couldn’t a contribution to some place like the Singularity Institute for Artificial Intelligence, despite a lower chance of doing good, actually have a chance to do so much more good that it’d pay to send the cash there instead?

And I’d have to say:

1) Yes, there probably are such places, but it would take me a while to find the one that I trusted, and I haven’t put in the work. When you’re risk-averse and limited in the time you have to make decisions, you tend to put off weighing options that have a very low chance of success but a very high return if they succeed. This is sensible so I don’t feel bad about it.

2) Just to amplify point 1) a bit: you shouldn’t always maximize expected utility if you only live once. Expected values — in other words, averages — are very important when you make the same small bet over and over again. When the stakes get higher and you aren’t in a position to repeat the bet over and over, it may be wise to be risk averse.

3) If you let me put the $100,000 into my retirement account instead of a charity, that’s what I’d do, and I wouldn’t even feel guilty about it. I actually think that the increased security would free me up to do more risky but potentially very good things!
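Baez's second point can be illustrated with a toy simulation (my own numbers, not his): a bet with positive expected value looks very different when repeated a million times than when taken once.

```python
# Toy simulation of Baez's second point (my own numbers): a bet with positive
# expected value behaves very differently repeated many times versus taken once.

import random

def lottery_ticket() -> float:
    """Pay 1 unit for a 1-in-1000 chance of winning 2000 units (EV = +1 per bet)."""
    return 2000.0 - 1.0 if random.random() < 0.001 else -1.0

def average_payoff(n_bets: int) -> float:
    return sum(lottery_ticket() for _ in range(n_bets)) / n_bets

random.seed(0)
print(average_payoff(1_000_000))  # repeated play: close to the +1 expectation
print(lottery_ticket())           # a single play: almost certainly -1
```

Averaged over many repetitions the outcome concentrates near the expectation; taken once, the most likely result is simply a loss, which is why risk aversion can be sensible for one-shot, high-stakes decisions.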

All this suggests that there is a fundamental problem with the formalized version of rationality. The problem might be human nature itself: some people are unable to accept what they would have to do if they wanted to maximize their expected utility. Or we are missing something else and our theories are flawed. Either way, to solve this problem we need to research these issues and thereby increase confidence in the very methods used to decide what to do about risks from AI, or increase confidence in risks from AI directly, enough to make mitigation look like a sensible option, a concrete and discernible problem that needs to be solved.

Many people perceive the whole world to be at stake already, whether due to climate change, war, or engineered pathogens. Telling them about risks from AI, even though nobody seems to have any idea about the nature of intelligence, let alone general intelligence or the possibility of recursive self-improvement, presents just another problem, one that is too vague to outweigh all the other risks. Most people already feel as if they have a gun pointed at their heads; telling them about superhuman monsters that might turn them into paperclips then takes some really good arguments to outweigh the combined risk of all the other problems.

(Note: I am not making a claim about the possibility of risks from AI in and of itself, but rather putting forth some ideas about the underlying reasons why some people seem to neglect existential risks even though they know all the arguments.)

An Outside View on Less Wrong's Advice

60 Mass_Driver 07 July 2011 04:46AM

Related to: Intellectual Hipsters, X-Rationality: Not So Great, The Importance of Self-Doubt, That Other Kind of Status

This is a scheduled upgrade of a post that I have been working on in the discussion section. Thanks to all the commenters there, and special thanks to atucker, Gabriel, Jonathan_Graehl, kpreid, XiXiDu, and Yvain for helping me express myself more clearly.

-------------------

For the most part, I am excited about growing as a rationalist. I attended the Berkeley minicamp; I play with Anki cards and Wits & Wagers; I use Google Scholar and spreadsheets to try to predict the consequences of my actions.

There is a part of me, though, that bristles at some of the rationalist 'culture' on Less Wrong, for lack of a better word. The advice, the tone, the vibe 'feels' wrong, somehow. If you forced me to use more precise language, I might say that, for several years now, I have kept a variety of procedural heuristics running in the background that help me ferret out bullshit, partisanship, wishful thinking, and other unsound debating tactics -- and important content on this website manages to trigger most of them. Yvain suggests that something about the rapid spread of positive affect not obviously tied to any concrete accomplishments may be stimulating a sort of anti-viral memetic defense system.

Note that I am *not* claiming that Less Wrong is a cult. Nobody who runs a cult has such a good sense of humor about it. And if they do, they're so dangerous that it doesn't matter what I say about it. No, if anything, "cultishness" is a straw man. Eliezer will not make you abandon your friends and family, run away to a far-off mountain retreat and drink poison Kool-Aid. But, he *might* convince you to believe in some very silly things and take some very silly actions.

Therefore, in the spirit of John Stuart Mill, I am writing a one-article attack on much of what we seem to hold dear. If there is anything true about what I'm saying, you will want to read it, so that you can alter your commitments accordingly. Even if, as seems more likely, you don't believe a word I say, reading a semi-intelligent attack on your values and mentally responding to it will probably help you more clearly understand what it is that you do believe.

continue reading »

Thou Art Godshatter

68 Eliezer_Yudkowsky 13 November 2007 07:38PM

Followup to: An Alien God, Adaptation-Executers, not Fitness-Maximizers, Evolutionary Psychology

Before the 20th century, not a single human being had an explicit concept of "inclusive genetic fitness", the sole and absolute obsession of the blind idiot god.  We have no instinctive revulsion of condoms or oral sex.  Our brains, those supreme reproductive organs, don't perform a check for reproductive efficacy before granting us sexual pleasure.

Why not?  Why aren't we consciously obsessed with inclusive genetic fitness?  Why did the Evolution-of-Humans Fairy create brains that would invent condoms?  "It would have been so easy," thinks the human, who can design new complex systems in an afternoon.

continue reading »

Protein Reinforcement and DNA Consequentialism

26 Eliezer_Yudkowsky 13 November 2007 01:34AM

Followup to: Evolutionary Psychology

It takes hundreds of generations for a simple beneficial mutation to promote itself to universality in a gene pool.  Thousands of generations, or even millions, to create complex interdependent machinery.

That's some slow learning there.  Let's say you're building a squirrel, and you want the squirrel to know locations for finding nuts.  Individual nut trees don't last for the thousands of years required for natural selection.  You're going to have to learn using proteins.  You're going to have to build a brain.

continue reading »

Adaptation-Executers, not Fitness-Maximizers

42 Eliezer_Yudkowsky 11 November 2007 06:39AM

"Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers."
        —John Tooby and Leda Cosmides, The Psychological Foundations of Culture.

Fifty thousand years ago, the taste buds of Homo sapiens directed their bearers to the scarcest, most critical food resources—sugar and fat.  Calories, in a word.  Today, the context of a taste bud's function has changed, but the taste buds themselves have not.  Calories, far from being scarce (in First World countries), are actively harmful.  Micronutrients that were reliably abundant in leaves and nuts are absent from bread, but our taste buds don't complain.  A scoop of ice cream is a superstimulus, containing more sugar, fat, and salt than anything in the ancestral environment.

No human being with the deliberate goal of maximizing their alleles' inclusive genetic fitness, would ever eat a cookie unless they were starving.  But individual organisms are best thought of as adaptation-executers, not fitness-maximizers.

continue reading »

What is Bayesianism?

81 Kaj_Sotala 26 February 2010 07:43AM

This article is an attempt to summarize basic material, and thus probably won't have anything new for the hard core posting crowd. It'd be interesting to know whether you think there's anything essential I missed, though.

You've probably seen the word 'Bayesian' used a lot on this site, but may be a bit uncertain of what exactly we mean by that. You may have read the intuitive explanation, but that only seems to explain a certain math formula. There's a wiki entry about "Bayesian", but that doesn't help much. And the LW usage seems different from just the "Bayesian and frequentist statistics" thing, too. As far as I can tell, there's no article explicitly defining what's meant by Bayesianism. The core ideas are sprinkled across a large number of posts, 'Bayesian' has its own tag, but there's not a single post that explicitly comes out and makes the connections and says "this is Bayesianism". So let me try to offer my definition, which boils Bayesianism down to three core tenets.

We'll start with a brief example, illustrating Bayes' theorem. Suppose you are a doctor, and a patient comes to you, complaining about a headache. Further suppose that there are two reasons why people get headaches: they might have a brain tumor, or they might have a cold. A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

If you thought a cold was more likely, well, that was the answer I was after. Even if a brain tumor caused a headache every time, and a cold caused a headache only one per cent of the time (say), having a cold is so much more common that it's going to cause a lot more headaches than brain tumors do. Bayes' theorem, basically, says that if cause A might be the reason for symptom X, then we have to take into account both the probability that A caused X (found, roughly, by multiplying the frequency of A by the chance that A causes X) and the probability that anything else caused X. (For a thorough mathematical treatment of Bayes' theorem, see Eliezer's Intuitive Explanation.)
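For readers who want the arithmetic spelled out, here is the headache example as a quick calculation in Python, with base rates made up purely for illustration (the post itself specifies none):

```python
# Made-up base rates for the headache example: 1 in 100,000 people has a brain
# tumor, which always causes a headache; half of all people catch a cold each
# year, and a cold causes a headache 1% of the time.

p_tumor, p_headache_given_tumor = 1e-5, 1.0
p_cold, p_headache_given_cold = 0.5, 0.01

# Probability mass each cause contributes to "this patient has a headache".
tumor_and_headache = p_tumor * p_headache_given_tumor  # 0.00001
cold_and_headache = p_cold * p_headache_given_cold     # 0.005

# Posterior probabilities by Bayes' theorem (ignoring all other causes).
total = tumor_and_headache + cold_and_headache
print("P(tumor | headache) =", tumor_and_headache / total)  # ~0.002
print("P(cold  | headache) =", cold_and_headache / total)   # ~0.998
```

Even though the tumor causes a headache every time and the cold only rarely, the cold's vastly higher base rate makes it the overwhelmingly more likely explanation.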

continue reading »

Babies and Bunnies: A Caution About Evo-Psych

52 Alicorn 22 February 2010 01:53AM

Daniel Dennett has advanced the opinion that the evolutionary purpose of the cuteness response in humans is to make us respond positively to babies.  This does seem plausible.  Babies are pretty cute, after all.  It's a tempting explanation.

Here is one of the cutest baby pictures I found on a Google search.

And this is a bunny.

Correct me if I'm wrong, but the bunny is about 75,119 times cuter than the baby.

Now, bunnies are not evolutionarily important for humans to like and want to nurture.  In fact, bunnies are edible.  By rights, my evolutionary response to the bunny should be "mmm, needs a sprig of rosemary and thirty minutes on a spit".  But instead, that bunny - and not the baby or any other baby I've seen - strikes the epicenter of my cuteness response, and being more baby-like along any dimension would not improve the bunny.  It would not look better bald.  It would not be improved with little round humanlike ears.  It would not be more precious with thumbs, easier to love if it had no tail, more adorable if it were enlarged to weigh about seven pounds.

If "awwww" is a response designed to make me love human babies and everything else that makes me go "awwww" is a mere side effect of that engineered reaction, it is drastically misaimed.  Other responses for which we have similar evolutionary psychology explanations don't seem badly targeted in this way.  If they miss their supposed objects at all, at least it's not in most people.  (Furries, for instance, exist, but they're not a common variation on human sexual interest - the most generally applicable superstimuli for sexiness look like at-least-superficially healthy, mature humans with prominent human sexual characteristics.)  We've invested enough energy into transforming our food landscape that we can happily eat virtual poison, but that's a departure from the ancestral environment - bunnies?  All natural, every whisker.1

continue reading »

Evolutions Are Stupid (But Work Anyway)

34 Eliezer_Yudkowsky 03 November 2007 03:45PM

Followup to:  An Alien God, The Wonder of Evolution

Yesterday, I wrote:

Science has a very exact idea of the capabilities of evolution.  If you praise evolution one millimeter higher than this, you're not "fighting on evolution's side" against creationism.  You're being scientifically inaccurate, full stop.

In this post I describe some well-known inefficiencies and limitations of evolutions.  I say "evolutions", plural, because fox evolution works at cross-purposes to rabbit evolution, and neither can talk to snake evolution to learn how to build venomous fangs.

So I am talking about limitations of evolution here, but this does not mean I am trying to sneak in creationism.  This is standard Evolutionary Biology 201.  (583 if you must derive the equations.)  Evolutions, thus limited, can still explain observed biology; in fact the limitations are necessary to make sense of it.  Remember that the wonder of evolutions is not how well they work, but that they work at all.

Human intelligence is so complicated that no one has any good way to calculate how efficient it is.  Natural selection, though not simple, is simpler than a human brain; and correspondingly slower and less efficient, as befits the first optimization process ever to exist.  In fact, evolutions are simple enough that we can calculate exactly how stupid they are.

continue reading »

The Wonder of Evolution

34 Eliezer_Yudkowsky 02 November 2007 08:49PM

Followup to:  An Alien God

The wonder of evolution is that it works at all.

I mean that literally:  If you want to marvel at evolution, that's what's marvel-worthy.

How does optimization first arise in the universe?  If an intelligent agent designed Nature, who designed the intelligent agent?  Where is the first design that has no designer?  The puzzle is not how the first stage of the bootstrap can be super-clever and super-efficient; the puzzle is how it can happen at all.

continue reading »

An Alien God

80 Eliezer_Yudkowsky 02 November 2007 06:57AM

"A curious aspect of the theory of evolution," said Jacques Monod, "is that everybody thinks he understands it."

A human being, looking at the natural world, sees a thousand times purpose.  A rabbit's legs, built and articulated for running; a fox's jaws, built and articulated for tearing.  But what you see is not exactly what is there...

In the days before Darwin, the cause of all this apparent purposefulness was a very great puzzle unto science.  The Goddists said "God did it", because you get 50 bonus points each time you use the word "God" in a sentence.  Yet perhaps I'm being unfair.  In the days before Darwin, it seemed like a much more reasonable hypothesis.  Find a watch in the desert, said William Paley, and you can infer the existence of a watchmaker.

But when you look at all the apparent purposefulness in Nature, rather than picking and choosing your examples, you start to notice things that don't fit the Judeo-Christian concept of one benevolent God. Foxes seem well-designed to catch rabbits.  Rabbits seem well-designed to evade foxes.  Was the Creator having trouble making up Its mind?

continue reading »
