Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

What Cost for Irrationality?

60 Kaj_Sotala 01 July 2010 06:25PM

This is the first part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

People who care a lot about rationality may frequently be asked why they do so. There are various answers, but I think that many of the ones discussed here won't be very persuasive to people who don't already have an interest in the issue. But in real life, most people don't try to stay healthy because of various far-mode arguments for the virtue of health: instead, they try to stay healthy in order to avoid various forms of illness. In the same spirit, I present you with a list of real-world events that have been caused by failures of rationality, so that you might better persuade others that this is important.

What happens if you, or the people around you, are not rational? Well, in order from least serious to worst, you may...

Have a worse quality of life. Status quo bias is a general human tendency to prefer the default state, regardless of whether the default is actually good or not. In the 1980s, Pacific Gas and Electric conducted a survey of their customers. Because the company was serving a lot of people in a variety of regions, some of their customers suffered from more outages than others. Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept a less reliable service in exchange for a discount. The customers were presented with increases and decreases of various percentages and asked which ones they'd be willing to accept. The percentages were the same for both groups, except that one group saw them as increases and the other as decreases. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo. Yet the service difference between the groups was large: the unreliable-service group suffered 15 outages per year of 4 hours' average duration, while the reliable-service group suffered 3 outages per year of 2 hours' average duration! (Though note caveats.)
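The gap between the two groups' status quos can be made concrete with a quick downtime calculation (a sketch using only the outage figures quoted above):

```python
# Expected annual downtime for the two groups in the Pacific Gas and
# Electric survey, using the outage figures quoted above.
unreliable_hours = 15 * 4  # 15 outages/year, 4 hours' average duration
reliable_hours = 3 * 2     # 3 outages/year, 2 hours' average duration

print(unreliable_hours)                    # 60 hours of downtime per year
print(reliable_hours)                      # 6 hours of downtime per year
print(unreliable_hours // reliable_hours)  # the gap is a factor of 10
```

Despite a tenfold difference in expected downtime, each group preferred whatever it already had.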

A study by Philips Electronics found that one half of their returned products had nothing wrong with them: the consumers simply couldn't figure out how to use the devices. This can be partially explained by egocentric bias on the part of the engineers. Cognitive scientist Chip Heath notes that he has "a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too. People who design products are experts... and they can't imagine what it's like to be as ignorant as the rest of us."

Suffer financial harm. John Allen Paulos is a professor of mathematics at Temple University. Yet he fell prey to serious irrationality which began when he purchased WorldCom stock at $47 per share in early 2000. As bad news about the industry began mounting, WorldCom's stock price started falling - and as it did so, Paulos kept buying, regardless of accumulating evidence that he should be selling. Later on, he admitted that his "purchases were not completely rational" and that "I bought shares even though I knew better". He was still buying - partially on borrowed money - when the stock price was $5. When it momentarily rose to $7, he finally decided to sell. Unfortunately, he didn't get off work until after the market closed, and by the next market day the stock had lost a third of its value. Paulos finally sold everything, at a huge loss.
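Averaging down as Paulos did lowers the average purchase price while steadily increasing the money at risk. A sketch with invented share counts (only the $47 and $5 prices appear in the post):

```python
# Hypothetical purchases on the way down. The share counts are
# invented for illustration; the post gives only the prices.
purchases = [(100, 47.0), (100, 20.0), (100, 5.0)]  # (shares, price)

total_shares = sum(shares for shares, _ in purchases)
total_cost = sum(shares * price for shares, price in purchases)
average_cost = total_cost / total_shares

print(average_cost)  # 24.0 - the average cost per share falls...
print(total_cost)    # 7200.0 - ...but the total exposure keeps growing
```

The falling average cost can feel like the position is improving, even as ever more capital rides on the same losing bet.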


Late Great Filter Is Not Bad News

14 Wei_Dai 04 April 2010 04:17AM

But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.

Conversely, if we discovered traces of some simple extinct life form—some bacteria, some algae—it would be bad news. If we found fossils of something more advanced, perhaps something looking like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life we found, the more depressing the news of its existence would be. Scientifically interesting, certainly, but a bad omen for the future of the human race.

— Nick Bostrom, in Where Are They? Why I hope that the search for extraterrestrial life finds nothing

This post is a reply to Robin Hanson's recent OB post Very Bad News, as well as Nick Bostrom's 2008 paper quoted above, and assumes familiarity with Robin's Great Filter idea. (Robin's server for the Great Filter paper seems to be experiencing some kind of error. See here for a mirror.)

Suppose Omega appears and says to you:

(Scenario 1) I'm going to apply a great filter to humanity. You get to choose whether the filter is applied one minute from now, or in five years. When the designated time arrives, I'll throw a fair coin, and wipe out humanity if it lands heads. And oh, it's not the current you that gets to decide, but the version of you 4 years and 364 days from now. I'll predict his or her decision and act accordingly.

I hope it's not controversial that the current you should prefer a late filter, since (with probability .5) that gives you and everyone else five more years of life. What about the future version of you? Well, if he or she decides on the early filter, that would constitute a time inconsistency. And for those who believe in multiverse/many-worlds theories, choosing the early filter shortens the lives of everyone in half of all universes/branches where a copy of you is making this decision, which doesn't seem like a good thing. It seems clear that, ignoring human deviations from ideal rationality, the right decision for the future you is to choose the late filter.
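The expected-value claim in the paragraph above can be written out explicitly (a sketch using only the numbers stated in the scenario):

```python
# Expected extra years of life from choosing the late filter.
# If the coin lands tails, humanity survives either way, so the
# timing makes no difference; if it lands heads, the late filter
# buys everyone the five-year delay.
p_wipeout = 0.5   # Omega's fair coin
delay_years = 5   # early filter: one minute from now; late: five years

expected_gain = p_wipeout * delay_years + (1 - p_wipeout) * 0
print(expected_gain)  # 2.5 expected extra years per person
```

Since the gain is strictly positive and nothing else differs between the options, the late filter dominates.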


Privileged Snuff

17 rwallace 22 January 2010 05:38AM

So one is asked, "What is your probability estimate that the LHC will destroy the world?"

Leaving aside the issue of calling brown numbers probabilities, there is a more subtle rhetorical trap at work here.

If one makes up a small number, say one in a million, the answer will be, "Could you make a million such statements and not be wrong even once?" (Of course this is a misleading image -- doing anything a million times in a row would make you tired and distracted enough to make trivial mistakes. At some level we know this argument is misleading, because nobody calls the non-buyer of lottery tickets irrational for assigning an even lower probability to a win.)

If one makes up a larger number, say one in a thousand, then one is considered a bad person for wanting to take even one chance in a thousand of destroying the world.

The fallacy here is privileging the hypothesis: http://wiki.lesswrong.com/wiki/Privileging_the_hypothesis


We're in danger. I must tell the others...

3 AllanCrossman 13 October 2009 11:06PM

... Oh, no! I've been shot!

— C3PO

A strange sort of paralysis can occur when risk-averse people (like me) decide that we're going to play it safe. We imagine the worst thing that could happen if we go ahead with our slightly risky plan, and this stops us from carrying it out.

One possible way of overcoming such paralysis is to remind yourself just how much danger you're actually in.

Humanity could be mutilated by nuclear war, biotechnology disasters, societal meltdown, environmental collapse, oppressive governments, disagreeable AI, or other horrors. On an individual level, anybody's life could turn sour for more mundane reasons, from disease to bereavement to divorce to unemployment to depression. The terrifying scenarios depend on your values, and differ from person to person. Those here who hope to live forever may die of old age, and then cryonics turns out not to work.

There must be some number X which is the probability of Really Bad Things happening to you. X is probably not a tiny figure, but instead significantly above zero, which encourages you to go ahead with whatever slightly risky plan you were contemplating, as long as it only nudges X upwards a little.

Admittedly, this tactic seems like a cheap hack that relies on an error in human reasoning - is nudging your danger level from .2 to .201 actually more acceptable than nudging it from 0 to .001? Perhaps not. Needless to say, a real rationalist ought to ignore all this and take the action with the highest expected value.
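The question in the paragraph above has a simple arithmetic answer: the absolute risk added is identical in both framings; only the baseline differs (a minimal sketch of the numbers quoted above):

```python
# Nudging your danger level from 0.2 to 0.201 adds exactly as much
# absolute risk as nudging it from 0 to 0.001 - the two framings
# only feel different because of the larger baseline.
nudge = 0.001
from_high = (0.2 + nudge) - 0.2
from_low = (0.0 + nudge) - 0.0

# Round to sidestep floating-point noise before comparing.
print(round(from_high, 6) == round(from_low, 6))  # True
```

Which is exactly why an expected-value reasoner should treat the two nudges the same, as the paragraph concludes.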

Why safety is not safe

48 rwallace 14 June 2009 05:20AM

June 14, 3009

Twilight still hung in the sky, yet the Pole Star was visible above the trees, for it was a perfect cloudless evening.

"We can stop here for a few minutes," remarked the librarian as he fumbled to light the lamp. "There's a stream just ahead."

The driver grunted assent as he pulled the cart to a halt and unhitched the thirsty horse to drink its fill.

It was said that in the Age of Legends, there had been horseless carriages that drank the black blood of the earth, long since drained dry. But then, it was said that in the Age of Legends, men had flown to the moon on a pillar of fire. Who took such stories seriously?

The librarian did. In his visit to the University archive, he had studied the crumbling pages of a rare book in Old English, itself a copy a mere few centuries old, of a text from the Age of Legends itself; a book that laid out a generation's hopes and dreams, of building cities in the sky, of setting sail for the very stars. Something had gone wrong - but what? That civilization's capabilities had been so far beyond those of his own people. Its destruction should have taken a global apocalypse of the kind that would leave unmistakable record both historical and archaeological, and yet there was no trace. Nobody had anything better than mutually contradictory guesses as to what had happened. The librarian intended to discover the truth.

Forty years later he died in bed, his question still unanswered.

The earth continued to circle its parent star, whose increasing energy output could no longer be compensated by falling atmospheric carbon dioxide concentration. Glaciers advanced, then retreated for the last time; as life struggled to adapt to changing conditions, the ecosystems of yesteryear were replaced by others new and strange - and impoverished. All the while, the environment drifted further from that which had given rise to Homo sapiens, and in due course one more species joined the billions-long roll of the dead. For what was by some standards a little while, eyes still looked up at the lifeless stars, but there were no more minds to wonder what might have been.


The Thing That I Protect

17 Eliezer_Yudkowsky 07 February 2009 07:18PM

Followup to: Something to Protect, Value is Fragile

"Something to Protect" discoursed on the idea of wielding rationality in the service of something other than "rationality".  Not just that rationalists ought to pick out a Noble Cause as a hobby to keep them busy; but rather, that rationality itself is generated by having something that you care about more than your current ritual of cognition.

So what is it, then, that I protect?

I quite deliberately did not discuss that in "Something to Protect", leaving it only as a hanging implication.  In the unlikely event that we ever run into aliens, I don't expect their version of Bayes's Theorem to be mathematically different from ours, even if they generated it in the course of protecting different and incompatible values.  Among humans, the idiom of having "something to protect" is not bound to any one cause, and therefore, to mention my own cause in that post would have harmed its integrity.  Causes are dangerous things, whatever their true importance; I have written somewhat on this, and will write more about it.

But still - what is it, then, the thing that I protect?

Friendly AI?  No - a thousand times no - a thousand times not anymore.  It's not thinking of the AI that gives me strength to carry on even in the face of inconvenience.


Investing for the Long Slump

8 Eliezer_Yudkowsky 22 January 2009 08:56AM

I have no crystal ball with which to predict the Future, a confession that comes as a surprise to some journalists who interview me.  Still less do I think I have the ability to out-predict markets.  On every occasion when I've considered betting against a prediction market - most recently, betting against Barack Obama as President - I've been glad that I didn't.  I admit that I was concerned in advance about the recent complexity crash, but then I've been concerned about it since 1994, which isn't very good market timing.

I say all this so that no one panics when I ask:

Suppose that the whole global economy goes the way of Japan (which, by the Nikkei 225, has now lost two decades).

Suppose the global economy is still in the Long Slump in 2039.

Most market participants seem to think this scenario is extremely implausible.  Is there a simple way to bet on it at a very low price?

If most traders act as if this scenario has a probability of 1%, is there a simple bet, executable using an ordinary brokerage account, that pays off 100 to 1?
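The 100-to-1 figure follows directly from the 1% market probability; a sketch (the 5% personal estimate below is a hypothetical figure, not a number from the post):

```python
# A contract paying $1 if the Long Slump happens should cost about
# $0.01 when the market treats the event as 1% likely.
market_prob = 0.01
contract_price = market_prob * 1.00   # price of a $1-payoff contract
payout_ratio = 1.00 / contract_price
print(payout_ratio)  # 100.0 - roughly 100-to-1

# The bet has positive expected value whenever your own probability
# estimate exceeds the market's. (0.05 is hypothetical.)
p_yours = 0.05
expected_value = p_yours * 1.00 - contract_price
print(expected_value > 0)  # True
```

In practice the difficulty is finding a real instrument with this payoff profile, which is what the post goes on to ask.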

Why do I ask?  Well... in general, it seems to me that other people are not pessimistic enough; they prefer not to stare overlong or overhard into the dark; and they attach too little probability to things operating in a mode outside their past experience.

But in this particular case, the question is motivated by my thinking, "Conditioning on the proposition that the Earth as we know it is still here in 2040, what might have happened during the preceding thirty years?"


Beyond the Reach of God

68 Eliezer_Yudkowsky 04 October 2008 03:42PM

Followup to: The Magnitude of His Own Folly

Today's post is a tad gloomier than usual, as I measure such things.  It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me.  Those readers sympathetic to arguments like, "It's important to keep our biases because they help us stay happy," should consider not reading.  (Unless they have something to protect, including their own life.)

So!  Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future's vulnerability—a reluctance to accept that things could really turn out wrong.  Not as the result of any explicit propositional verbal belief.  More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.

Some would account this a virtue (zettai daijobu da yo, roughly "everything will definitely be all right"), and others would say that it's a thing necessary for mental health.

But we don't live in that world.  We live in the world beyond the reach of God.


The Magnitude of His Own Folly

27 Eliezer_Yudkowsky 30 September 2008 11:31AM

Followup to: My Naturalistic Awakening, Above-Average AI Scientists

In the years before I met that would-be creator of Artificial General Intelligence (with a funded project) who happened to be a creationist, I would still try to argue with individual AGI wannabes.

In those days, I sort-of-succeeded in convincing one such fellow that, yes, you had to take Friendly AI into account, and no, you couldn't just find the right fitness metric for an evolutionary algorithm.  (Previously he had been very impressed with evolutionary algorithms.)

And the one said:  Oh, woe!  Oh, alas!  What a fool I've been!  Through my carelessness, I almost destroyed the world!  What a villain I once was!

Now, there's a trap I knew better than to fall into—

—at the point where, in late 2002, I looked back to Eliezer1997's AI proposals and realized what they really would have done, insofar as they were coherent enough to talk about what they "really would have done".

When I finally saw the magnitude of my own folly, everything fell into place at once.  The dam against realization cracked; and the unspoken doubts that had been accumulating behind it, crashed through all together.  There wasn't a prolonged period, or even a single moment that I remember, of wondering how I could have been so stupid.  I already knew how.

And I also knew, all at once, in the same moment of realization, that to say, I almost destroyed the world!, would have been too prideful.

It would have been too confirming of ego, too confirming of my own importance in the scheme of things, at a time when—I understood in the same moment of realization—my ego ought to be taking a major punch to the stomach.  I had been so much less than I needed to be; I had to take that punch in the stomach, not avert it.


Fighting a Rearguard Action Against the Truth

13 Eliezer_Yudkowsky 24 September 2008 01:23AM

Followup to: That Tiny Note of Discord, The Importance of Saying "Oops"

When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI.  His reasons for doing this don't matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism.  If you practice something, you may get better at it; if you investigate something, you may find out about it; the only thing that matters is that Eliezer2000 is, in fact, focusing his full-time energies on thinking technically about AI morality; rather than, as previously, finding a justification for not spending his time this way.  In the end, this is all that turns out to matter.

But as our story begins—as the sky lightens to gray and the tip of the sun peeks over the horizon—Eliezer2001 hasn't yet admitted that Eliezer1997 was mistaken in any important sense.  He's just making Eliezer1997's strategy even better by including a contingency plan for "the unlikely event that life turns out to be meaningless"...

...which means that Eliezer2001 now has a line of retreat away from his mistake.

I don't just mean that Eliezer2001 can say "Friendly AI is a contingency plan", rather than screaming "OOPS!"

I mean that Eliezer2001 now actually has a contingency plan.  If Eliezer2001 starts to doubt his 1997 metaethics, the Singularity has a fallback strategy, namely Friendly AI.  Eliezer2001 can question his metaethics without it signaling the end of the world.

And his gradient has been smoothed; he can admit a 10% chance of having previously been wrong, then a 20% chance.  He doesn't have to cough out his whole mistake in one huge lump.

If you think this sounds like Eliezer2001 is too slow, I quite agree.
