So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning

48 Matt_Simpson 14 July 2010 04:51PM

Related to: The Conjunction Fallacy, Conjunction Controversy

The heuristics and biases research program in psychology has discovered many different ways that humans fail to reason correctly under uncertainty.  In experiment after experiment, researchers show that we use heuristics to approximate probabilities rather than making the appropriate calculation, and that these heuristics are systematically biased. However, a tweak in the experimental protocol seems to remove the biases altogether and casts doubt on whether we are actually using heuristics at all. Instead, it appears that the errors are simply an artifact of how our brains internally store information about uncertainty. Theoretical considerations support this view.

EDIT: The view presented here is controversial in the heuristics and biases literature; see Unnamed's comment on this post below.

EDIT 2: The author no longer holds the views presented in this post. See this comment.

A common example of the failure of humans to reason correctly under uncertainty is the conjunction fallacy. Consider the following question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

What is the probability that Linda is:

(a) a bank teller

(b) a bank teller and active in the feminist movement

In a replication by Gigerenzer (1993), 91% of subjects ranked (b) as more probable than (a), saying that it is more likely that Linda is active in the feminist movement AND a bank teller than that Linda is simply a bank teller. The conjunction rule of probability states that the probability of two things being true is less than or equal to the probability of one of those things being true. Formally, P(A & B) ≤ P(A). So this experiment shows that people violate the conjunction rule, and thus fail to reason correctly under uncertainty. The representativeness heuristic has been proposed as an explanation for this phenomenon. To use this heuristic, you evaluate the probability of a hypothesis by comparing how "alike" it is to the data. Someone using the representativeness heuristic looks at the Linda question and sees that Linda's characteristics resemble those of a feminist bank teller much more closely than those of a mere bank teller, and so they conclude that Linda is more likely to be a feminist bank teller than a bank teller.

This is the standard story, but are people really using the representative heuristic in the Linda problem? Consider the following rewording of the question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

There are 100 people who fit the description above. How many of them are:

(a) bank tellers

(b) bank tellers and active in the feminist movement

Notice that the question is now strictly in terms of frequencies. Under this version, only 22% of subjects rank (b) as more probable than (a) (Gigerenzer, 1993). The only thing that changed is the question that is asked; the description of Linda (and the 100 people) remains unchanged, so the representativeness of the description for the two groups should remain unchanged. Thus people are not using the representativeness heuristic - at least not in general.
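The frequency framing makes the conjunction rule a matter of simple counting, which may be why it helps. As a rough illustration (the population and trait probabilities below are invented for the sketch, not data from Gigerenzer's study), the set of feminist bank tellers is a subset of the set of bank tellers, so its count can never be larger:

```python
import random

random.seed(0)

# A hypothetical population of 100 people fitting Linda's description
# (trait probabilities are made up purely for illustration).
people = [{"bank_teller": random.random() < 0.1,
           "feminist": random.random() < 0.8}
          for _ in range(100)]

bank_tellers = sum(p["bank_teller"] for p in people)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in people)

# The conjunction rule P(A and B) <= P(A), seen as a count: every
# feminist bank teller is, in particular, a bank teller.
assert feminist_tellers <= bank_tellers
print(bank_tellers, feminist_tellers)
```

Framed this way, answering "(b) is larger" would require counting more people in a subset than in its superset, which is hard to do even by accident.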

continue reading »

Positive Bias Test (C++ program)

26 MBlume 19 May 2009 09:32PM

I've written a program which tests positive bias using Wason's procedure from "On the failure to eliminate hypotheses in a conceptual task" (Quarterly Journal of Experimental Psychology, 12: 129-140, 1960). If the user does not discover the correct rule, the program attempts to guess, based on the user's input, what rule the user did find, and explains the existence of the more general rule. The program then directs the user here.
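MBlume's actual C++ source isn't reproduced here, but the core of a Wason 2-4-6 style test is simple enough to sketch. The following is a hypothetical illustration (in Python for brevity; the rules shown are the classic ones from Wason's paper, not necessarily the triplets or wrong rules in MBlume's program):

```python
def true_rule(a, b, c):
    """Wason's actual rule: any strictly ascending sequence."""
    return a < b < c

def narrow_rule(a, b, c):
    """A typical over-specific guess: ascending even numbers."""
    return a < b < c and a % 2 == 0 and b % 2 == 0 and c % 2 == 0

# A positive-bias tester only proposes triplets that fit their hypothesis,
# so every test "confirms" both rules and never separates them.
confirming = [(2, 4, 6), (4, 8, 10), (10, 20, 30)]
for t in confirming:
    assert true_rule(*t) and narrow_rule(*t)

# Only a triplet *outside* the hypothesis can falsify it.
disconfirming = (1, 2, 3)
assert true_rule(*disconfirming) and not narrow_rule(*disconfirming)
print("a triplet outside the hypothesis separates the rules")
```

A program like MBlume's can exploit this structure: by logging which triplets the user proposed, it can guess which over-narrow rule they were confirming.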

I'd like to use a better set of triplets, and perhaps include more wrong rules. The program should be fairly flexible in this way.

I'd also like to set up a web-based front-end to the program, but I do not currently know any CGI.

I'm not completely happy with the program's textual output. It still feels a bit like the program is scolding the user at the end. Not quite sure how to fix this.

Program source

ETA: Here is a Macintosh executable version of the program. I do not have any means to make an .exe file, but if anyone does, I can host it.

If you're on Linux, I'm just going to assume you know what to do with a .cpp file =P

Here is a sample run of the program (if you're unfamiliar with positive bias, or the wason test, I'd really encourage you to try it yourself before reading):

continue reading »

Bad reasons for a rationalist to lose

30 matt 18 May 2009 10:57PM

Reply to: Practical Advice Backed By Deep Theories

Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might only work for some of us and are not backed by Deep Theories. In support of tinkering with brain hacks and self-experimentation where deep science and large trials are not available.

Eliezer has suggested that, before he will try a new anti-akrasia brain hack:

[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.

I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.

So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?

  • We need a goal: Eliezer has suggested "I want to hear how I can overcome akrasia - how I can have more willpower, or get more done with less mental pain". I'd fold cost in with something like "to reduce the personal costs of akrasia by more than the investment in trying and implementing brain hacks against it plus the expected profit on other activities I could undertake with that time".
  • We need some likelihood estimates:
    • Chance of a random brain hack working on first trial: ?, second trial: ?, third: ?
    • Chance of a random brain hack working on subsequent trials (after the third - the noise of mood, wakefulness, etc. is large, so subsequent trials surely have non-zero chance of working, but that chance will probably diminish): →0
    • Chance of a popular brain hack working on first (second, third) trial: ? (GTD is lauded by many many people; your brother in law's homebrew brain hack is less well tried)
    • Chance that a brain hack that would work in the first three trials would seem deeply compelling on first being exposed to it: ?
      (can these books be judged by their covers? how does this chance vary with the type of exposure? what would you need to do to understand enough about a hack that would work to increase its chance of seeming deeply compelling on first exposure?)
    • Chance that a brain hack that would not work in the first three trials would seem deeply compelling on first being exposed to it: ? (false positives)
    • Chance of a brain hack recommended by someone in your circle working on first (second, third) trial: ?
    • Chance that someone else will read up "on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas", all soon: ? (pretty small?)
    • What else do we need to know?
  • We need some time/cost estimates (these will vary greatly by proposed brain hack):
    • Time required to stage a personal experiment on the hack: ?
    • Time to review and understand the hack in sufficient detail to estimate the time required to stage a personal experiment?
    • What else do we need?

… and, what don't we need?

  • A way to reject the placebo effect - if it wins, use it. If it wins for you but wouldn't win for someone else, then they have a problem. We may choose to spend some effort helping others benefit from this hack, but that seems to be a different task - it's irrelevant to our goal.


How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?
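The expected-utility comparison the post calls for can be sketched with placeholder numbers. All figures below are invented stand-ins for the "?" entries above, not estimates from the post; the point is only the shape of the calculation:

```python
# Toy expected-utility comparison for trying one brain hack.
# Every number here is an invented placeholder for the "?" estimates.
p_works = 0.15          # chance the hack works within a few trials
value_if_works = 100.0  # value of reduced akrasia, arbitrary utility units
trial_cost = 5.0        # time cost of reviewing and trying the hack
alternative_value = 4.0 # expected value of spending that time elsewhere

eu_try = p_works * value_if_works - trial_cost
eu_skip = alternative_value

# With these placeholders, trying wins even though p_works is low --
# which is the post's point about not "needing" certainty or Deep Theory.
print(eu_try, eu_skip, eu_try > eu_skip)
```

Plugging in your own estimates (and a per-hack prior, e.g. higher for GTD than for your brother-in-law's homebrew trick) turns the bullet list above into an actual decision procedure.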

Share Your Anti-Akrasia Tricks

20 Vladimir_Golovin 15 May 2009 07:06PM

People have been encouraging me to share my anti-akrasia tricks, but it feels inappropriate to dedicate a top-level post solely to unproven techniques that work for one person and may not work for others, so:

Go ahead and share your anti-akrasia tricks!

Let's make it an open thread where we just share what works and what doesn't, without worrying (yet) about having to explain tricks with deep theories, or designing proper experiments to verify them. However, if you happen to have a theory or a proposed experiment in mind, please share.

Bragging is fine, but please share the failures of your techniques as well – they are just as valuable, if not more.

Note to readers – before you read the comments and try the tricks, keep in mind that the techniques below are not yet proven, supported, or explained by proper experiments, and are not yet backed by theory. They may work for their authors, but are not guaranteed to work for you, so try them at your own risk. It would be even better to read the following posts before rushing to try the tricks:

Cheerios: An "Untested New Drug"

5 MBlume 15 May 2009 02:26AM

I found this letter from the US Food and Drug Administration to General Mills interesting. It appears on the surface that the agency is trying to protect the American public from ungrounded persuasion, yet I can't find anything in the letter claiming that GM has made an unsupported statement.

Does anyone understand this better than I do?

A Parable On Obsolete Ideologies

113 Yvain 13 May 2009 10:51PM

Followup to:  Yudkowsky and Frank on Religious Experience, Yudkowsky and Frank On Religious Experience Pt 2
With sincere apologies to: Mike Godwin

You are General Eisenhower. It is 1945. The Allies have just triumphantly liberated Berlin. As the remaining leaders of the old regime are being tried and executed, it begins to become apparent just how vile and despicable the Third Reich truly was.

In the midst of the chaos, a group of German leaders come to you with a proposal. Nazism, they admit, was completely wrong. Its racist ideology was false and its consequences were horrific. However, in the bleak poverty of post-war Germany, people need to keep united somehow. They need something to believe in. And a whole generation of them have been raised on Nazi ideology and symbolism. Why not take advantage of the national unity Nazism provides while discarding all the racist baggage? "Make it so," you say.

The swastikas hanging from every boulevard stay up, but now they represent "traditional values" and even "peace". Big pictures of Hitler still hang in every government office, not because Hitler was right about racial purity, but because he represents the desire for spiritual purity inside all of us, and the desire to create a better society by any means necessary. It's still acceptable to shout "KILL ALL THE JEWS AND GYPSIES AND HOMOSEXUALS!" in public places, but only because everyone realizes that Hitler meant "Jews" as a metaphor for "greed", "gypsies" as a metaphor for "superstition", and "homosexuals" as a metaphor for "lust", and so what he really meant is that you need to kill the greed, lust, and superstition in your own heart. Good Nazis love real, physical Jews! Some Jews even choose to join the Party, inspired by their principled stand against spiritual evil.

The Hitler Youth remains, but it's become more or less a German version of the Boy Scouts. The Party infrastructure remains, but only as a group of spiritual advisors helping people fight the untermenschen in their own soul. They suggest that, during times of trouble, people look to Mein Kampf for inspiration. If they open to a sentence like "The Aryan race shall conquer all in its path", then they can interpret "the Aryan race" to mean "righteous people", and the sentence is really just saying that good people can do anything if they set their minds to it. Isn't that lovely?

Soon, "Nazi" comes to just be a synonym for "good person". If anyone's not a member of the Nazi Party, everyone immediately becomes suspicious. Why is she against exterminating greed, lust, and superstition from her soul? Does she really not believe good people can do anything if they set their minds to it? Why does he oppose caring for your aging parents? We definitely can't trust him with high political office.

continue reading »

No One Knows Stuff

7 talisman 12 May 2009 05:11AM

Take a second to go upvote You Are A Brain if you haven't already...

Back?  OK.

Liron's post reminded me of something that I meant to say a while ago.  In the course of giving literally hundreds of job interviews to extremely high-powered technical undergraduates over the last five years, one thing has become painfully clear to me:  even very smart and accomplished and mathy people know nothing about rationality.

For instance, reasoning by expected utility, which you probably consider too basic to mention, is something they absolutely fall flat on.  Ask them why they choose as they do in simple gambles involving risk, and they stutter and mutter and fail.  Even the Econ majors.  Even--perhaps especially--the Putnam winners.
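The sort of simple gamble at issue can be stated in a few lines. This is a generic textbook example, invented here for illustration rather than drawn from talisman's interviews:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# A sure $40 versus a coin flip for $100 or nothing.
sure_thing = expected_value([(1.0, 40.0)])
gamble = expected_value([(0.5, 100.0), (0.5, 0.0)])

# A risk-neutral agent prefers the gamble (EV 50 > 40); a risk-averse
# agent may still prefer the sure thing. The interview failure is not
# picking "wrong" -- it's being unable to say which you prefer, and why.
print(sure_thing, gamble)
```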

Of those who have learned about heuristics and biases, a nontrivial minority have gotten confused to the point that they offer Kahneman and Tversky's research as justifying their exhibition of a bias!

So foundational explanatory work like Liron's is really pivotal.  As I've touched on before, I think there's a huge amount to be done in organizing this material and making it approachable for people who don't have the basics.  Who's going to write the Intuitive Explanation of Utility Theory?

Meanwhile, I need to brush up on my Python and find a way to upvote Liron more than once.  If only...

Update: Tweaked language per suggestion, added Kahneman and Tversky link.

Willpower Hax #487: Execute by Default

47 Eliezer_Yudkowsky 12 May 2009 06:46AM

This is a trick that I use for getting out of bed in the morning - quite literally:  I count down from 10 and get out of bed after the "1".

It works because instead of deciding to get out of bed, I just have to decide to implement the plan to count down from 10 and then get out of bed.  Once the plan is in motion, the final action no longer requires an effortful decision - that's the theory, anyway.  And to start the plan doesn't require as much effort because I just have to think "10, 9..."

As usual with such things, there's no way to tell whether it works because it's based on any sort of realistic insight or if it works because I believe it works; and in fact this is one of those cases that blurs the boundary between the two.

The technique was originally inspired by reading some neurologist suggesting that what we have is not "free will" so much as "free won't": that is, frontal reflection is mainly good for suppressing the default mode of action, more than originating new actions.

Pondering that for a bit inspired the idea that - if the brain carries out certain plans by default - it might conserve willpower to first visualize a sequence of actions and try to 'mark' it as the default plan, and then lift the attention-of-decision that agonizes whether or not to do it, thus allowing that default to happen.

continue reading »

Beware Trivial Inconveniences

90 Yvain 06 May 2009 10:04PM

The Great Firewall of China. A massive system of centralized censorship purging the Chinese version of the Internet of all potentially subversive content. Generally agreed to be a great technical achievement and political success even by the vast majority of people who find it morally abhorrent.

I spent a few days in China. I got around it at the Internet cafe by using a free online proxy. Actual Chinese people have dozens of ways of getting around it with a minimum of technical knowledge or just the ability to read some instructions.

The Chinese government isn't losing any sleep over this (although they also don't lose any sleep over murdering political dissidents, so maybe they're just very sound sleepers). Their theory is that by making it a little inconvenient and time-consuming to view subversive sites, they will discourage casual exploration. No one will bother to circumvent it unless they already seriously distrust the Chinese government and are specifically looking for foreign websites, and these people probably know what the foreign websites are going to say anyway.

Think about this for a second. The human longing for freedom of information is a terrible and wonderful thing. It delineates a pivotal difference between mental emancipation and slavery. It has launched protests, rebellions, and revolutions. Thousands have devoted their lives to it, thousands of others have even died for it. And it can be stopped dead in its tracks by requiring people to search for "how to set up proxy" before viewing their anti-government website.

continue reading »

Epistemic vs. Instrumental Rationality: Case of the Leaky Agent

14 Wei_Dai 07 May 2009 11:09PM

Suppose you hire a real-estate agent to sell your house. You have to leave town so you give him the authority to negotiate with buyers on your behalf. The agent is honest and hard working. He'll work as hard to get a good price for your house as if he were selling his own house. But unfortunately, he's not very good at keeping secrets. He wants to know the minimum amount you're willing to sell the house for, so he can do the negotiations for you. But you know that if you answer him truthfully, he's liable to leak that information to buyers, giving them a bargaining advantage and driving down the expected closing price. What should you do? Presumably most of you in this situation would give the agent a figure that's higher than the actual minimum. (How much higher involves optimizing a tradeoff between the extra money you get if the house sells, versus the probability that you can't find a buyer at the higher fictional minimum.)

Now here's the kicker: that agent is actually your future self. Would you tell yourself a lie, if you could believe it (perhaps with the help of future memory modification technologies), and if you could profit from it?

Edit: Some commenters have pointed out that this change in "minimum acceptable price" may not be exactly a lie. I should have made the example a bit clearer. Let's say if you fail to sell the house by a certain date, it will be repossessed by the bank, so the minimum acceptable price is the amount left on your mortgage, since you're better off selling the house for any amount above that than not selling it. But if buyers know that, they can just offer you slightly above the minimum acceptable price. It will help you get a better bargain if you can make yourself believe that the amount left on your mortgage is higher than it really is. This should be unambiguously a lie.
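The tradeoff in the parenthetical can be written as a one-line optimization: choose the stated minimum m to maximize P(sale | m) · (m − true minimum). A minimal sketch, where the sale-probability curve and all dollar amounts are invented placeholders rather than anything from the post:

```python
# Sketch of the stated-minimum tradeoff. The demand curve p_sale and the
# mortgage figure are hypothetical, chosen only to make the shape visible.
true_min = 200_000.0  # amount left on the mortgage

def p_sale(stated_min):
    """Hypothetical: higher stated minimums scare off more buyers,
    falling linearly to zero $100k above the true minimum."""
    return max(0.0, 1.0 - (stated_min - true_min) / 100_000.0)

# Search stated minimums from the true minimum up to $100k above it.
candidates = [true_min + k * 1_000 for k in range(101)]
best = max(candidates, key=lambda m: p_sale(m) * (m - true_min))
print(best)  # with this linear demand curve, the optimum is the midpoint
```

With a linear demand curve the expected surplus (m − true_min) · p_sale(m) is a downward parabola, so the optimum lands halfway up the range; a real demand curve would move it, but the structure of the decision is the same — which is exactly what the leaky agent (or your future self) undermines by revealing true_min.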
