People think A&B is more likely than A alone, if you ask the right question. That's not very Bayesian; as far as you Bayesians can tell it's really quite stupid.
Isn't that maybe evidence that Bayesianism is failing to model how people actually think?
Popperian philosophy can make sense of this (without hating on everyone! it's not good to hate on people when there are better options available). It explains it like this: people like explanations. When you say "A happened because B happened", it sounds to them like a pretty good explanatory theory which makes sense. When you say "A alone", they don't see any explanation; they read it as "A happened for no apparent reason", which is a bad explanation, so they score it worse.
To concretize this, you could use A = economic collapse and B = nuclear war.
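A quick worked example, with numbers I've made up purely for illustration: even if nuclear war would very likely cause economic collapse, the conjunction cannot come out more probable than collapse alone.

```latex
% A = economic collapse, B = nuclear war. Illustrative numbers only.
\begin{align*}
  P(B) &= 0.01, \qquad P(A \mid B) = 0.9, \qquad P(A \mid \neg B) = 0.1 \\
  P(A \wedge B) &= P(B)\,P(A \mid B) = 0.01 \times 0.9 = 0.009 \\
  P(A) &= P(A \wedge B) + P(\neg B)\,P(A \mid \neg B) = 0.009 + 0.099 = 0.108
\end{align*}
% The conjunction (0.009) is far less likely than A alone (0.108), even
% though "collapse because of nuclear war" is the better-sounding story.
```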
People are looking for good explanations. They are thinking in a Popperian fashion.
Isn't it weird how you guys talk about all these biases, which basically consist of people not thinking in the way you think they should, but when someone says "hey, actually they think in this way Popper worked out" you think that's crazy because the Bayesian model must be correct? Why did you find all these counterexamples to your own theory and then never notice they mean your theory is wrong? In the cases where people don't think in a Popperian way, Popper explains why (mostly because of the justificationist tradition informing many mistakes since Aristotle).
More examples, from http://wiki.lesswrong.com/wiki/Bias
Scope Insensitivity - The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Changing the number does not change most of the explanations involved: why helping birds is good, what the person can afford to spare, how much charity it takes for the person to feel altruistic enough (or moral enough, involved enough, helpful enough, whatever), and so on. Since the major explanatory factors they were considering don't change in proportion to the number of birds, their answer doesn't change proportionally either.
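As a sketch of this, here's a toy model (my own illustrative assumption, not anything from the wiki or the research): if willingness-to-pay tracks a roughly logarithmic "felt magnitude" of the problem, capped by budget, then multiplying the number of birds by 100 barely moves the answer.

```python
import math

def willingness_to_pay(n_birds, budget=100.0):
    """Toy model: pay in proportion to the felt magnitude of the problem,
    which grows logarithmically, capped by what one can spare.
    (Illustrative assumption, not a model from the source.)"""
    felt_magnitude = math.log10(n_birds)   # 2,000 -> ~3.3; 200,000 -> ~5.3
    return min(budget, 10 * felt_magnitude)

print(willingness_to_pay(2_000))    # ~33 dollars
print(willingness_to_pay(200_000))  # ~53 dollars -- nowhere near 100x more
```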
Correspondence Bias, also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.
This happens because people usually know the explanations/excuses for why they did stuff, but they don't know them for others. And they have more reason to think of them for themselves.
Confirmation bias, or Positive Bias, is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.
People do this because of the justificationist tradition, dating back to Aristotle, which Bayesian epistemology is part of, and which Popper rejected. This is a way people really don't think in the Popperian way -- but they could and should.
Planning Fallacy - We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.
This is also caused by the justificationist tradition, which Bayesian epistemology is part of. It's not fallibilist enough. This is a way people really don't think in the Popperian way -- but they could and should.
Well, that's part of the issue. The other part is that they come up with a good explanation of what will happen, and they go with that. That part of their thinking fits what Popper said people do. The problem is not enough criticism, which stems from the popularity of justificationism.
Do We Believe Everything We're Told? - Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
That's very Popperian. The Popperian way is that you can make conjectures however you want, and you only reject them if there's a criticism. No criticism, no rejection. This contrasts with the justificationist approach, in which ideas are required to (impossibly) have positive support, and the focus is on positive support rather than criticism (thus causing, e.g., Confirmation Bias).
Illusion of Transparency - Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
This one is off topic, but there are several things I wanted to say. First, people don't always know what their own words mean. People talking about tricky concepts like God, qualia, or consciousness often can't explain what they mean by the words if asked. Sometimes people even use words without knowing the definition; they just heard them in a similar circumstance another time or something.
The reason others often don't understand us is the nature of communication. For communication to happen, the other person has to create knowledge of the idea(s) you are trying to express to him. That means he has to make guesses about what you are saying and use criticisms to improve those guesses (e.g. by ruling out interpretations incompatible with the words he heard you use). In this way Popperian epistemology lets us understand communication, and why it's so hard.
Evaluability - It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.
It's because they are trying to come up with a good explanation of what to buy. And "this one is better than this other one" is a pretty simple and easily available kind of explanation to create.
The Allais Paradox (and subsequent followups) - Offered choices between gambles, people make decision-theoretically inconsistent decisions.
How do you know that kind of thing and still think people reason in a Bayesian way? They don't. They just guess at what to gamble, and the quality of the guesses is limited by what criticisms they use. If they don't know much math, then they don't subject their guesses to much mathematical criticism. Hence this mistake.
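For readers who haven't seen it, here is a sketch using the standard textbook version of the Allais gambles (payoffs in millions of dollars), with the algebra showing why the common pattern of choices is decision-theoretically inconsistent:

```python
# Standard textbook Allais gambles, as (probability, payoff-in-$M) pairs.
gamble_1A = [(1.00, 1)]                        # $1M for certain
gamble_1B = [(0.89, 1), (0.10, 5), (0.01, 0)]
gamble_2A = [(0.11, 1), (0.89, 0)]
gamble_2B = [(0.10, 5), (0.90, 0)]

def expected_value(gamble):
    return sum(p * x for p, x in gamble)

for name, g in [("1A", gamble_1A), ("1B", gamble_1B),
                ("2A", gamble_2A), ("2B", gamble_2B)]:
    print(name, expected_value(g))   # 1A: 1.0, 1B: 1.39, 2A: 0.11, 2B: 0.5

# Most people choose 1A over 1B, but 2B over 2A. No utility function u
# can justify both: preferring 1A means
#   u(1) > 0.89*u(1) + 0.10*u(5) + 0.01*u(0)
# i.e. 0.11*u(1) > 0.10*u(5) + 0.01*u(0), which is exactly the condition
# for preferring 2A over 2B. So the common pair of choices is inconsistent.
```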

Nobody here is claiming that people naturally reason in a Bayesian way.

We are claiming that they should.

This, this, a million times this.

curi (13y, -20 points)

If people don't reason in a Bayesian way, but they do reason, it implies there is a non-Bayesian way to reason which works (at least a fair amount, e.g. we managed to build computers and space ships). Right?

Claims that people think in an inductive way are common here. Note how my descriptions are different than that and account for the evidence.

Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.

As Eliezer said in Searching for Bayes-Structure:

The way you begin to grasp the Quest for the Holy Bayes is that you learn about cognitive phenomenon XYZ, which seems really useful - and there's this bunch of philosophers who've been arguing about its true nature for centuries, and they are still arguing - and there's a bunch of AI scientists trying to make a computer do it, but they can't agree on the philosophy either -

And - Huh, that's odd! - this cognitive phenomenon didn't look anything like Bayesian on the surface, but there's this non-obvious underlying structure that has a Bayesian interpretation - but wait, there's still some useful work getting done that can't be explained in Bayesian terms - no wait, that's Bayesian too - OH MY GOD this completely different cognitive process, that also didn't look Bayesian on the surface, ALSO HAS BAYESIAN STRUCTURE - hold on, are these non-Bayesian parts even doing anything?

  • Yes: Wow, those are Bayesian too!
  • No: Dear heavens, what a stupid design. I could eat a bucket of amino acids and puke a better brain architecture than that.

Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.

Humans think in an approximately Bayesian way. The biases are the places where the approximation breaks down, and human thinking starts to fail.
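For concreteness, a minimal sketch of what an exact Bayesian update looks like (my numbers are purely illustrative); base-rate neglect, for example, is the failure to weigh in the prior on the first line:

```python
# Exact Bayesian update for a diagnostic-test-style problem.
# All numbers are illustrative.
prior = 0.01             # P(H): the hypothesis is rare
p_e_given_h = 0.95       # P(E | H)
p_e_given_not_h = 0.05   # P(E | not-H)

# Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # 0.161 -- far below the ~0.95 that people
                            # who neglect the base rate tend to report
```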

Claims that people think in an inductive way are common here. Note how my descriptions are different than that and account for the evidence.

You have not given one example of non-inductive thinking. I really do not see how you could get through the day without induction.

I am riding my bike to college after it rained during the night, and I notice that the rain has caused a path I use to become a muddy swamp, meaning I have to take a detour and arrive late. Next time it rains, I leave home early because I expect to encounter mud again.

If you wish to claim that most people are non-inductive you must either:

1) Show that I am unusual for thinking in this way

or

2) Show how someone else could come to the same conclusion without induction.

If you choose 1) then you must also show why this freakishness puts me at a disadvantage, or concede that other people should be inductive.
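For what it's worth, the mud-after-rain example can itself be written as a simple Bayesian update; here's a minimal sketch using Laplace's rule of succession (the observation counts are hypothetical):

```python
# Laplace's rule of succession: after k muddy mornings in n rainy nights,
# estimate P(mud | rain) = (k + 1) / (n + 2). Counts below are hypothetical.
def p_mud_given_rain(muddy_mornings, rainy_nights):
    return (muddy_mornings + 1) / (rainy_nights + 2)

print(p_mud_given_rain(0, 0))  # 0.5  -- no data yet
print(p_mud_given_rain(1, 1))  # ~0.67 after one rainy night, one muddy path
print(p_mud_given_rain(5, 5))  # ~0.86 -- leaving home early looks sensible
```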

curi (13y, -60 points) [comment collapsed]

If people don't reason in a Bayesian way, but they do reason, it implies there is a non-Bayesian way to reason which works (at least a fair amount, e.g. we managed to build computers and space ships).

There is. That does not mean that it is without error, or that errors are not errors. A&B is, everywhere and always, no more likely than A. Any method of concluding otherwise is wrong. If the form of reasoning that Popper advocates endorses this error, it is wrong.
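The one-line proof of that claim, for reference:

```latex
% For any events A and B:
P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A),
\qquad \text{since } 0 \le P(B \mid A) \le 1.
```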

Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.

Whoever that was is wrong.

Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.

Whoever that was is wrong.

Eliezer?

Eliezer can say whether curi's view is a correct reading of that article, but it seems to me that if Bayesian reasoning is the core that works, while humans also do a lot of other stuff that is either useless or harmful, and don't even know the gold from the dross, then that is not in contradiction with showing that the other stuff is due to Popperian reasoning. It rather counts against Popper, though. Or at least against Popperianism.

curi (13y, +30 points)

Here's someone saying it again by quoting Yudkowsky saying it:

http://lesswrong.com/lw/56e/do_people_think_in_a_bayesian_or_popperian_way/3w7o

No doubt Yudkowsky is wrong, as you say.

See my other response to Oscar_Cunningham, who cited the same article.

The core of the problem:

Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.

No link to that someone? If you can remember who it was, you should go and argue with them. To everyone else, this is a straw man.

Cyan (13y, +40 points)

(Certainly there are researchers looking for Bayes structure in low-level neural processing, but those investigations focus on tasks far below human cognition.)

curi (13y, +30 points)

Here's someone saying it again by quoting Yudkowsky saying it:

http://lesswrong.com/lw/56e/do_people_think_in_a_bayesian_or_popperian_way/3w7o

Some straw man... I thought people would be familiar with this kind of thing without me having to quote it.

Please, stop. This has gone on long enough. You don't have to respond to everything, and you shouldn't respond to everything. By trying to do so, you have generated far more text than any reasonable person would be willing to read, and it's basically just repeating the same incorrect position over and over again. It is quite clear that we are not having a rational discussion, so there is nothing further to say.

Indeed. This Popperclipping of the discussion section should cease.

[anonymous] (13y, +50 points)

This situation seems an ideal test of the karma system.

prase (13y, -30 points)

And it works.

[anonymous] (13y, +80 points)

What beneficial effect have you observed? I ask because people were complaining about the forum being popperclipped. Do you disagree with these complaints? Or do you think that the karma system has trained the low-karma popperclipping participants to improve the quality of their comments? One of them recently wrote a post admitting and defending the tactic of being obnoxious - he said that his obnoxiousness was to filter out time-wasters.

I mean curi now has insufficient karma to post on the main page and his comments are generally heavily downvoted. People can disable viewing low-karma comments, so popperclipping (whatever it means - did the old term "troll" fall out of fashion?) may not be a problem. Therefore I think that karma works.

Curi's karma periodically spikes despite his posting no significantly upvoted comments and no improvement in his reception. I suspect he, or someone else who frequents his site, may be generating puppet accounts to feed his comments karma (his older comments appear to have gone through periodic blanket spikes). He's posted main page and discussion articles multiple times after his karma dropped to zero, without first producing more upvoted comments, thanks to these spikes.

If this is true, it would be natural for the moderators to step in and ban him.

I asked matt if this could be confirmed, but apparently there's only a very time-consuming method to gather anything other than circumstantial evidence for the accusation.

I asked matt if this could be confirmed, but apparently there's only a very time-consuming method to gather anything other than circumstantial evidence for the accusation.

Jimrandomh had an idea for setting up a script that might help; maybe talk to him? In any event, it might be useful to have the capability to do this in general. That said, since this is the first time we've had such a problem, it doesn't seem as of right now that this is a common enough issue to justify investing in additional capabilities for the software.

[anonymous] (13y, +80 points)

popperclipping (whatever it means...)

I believe that "popperclipping" is a play on words, a joke, alluding to a popular LW topic. Explaining it more might kill the joke.

I mean curi has now insufficient karma to post on the main page

Currently, on the main page, the most recent post under "Recent Posts" is curi's The Conjunction Fallacy Does Not Exist. The comments under this are showing up in the Recent Comments column. Of the five comments I see in the recent comments column, three are comments under curi's posts. That is a majority. As of now, then, it appears that curi continues to dominate discussion, either directly or by triggering responses.

Damn, I thought it was in the discussion section. Then I retract my statement that karma works. Still, what's the explanation? Where did curi get enough karma to balance the blow from his heavily downvoted comments and posts? I have looked through two pages of his recent activity, where his score was -112 (-70 for the main page post, -42 for the rest). And I know he was near zero after his last-but-one main page post was published.

[anonymous] (13y, 0 points)

Maybe mass upvoting by sockpuppets?

I believe that "popperclipping" is a play on words, a joke, ...

Certainly. I was only missing the standard name for that behaviour being spelled out loud.

Seconded. When I discovered this ongoing conversation on Popperian epistemology, there were already three threads, some with hundreds of comments, and no sign of progress or mutual agreement, only argument. There may be some comments worth reading in the stack, but they're not worth the effort of digging.

While agreeing with you completely, I'll also point out that quite a few people have been feeding this particular set of threads... that is, continuing to have, at enormous length, a discussion in which no progress is being made.

Others have already answered this, but there's another problem: you clearly haven't read the actual literature on the conjunction fallacy. It doesn't just occur in the form "A because of B." It connects with the representativeness heuristic: for suitably chosen A and B, people act as if "A and B" is more likely than "A". See Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293-315. doi:10.1037/0033-295X.90.4.293

Please stop posting and read the literature on these issues.

With the Allais Paradox, would you say that the decisions people make are consistent with Popperian philosophy? Or at any rate would you say that, as a Popperian, you would make similar decisions?

Are you implying human thinking should be used as some sort of benchmark? Why, in the space of all possible thought processes, would the human family of thought processes, hacked together by evolution to work just barely well enough, represent the ideal? Also, are you applying the "Popperian" label to human thinking? If I prove human thinking to be wrong by its own standards, have I falsified the Popperian process of approaching truth?

I am not well versed (or much invested) in Bayes, but this is not making much sense.

To clarify/rephrase/expand on this, I think Alexandros is suggesting that the questions "how do humans think?" and "what is a rational way to think?" are separate questions, and that if we are discussing the first of the two then perhaps we have been sidetracked.

In fact, this is nicely highlighted by your very first sentence:

People think A&B is more likely than A alone, if you ask the right question. That's not very Bayesian; as far as you Bayesians can tell it's really quite stupid.

That is a quite stupid way to think, and if we want to think rationally we should desire to not think that way, regardless of whether it is in fact a common way of thinking.

curi (13y, -180 points) [comment collapsed]
zaph (13y, +20 points)

I think you should read up on the conjunction fallacy. Your example does not address the observations made in the research by Kahneman and Tversky. The questions posed in the research do not assume causal relationships; they are just two independent probabilities. I won't rewrite the whole wiki article, but the upshot of the conjunction fallacy is that people use the representativeness heuristic to assess odds, instead of the correct procedures they would have used if that heuristic hadn't been cued. People who would never say "Joe rolled a six and a two" is more likely than "Joe rolled a two" do say "Joe is a New Yorker who rides the subway" is more likely than "Joe is a New Yorker", when presented with information about Joe.
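The dice version can even be checked by brute enumeration; a minimal sketch (the two-dice setup is from the comment above, the code is an illustration):

```python
from itertools import product

# All 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))

rolled_a_two = [o for o in outcomes if 2 in o]
rolled_six_and_two = [o for o in outcomes if set(o) == {2, 6}]

print(len(rolled_a_two) / 36)        # 11/36 ~ 0.306
print(len(rolled_six_and_two) / 36)  # 2/36  ~ 0.056
# "A six and a two" picks out a strict subset of "a two": the conjunction
# can never be more likely, no matter what we are told about Joe.
```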