Nobody here is claiming that people naturally reason in a Bayesian way.
We are claiming that they should.
If people don't reason in a Bayesian way, but they do reason, it implies there is a non-Bayesian way to reason which works (at least a fair amount, e.g. we managed to build computers and space ships). Right?
Claims that people think in an inductive way are common here. Note how my descriptions are different from that and account for the evidence.
Someone told me that humans do and must think in a Bayesian way at some level because it's the only way that works.
As Eliezer said in Searching for Bayes-Structure:
The way you begin to grasp the Quest for the Holy Bayes is that you learn about cognitive phenomenon XYZ, which seems really useful - and there's this bunch of philosophers who've been arguing about its true nature for centuries, and they are still arguing - and there's a bunch of AI scientists trying to make a computer do it, but they can't agree on the philosophy either -
And - Huh, that's odd! - this cognitive phenomenon didn't look anything like Bayesian on the surface, but there's this non-obvious underlying structure that has a Bayesian interpretation - but wait, there's still some useful work getting done that can't be explained in Bayesian terms - no wait, that's Bayesian too - OH MY GOD this completely different cognitive process, that also didn't look Bayesian on the surface, ALSO HAS BAYESIAN STRUCTURE - hold on, are these non-Bayesian parts even doing anything?
- Yes: Wow, those are Bayesian too!
- No: Dear heavens, what a stupid design. I could eat a bucket of amino acids and puke a better brain architecture than that.
Someone told me that humans do and must think in a Bayesian way at some level because it's the only way that works.
Humans think in an approximately Bayesian way. The biases are the places where the approximation breaks down, and human thinking starts to fail.
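To make "approximately Bayesian" concrete, here is what an exact update looks like in the classic base-rate-neglect setting (a minimal sketch; the disease and test numbers below are hypothetical, chosen only for illustration):

```python
# Exact Bayesian update for a diagnostic test (hypothetical numbers).
prior = 0.01        # P(disease): a 1% base rate
sensitivity = 0.90  # P(positive | disease)
false_pos = 0.05    # P(positive | no disease)

# Bayes' theorem: P(disease | positive) = P(pos | d) * P(d) / P(pos)
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.154
```

Subjects given questions like this commonly answer something near 0.9, ignoring the base rate entirely; that gap is exactly the sort of place where the approximation breaks down.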
Claims that people think in an inductive way are common here. Note how my descriptions are different from that and account for the evidence.
You have not given one example of non-inductive thinking. I really do not see how you could get through the day without induction.
I am riding my bike to college after it rained during the night, and I notice that the rain has turned a path I use into a muddy swamp, so I have to take a detour and arrive late. Next time it rains, I leave home early because I expect to encounter mud again. (A quantitative version of this inference is sketched after the list below.)
If you wish to claim that most people are non-inductive you must either:
1) Show that I am unusual for thinking in this way
or
2) Show how someone else could come to the same conclusion without induction.
If you choose 1) then you must also show why this freakishness puts me at a disadvantage, or concede that other people should be inductive.
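For what it's worth, the mud inference can be made quantitative. One standard Bayesian formalization (a sketch, assuming a uniform prior over the chance of mud given rain) is Laplace's rule of succession:

\[ P(\text{mud on the next rainy day} \mid \text{mud on all } n \text{ previous rainy days}) = \frac{n+1}{n+2} \]

After a single rain-then-mud observation (n = 1), the predicted probability of mud next time is 2/3, which is already enough to justify leaving early.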
If people don't reason in a Bayesian way, but they do reason, it implies there is a non-Bayesian way to reason which works (at least a fair amount, e.g. we managed to build computers and space ships).
There is. That does not mean that it is without error, or that errors are not errors. A&B is, everywhere and always, no more likely than A. Any method of concluding otherwise is wrong. If the form of reasoning that Popper advocates endorses this error, it is wrong.
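For completeness, the one-line proof of that claim is just the product rule of probability:

\[ P(A \wedge B) = P(A)\,P(B \mid A) \le P(A), \qquad \text{since } 0 \le P(B \mid A) \le 1. \]

Any procedure that ranks a conjunction above one of its own conjuncts violates this, no matter what else it gets right.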
Someone told me that humans do and must think in a Bayesian way at some level because it's the only way that works.
Whoever that was is wrong.
Eliezer can say whether curi's view is a correct reading of that article, but it seems to me that if Bayesian reasoning is the core that works, while humans also do a lot of other stuff that is either useless or harmful, and can't even tell the gold from the dross, then that is not in contradiction with the other stuff being due to Popperian reasoning. It rather counts against Popper, though. Or at least against Popperianism.
Here's someone saying it again by quoting Yudkowsky saying it:
http://lesswrong.com/lw/56e/do_people_think_in_a_bayesian_or_popperian_way/3w7o
No doubt Yudkowsky is wrong, as you say.
The core of the problem:
Someone told me that humans do and must think in a Bayesian way at some level because it's the only way that works.
No link to that someone? If you can remember who it was, you should go and argue with them. To everyone else, this is a straw man.
(Certainly there are researchers looking for Bayes structure in low-level neural processing, but those investigations focus on tasks far below human cognition.)
Here's someone saying it again by quoting Yudkowsky saying it:
http://lesswrong.com/lw/56e/do_people_think_in_a_bayesian_or_popperian_way/3w7o
Some straw man... I thought people would be familiar with this kind of thing without me having to quote it.
Please, stop. This has gone on long enough. You don't have to respond to everything, and you shouldn't respond to everything. By trying to do so, you have generated far more text than any reasonable person would be willing to read, and it's basically just repeating the same incorrect position over and over again. It is quite clear that we are not having a rational discussion, so there is nothing further to say.
What beneficial effect have you observed? I ask because people were complaining about the forum being popperclipped. Do you disagree with these complaints? Or do you think that the karma system has trained the low-karma popperclipping participants to improve the quality of their comments? One of them recently wrote a post admitting and defending the tactic of being obnoxious; he said that his obnoxiousness was meant to filter out time-wasters.
I mean that curi now has insufficient karma to post on the main page, and his comments are generally heavily downvoted. People can disable viewing of low-karma comments, so popperclipping (whatever it means - did the old term "troll" go out of fashion?) may not be a problem. Therefore I think that karma works.
Curi's karma periodically spikes despite his posting no significantly upvoted comments and no improvement in his reception. I suspect he, or someone else who frequents his site, may be generating puppet accounts to feed his comments karma (his older comments appear to have gone through periodic blanket spikes). Thanks to these spikes, he has posted main-page and discussion articles multiple times after his karma dropped to zero, without first producing any more upvoted comments.
I asked matt if this could be confirmed, but apparently there's only a very time-consuming method to gather anything other than circumstantial evidence for the accusation.
I asked matt if this could be confirmed, but apparently there's only a very time-consuming method to gather anything other than circumstantial evidence for the accusation.
Jimrandomh had an idea for setting up a script that might help; maybe talk to him? In any event, it might be useful to have this capability in general. That said, since this is the first time we've had such a problem, it doesn't seem, as of right now, that the issue is common enough to justify investing in additional capabilities for the software.
popperclipping (whatever it means...)
I believe that "popperclipping" is a play on words, a joke, alluding to a popular LW topic. Explaining it more might kill the joke.
I mean curi has now insufficient karma to post on the main page
Currently, on the main page, the most recent post under "Recent Posts" is curi's The Conjunction Fallacy Does Not Exist. The comments under this are showing up in the Recent Comments column. Of the five comments I see in the recent comments column, three are comments under curi's posts. That is a majority. As of now, then, it appears that curi continues to dominate discussion, either directly or by triggering responses.
Damn, I thought it was in the discussion section. Then I retract my statement that karma works. Still, what's the explanation? Where did curi get enough karma to balance the blow from his heavily downvoted comments and posts? I have looked at two pages of his recent activity, where his score was -112 (-70 for the main-page post, -42 for the rest). And I know he was near zero after his last-but-one main-page post was published.
I believe that "popperclipping" is a play on words, a joke, ...
Certainly. I was only noting that nobody had spelled out the standard name for that behaviour.
Seconded. When I discovered this ongoing conversation on Popperian epistemology, there were already three threads, some with hundreds of comments, and no signs of progress or mutual agreement, only argument. There may be some comments worth reading in the stack, but they're not worth the effort of digging.
While agreeing with you completely, I'll also point out that quite a few people have been feeding this particular set of threads... that is, continuing to have, at enormous length, a discussion in which no progress is being made.
Others have already answered this, but there's another problem: you clearly haven't read the actual literature on the conjunction fallacy. It doesn't just occur in the form "A because of B." It connects with the representativeness heuristic. Thus, for suitably chosen A and B, people act like "A and B" is more likely than "A". See Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293-315. doi:10.1037/0033-295X.90.4.293
Please stop posting and read the literature on these issues.
With the Allais Paradox, would you say that the decisions people make are consistent with Popperian philosophy? Or at any rate would you say that, as a Popperian, you would make similar decisions?
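For readers who don't have the setup memorized, the usual form of the paradox runs as follows (exact payoffs vary by presentation; these are the commonly quoted ones):

Gamble 1A: $1M with certainty.
Gamble 1B: $5M with probability 0.10, $1M with probability 0.89, nothing with probability 0.01.
Gamble 2A: $1M with probability 0.11, nothing with probability 0.89.
Gamble 2B: $5M with probability 0.10, nothing with probability 0.90.

Most people choose 1A and 2B, but under expected utility those two preferences are jointly inconsistent:

\[ \text{1A} \succ \text{1B} \;\Rightarrow\; 0.11\,U(\$1\text{M}) > 0.10\,U(\$5\text{M}) + 0.01\,U(\$0) \]
\[ \text{2B} \succ \text{2A} \;\Rightarrow\; 0.10\,U(\$5\text{M}) + 0.01\,U(\$0) > 0.11\,U(\$1\text{M}) \]

No utility function U can satisfy both inequalities at once, which is why the pattern counts as a paradox for expected-utility theory.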
Are you implying human thinking should be used as some sort of benchmark? Why, in the space of all possible thought processes, would the human family of thought processes, hacked together by evolution to work just barely well enough, represent the ideal? Also, are you applying the 'Popperian' label to human thinking? If I prove human thinking to be wrong by its own standards, have I falsified the Popperian process of approaching truth?
I am not well versed (or much invested) in Bayes, but this is not making much sense.
To clarify/rephrase/expand on this, I think Alexandros is suggesting that the questions "how do humans think?" and "what is a rational way to think?" are separate questions, and if we are discussing the first of the two then perhaps we have been sidetracked.
In fact, this is nicely highlighted by your very first sentence:
People think A&B is more likely than A alone, if you ask the right question. That's not very Bayesian; as far as you Bayesians can tell it's really quite stupid.
That is a quite stupid way to think, and if we want to think rationally we should desire to not think that way, regardless of whether it is in fact a common way of thinking.
I think you should read up on the conjunction fallacy. Your example does not address the observations made in the research by Kahneman and Tversky. The questions posed in the research do not assume causal relationships; they simply compare two probabilities. I won't rewrite the whole wiki article, but the upshot of the conjunction fallacy is that people use the representativeness heuristic to assess odds, instead of the correct procedures they would have used if that heuristic weren't cued. People who would never say "Joe rolled a six and a two" is more likely than "Joe rolled a two" do say "Joe is a New Yorker who rides the subway" is more likely than "Joe is a New Yorker", when presented with information about Joe.
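To spell out the dice version (assuming two fair six-sided dice, which is my reading of the example), exhaustive enumeration shows the conjunction can never come out more likely:

```python
from itertools import product

# All 36 equally likely outcomes of rolling two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

p_two = sum(1 for roll in outcomes if 2 in roll) / len(outcomes)
p_six_and_two = sum(1 for roll in outcomes if 2 in roll and 6 in roll) / len(outcomes)

print(f"P(rolled a two)           = {p_two:.3f}")          # 11/36, ~0.306
print(f"P(rolled a six and a two) = {p_six_and_two:.3f}")  # 2/36, ~0.056
```

The conjunction is strictly less likely, as it must be; the "New Yorker who rides the subway" framing cues the representativeness heuristic and gets the analogous comparison backwards.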