http://lesswrong.com/lw/ji/conjunction_fallacy/
The moral? Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable.
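The "necessarily becomes less probable" part is just the conjunction rule: for any events A and B, P(A and B) ≤ P(A). A minimal sketch of this by brute-force enumeration (the two-dice sample space and the particular events are my own toy example, not anything from the paper):

```python
# For any events A and B, P(A and B) <= P(A): adding a detail can only
# shrink the set of outcomes that satisfy the description.
# Illustrated over a small sample space: two fair six-sided dice.
from itertools import product

space = list(product(range(1, 7), repeat=2))  # all 36 equally likely outcomes

def prob(event):
    """Probability of an event given as a predicate over outcomes."""
    return sum(1 for o in space if event(o)) / len(space)

a = lambda o: o[0] >= 4                        # "first die is high"
a_and_b = lambda o: o[0] >= 4 and o[1] == 6    # added detail: "and second die is 6"

assert prob(a_and_b) <= prob(a)  # the conjunction can never be more probable
print(prob(a), prob(a_and_b))    # 0.5 vs ~0.083
```

The extra detail may make the story sound more plausible, but every outcome satisfying the conjunction also satisfies the plain event, so the count (and hence the probability) can only go down.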
This moral is not what the researchers said; they were careful not to say much in the way of a conclusion, only hinting at one in the title and in some general remarks. The moral is an interpretation added on by Less Wrong people (as the researchers intended it to be).
This moral contains an equivocation: it says this "can" happen. The Less Wrong people obviously think it's more than "can": they think it's pretty common and worth worrying about, not a one-in-a-million event.
The moral, if taken literally, is pretty vacuous. Removing detail or assumptions can also make an event seem more plausible, if you do it in particular ways. Changing ideas can change people's judgment of them. Duh.
It's not meant to be taken that literally. It's meant to say: this is a serious problem, it's meaningful, and it really has something to do with adding conjunctions!
The research is compatible with this moral being false (if we interpret it to have any substance at all), and with it only being a one in a million event.
You may think one in a million is an exaggeration. But it's not. The size of the set of possible questions the researchers selected from was ... I have no idea, but surely far more than trillions. How many of those would have worked? The research does not and cannot say. The only things that could tell us are philosophical arguments or perhaps further research.
The research says this stuff clearly enough:
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
In this paper, the researchers clearly state that their research does not tell us anything about the prevalence of the conjunction fallacy. But Less Wrongers have a different view of the matter.
They also state that it's not arbitrary conjunctions which trigger the "conjunction" fallacy, but ones specifically designed to elicit it. Yet Less Wrong people are under the impression that people are in danger of committing the conjunction fallacy even when no one is designing situations to elicit it. That may or may not be true; the research certainly doesn't demonstrate that it is.
Note: flawed results can tell you something about prevalence, but only if you estimate the amount of error introduced by the flaws. They did not do that (and it is very hard to do in this case). Error estimates of that type are difficult and would need to be subjected to peer review, not made up by readers of the paper.
Here's various statements by Less Wrongers:
http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/
There is no plausible way that the students could have misinterpreted this question because of ambiguous understandings of phrases involving "probability".
The research does not claim the students could not have misinterpreted. It suggests they wouldn't have -- the Less Wronger has gotten the idea the researchers wanted him to get -- but the researchers won't go so far as to actually say miscommunication is ruled out, because it isn't. Given that they were deliberately designing their interactions with their subjects to get people to make errors, miscommunication is actually a very plausible interpretation, even in cases where the not-very-detailed writeup of what they actually did fails to explicitly record any blatant miscommunication.
The issue is that when a person intuitively tries to make a judgement on probabilities, their intuition gives them the wrong result, because it seems to use a heuristic based on representativeness rather than actual probability.
This also goes beyond what the research says.
Also note that he forgot to equivocate about how often this happens. He didn't even put in a "sometimes" or a "more often than never". But the paper doesn't support that.
In short how is the experimental setting so different that we should completely ignore experimental results? If you have a detailed argument for that, then you'd actually be making a point.
He is apparently unaware that it differs from normal life in that in normal life people's careers don't depend on tricking you into making mistakes which they can call conjunction fallacies.
When this was pointed out to him, he did not rethink things but pressed forward:
So, your position is that it is completely unrealistic to try to trick people, because that never happens in real life?
But when you read about the fallacy in the Less Wrong articles about it, they do not state that it "only happens in the cases where people are trying to trick you". If it only applies in those situations, then say so when telling it to people, so they know when it is and isn't relevant. But this constraint on applicability, stated in the paper, is simply ignored most of the time in order to reach a conclusion the paper does not support.
I understand this criticism when applied to the Linda experiment, but not when it is applied to the color experiment. There was no "trickery" here.
Here's someone from Less Wrong denying there was any trickery in one of the experiments, even though the paper says there was.
It addresses the question of whether that kind of mistake is one that people do sometimes make; it does not address the question of how often people make that kind of mistake in practice.
Note the heavy equivocation. He retreats from "always", which is a straw man, to "sometimes", which is heavily ambiguous about how often. He has in mind: often enough to be important; pretty often. But the paper does not say that.
BTW the paper is very misleading in that it says things like (in the words of a Less Wronger):
65% still chose sequence 2 despite it being a conjunction of sequence 1 with another event
The paper is full of statements that sound like they have something to do with the prevalence of the conjunction fallacy. And then one sentence admitting that all those numbers should be disregarded. If there is any way to rescue those numbers as legitimate, it was too hard for the researchers and they didn't attempt it (I don't mean to criticize their skill here; I couldn't do it either. Too hard.)
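The dice-sequence question quoted above turns on the same arithmetic point. As a sketch (assuming a die with four green and two red faces, the parameters usually reported for the classic dice version of the problem -- an assumption on my part, not taken from the text above), the probability of rolling an exact sequence is the product of the per-roll probabilities, so extending a sequence by one more roll can only shrink it:

```python
# Assumed die: 4 green (G) faces, 2 red (R) faces.
# The probability of rolling a given exact sequence of colors is the
# product of per-roll probabilities, so a sequence that CONTAINS another
# sequence plus one extra roll is necessarily less probable.
P = {"G": 4/6, "R": 2/6}

def seq_prob(seq):
    """Probability of rolling exactly this sequence of colors."""
    p = 1.0
    for c in seq:
        p *= P[c]
    return p

seq1 = "RGRRR"       # the shorter sequence
seq2 = "G" + seq1    # sequence 2 = sequence 1 conjoined with one extra event

assert seq_prob(seq2) < seq_prob(seq1)  # strictly less probable
```

Whatever one thinks the 65% figure shows, the direction of the inequality itself is not in dispute: each extra roll multiplies the probability by a factor below 1.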
It is difficult for me to decide how to respond to this. You are obviously sincere - not trolling. The "conjunction fallacy" is objectionable to you - an ideological assault on human dignity which must be opposed. You see evidence of this malevolent influence everywhere. Yet I see it nowhere. We are interpreting plain language quite differently. Or rather, I would say that you are reading in subtexts that I don't think are there.
To me, the conjunction fallacy is something like one of those optical illusions you commonly find in psychology...
One of the best achievements of the LessWrong community is our high standard of discussion. More than anywhere else, people here actively try to interpret others charitably, argue to the point, avoid provocative or rude language, apologise for inadvertent offenses while not being overly prone to take offense themselves, and avoid their own biases and fallacies instead of seeking them in others; most importantly, they try to find the truth instead of winning the argument. Maybe the greatest attribute of this approach is its infectiousness - I have observed several newcomers change their discussion habits for the better within a few weeks. However, not everybody is susceptible to the LW standards, and our attitude produces somewhat bizarre results when confronted with genuine trolls.
Recent posts about epistemology1 have all generated a large number of replies; in fact, the discussions were among the largest in the last few months. People commented there (yes, I too am guilty) even though it was clear that the author of the posts doesn't actually react to our arguments. After he was rude and had admitted to doing it on purpose. After committing several fallacies, after generating an unreasonable amount of text of mediocre to low quality, after saying that he is neither trying to convince anyone, nor willing to learn anything, nor aiming for agreement. In short, perhaps all the symptoms of trolling were present, and still people repeatedly and patiently explained what's wrong with the author's position. That reaction is, I must admit, sort of amazing - but on the other hand, it is hard to deny that the whole discussion was detrimental to the quality of LW content and was mostly a waste of time.
So, here is the question: why didn't we apply the don't-feed-the-troll meme, as would probably have happened much sooner on most forums? I have several hypotheses.
1. We are unable to recognise trolls for lack of training. The first hypothesis is quite improbable, given that the troll in question was downvoted to oblivion2, but still possible. There are not many trolls on LW, and perhaps it is difficult to believe that someone is actively seeking that sort of confrontation. I have never understood the psychology of trolls - I instinctively try to avoid combative arguments and find it hard to imagine why somebody would intentionally try to create one. Perhaps a manifestation of the typical mind fallacy combines with compartmentalisation here: although we consciously know that there are trolls out there (this is hard to ignore), when meeting one our instinct tells us that the person cannot be so very different from us.
2. We are unwilling to deal with trolls. The second theory is that although we know the person isn't sincere, we cherish our standards of discussion so strongly that we still try to respond kindly and maintain a civil debate, or at least one side of one. If that is the case, it is not automatically a bad policy. Our rationality is limited and we always operate under the threat of self-serving biases. A quasi-deontological rule of kindness in debates, even if it is overkill, may be useful in the same way the presumption of innocence is useful in justice.
3. Sunk costs. Once the debate has started, our initial investments feel binding. It is unsettling to quit an argument, admitting that it was completely useless and that we have lost an hour of our life for nothing. The sunk cost fallacy is well known and widespread; there is no reason to expect we are immune.
4. Best rebuttal contest. An interesting fact is that not only was the number of replies fairly large, but a lot of the replies were strongly upvoted. This leads me to suspect that those replies weren't in fact aimed at the opponent in the discussion, but rather intended to impress fellow LessWrongers. Once the motivation is not "I want to convince my interlocutor" but rather "I can craft an extraordinarily elegant counter-argument which hasn't appeared yet", the attitude of the opponent doesn't matter. The debate becomes an exercise in arguing - a potentially useful practice, maybe, but one with many associated dangers.
5. Trollish arguments are fun. I include this possibility mainly for completeness, since I don't much believe that a significant number of LW users enjoy pointless arguments. But still, there is something fascinating about fallacious arguments. They are frustrating to follow, for sure, especially for a rationalist, but I cannot entirely discount the appeal of seeing biases and fallacies in real life, as opposed to merely reading about them in a Kahneman and Tversky paper.
Whichever of the above hypotheses is correct, or even if none of them is, I don't doubt that on reflection most of us would prefer to have fewer irrational discussions. The karma system works to some extent, but slowly, and cannot prevent trollish discussions from gaining momentum if people continue their present voting patterns. One of the problems lies in upvoting the rebuttals, which gives people additional motivation to participate. There seem to be two main voting strategies: "I want to see more/less of this" and "this deserves more/less karma than it presently has". The first strategy seems marginally better for dealing with trolls, but both strategies work better when applied in context. Even a brilliant reply should not be upvoted when placed in an irrational debate: first, it is mostly a waste of resources, and moreover, we certainly want to see fewer irrational debates. I don't endorse downvoting good replies, if only because the troll could interpret that as support for his cause. But leaving them at zero seems to be the correct policy.
1 I am not going to link to them because I don't want to generate more traffic there; one of those posts already appears in fourth place when you Google lesswrong epistemology. Nor will I write down the precise topic or the name of the author explicitly, which I hope decreases the probability of his appearing here.
2 In fact, the downvoting, even if massive, came relatively late; the person in question was still able to post on the main site after several days.