Ray Kurzweil is pretty impressive, although I would be much less confident in his predictions from now to 2029 than from 1999-2009.
I got the 1982 University of British Columbia ordering right easily, though that might be because I'm already aware of the phenomenon being studied.
It would be much harder for me as a subject to deal properly with the Second International Congress on Forecasting experiment. Even if I'm aware that adding or removing detail can lead my estimate of probability to change in an illogical way, my ability to correct for this is limited. For one thing, it is probably hard to correctly estimate the probability that I would have assigned to a more (or less) detailed scenario. So I may just have the one probability estimate available to me to work with. If I tell myself, "I would have assigned a lower probability to a less detailed scenario", that by itself does not tell me how much lower, so it doesn't really help me to decide whether and how much I should adjust my probability estimate to correct for this. Furthermore, even if I were somehow able to accurately estimate the probabilities I would have assigned to scenarios with varying levels of detail, that still would not tell me what probability I should assign. If my high-detail assigned probability is illogically higher than the low-detail assigned probability, that doesn't tell me whether it is the low-detail probability that is off, or the high-detail probability that is off.
So as someone trying to correct for the "conjunction fallacy" in a situation like that of the Congress on Forecasting experiment, I'm still pretty helpless.
Although Robin's critiques of "gotcha" bias are noted, I experienced this as a triumph of learned heuristic over predisposed bias. My gut instinct was to rank accountant+jazz player as more probable than jazz player, and then I thought about the conjunction rule of probability theory.
"The ranking E > C was also displayed by 83% of 32 grad students in the decision science program of Stanford Business School, all of whom had taken advanced courses in probability and statistics."
This is shocking, particularly if they had more than 30 seconds to make a decision.
I think this might possibly be explained if they looked at it in reverse. Not "how likely is it that somebody with this description would be A-F", but "how likely is it that somebody who's A-F would fit this description".
When I answered it I started out by guessing how many doctors there were relative to accountants -- I thought fewer -- and how many architects there were relative to doctors -- much fewer. If there just aren't many architects out there, then it would take a whole lot of selection for somebody to be more likely to be one.
But if you look at it the other way around then the number of architects is irrelevant. If you ask how likely is it an architect would fit that description, you don't care how many architects there are.
So it might seem unlikely that a jazz hobbyist would be unimaginative and lifeless. But more likely if he's also an accountant.
I think this is a key point - given a list of choices, people compare each one to the original statement and say "how well does this fit?" I certainly started that way before an instinct about multiple conditions kicked in. Given that, it's not that people are incorrectly finding the chance that A-F are true given the description, but that they are correctly finding the chance that the description is true, given one of A-F.
I think the other circumstances might display tweaked versions of the same forces, too. For example, answering the suspension-of-relations question not as P(X^Y) vs. P(Y), but perceiving it as P(Y), given X.
But if the question "What is P(X), given Y?" is stated clearly, and then the reader interprets it as "What is P(Y), given X", then that's still an error on their part in the form of poor reading comprehension.
Which still highlights a possible flaw in the experiment.
Imagine a group of 100,000 people, all of whom fit Bill's description (except for the name, perhaps). If you take the subset of all these persons who play jazz, and the subset of all these persons who play jazz and are accountants, the second subset will always be smaller because it is strictly contained within the first subset.
Nitpicking: Concluding that this is a strict inclusion implicitly assumes that there is at least one jazz player who is not an accountant in the original set. Otherwise, the two subsets may still be equal (and thus, equal in size).
The interesting thing to me is the thought process here, as I also knew what was being tested and corrected myself. But the intuitive algorithm for answering the question is to translate "which of these statements is more probable" with "which of these stories is more plausible." And adding detail adds plausibility to the story; this is why you can have a compelling novel in which the main character does some incomprehensible thing at the end, which makes perfect sense in the sequence of the story.
The only way I can see to consistently avoid this error is to map the problem into the domain of probability theory, where I know how to compute an answer and map it back to the story.
While I personally answered both experiments correctly, I see the failure of those who we would assume should be able to do so as an inability to adapt learned knowledge for practical use. I have training in both statistics and philosophy, but I believe that any logical person would be capable of making these judgments correctly, sans statistics and logic classes. Is there any real reason to believe that someone who has studied statistics would be more likely to answer these questions correctly? Or is the ability simply linked to general intelligence, with participation in an advanced statistics and probability curriculum being a poor indicator of that intelligence?
Going to the reason why: if I simply asked which is more probable, that a random person I pick out of a phone book is an accountant, or that the same person is an accountant who is also a jazz musician, then I suspect more grad students would get the answer correct.
Giving personality traits to the randomly selected person clutters up the "test". We can understand the possibility that Bill is an accountant, so we look for that trait and accept the secondary trait of jazz. But jazz by itself? Never. We read answer E as if to say "If Bill is an accountant, he might play jazz," and we can accept that for Bill far more readily than Bill actually playing jazz. It would also be more probable given typical prejudices.
So an interesting question here is (if I'm correct): why do our prejudices want to read answer E as an accountant who might play jazz, rather than the wording actually used? I think that reading makes more intuitive sense to a typical reader. Can we imagine Bill as an accountant who might play jazz? Absolutely. Can we imagine Bill as an accountant who does play jazz? Not as easily. Let's substitute what it is with what I want it to read, so it makes sense and makes me feel comfortable about solving this riddle.
QED A>E>C
If one is presented two questions,
There was an implied "Bill is not an accountant" in the way I read it initially, and I failed to notice my confusion until it was too late.
So in answer to your question, that has now happened at least once.
I, too, was worried about this at first, but you'll find that http://lesswrong.com/lw/jj/conjunction_controversy_or_how_they_nail_it_down/ contains a thorough examination of the research on the conjunction fallacy, much of which involves eliminating the possibility of this error in numerous ways.
Reasoning with frequencies vs. reasoning with probabilities
Though it's frustrating that we humans seem so poorly designed for explicit probabilistic reasoning, we can often dramatically improve our performance on these sorts of tasks with a quick fix: just translate probabilities into frequencies.
Recently, Gigerenzer (1994) hypothesized that humans reason better when information is phrased in terms of relative frequencies, instead of probabilities, because we were only exposed to frequency data in the ancestral environment (e.g., 'I've found good hunting here in the past on 6 out of 10 visits'). He rewrote the conjunction fallacy task so that it didn’t mention probabilities, and with this alternate phrasing, only 13% of subjects committed the conjunction fallacy. That's a pretty dramatic improvement!
Bill is 34 years old. He is intelligent, but unimaginative, compulsive, and generally lifeless. In school, he was strong in mathematics but weak in social studies and humanities.
There are 200 people who fit the description above. How many of them are: A: Accountants …
Gigerenzer, G. (1994). Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa). In G. Wright and P. Ayton, eds., Subjective Probability. New York: John Wiley.
Catapult:
The rephrasing as frequencies makes it much clearer that the question is not "How likely is an [A|B|C|D|E] to fit the above description" which J thomas suggested as a misinterpretation that could cause the conjunction fallacy.
Similarly, that rephrasing makes it harder to implicitly assume that category A is "accountants who don't play jazz" or C is "jazz players who are not accountants".
I think similarly, in the case of the Poland invasion / diplomatic-relations cutoff, what people are intuitively calculating in the compound statement is the conditional probability; in other words, they turn the "and" statement into an "if" statement. If the Soviets invaded Poland, the probability of a cutoff might be high, certainly higher than the current probability given no new information.
But of course that was not the question. A big part of our problem is sometimes the translation of English statements into probability statements. If we do that intuitively or cavalierly, these fallacies become very easy to fall into.
josh wrote: Sebastian,
I know a jazz musician who is not an accountant.
Josh, note that it is not sufficient for one such person to exist; that person also has to be in the set of 100,000 people Eliezer postulated to allow one to conclude the strict inclusion between the two subsets mentioned.
Sebastian, I thought of including that as a disclaimer, and decided against it because I figured y'all were capable of working that part out by yourselves. Unless both figures are equal to 0, I think it is rather improbable that in a set of 10 jazz players, they are all accountants.
P(A&B) should always be strictly less than P(B), since just as infinity is not an integer, 1.0 is not a probability, and this includes the probability P(A|B). However, you may drop the remainder if it is below the precision of your arithmetic.
When initially presenting the question, he doesn't mention a sample of 100,000 people. I assumed we were using the sample of all people. My goof.
1.0 is a probability. According to the axioms of probability theory, for any A, P(A or (not A))=1. (Unless you're an intuitionist/constructivist who rejects the principle that not(not A) implies A, but that's beyond the scope of this discussion.)
My question about the die-rolling experiment is: how would raising the $25 reward to, say, $250, affect the probability of an undergraduate committing the conjunction fallacy?
(By the way, Bill commits fallacies for a hobby, and he plays the tuba in a circus, but not jazz)
It seems to me like a fine way to avoid this fallacy is to always, as a habit, disassemble statements into atomic statements and evaluate the probability of those. Even if you don't use numbers and any of the rules of probability, just the act of disassembling a statement should make the fallacy obvious and hence easier to avoid.
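A crude numeric version of that habit (my own sketch, with invented numbers) already makes the bound visible: once the claim is split into atoms, the conjunction can never beat its least likely atom.

```python
def conjunction_upper_bound(atomic_probabilities):
    """P(X and Y and ...) can never exceed the probability of the
    least likely atom, whatever the dependence between the atoms."""
    return min(atomic_probabilities)

# Invented estimates for Bill, purely to show the mechanics:
p_accountant = 0.6
p_plays_jazz = 0.05

# "Accountant who plays jazz" is capped at 0.05, so it can never
# outrank "plays jazz" on its own.
print(conjunction_upper_bound([p_accountant, p_plays_jazz]))  # 0.05
```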
I think this fallacy could have severe consequences for criminal detectives. They spend a lot of time trying to understand the criminals, and create possible scenarios. It's not good if a detective finds a scenario more plausible the more detailed he imagines it.
The case of the die rolled 20 times and trying to determine which sequence is more likely is not one covered in most basic statistics courses. Yes, you can apply the rules of statistics and get the right answer, but knowing the rules and being able to apply them are different things. Otherwise we could give people Euclid's postulates one day and expect them to know all of geometry. I see a lot of people astonished by people's answers, but how many of you could correctly determine the exact probability of each of the sequences appearing?
Maybe I am wrong, but I think to get the probability of an arbitrary sequence appearing you have to construct a Markov model of the sequence. And then I think it is a bunch of matrix multiplication that determines the ultimate probability. Basically you have to take a 6 by 6 matrix and take it to the 20th power. Obviously this is not required, but I think when people can't calculate the probabilities they tend to use intuition, which is not very good when it comes to probability theory.
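For what it's worth, you don't need the matrix algebra to get a decent answer; simulation will do. Here's a minimal sketch, assuming I'm remembering the setup correctly (a die with four green and two red faces, rolled 20 times, with candidate sequences RGRRR, GRGRRR, and GRRRRR):

```python
import random

def appearance_probability(pattern, n_rolls=20, trials=200_000, seed=0):
    """Monte Carlo estimate of the chance that `pattern` appears as a run
    of successive rolls somewhere within n_rolls of a die whose faces are
    four green (G) and two red (R)."""
    rng = random.Random(seed)
    faces = "GGGGRR"  # assumed face colors: four G, two R
    hits = 0
    for _ in range(trials):
        rolls = "".join(rng.choice(faces) for _ in range(n_rolls))
        if pattern in rolls:
            hits += 1
    return hits / trials

for pattern in ("RGRRR", "GRGRRR", "GRRRRR"):
    print(pattern, round(appearance_probability(pattern), 3))
```

Whatever exact figures come out, the first sequence has to score at least as high as the second, since any run of rolls containing GRGRRR necessarily contains RGRRR.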
I think most people would say that there's a high probability Bill is an accountant and a low probability that he plays jazz. If Bill is an accountant that does not play jazz, then E is "half right" whereas C is completely wrong. They may be judging the statements on "how true they are" rather than "how probable they are", which seems an easy mistake to make.
Re: Dice game
Two reasons why someone would choose sequence 2 over sequence 1, one of them rational:
1) Initially I skimmed the sequences for Gs, assumed a non-fixed-width font, and thought all the sequences were equal length. On slightly longer inspection, this was obviously wrong.
2) The directions state: " you will win $25 if the sequence you chose appears on successive rolls of the die." A person could take this to mean that they will successively win $25 for each roll which is a member of a complete version of the sequence. It seems likely the 2nd sequence would be favored in this scenario. The winners would probably complain though, so this likely would have been caught.
This was a really nice article, especially the end.
So, I tried each of these tests before I saw the answers, and I got them all correct, but I think the only reason I got them correct is that I saw the statements together. With the exception of the dice-rolling, if you had asked me to rate the probabilities of different events occurring with sufficient time in between for the events to become decoupled in my mind, I suspect the absolute probabilities I would have given would be in a different order from how I ordered them when I had access to all of them at once. Having the coupled events listed independently forced me to think of each event separately and then combine them, rather than trying to just guess the probability of a joint event.
But I'm not sure if that's the same problem- it might be more related to how inconsistent people really are when they try to make predictions.
"Logical Conjunction Junction"
Logical conjunction junction, what's your function?
To lower probability,
By adding complexity.
Logical conjunction junction, how's that function?
I've got hundreds of thousands of words,
They each hide me within them.
Logical conjunction junction, what's their function?
To make things seem plausible,
Even though they're really unlikely.
Logical conjunction junction, watch that function!
I've got "god", "magic", and "emergence",
They'll get you pretty far.
[spoken] "God". That's a being with complexity at least that of the being postulating it, but one who is consistently different from that in logically impossible ways and also has several literature genres' worth of back story,
"Magic". That's sort of the opposite, where instead of having an explanation that makes no sense, you have no explanation and just pretend that you do,
And then there's "emergence", E-mergence, where you collapse levels everywhere except in one area that seems "emergent" by comparison, because only there do you see higher levels are perched on lower levels that are different from them.
"God", "magic", and "emergence",
Get you pretty far.
[sung] Logical conjunction junction, what's your function?
Hooking up two concepts so they hold each other back!
Logical conjunction junction, watch that function!
Not just when you see an "and",
Also even within short words.
Logical conjunction junction, watch that function!
Some words combine many ideas,
Ideas that can't all be true at once!
Logical conjunction junction, watch that function!
The YouTube link is broken. Did you intend to link to the original Schoolhouse Rock video on YouTube?
I suspect respondents are answering different questions from the ones asked. And where the question does not include probability values for the options, the respondents are making up their own. This does not account for respondents arbitrarily ordering what they perceive as equal probabilities. And finally, they may be changing the component probabilities, so that they are using different values from one option to the next.
The respondents are actually reading the probabilities as independent, and assigning probabilities such as this:
A: P(Accountant) = 0.1
C: P(Jazz) = 0.01
E: P(Accountant^Jazz) = P(Accountant) x P(Jazz) = 0.001
and with that you would expect the correct ranking.
But if they are perceiving E as conditional, then P(Accountant|Jazz) = P(Accountant^Jazz)/P(Jazz) = 0.001/0.01 = 0.1; leaving the tied A and E ordered as A, E, they end up with A >= E > C. It's also possible they are using an intuitive conditional probability, ranking coarsely and approximately without calculation.
They may also be doing an intuitive version of the following, reading the options in order:
A: Yeah, sounds about right for Bill. Let's say 0.1.
C: Nah, no way does Bill play jazz. Let's say zero!
E: Well, I really don't think he plays jazz, and I really thought he'd be an accountant. But I guess he could be both. In this case I'm going for 0.05 accountant, but 0.02 jazz. 0.05 x 0.02 = 0.001.
So, A > E > C
In this last case, the fact that he could be both an accountant and play jazz (E) is more plausible than that he would play jazz and not be an accountant (reading C as implying he's not an accountant). Of course C does not rule out his also being an accountant, but that doesn't appear to be the intuitive implication of C. It's as if the respondent is thinking: why would they include E if C already covered the possibility of his being an accountant? And though the options are presented as a set, the respondent is not connecting them, and so is not adapting the independent probabilities in each option. As I said, this might be quite intuitive, so the respondents never perform the calculations and never see the mistake. That the question says "not mutually exclusive or exhaustive" may not register.
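To make the conditional misreading concrete, here is a toy check using the same invented numbers as above; read as the conjunction it actually is, E can never beat C, while read as a conditional it easily can:

```python
# Invented numbers, matching the example above.
p_accountant = 0.10                    # A: Bill is an accountant
p_jazz = 0.01                          # C: Bill plays jazz for a hobby
p_both = p_accountant * p_jazz         # E read as a conjunction (independence assumed)

print(p_both <= p_jazz)                # True: the correct ranking has A > C > E

# E misread as "accountant, given that he plays jazz":
p_accountant_given_jazz = p_both / p_jazz
print(p_accountant_given_jazz > p_jazz)  # True: this reading yields A >= E > C
```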
The diplomatic response might be explained by the following. Without any good reason, respondents to (1) think suspension unlikely. Because they are not asked (2), they rate suspension independently of anything else, whether that be an invasion of Poland, an assassination of the US President, or anything else not mentioned in (1). Since they are not given any reason for a suspension, they think it very unlikely. So your point that "there is no possibility that the first group interpreted (1) to mean 'suspension but no invasion'" does not hold: they can interpret it to mean "suspension but nothing else".
But in (2) the respondents are given a good reason to think that if invasion is likely, then suspension will follow hot on its heels. Also, some respondents might be answering a question such as "If invasion, then suspension?", even though that is not what they are being asked.
So I think there are explanations as to why respondents don't get it that go beyond simply not knowing or remembering the conjunction condition, let alone knowing it as a 'fallacy' to avoid.
Is probability a cognitive version of an optical illusion? Two lines may not look the same length, but when you measure them they are. When two probability statements appear one way they may actually turn out to be another way if you perform the calculation. The difference in both cases is relying on intuition rather than measurement or calculation. Looked at it from this point of view probability 'illusions' are no more embarrassing than optical ones, which we still fall for even when we know the falsity of what we perceive.
A : A complete suspension of diplomatic relations between the USA and Russia, sometime in 2023.
B : A Russian invasion of Poland, sometime in 2023.
C : Chicago Bulls winning NBA competition, sometime in 2023.
D <=> A & B
E <=> A & C.
In order to estimate the likelihood of an event, the mind looks in available memory for information. The more easily available a piece of information is, the more it is taken into account.
A and B hold information that is relevant to each other. A and B are correlated, and the occurrence of one of them strengthens the probability of the other happening. The mind, while trying to evaluate the likelihood of each component of D, takes one as relevant information about the other, hence leading it to overestimate p(A) and p(B).
p(D) = p(A&B) = p(A).p(B | A) = p(B).p(A | B)
The mind gets it wrong when it makes the above equation equal to either p(A | B) or p(B | A), or, oddly, to their sum, as has been mentioned in previous comments.
The intuitive mind has trouble understanding probability multiplication. It functions rather in an addition mode (for correlated events) and a subtraction mode (for independent events) when evaluating likelihood. p(E), for example, is likely to be seen as p(C) - p(A). C is a more likely event than A (even more so if you live in Chicago). Let's say 5% for C and, to be generous, 1% for A.
The mind would rather do p(E) = p(C) - p(A) = 4%, which ends up making p(A&C) > p(A), rather than the correct p(A).p(C | A) = p(C).p(A | C) = p(A).p(C) = 0.05%, assuming A and C are completely independent.
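Spelling that contrast out with the same invented 5% and 1% (a quick sketch): the subtraction shortcut produces a conjunction "more probable" than one of its own parts, while the product respects the conjunction rule.

```python
p_bulls = 0.05        # P(C): Bulls win the NBA title
p_suspension = 0.01   # P(A): complete suspension of diplomatic relations

# The intuitive subtraction shortcut described above:
p_intuitive = p_bulls - p_suspension            # ~0.04
print(p_intuitive > p_suspension)               # True: "P(A&C) > P(A)", which is impossible

# The correct rule, assuming A and C are independent:
p_correct = p_bulls * p_suspension              # 0.0005
print(p_correct <= min(p_bulls, p_suspension))  # True, as the conjunction rule demands
```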
The great speed at which the intuitive mind makes decisions and assigns likelihoods to propositions seems to come at the expense of accuracy, due to oversimplification, poor calculation ability, sensitivity to current emotional state (leading to volatility of priority ordering), sensitivity to the chronology of information acquisition, etc.
Nevertheless, the intuitive mind shared with other species has proved to be a formidable machine, fine-tuned for survival by ages of natural selection. It is capable of dealing with huge amounts of sensory and abstract information gathered over time, sorting it dynamically, and making survival decisions in a second or two.
There's also a linguistic issue here. The English "and" doesn't simply mean mathematical set theoretical conjunction in everyday speech. Indeed, without using words like "given" or "suppose" or a long phrase such as "if we already know that", we can't easily linguistically differentiate between P(Y | X) and P(Y, X).
"How likely is it that X happens and then Y happens?", "How likely is it that Y happens after X happened?", "How likely is it that event Y would follow event X?". All these are ambiguous in everyday speech. We aren't sure whether X has hypothetically already been observed or it's a free variable, too.
In my experience, the English "and" can also be interpreted as separating two statements that should be evaluated (and given credit for being right/wrong) separately. Under that interpretation, someone who says "A and B" where A is true and B is false is considered half-right, which is better than just saying "B" and being entirely wrong.
Though, looking back at the original question, it doesn't appear to use the word "and", so problems with that word specifically aren't very relevant to this article.
I agree that there are some important methodological issues with the paper, and it is far from the last word. What the criticisms you link don't address well, however, is the fact that (a) the paper is strengthened by the fact that it has a strong, validated theory of underlying behavior...
- "AnonySocialScientist", Reddit
Great article, and I know I'm commenting on an 8-year-old post, but two points came to mind:
1 ) I wonder if the UBC and Stanford undergraduates would have done better if the first dice sequence had a leading space or two so that it lined up like so: ( underscore in place of space)
2 ) Edit: Realised this second point was completely wrong
Is this really a fallacy? In the USSR and Poland case, we might take the probability space in (1) to exclude an invasion of Poland, and the space in (2) to include one. Then the claims are perfectly consistent, since the probability space changes; people just reason with respect to "stereotypical" alternatives.
Okay, Eliezer should add a boldfaced note at the bottom of this post asking people not to comment until they've read the followup.
we might take the probability space in (1) to exclude an invasion of Poland, and the space in (2) to include one
That seems like an unjustified interpretation, since, according to the OP:
Two different experimental groups were respectively asked to rate the probability of two different statements, each group seeing only one statement:
- "A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."
- "A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."
Since the subjects receiving statement 1 do not even see statement 2, they would have no reason to exclude the possibility of an invasion of Poland from statement 1.
I believe this is not a conjunction fallacy, but for a different reason. In the first case, the test subject is required to conceive of a reason that might lead to a complete suspension of relations. There are many different choices: invasion, provocation, an oil embargo of Europe, etc. Each of these seems so remote that the test subject might not even contemplate them. In the second case, the test subject is given a more specific, and therefore more conceivable, sequence of events.
A good third scenario, to control for this, would have been to ask another group of subjects the probabilities independently:
A. That USSR invades Poland
B. That US suspends relations
This provides the same trigger of a plausible provocation, but doesn't directly link the two events. Comparing the estimates of B in this case with those for statement (1) in the original test would indicate how much of the gap between (1) and (2) is due to this effect.
I have some issues with the third experiment (USSR vs. USA). Let me try to explain them with an example.
Suppose you see a chess match up to a given point, with black to move. You are asked an estimate W1 of the probability that white can force a win. Then you see black's move, and it's a truly, unexpectedly brilliant move. You are then asked a new estimate W2 of the probability that white can force a win. If black's move is sufficiently brilliant, it's natural for W2 to be lower than the answer a "previous you" gave for W1: black has seriously undermined white's chances with his move. But the Russia vs. USA example seems to suggest that any pair of answers where W2<W1 is a fallacy. After all, the set of possible directions of play before black's move includes all possible directions of play after black's move. If white can force a win in all the former, it can also force a win in all the latter, which are a subset of the former. So one could argue that any rational analyst should always output W2 at least as high as W1.
I think the catch is that having more details made explicit, even details that we implicitly knew ("please think about the possibility of Russia invading Poland") can allow us to reason "more accurately" about the likelihood of a given, complex situation (the diplomatic relations between Russia and the USA failing). It's quite possible that, in the light of those details, our evaluation of the likelihood increases sufficiently to more than compensate for the decrease of likelihood involved in considering just a subcase (the relations failing AND an invasion of Poland). Is this a fallacy? I would rather call it a case of computation resources inadequate to the task at hand - resources that become (more) adequate with a little boost or "hint" allowing the evaluation process to run more efficiently.
In this sense, it would have been interesting to see the results if the first statement had been something along the lines of: "A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983 (keeping in mind that it's not impossible that the Soviet Union invades Poland)". Or if the analysts had been given the questions and two months of full-time study of US and USSR history.
But what of the fact that any singular event is actually a conjunction of that event and the negation of the alternative event, so that A is really equivalent to the conjunction of A and not-B? Given this, can it really be said that A is more likely than A and B?
I find EY’s main points very convincing and helpful. After reading this and the follow-on thread, my only nit is that using the suspension-of-relations question as one of the examples seems pedagogically odd, because perfectly rational (OK, bounded-rational but still rational) behavior could have led to the observed results in that case.
The rational behavior that could have led to the observed results is that participants in the second group, having been reminded of the “invade Poland” scenario, naturally thought more carefully about the likelihood of such an invasion (and/or the likelihood of such an invasion triggering suspension), and this more careful thinking caused them to assign a higher probability to the invasion-then-suspension scenario (thus also to the invasion-and-suspension scenario) than they would have assigned to the “suspension” scenario if instead asked Question 1 (which mentions only suspension).
Why? For the simple reason that Question 2 tended to provide them with new information (namely, the upshot of the additional careful thought about the Polish invasion scenario) that Question 1 wouldn’t have.
(To caricature this, imagine showing two separate groups of chess beginners the same superficially-even board position with Player A on move, asking Group 1 participants “what’s the probability that A will win,” and separately asking Group 2 participants “what’s the probability that A will make slightly-tricky-advantageous-move-X and win”? Yes, the event Group 2 was asked about is less likely than the event Group 1 was asked about; Group 2's answers may nevertheless average higher for quite rational reasons.)
I agree. This notion of question 2 providing a plausible cause that might lead to suspension, versus question 1 where the test subject has to conceive of their own cause, is a bias, but a different type of bias, not a conjunction fallacy. There could be (and possibly have been) ways to construct the test to control for this. For example, use three test groups, where groups 1 and 2 are as in the original and the third group is asked about the two events independently: what is the probability of each event:
A. That USSR invades Poland, or B. That US suspends relations
The following experiment has been slightly modified for ease of blogging. You are given the following written description, which is assumed true:
No complaints about the description, please, this experiment was done in 1974. Anyway, we are interested in the probability of the following propositions, which may or may not be true, and are not mutually exclusive or exhaustive:
Take a moment before continuing to rank these six propositions by probability, starting with the most probable propositions and ending with the least probable propositions. Again, the starting description of Bill is assumed true, but the six propositions may be true or untrue (they are not additional evidence) and they are not assumed mutually exclusive or exhaustive.
In a very similar experiment conducted by Tversky and Kahneman (1982), 92% of 94 undergraduates at the University of British Columbia gave an ordering with A > E > C. That is, the vast majority of subjects indicated that Bill was more likely to be an accountant than an accountant who played jazz, and more likely to be an accountant who played jazz than a jazz player. The ranking E > C was also displayed by 83% of 32 grad students in the decision science program of Stanford Business School, all of whom had taken advanced courses in probability and statistics.
There is a certain logical problem with saying that Bill is more likely to be an accountant who plays jazz, than he is to play jazz. The conjunction rule of probability theory states that, for all X and Y, P(X&Y) <= P(Y). That is, the probability that X and Y are simultaneously true is always less than or equal to the probability that Y is true. Violating this rule is called a conjunction fallacy.
Imagine a group of 100,000 people, all of whom fit Bill's description (except for the name, perhaps). If you take the subset of all these persons who play jazz, and the subset of all these persons who play jazz and are accountants, the second subset will always be smaller because it is strictly contained within the first subset.
Could the conjunction fallacy rest on students interpreting the experimental instructions in an unexpected way - misunderstanding, perhaps, what is meant by "probable"? Here's another experiment, Tversky and Kahneman (1983), played by 125 undergraduates at UBC and Stanford for real money:
65% of the subjects chose sequence 2, which is most representative of the die, since the die is mostly green and sequence 2 contains the greatest proportion of green rolls. However, sequence 1 dominates sequence 2, because sequence 1 is strictly included in 2. 2 is 1 preceded by a G; that is, 2 is the conjunction of an initial G with 1. This clears up possible misunderstandings of "probability", since the goal was simply to get the $25.
Another experiment from Tversky and Kahneman (1983) was conducted at the Second International Congress on Forecasting in July of 1982. The experimental subjects were 115 professional analysts, employed by industry, universities, or research institutes. Two different experimental groups were respectively asked to rate the probability of two different statements, each group seeing only one statement:
Estimates of probability were low for both statements, but significantly lower for the first group than the second (p < .01 by Mann-Whitney). Since each experimental group only saw one statement, there is no possibility that the first group interpreted (1) to mean "suspension but no invasion".
The moral? Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable.
Do you have a favorite futurist? How many details do they tack onto their amazing, futuristic predictions?
Tversky, A. and Kahneman, D. 1982. Judgments of and by representativeness. Pp 84-98 in Kahneman, D., Slovic, P., and Tversky, A., eds. Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.
Tversky, A. and Kahneman, D. 1983. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90: 293-315.