Catapult:
The rephrasing as frequencies makes it much clearer that the question is not "How likely is an [A|B|C|D|E] to fit the above description" — which J Thomas suggested as a misinterpretation that could produce the conjunction fallacy.
Similarly, that rephrasing makes it harder to implicitly assume that category A is "accountants who *don't* play jazz" or C is "jazz players who are not accountants".
I think, similarly, in the case of the Poland invasion / diplomatic-relations cutoff question, what people are intuitively calculating for the compound statement is the conditional probability — in other words, turning the "and" statement into an "if" statement. If the Soviets invaded Poland, the probability of a cutoff might be high, certainly higher than the current probability given no new information.
But of course that was not the question. A big part of our problem is the translation of English statements into probability statements; if we do it intuitively or cavalierly, these fallacies become very easy to fall into.
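To make that concrete, here is a toy calculation (the numbers are invented for illustration, not taken from the actual study) showing how the two readings come apart:

```python
# Toy numbers, purely illustrative: how the "and" reading differs from the "if" reading.
p_invasion = 0.10                    # P(Soviets invade Poland)
p_cutoff_given_invasion = 0.80       # P(US cuts relations | invasion)
p_cutoff_alone = 0.05                # P(US cuts relations), no new information

# The compound event the question actually asks about:
p_invasion_and_cutoff = p_invasion * p_cutoff_given_invasion   # ~0.08

print(p_cutoff_given_invasion)   # 0.80 -- the "if" reading feels high, so the compound gets overrated
print(p_invasion_and_cutoff)     # 0.08 -- the probability the question really asked for
print(p_cutoff_alone)            # 0.05 -- baseline with no new information
```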
It seems like this may be another facet of the problem with our models of expected utility in dealing with very large numbers. For instance, do you accept the Repugnant Conclusion?
I'm at a loss for how to model expected utility in a way that doesn't generate the Repugnant Conclusion, but my suspicion is that if someone finds it, this problem may go away as well.
Or not. It seems that our various heuristics and biases, which keep us from having correct intuitions about very large and very small numbers, are directly tied up in producing a limiting framework that acts as a conservative check.
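For concreteness, here is a minimal total-utilitarian sketch of how the Repugnant Conclusion usually gets generated (the populations and per-person utilities are made up):

```python
# Minimal total-utilitarian comparison that generates the Repugnant Conclusion.
# All numbers are invented purely for illustration.

def total_utility(population, utility_per_person):
    return population * utility_per_person

world_a = total_utility(1_000_000, 100.0)        # small population, excellent lives
world_z = total_utility(10_000_000_000, 0.011)   # vast population, lives barely worth living

print(world_a)              # 100,000,000
print(world_z)              # 110,000,000
print(world_z > world_a)    # True: totalism prefers the "repugnant" world Z
```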
One thought: the expected utility of letting our god-like figure run this Turing machine simulation might well be positive! S/He is essentially *creating* these 3^^^3 people and then killing them. And in fact, it's reasonable to assume that the expected disutility of killing them is entirely dependent on (and thus exactly balanced by) the utility of their creation.
So, our mugger doesn't really hand us a dilemma unless the claim is that this simulation is already *running*, and those people have lives worth living, but if you don't pay the $5, the program will be altered (the sun will stop in the sky, so to speak) and they will all be killed. This last is more of a nitpick.
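A back-of-the-envelope sketch of that accounting (the utility figures are stand-ins, and 3^^^3 is replaced by an ordinary large placeholder so the arithmetic actually runs):

```python
# Stand-in numbers; 3^^^3 is far too large to represent, so use a placeholder N.
N = 10**12            # placeholder for 3^^^3 people
u_create = 1.0        # utility of creating one life worth living
u_kill = -1.0         # assumed to exactly offset the utility of creation

# Case 1: the mugger creates the people and then kills them.
net_create_then_kill = N * (u_create + u_kill)   # 0.0 -- creation balances killing

# Case 2: the simulation is already running; refusal only adds the killing.
net_kill_existing = N * u_kill                   # -1e12 -- pure disutility

print(net_create_then_kill, net_kill_existing)
```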
It does seem to me that the Bayesian probability we assign to this person's statement must be *extraordinarily* low, with an uncertainty much larger than its absolute value. Because a being which is both capable of this and willing to offer such a wager (either in truth or as a test) is deeply beyond our moral or intellectual comprehension. Indeed, if the claim is true, that fact has utility implications that completely dwarf the immediate decision. If they are willing to do this much over 5 dollars, what will they do for a billion? Or for some end that money cannot normally purchase? Or merely at whim? It seems that the information we receive by failing to pay may be of value commensurate with the disutility of them truthfully carrying out their threat.
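To see why the assigned probability has to be that extraordinarily low, a rough expected-value sketch (again with a placeholder for 3^^^3 and invented per-life utilities):

```python
# Rough expected-value comparison; all magnitudes are placeholders, not real estimates.
N = 10**12                 # stand-in for 3^^^3 lives (the real number is unimaginably larger)
value_per_life = 1.0       # utility units per life, invented for illustration
cost_of_paying = 5.0       # the $5 the mugger demands

def expected_loss_if_refusing(p_claim_true):
    return p_claim_true * N * value_per_life

# A "merely small" probability still swamps the $5:
print(expected_loss_if_refusing(1e-9), cost_of_paying)    # 1000.0 vs 5.0
# Only a probability that shrinks with the size of the claim keeps the trade sane:
print(expected_loss_if_refusing(1e-15), cost_of_paying)   # 0.001 vs 5.0
```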