Comment author: Michael_Sullivan 23 October 2007 07:00:00PM 0 points [-]

It seems like this may be another facet of the problem with our models of expected utility in dealing with very large numbers. For instance, do you accept the Repugnant Conclusion?

I'm at a loss for how to model expected utility in a way that doesn't generate the Repugnant Conclusion, but my suspicion is that if someone finds it, this problem may go away as well.

Or not. It seems that the various heuristics and biases that keep us from having correct intuitions about very large and small numbers are directly tied up in producing a limiting framework that acts as a conservative check.

One thought: the expected utility of letting our god-like figure run this Turing simulation might well be positive! S/he is essentially *creating* these 3^^^3 people and then killing them. And in fact, it's reasonable to assume that the expected disutility of killing them is entirely dependent on (and thus exactly balanced by) the utility of their creation.

So, our mugger doesn't really hand us a dilemma unless the claim is that this simulation is already *running*, and those people have lives worth living, but if you don't pay the $5, the program will be altered (the sun will stop in the sky, so to speak) and they will all be killed. This last is more of a nitpick.

It does seem to me that the Bayesian probability we assign to this person's statement must be *extraordinarily* low, with an uncertainty much larger than its absolute value. Because a being which is both capable of this and willing to offer such a wager (either in truth or as a test) is deeply beyond our moral or intellectual comprehension. Indeed, if the claim is true, that fact will have utility implications that completely dwarf the immediate decision. If they are willing to do this much over 5 dollars, what will they do for a billion? Or for some end that money cannot normally purchase? Or merely at whim? It seems that the information we receive by failing to pay may be of value commensurate with the disutility of them truthfully carrying out their threat.

In response to Conjunction Fallacy
Comment author: Michael_Sullivan 19 September 2007 09:20:50PM 1 point [-]

Catapult:

The rephrasing as frequencies makes it much clearer that the question is not "How likely is an [A|B|C|D|E] to fit the above description" which J thomas suggested as a misinterpretation that could cause the conjunction fallacy.

Similarly, that rephrasing makes it harder to implicitly assume that category A is "accountants who *don't* play jazz" or C is "jazz players who are not accountants".

I think similarly, in the case of the Poland invasion / diplomatic relations cutoff, what people are intuitively calculating for the compound statement is the conditional probability; in other words, they turn the "and" statement into an "if" statement. If the Soviets invaded Poland, the probability of a cutoff might be high, certainly higher than the current probability given no new information.
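
To make the "and" vs. "if" confusion concrete, here is a minimal sketch; both probabilities are made-up numbers for illustration, not estimates of the actual Cold War scenario:

```python
# Hypothetical numbers for the Poland example.
p_invasion = 0.10               # P(Soviets invade Poland)
p_cutoff_given_invasion = 0.80  # P(cutoff | invasion): the "if" reading intuition computes
p_joint = p_invasion * p_cutoff_given_invasion  # P(invasion AND cutoff): the question asked

# The conditional reading feels high, but the conjunction is necessarily
# no larger than either conjunct alone.
assert p_joint <= p_invasion
```

The fallacy is exactly the gap between `p_cutoff_given_invasion` (0.8) and `p_joint` (0.08).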

But of course that was not the question. A big part of our problem is sometimes the translation of English statements into probability statements. If we do that intuitively or cavalierly, these fallacies become very easy to fall into.

Comment author: Michael_Sullivan 12 September 2007 03:59:33PM -1 points [-]

The primary point being that the inviters were not looking for "a female perspective" but "a perspective from a female---who may in all expectation see things differently than we do".

Clearly it depends on the context, and how the questions get asked. Too often I see this kind of thing play out as "Oh let's find a chick to give us the woman's seal of approval". I was trying to be clear about when such a request would and would not play that way. The equivalent to what was discussed in the OP (a call for the participation of artists) would be sending out a general office email asking for (random) women to comment on the ad campaign. That's condescending and classic privileged behavior. Just asking some particular women they respect the very same kind of questions that they might put to a male colleague, isn't.

Comment author: Michael_Sullivan 11 September 2007 03:13:04PM 5 points [-]

"It's not unlike a group of male advertisers sitting around a table considering whether they should solicit a female colleague's perspective on a particular ad campaign. That might be considered condescending, but it's equally likely that her opinion may be of value, if not uniquely "feminine" in some way."

Not "might" but *would* be considered condescending. It's classic privileged behavior to essentially ask the token X to speak for Xs. And Eliezer hits on exactly *why* it's privileged and condescending. Because if they really cared about her opinion, they would *already have specific questions to ask*, rather than merely "solicit her perspective" so they can check "woman" (or in the original case "artist") off on their checklist of countries heard from.

Comment author: Michael_Sullivan 07 September 2007 02:56:59PM 1 point [-]

I think this is another key application of the way of Bayes. The usefulness of typical future predictions is hampered by the expectation of binary statements.

Most people don't make future pronouncements by making lists of 100 absurd-seeming possibilities, each with a low but significant probability, and saying "although I would bet against any single one of these happening by 2100, I predict that at least 5 of them will."
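
A quick sketch of how such a list can be coherent; the 5% per-item probability and the independence assumption are both just for illustration:

```python
from math import comb

# 100 "absurd" predictions, each given an (assumed) 5% chance,
# treated as independent for simplicity.
n, p = 100, 0.05

# Binomial tail: P(at least 5 of the 100 come true).
p_at_least_5 = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5))

# Each item alone is a heavy underdog, yet the aggregate claim
# "at least 5 happen" comes out better than even.
```

Under these numbers, each single bet loses 95% of the time, while the aggregate prediction wins more often than not.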

A classic simplified model for predicting uncertain futures is a standard tournament betting pool (like the NCAAs, for instance). In any reasonably competitive 64-team field, given an even bet on the best team to be the winner, you would be right to bet against it. But it is still correct to pick the best team to win in a pool (barring any information about other bets). OTOH, if the pool has big upset incentives, or if you know who else is betting on what, you can sometimes make profitable (+EV) bets on teams that are less likely to win than the best team, because those bets are claims of the form "I believe team X has greater than Y% probability to do Z", where Y can be arbitrarily low.
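
Here's a toy version of that pool logic; the pot size, pick counts, and win probabilities are all made-up assumptions:

```python
# Winner-take-all bracket pool with 10 entrants; all numbers are illustrative.
pot = 100.0

def expected_value(p_win, n_same_pick):
    # If your champion wins, you split the pot with everyone
    # else who picked the same team.
    return p_win * pot / n_same_pick

ev_favorite = expected_value(0.30, 6)  # best team, but 6 of 10 entrants hold it
ev_longshot = expected_value(0.08, 1)  # much weaker team, but you alone hold it

# The longshot bet has higher EV even though the team is far less likely to win.
```

With these numbers the favorite returns an expected $5 and the longshot an expected $8, which is the "profitable bet on a less likely team" in miniature.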

Predicting futures is similar. Presumably crazy future predictions look absurd even to field experts because they have a very low probability of occurring. It is right to bet against any of them one on one. But the number of such absurd but not impossible predictions is so large that it is not right to bet against all of them *together*. As we head further into the future, the probability that *some* absurd thing will happen rapidly approaches 1.

The problem is figuring out which ones to bet on if you are making a typical prediction list that is phrased "In year 2100 thus and so will be the case". And the answer is that we don't have enough information to make any absurd predictions with even close to 50% confidence. If we could make a prediction of something with 50% confidence then, at least within fields possessing appropriate knowledge, it would not be considered absurd.

I'd like to see more futurists make predictions of the form I mentioned in my second paragraph, similar to Robin's approach in the list of 10 crazy things he believes.

Because if experts did that, it would get us thinking more about the 1000 or so currently foreseeable directions from which the 10-20 absurd changes of the next 100 years are most likely to come.

Comment author: Michael_Sullivan 05 September 2007 08:08:11PM 3 points [-]

Over the last few centuries, the absurdity heuristic has done worse than maximum entropy - ruled out the actual outcomes as being far too absurd to be considered. You would have been better off saying "I don't know".

Really? I doubt it.

On the set of things that looked absurd 100 years ago, but have actually happened, I'm quite sure you're correct. But of course, that's a highly self-selected sample.

On the set of all possible predictions about the future that were made in 1900? Probably not.

I recall reading, not long ago, a list of predictions about technological and social changes expected during the 20th century, written in 1900. It might have been linked from a previous discussion on this blog, in fact. The surprising thing to me was not how many predictions were way off (quite a few), but how many were dead on, or about as close as they could have been expressed in the language and concepts known in 1900 (maybe half).

I'm not going to claim that anti-absurdity is a *good* heuristic, but I don't think you're judging it quite fairly here. I think it's a fair bit better than maximum entropy.

Comment author: Michael_Sullivan 04 September 2007 04:51:33PM 3 points [-]

There is a tremendous demand for mysteries which are frankly stupid. I wish this demand could be satisfied by scientific mysteries instead. But before we can live in that world, we have to undo the idea that what is scientific is not curiosity-material, that it is already marked as "understood".

I think one of the biggest reasons for this is that most of us are satisficers when it comes to explanations of the world. The implication that some scientists know what is going on with a certain phenomenon, and are not radically reinterpreting all their theories or designing flurries of experiments, essentially means "This phenomenon does not need to radically disturb my map of understanding of the world".

Suppose the answer to the elephant in the room is that God definitely exists and can overturn or modify physical "laws" at whim, and, starting today, is willing to provide independently replicable external proof of that fact to any willing skeptical observer: this silvery-green elephant is the first salvo in the project.

Now, if I knew this, I could certainly claim that "somebody else understands why this elephant is here", but it would be a pretty radical stretch to say "science", even though in some sense it would be. When people say or imply that something is explainable by "science", what I believe they mean is that it is explainable in terms that do not render the current common understanding of some major scientific field moot.

Now, in practice, all people's internal maps of understanding are so severely limited that studying *any* deep scientific problem (solved or not, as long as *they* didn't already understand it) would, in fact, radically change their understanding of the world, even if they were not learning anything in the process that scientists in the field don't already know backwards and forwards. I'm a geek and read lots of science, so I've known all sorts of things about the effects of quantum mechanics on how I should understand the world since I was 14, but the moment when I finally *got* the math of the wave equation (after finally deciding to bang my head on the math as long as necessary) was nonetheless transformative.

So I agree with you completely. The fact that something is understood, if it was once a deep mystery, is no reason for anyone to treat it as trivial.

Comment author: Michael_Sullivan 28 August 2007 08:02:31PM 2 points [-]

It seems very normal to expect that the rule will be more restrictive or arithmetic in nature. But if I am supposed to be *sure of the rule*, then I need to test more than just a few possibilities. Priors are definitely involved here.

Part of the problem is that we are trained like monkeys to make decisions on underspecified problems of this form all the time. I've hardly ever seen a "guess the next [number|letter|item] in the sequence" problem that didn't have multiple answers. But most of them have at least one answer that feels "right" in the sense of being simplest, most elegant, most obvious, or within typical bounds given basic assumptions about problems of that type.

I'm the sort of accuracy-minded prick who would keep testing until he was very close to *certain* what the rule was, and would probably take forever.

An interesting version of this phenomenon is the game "Bang! Who's dead?". One person starts the game, says "Bang!", and some number of people are metaphorically dead, based on a rule that the other participants are supposed to figure out (which is, AFAIK, the same every time, but I'm not saying it here). The only information the starter will give is who is dead each time.

Took me forever to solve this, because I tend to have a much weaker version of the bias you consider here. But realistically, most of my mates solved this game much faster than I did. I suspect that this "jump to conclusions" bias is useful in many situations.

Comment author: Michael_Sullivan 14 August 2007 06:52:10PM -2 points [-]

If sabotage increases the probability, lack of sabotage necessarily decreases the probability.

That's true on average, but different types of sabotage evidence may have different effects on the probability, some negative, some positive. It's even conceivable, though unlikely, for observed sabotage to decrease the probability on average.

Comment author: Michael_Sullivan 13 August 2007 02:32:09PM 10 points [-]

The particular observation of no sabotage was evidence against, and could not legitimately be worked into evidence for.

You are assuming that there are only two types of evidence, sabotage v. no sabotage, but there can be much more differentiation in the actual facts.

Given Frank's claim, there is a reasoning model for which your claim is inaccurate. Whether this is the model Earl Warren had in his head is an entirely different question, but here it is:

We have some weak independent evidence that a fifth column exists, giving us a prior probability of >50%. We have good evidence that some Japanese Americans are disaffected, with a prior of 90%+. We believe that a fifth column which is organized will attempt a *significant* coordinated sabotage event, possibly holding off on any and all sabotage until that event. We also believe that the disaffected who are here would, if there is *no* fifth column, engage in small acts of sabotage on their own with high probability.

Therefore, if there are small acts of sabotage that show no large scale organization, this is weak evidence of a lack of a fifth column. If there is a significant sabotage event, this is *strong* evidence of a fifth column. If there is no sabotage at all, this is weak evidence of a fifth column. Not all sabotage is alike, it's not a binary question.
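
That model can be written down as a toy Bayesian update. Every number below is an assumption invented for illustration, not anything Warren actually quantified:

```python
# Toy version of the fifth-column model; all likelihoods are made up.
prior = 0.5  # P(organized fifth column exists)

# (P(observation | fifth column), P(observation | no fifth column))
likelihoods = {
    "no sabotage":    (0.70, 0.20),  # organized group holds off entirely
    "small sabotage": (0.10, 0.75),  # lone disaffected actors, no coordination
    "large sabotage": (0.20, 0.05),  # the coordinated event itself
}

def posterior(l_h, l_not_h):
    # Bayes' rule for the two-hypothesis case.
    return l_h * prior / (l_h * prior + l_not_h * (1 - prior))

posts = {obs: posterior(*ls) for obs, ls in likelihoods.items()}
# Under these numbers, "no sabotage" raises the probability of a fifth column
# while "small sabotage" lowers it: sabotage really isn't a binary question.

# Sanity check (conservation of expected evidence): the probability-weighted
# average of the posteriors must come back to the prior.
p_obs = {obs: l[0] * prior + l[1] * (1 - prior) for obs, l in likelihoods.items()}
avg = sum(p_obs[obs] * posts[obs] for obs in likelihoods)
```

The sanity check is also why the earlier quoted claim is true *on average*: the different observations can push in different directions, but their expected net effect is zero.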

Now, this is a nice rationalization after the fact. The question is: if there had been rare small acts of sabotage, how likely is it that Warren and others in power would have taken this as evidence that there was no fifth column? I submit that it is very unlikely, and your criticism of their actual logic would thus be correct. But we can't know for certain, since they were never presented with that particular problem. And in fact, I wish that you, or someone like you, had been on hand at the hearing to ask the key question: "Precisely what would you consider to be evidence that the fifth column does *not* exist?"

Of course, whether widespread internment was a reasonable policy, even if the logic they were using were not flawed, is a completely separate question, on which I'd argue that *very* strong evidence should be required to adopt such a severe policy (if we are willing to consider it at all), not merely of a fifth column, but of widespread support for it. It is hard to come up with a plausible set of priors where "no sabotage" could possibly imply a high probability of that situation.
