There are two insights from Bayesianism that occurred to me and that I hadn't seen anywhere else before.
I like the lists in the two posts linked above, so for the sake of completeness, I'm going to add my two cents to the public domain. The second penny is here.



"Probable enough"

When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs.


The Bayesian way of thinking introduced me to the idea of "a hypothesis which probably isn't true, but is probable enough to rise to the level of conscious attention" — in other words, to the situation where P(H) is notable but less than 50%.

Looking back, I think the notion of taking seriously something you don't believe is true used to be alien to me. Hence, everything was either probably true or probably false; things in the former category I was overconfidently certain about, and things in the latter category were barely worth thinking about.

This model was correct, but only in a formal sense.

Suppose you live in Gotham, a city famous for its crime rate and its masked (and well-funded) vigilante, Batman. You have recently read The Better Angels of Our Nature: Why Violence Has Declined by Steven Pinker, and according to some of the theories described there, Batman isn't good for Gotham at all.

Now you know, for example, Donald Black's theory that "crime is, from the point of view of the perpetrator, the pursuit of justice". You know about the idea that for the crime rate to drop, people must perceive their legal system as legitimate. You suspect that criminals beaten up by Bats don't perceive the act as a fair and regular punishment for something bad, or as an attempt to defend them from injustice; instead, they perceive it as a stroke of bad luck. So the criminals are busy plotting their revenge, not internalizing civil norms.

You believe that if you send your copy of the book (with the key passages highlighted) to someone connected to Batman, Batman will change his ways and Gotham will become a much nicer place in terms of its homicide rate.

So you are trying to find out Batman's secret identity, and there are 17 possible suspects. Derek Powers looks like a good candidate: he is wealthy and has a long history of secretly delegating tasks involving illegal violence to his henchmen; however, his motivation is far from obvious. You estimate P(Derek Powers employs Batman) at 20%. You have very little information about the other candidates, such as Ferris Boyle, Bruce Wayne, Roland Daggett, Lucius Fox, or Matches Malone, so you assign an equal 5% to each of them.

In this case you should pick Derek Powers as your best guess when forced to name only one candidate (for example, if you are forced to send the book to someone today), but you should also be aware that your guess is 80% likely to be wrong. When making expected utility calculations, you should take Derek Powers more seriously than Lucius Fox, but only by 15 percentage points (a weight of 0.20 versus 0.05).
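A minimal sketch of that reasoning, using the made-up numbers above (only six of the 17 suspects are named; the unnamed ones also get 5% each, so the full distribution sums to 100%):

```python
# Posterior over who is behind Batman, with the post's made-up numbers.
posterior = {
    "Derek Powers": 0.20,
    "Ferris Boyle": 0.05,
    "Bruce Wayne": 0.05,
    "Roland Daggett": 0.05,
    "Lucius Fox": 0.05,
    "Matches Malone": 0.05,
    # ...the remaining 11 suspects also get 0.05 each.
}

# MAP choice: the single best guess if you are forced to send the book today.
best_guess = max(posterior, key=posterior.get)
print(best_guess, posterior[best_guess])                      # Derek Powers 0.2
print("P(best guess is wrong):", 1 - posterior[best_guess])   # 0.8
```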

In other words, you should take the maximum a posteriori probability (MAP) hypothesis into account without deluding yourself into thinking that you now understand everything, or nothing at all. The Derek Powers hypothesis probably isn't true; but it is useful.

Sometimes I find it easier to reframe the question from "which hypothesis is true?" to "which hypothesis is probable enough?". Then it's totally okay that your pet theory isn't probable, merely probable enough, so doubt becomes easier. Also, you are aware that your pet theory is likely to be wrong (and this is nothing to be sad about), so the alternatives come to mind more naturally.

These "probable enough" hypotheses can serve as very concise summaries of the state of your knowledge: you simultaneously outline the general sort of evidence you've observed and stress that you aren't really sure. I like to think of it as a rough, qualitative, more System-1-friendly variant of likelihood ratio sharing.

Planning Fallacy

The original explanation of the planning fallacy (proposed by Kahneman and Tversky) is that people focus on the most optimistic scenario when asked about a typical one (instead of trying to take an Outside View). If you keep the distinction between "probable" and "probable enough" in mind, you can see this claim in a new light.

Because the most optimistic scenario is the most probable and the most typical one, in a certain sense.

The illustration, with numbers pulled out of thin air, goes like this: you want to visit a museum.

The first thing you need to do is get dressed and grab your keys and stuff. Usually (with 80% probability) you do this very quickly, but there is a small chance that your museum ticket has been devoured by the entropy monster living on your computer table.

The second thing is to catch the bus. Usually (p = 80%) the bus is on schedule, but sometimes it is too early or too late. After that, the bus may (20%) or may not (80%) get stuck in a traffic jam.

Finally, you need to find the museum building. You've been there once before, so you sort of remember the route, but you could still get lost with 20% probability.

And there you have it: P(everything is fine) = 0.8 × 0.8 × 0.8 × 0.8 ≈ 41%, and the probability of every other scenario is roughly 10% or less. "Everything is fine" is probable enough, yet likely to be false. Supposedly, humans pick the MAP hypothesis and then forget about every other scenario in order to save computation.
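A quick way to check these numbers is to enumerate all 16 combinations of the four 80/20 steps; this is just a sketch with the made-up probabilities above:

```python
from itertools import product

# Four independent steps, each going smoothly with probability 0.8:
# getting ready, the bus being on schedule, no traffic jam, finding the building.
P_OK = 0.8
N_STEPS = 4

scenarios = {}
for outcome in product([True, False], repeat=N_STEPS):
    p = 1.0
    for step_ok in outcome:
        p *= P_OK if step_ok else (1 - P_OK)
    scenarios[outcome] = p

p_all_fine = scenarios[(True, True, True, True)]
print(round(p_all_fine, 2))      # 0.41 -- the single most likely scenario...
print(round(1 - p_all_fine, 2))  # 0.59 -- ...which is still more likely false than true
```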

Also, "everything is fine" is a good description of your plan. If your friend asks, "so how are you planning to get to the museum?", and you answer, "well, I catch the bus, get stuck in a traffic jam for 30 agonizing minutes, and then just walk from here", your friend is going to get a completely wrong idea about the dangers of your journey. So, in a certain sense, "everything is fine" is a typical scenario.

Maybe it isn't the human inability to pick the most likely scenario that should be blamed. Maybe it is the false assumption that "most likely == likely to be correct" that contributes to this ubiquitous error.

In that case you would be better off picking "something will go wrong, and I will be late" instead of "everything will be fine".

So, sometimes you are interested in the best specimen from your hypothesis space, sometimes you are interested in the most likely description (no matter how vague it is), and sometimes there are no shortcuts and you have to do an actual expected utility calculation.
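For instance, here is a toy expected-utility comparison of the two museum plans; the utilities, and the assumption that a 30-minute buffer absorbs any single hiccup, are made up purely for illustration:

```python
from math import comb

# Probabilities from the museum example: four independent 80/20 steps.
p_all_fine = 0.8 ** 4                                              # ~0.41
p_at_most_one_hiccup = p_all_fine + comb(4, 1) * 0.8 ** 3 * 0.2    # ~0.82

# Hypothetical utilities (not from the post): on time is worth 10,
# late is worth 0, and leaving 30 minutes early costs 2.
U_ON_TIME, U_LATE, COST_OF_BUFFER = 10, 0, 2

# Plan A: assume "everything is fine" and leave with no slack.
eu_no_buffer = p_all_fine * U_ON_TIME + (1 - p_all_fine) * U_LATE

# Plan B: leave early, assuming the buffer absorbs any single thing going wrong.
eu_buffer = (p_at_most_one_hiccup * U_ON_TIME
             + (1 - p_at_most_one_hiccup) * U_LATE
             - COST_OF_BUFFER)

print(round(eu_no_buffer, 2), round(eu_buffer, 2))   # ~4.1 vs ~6.19: the pessimistic plan wins
```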
7 comments

The 'how to think of planning fallacy' I grokked was 'people while planning don't simulate the scenario in enough detail and don't see potential difficulties,'* so this is new to me. Or rather, what you say is in some sense part of the way I thought, except I didn't simulate it in enough detail to realise that I should understand it in a probabilistic sense as well, so it's new to me when it shouldn't be.

*In fact, right now I'm procrastinating going and telling my prof that an expansion I told him I'd do is infinite.

I like your example, but there is additional evidence that could be gathered to refine your premise. You can check the traffic situation along your route and make estimates about travel time. So, given additional tools, there is a chance to make "everything is fine" the more likely scenario rather than not. I think this is especially true for those of us who drive cars. If you and I decide to go to the Denver Art Museum, and you are coming from a hotel in downtown Denver while I'm driving from my house out of town, whether I'm going to be on time or not depends on all the factors you mentioned. However, I can mitigate some of those factors by adding data. I can do the same thing for you by empowering you with a map or by guiding you toward a tool like Google Maps to get you from your hotel to the museum more efficiently. I think when you live someplace for a while and make a trip regularly, you get used to certain ideas about your journey, which is why "everything is fine" is usually picked by people. Trying to compensate for every eventuality is mind-numbing. However, I think making proper use of tools to make things as efficient as possible is also a good idea.

However, I am very much in favor of this line of thinking.

Making sure I understood you: you are saying that people sometimes pick "everything is fine" because:

1) they are confident that if anything goes wrong, they would be able to fix it, so everything is fine once again

2) they are so confident in this that they don't make specific plans, believing that they will be able to fix everything on the spur of the moment

aren't you?

Looks plausible, but something must be wrong there, because the planning fallacy:

a) exists (so people aren't evaluating their abilities well)

b) exists even when people aren't familiar with the situation they are predicting (here, people have no grounds for the "ah, I'm able to fix anything anyway" effect)

c) exists even in people with low confidence (however, maybe the effect is weaker here; it's an interesting theory to test)

I blame overconfidence and similar self-serving biases.

The Planning Fallacy explanation makes a lot of sense.

Off-topic: you seem to be one of the organizers of the Houston meetup. I'll be in town the week of Nov 16, feel free to let me know if there is anything scheduled.

Hi shminux. Sorry, just saw your comment. We don't seem to have a date set for November yet, but let me check with the others. Typically we meet on Saturdays, are you still around on the 22nd? Or we could try Sunday the 16th. Let me know.

I'm leaving on Thu very early, so Sunday is better. However, I might be occupied with some family stuff instead, so please do not change your plans because of me. I'll check the Google group messages and contact you if I can make it. Thanks!