I think this might hit the nail on the head with regard to the issues people have with the Monty Hall problem (and its variations).
Too bad the Monty Hall problem is too well known. Otherwise we could run tests on people of different native languages and see whether the language confuses the hell out of people who weren't taught to think of probabilities in a language-independent way. We say "chances of X" the way we say "colour of X", but the chances are not a property of X in the way that the colour is. We say "distance to" something, not "distance of" something. Maybe we should say "chances to an outcome": chances from our knowledge, to an outcome.
It's a bit of an approximation to speak of "the colour of X" too, and in roughly the same way as it is to speak of "the probability of X".
Indeed. But there's an important difference: X has some physical property that makes it one colour or another, while that's not so for probability. There's also not a great deal of important confusion in the colour case.
So that is the Bayesian view of things, and I would now like to point out a couple of classic brainteasers that derive their brain-teasing ability from the tendency to think of probabilities as inherent properties of objects.
Let's take the old classic: You meet a mathematician on the street, and she happens to mention that she has given birth to two children on two separate occasions. You ask: "Is at least one of your children a boy?" The mathematician says, "Yes, he is."
I think that this puzzle still has some brain-teasing ability left, even for the Bayesian.
After all, a proper Bayesian treatment would have to ask, "What was the prior probability that I would ask whether at least one of the children was a boy?" That is, you would have to ask yourself, "How do I condition on the fact that I'm the sort of person who asks whether one of the children is a boy, instead of asking whether one of the children is a girl?" Hence, the problem leads directly into anthropic considerations.
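The point about conditioning on how the question got asked can be made concrete with a small simulation. This is my own sketch, not anything from the original post: it contrasts a hypothetical protocol where you ask "Is at least one a boy?" and condition on a truthful "yes", against one where the mathematician volunteers the sex of one randomly chosen child. The probability that both children are boys differs between the two, even though the sentence "at least one is a boy" is true in both.

```python
import random

def simulate(trials=200_000, seed=0):
    """Estimate P(both boys) under two question-asking protocols."""
    rng = random.Random(seed)
    a_yes = a_both = 0   # Protocol A: you ask, she answers "yes"/"no"
    b_boy = b_both = 0   # Protocol B: she mentions a random child's sex
    for _ in range(trials):
        kids = [rng.choice("BG"), rng.choice("BG")]
        # Protocol A: condition on a truthful "yes" to "at least one boy?"
        if "B" in kids:
            a_yes += 1
            a_both += kids == ["B", "B"]
        # Protocol B: condition on her happening to mention a boy
        if rng.choice(kids) == "B":
            b_boy += 1
            b_both += kids == ["B", "B"]
    return a_both / a_yes, b_both / b_boy

p_asked, p_volunteered = simulate()
print(p_asked)        # close to 1/3
print(p_volunteered)  # close to 1/2
```

Same evidence-sentence, different sampling processes, different posteriors: the probability lives in the relation between your information and the world, not in the children.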
Today's post, Probability is in the Mind, was originally published on 12 March 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Mind Projection Fallacy, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.