Manfred comments on Stupid Questions, 2nd half of December - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I have an intuition that I have dissolved the Sleeping Beauty paradox as semantic confusion about the word "probability". I am aware that my reasoning is unlikely to be accepted by the community, but I am unsure what is wrong with it. I am posting this to the "stupid questions" thread to see if it helps me gain any insight, either on Sleeping Beauty or on the thought process that led to me feeling like I've dissolved the question.
When the word "probability" is used to describe the beliefs of an agent, we are really talking about how that agent would bet, for instance in an ideal prediction market. However, if the rules of the prediction market are unclear, we may get semantic confusion.
In general, when you are asked "What is the probability that the coin came up heads?", we interpret this as "How much are you willing to pay for a contract that will be worth 1 dollar if the coin came up heads, and nothing if it came up tails?". This seems straightforward, but in the Sleeping Beauty problem the agent may make the same bet multiple times, which introduces ambiguity.
Person 1 may then interpret the question as follows: "Every time you wake up, there is a new one-dollar bill on the table. How much are you willing to pay for a contract that gives you the dollar if the coin came up heads?". In this interpretation, you get to keep all the dollars you won throughout the experiment.
In contrast, person 2 may interpret the question as follows: "There is one dollar on the table. Every time you wake up, you are given a chance to revise the price you are willing to pay for the contract, but all earlier bets are cancelled, so only the last bet counts". In this interpretation, there is only one dollar to be won.
Person 1 will conclude that the probability is 1/3, and person 2 will conclude that the probability is 1/2. However, once they agree on what bet they are asked to make, the disagreement is dissolved.
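To make the two interpretations concrete, here is a small Monte Carlo sketch (my own illustration, not from the original problem statement) using the standard setup of one awakening on heads and two on tails. The break-even contract price under interpretation 1 is the fraction of heads among *awakenings*; under interpretation 2, since only one bet per experiment ultimately counts, it is the fraction of heads among *experiments*:

```python
import random

def run_experiments(n=100_000, seed=0):
    """Estimate the fair contract price under both betting interpretations."""
    rng = random.Random(seed)
    awakenings = 0        # total awakenings across all experiments
    heads_awakenings = 0  # awakenings on which the coin shows heads
    heads_experiments = 0 # experiments in which the coin came up heads
    for _ in range(n):
        heads = rng.random() < 0.5
        wakes = 1 if heads else 2  # heads: wake once; tails: wake twice
        awakenings += wakes
        if heads:
            heads_awakenings += wakes
            heads_experiments += 1
    # Interpretation 1: a fresh dollar at every awakening -> price per awakening.
    # Interpretation 2: only the last bet counts -> one effective bet per experiment.
    return heads_awakenings / awakenings, heads_experiments / n

per_awakening, per_experiment = run_experiments()
print(per_awakening)   # close to 1/3: the thirder's fair price
print(per_experiment)  # close to 1/2: the halfer's fair price
```

Both agents are using the same evidence and the same coin; only the payout rule they imagine differs, which is why agreeing on the bet dissolves the disagreement.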
The first definition is probably better matched to current usage of the word. This gives most rationalists a strong intuition that the thirder position is "correct". However, if you want to know which definition is most useful or applicable, that really depends on the disguised query, and on which real-world scenario the parable is meant to represent. If the payoff is determined only once (at the end of the experiment), then the halfer definition may be the more useful one.
ETA: After reading the Wikipedia:Talk section for Sleeping Beauty, it appears that this idea is not original and that in fact a lot of people have reached the same conclusion. I should have read that before I commented...
Defining probabilities in terms of bets is one way to do things, but not the only way - one can also define them in terms of limiting frequencies, or as numbers that allow you to efficiently encode the environment, or as objects that follow some axioms you think numerical confidence-objects should follow.
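The limiting-frequency view gives the same split as the betting view, depending on the reference class you count over. As a quick exact calculation (again assuming the standard one-awakening-on-heads, two-on-tails setup):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)   # fair coin
wakes_heads = 1            # awakenings if the coin lands heads
wakes_tails = 2            # awakenings if the coin lands tails

# Frequency of heads among awakenings (thirder reference class):
per_awakening = (p_heads * wakes_heads) / (
    p_heads * wakes_heads + (1 - p_heads) * wakes_tails
)
# Frequency of heads among experiments (halfer reference class):
per_experiment = p_heads

print(per_awakening)   # 1/3
print(per_experiment)  # 1/2
```

So the frequency definition does not by itself settle the dispute either; it just relocates the ambiguity into the choice of reference class.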
I have witnessed people arguing for betting with probability 1/2 in case 2. After all, they say, the probability is 1/2, so that's how you should bet. Most people who approach this problem for the first time (whether thirders or halfers) use the same decision-making algorithm: first, compute the probability (perhaps wrongly) of winning the bet for the person inside the experiment, second, use that probability to determine the value of the bet.
When you say it's obvious that in case 2 you should bet a certain way, I think you're choosing how to bet in a different way: from the viewpoint of someone on the outside, what strategy should the person on the inside follow to maximize their gains? This viewpoint becomes a lot more obvious after being exposed to LessWrong for a couple of years.
And there's one tricky thing here, which is that if you use this perspective, you as the outside person have some probabilities, but the person inside the experiment also might have probabilities, which do not have to be simply related to the optimal strategy. So you have to be pretty careful with this argument that knowing the correct strategy implies knowing the correct probabilities.