Several months ago, we had an interesting discussion about the Sleeping Beauty problem, which runs as follows:
Sleeping Beauty volunteers to undergo the following experiment. On Sunday she is given a drug that sends her to sleep. A fair coin is then tossed just once in the course of the experiment to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. The sleeping drug induces a mild amnesia, so that she cannot remember any previous awakenings during the course of the experiment (if any). During the experiment, she has no access to anything that would give a clue as to the day of the week. However, she knows all the details of the experiment.
Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”
In the end, the fact that there were so many reasonable-sounding arguments for both sides, and so much disagreement about a simple-sounding problem among above-average rationalists, should have set off major alarm bells. Yet only a few people pointed this out; most commenters, including me, followed the silly strategy of trying to answer the question, and I did so even after I noticed that my intuition could see both answers as being right depending on which way I looked at it, which in retrospect would have been a perfect time to say “I notice that I am confused” and backtrack a bit…
And on reflection, considering my confusion rather than trying to answer the question on its own terms, it seems to me that the problem (as it’s normally stated) is purely a tree-falling-in-the-forest problem: a debate about the normatively “correct” degree of credence that only seemed like an issue because any conclusions about what Sleeping Beauty “should” believe weren’t paying their rent; they were disconnected from any expectation of feedback from reality about how right they were.
It may seem either implausible or alarming that as fundamental a concept as probability can be the subject of such debates, but remember that the “If a tree falls in the forest…” argument only comes up because the two understandings of “sound,” as “vibrations in the air” and as “auditory processing in a brain,” coincide often enough that most people other than philosophers have better things to do than argue about which is more correct. Likewise, in situations that we actually encounter in real life where we must reason or act on incomplete information, long-run frequency is generally about the same as optimal decision-theoretic weighting. If you’re given the question “If you have a bag containing a white marble and two black marbles, and another bag containing two white marbles and a black marble, and you pick a bag at random and pick a marble out of it at random and it’s white, what’s the probability that you chose the second bag?” then you can just answer it as given, without worrying about specifying a payoff structure, because no matter how you reformulate it in terms of bets and payoffs, if your decision-theoretic reasoning talks about probabilities at all then there’s only going to be one sane probability you can put into it. You can assume that answers to non-esoteric probability problems will be able to pay their rent if they are called upon to do so, and so you can do plenty within pure probability theory long before you need your reasoning to generate any decisions.
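(For concreteness, here is a quick check of the marble question, written as a hypothetical Python sketch rather than anything from the original discussion; it just exhibits the one sane number that any bet-based reformulation would have to use.)

```python
from fractions import Fraction

# Bag 1 holds one white and two black marbles; bag 2 holds two white and one black.
p_bag2 = Fraction(1, 2)                    # each bag is picked with equal probability
p_white_given_bag1 = Fraction(1, 3)
p_white_given_bag2 = Fraction(2, 3)

# Bayes' theorem: P(bag 2 | white) = P(white | bag 2) * P(bag 2) / P(white)
p_white = (1 - p_bag2) * p_white_given_bag1 + p_bag2 * p_white_given_bag2
print(p_white_given_bag2 * p_bag2 / p_white)   # 2/3
```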
But when you start getting into problems where there may be multiple copies of you and you don’t know how their responses will be aggregated — or, more generally, where you may or may not be scored on your probability estimate multiple times or may not be scored at all, or when you don’t know how it’s being scored, or when there may be other agents following reasoning correlated with but not necessarily identical to yours — then I think talking too much about “probability” directly will cause different people to be solving different problems, given the different ways they will implicitly imagine being scored on their answers so that the question of “What subjective probability should be assigned to x?” has any normatively correct answer. Here are a few ways that the Sleeping Beauty problem can be framed as a decision problem explicitly:
Each interview consists of Sleeping Beauty guessing whether the coin came up heads or tails, and being given a dollar if she was correct. After the experiment, she will keep all of her aggregate winnings.
In this case, intending to guess heads has an expected value of $.50 (because if the coin came up heads, she’ll get $1, and if it came up tails, she’ll get nothing), and intending to guess tails has an expected value of $1 (because if the coin came up heads, she’ll get nothing, and if it came up tails, she’ll get $2). So she should intend to guess tails.
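A tiny simulation makes the same point; this is a hypothetical sketch of the payoff scheme just described, not code from the original discussion:

```python
import random

def average_winnings(guess, runs=100_000):
    """Average total winnings per run of the experiment, assuming Beauty
    commits in advance to making the same guess at every interview."""
    total = 0
    for _ in range(runs):
        coin = random.choice(["heads", "tails"])
        interviews = 1 if coin == "heads" else 2   # Monday only, or Monday and Tuesday
        if guess == coin:
            total += interviews                    # one dollar per correct interview
    return total / runs

print(average_winnings("heads"))   # roughly 0.5
print(average_winnings("tails"))   # roughly 1.0
```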
Each interview consists of Sleeping Beauty guessing whether the coin came up heads or tails. After the experiment, she will be given a dollar if she was correct on Monday.
In this case, she should clearly be indifferent (which you can call “.5 credence” if you’d like, but it seems a bit unnecessary).
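(Spelling out the same expected-value arithmetic as a hypothetical sketch: only one guess per run is scored here, so the two strategies come out exactly even.)

```python
from fractions import Fraction

# Only the Monday answer is scored, so each run yields exactly one scored guess.
p_heads = p_tails = Fraction(1, 2)
print(p_heads * 1 + p_tails * 0)   # guess heads: expected winnings of $1/2
print(p_heads * 0 + p_tails * 1)   # guess tails: expected winnings of $1/2
```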
Each interview consists of Sleeping Beauty being told whether the coin landed on heads or tails, followed by one question, “How surprised are you to hear that?” Should Sleeping Beauty be more surprised to learn that the coin landed on heads than that it landed on tails?
I would say no; this seems like a case where the simple probability-theoretic reasoning applies. Before the experiment, Sleeping Beauty knows that a coin is going to be flipped, and she knows it’s a fair coin, and going to sleep and waking up isn’t going to change anything she knows about it, so she should not be even slightly surprised one way or the other. (I’m pretty sure that surprisingness has something to do with likelihood. I may write a separate post on that, but for now: after finding out whether the coin did come up heads or tails, the relevant question is not “What is the probability that the coin came up {heads,tails} given that I remember going to sleep on Sunday and waking up today?”, but “What is the probability that I’d remember going to sleep on Sunday and waking up today given that the coin came up {heads,tails}?”, in which case either outcome should be equally surprising, in which case neither outcome should be surprising at all.)
Each interview consists of one question, “What is the limit of the frequency of heads as the number of repetitions of this experiment goes to infinity?”
Here of course the right answer is “.5, and I hope that’s just a hypothetical…”
Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”, and the answer given will be scored according to a logarithmic scoring rule, with the aggregate result corresponding to the number of utilons (converted to dollars, let’s say) she will be penalized after the experiment.
In this case it is optimal to bet 1/3 that the coin came up heads, 2/3 that it came up tails:
| Bet on heads: | 1/2 | 1/2 | 1/3 | 1/3 |
| --- | --- | --- | --- | --- |
| Actual flip: | Heads | Tails | Heads | Tails |
| Monday: | -1 bit | -1 bit | -1.585 bits | -0.585 bits |
| Tuesday: | n/a | -1 bit | n/a | -0.585 bits |
| Total: | -1 bit | -2 bits | -1.585 bits | -1.17 bits |
| Expected: | -1.5 bits | -1.5 bits | -1.3775 bits | -1.3775 bits |
(If you’re not familiar enough with the logarithmic scoring rule to trust that 1/3 is better than every other option too, you can check this by graphing y = (log2(x) + 2 log2(1 - x))/2, where x is the probability you assign to heads, and y is expected utility in bits.)
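Here is the same check in code rather than on a graph; again a hypothetical sketch, with the per-awakening scores set up exactly as in the table above:

```python
import math

def expected_score(p):
    """Expected total log score in bits per run of the experiment, if Beauty
    reports probability p for heads at every awakening: heads is scored once
    (Monday only), tails twice (Monday and Tuesday), each with probability 1/2."""
    return 0.5 * math.log2(p) + 0.5 * 2 * math.log2(1 - p)

# Scan a grid of candidate answers and pick the best-scoring one.
candidates = [i / 1000 for i in range(1, 1000)]
best = max(candidates, key=expected_score)
print(best, expected_score(best))   # about 1/3, about -1.3775 bits
print(expected_score(0.5))          # -1.5 bits, matching the table
```

(Setting the derivative of (log2(p) + 2 log2(1 - p))/2 to zero gives 1/p = 2/(1 - p), i.e. p = 1/3, so the grid search isn’t hiding a better answer elsewhere.)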
So I hope it is self-evident that reframing seemingly paradoxical probability problems as decision problems generally makes them trivial, or at least agreeably solvable and non-paradoxical. What may be more controversial is my claim that this is satisfactory not as a circumvention but as a dissolution of the question “What probability should be assigned to x?”, when you have a clear enough idea of why you’re wondering about the “probability.” Can we really taboo concepts like “probability” and “plausibility” and “credence”? I should certainly hope so; judgments of probability had better be about something, and not just rituals of cognition that we use because it seems like we’re supposed to rather than because it wins.
But when I try to replace “probability” with what I mean by it, and when I mean it in any normative sense — not, like, out there in the territory, but just “normative” by whatever standard says that assigning a fair coin flip a probability of .5 heads tends to be a better idea than assigning it a probability of .353289791 heads — then I always find myself talking about optimal bets or average experimental outcomes. Can that really be all there is to probability as degree of belief? Can’t we enjoy, for its own sake, the experience of having maximally accurate beliefs given whatever information we already have, even in circumstances where we don’t get to test it any further? Well, yes and no; if your belief is really about anything, then you’ll be able to specify, at the very least, a ridiculous hypothetical experiment that would give you information about how correct you are, or a ridiculous hypothetical bet that would give you an incentive to optimally solve a more well-defined version of the problem. And if you’re working with a problem where it’s at all unclear how to do this, it is probably best to backtrack and ask what problem you’re trying to solve, why you’re asking the question in the first place. So when in doubt, ask for decisions rather than probabilities. In the end, the point (aside from signaling) of believing things is (1) to allow you to effectively optimize reality for the things you care about, and (2) to allow you to be surprised by some possible experiences and not others so you get feedback on how well you’re doing. If a belief does not do either of those things, I’d hesitate to call it a belief at all; yet that is what the original version of the Sleeping Beauty problem asks you to do.
Now, it does seem to me that following the usual rules of probability theory (the ones that tend to generate optimal bets in that strange land where intergalactic superintelligences aren’t regularly making copies of you and scientists aren’t knocking you out and erasing your memory) tells Sleeping Beauty to assign .5 credence to the proposition that the coin landed on heads. Before the experiment has started, Sleeping Beauty already knows what she’s going to experience — waking up and pondering probability — so if she doesn’t already believe with 2/3 probability that the coin will land on tails (which would be a strange thing to believe about a fair coin), then she can’t update to that after experiencing what she already knew she was going to experience. But in the original problem, when she is asked “What is your credence now for the proposition that our coin landed heads?”, a much better answer than “.5” is “Why do you want to know?”. If she knows how she’s being graded, then there’s an easy correct answer, which isn’t always .5; if not, she will have to do her best to guess what type of answer the experimenters are looking for; and if she’s not being graded at all, then she can say whatever the hell she wants (acceptable answers would include “0.0001,” “3/2,” and “purple”).
I’m not sure if there is more to it than that. Presumably the “should” in “What subjective probability should I assign x?” isn’t a moral “should,” but more of an “if-should” (as in “If you want x to happen, you should do y”), and if the question itself seems confusing, that probably means that under the circumstances, the implied “if” part is ambiguous and needs to be made explicit. Is there some underlying true essence of probability that I’m neglecting? I don’t know, but I am pretty sure that even if there were one, it wouldn’t necessarily be the thing we’d care about knowing in these types of problems anyway. You want to make optimal use of the information available to you, but it has to be optimal for something.
I think this principle should help to clarify other anthropic problems. For example, suppose Omega tells you that she just made an exact copy of you and everything around you, enough that the copy of you wouldn’t be able to tell the difference, at least for a while. Before you have a chance to gather more information, what probability should you assign to the proposition that you yourself are the copy? The answer is non-obvious, given that there already is a huge and potentially infinite number of copies of you, and it’s not clear how adding one more copy to the mix should affect your belief about how spread out you are over what worlds. On the other hand, if you’re Dr. Evil and you’re in your moon base preparing to fire your giant laser at Washington, DC when you get a phone call from Austin “Omega” Powers, and he tells you that he has made an exact replica of the moon base on exactly the spot at which the moon laser is aimed, complete with an identical copy of you (and an identical copy of your identical miniature clone) receiving the same phone call, and that its laser is trained on your original base on the moon, then the decision is a lot easier: hold off on firing your laser and gather more information or make other plans. Without talking about the “probability” that you are the original Dr. Evil or the copy or one of the potentially infinite Tegmark duplicates in other universes, we can simply look at the situation from the outside and see that if you do fire your laser then you’ll blow both of yourselves up, and that if you don’t fire your laser then you have some new competitors at worst and some new allies at best.
So: in problems where you are making one judgment that may be evaluated any number of times, possibly zero, and where you won’t have a chance to update between those evaluations (e.g. because multiple copies of you will each act on that one judgment, or because your memory will be erased between evaluations), just ask for decisions and leave probabilities out of it to whatever extent possible.
In a followup post, I will generalize this point somewhat and demonstrate that it helps solve some problems that remain confusing even when they specify a payoff structure.