
The fairness of the Sleeping Beauty

0 MrMind 07 July 2015 08:25AM

This post will attempt (yet another) analysis of the Sleeping Beauty problem, in terms of Jaynes' framework of "probability as extended logic" (aka objective Bayesianism).

TL;DR: The Sleeping Beauty problem reduces to interpreting the sentence “a fair coin is tossed”: it can mean either that neither result of the toss is favoured, or that the coin toss is not influenced by anthropic information, but not both at the same time. Fairness is a property in the mind of the observer that must be further specified: the two meanings must not be conflated.

What I hope to show is that the two standard solutions, 1/3 and 1/2 (the 'thirder' and the 'halfer' solutions), are both consistent and correct, and that the confusion arises only from an imprecise specification of the sentence "a fair coin is tossed".

The setup is given both in the Less Wrong wiki and on Wikipedia, so I will not repeat it here.

I'm going to symbolize the events in the following way: 

- It's Monday = Mon
- It's Tuesday = Tue
- The coin landed heads = H
- The coin landed tails = T
- statement "A and B" = A & B
- statement "not A" = ~A

The problem setup leads to uncontroversial attributions of logical structure:

1)    H = ~T (the coin can land only on heads or tails)

2)    Mon = ~Tue (if it's Tuesday, it cannot be Monday, and vice versa)

And of probability:

3)    P(Mon|H) = 1 (upon learning that the coin landed heads, Sleeping Beauty knows that it's Monday)

4)    P(T|Tue) = 1 (upon learning that it's Tuesday, Sleeping Beauty knows that the coin landed tails)

Using the indifference principle, we can also derive another equation.

Let's say that Sleeping Beauty is awakened and told that the coin landed tails, but nothing else. Since she has no information that distinguishes Monday from Tuesday, she should assign both events equal probability. That is:

5)    P(Mon|T) = P(Tue|T)

Which gives

6)    P(Mon & T) = P(Mon|T)P(T) = P(Tue|T)P(T) = P(Tue & T)

It's here that the thirder and halfer analyses start to diverge.

The Wikipedia article says "Guided by the objective chance of heads landing being equal to the chance of tails landing, it should therefore hold that". We know, however, that there's no such thing as 'the objective chance'.

Thus, "a fair coin will be tossed", in this context, will mean different things for different people.

The thirders interpret the sentence to mean that Beauty learns no new facts about the coin upon learning that it is Monday.

They thus make the assumption:

(TA) P(T|Mon) = P(H|Mon)

So:

7)    P(Mon & H) = P(H|Mon)P(Mon) = P(T|Mon)P(Mon) = P(Mon & T)

From 6) and 7) we have:

8)    P(Mon & H) = P(Mon & T) = P(Tue & T)

And since those three events are mutually exclusive and exhaustive (P(Tue & H) = 0), their probabilities sum to 1, so P(Mon & H) = 1/3.

And indeed from 8) and 3):

9)    1/3 =  P(Mon & H) = P(Mon|H)P(H) = P(H)

So that, under TA, P(H) = 1/3 and P(T) = 2/3.

Notice also that, since P(H|Mon) + P(T|Mon) = 1 (if it's Monday, the coin landed either heads or tails), TA gives P(H|Mon) = P(T|Mon) = 1/2.

The thirder analysis of the Sleeping Beauty problem is thus one in which "a fair coin is tossed" means "Sleeping Beauty receives no information about the coin from anthropic information".
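
As a sanity check, here is a minimal Python sketch (mine, not part of the original argument) that encodes the thirder assignment as a joint distribution over the three possible awakenings and verifies equations 3), 4), 5) and TA:

```python
# A minimal sketch, assuming Python: the thirder assignment as a joint
# distribution over the three possible awakenings, checked against 3), 4), 5), TA.
P = {("Mon", "H"): 1/3, ("Mon", "T"): 1/3, ("Tue", "T"): 1/3}

def marginal(event):
    # P(event), where event is a day ("Mon"/"Tue") or a coin result ("H"/"T")
    return sum(p for (day, coin), p in P.items() if event in (day, coin))

def conditional(a, b):
    # P(a | b) for single events a and b
    joint = sum(p for (day, coin), p in P.items()
                if a in (day, coin) and b in (day, coin))
    return joint / marginal(b)

assert conditional("Mon", "H") == 1.0                       # eq. 3
assert conditional("T", "Tue") == 1.0                       # eq. 4
assert conditional("Mon", "T") == conditional("Tue", "T")   # eq. 5
assert conditional("H", "Mon") == conditional("T", "Mon")   # TA
print(marginal("H"), conditional("H", "Mon"))               # 1/3 and 1/2, as above
```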

There is, however, another way to interpret the sentence, namely the halfer analysis:

(HA) P(T) = P(H)

Here, "a fair coin is tossed" simply means that we assign no preference to either side of the coin.

Obviously, from 1):

10)  P(T) + P(H) = 1

So that, from 10) and HA)

11) P(H) = 1/2, P(T) = 1/2

But let's not stop here: let's calculate P(H|Mon).

First of all, from 3) and 11)

12) P(H & Mon) = P(H|Mon)P(Mon) = P(Mon|H)P(H) = 1/2

From 5) and 11), since P(Mon|T) = P(Tue|T) = 1/2, we also have

13) P(Mon & T) = P(Mon|T)P(T) = 1/4

But from 12) and 13) we get

14) P(Mon) = P(Mon & H) + P(Mon & T) = 1/2 + 1/4 = 3/4

So that, from 12) and 14)

15) P(H|Mon) = P(H & Mon) / P(Mon) = (1/2) / (3/4) = 2/3
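
The same kind of check works for the halfer assignment. A minimal Python sketch (again mine, not from the original argument) that verifies HA) together with 3), 4) and 5), and reproduces 14) and 15):

```python
# A minimal sketch, assuming Python: the halfer assignment HA) checked against
# 3), 4), 5), reproducing eqs. 14) and 15).
P = {("Mon", "H"): 1/2, ("Mon", "T"): 1/4, ("Tue", "T"): 1/4}

def marginal(event):
    return sum(p for (day, coin), p in P.items() if event in (day, coin))

def conditional(a, b):
    joint = sum(p for (day, coin), p in P.items()
                if a in (day, coin) and b in (day, coin))
    return joint / marginal(b)

assert marginal("H") == marginal("T")                       # HA
assert conditional("Mon", "H") == 1.0                       # eq. 3
assert conditional("T", "Tue") == 1.0                       # eq. 4
assert conditional("Mon", "T") == conditional("Tue", "T")   # eq. 5
print(marginal("Mon"))                                      # 3/4, eq. 14
print(conditional("H", "Mon"))                              # 2/3, eq. 15
```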

We have seen that either P(H) = 1/2 and P(H|Mon) = 2/3 (the halfer analysis), or P(H) = 1/3 and P(H|Mon) = 1/2 (the thirder analysis).

Nick Bostrom is correct in saying that self-locating information changes the probability distribution, but this is true in both interpretations.

The Sleeping Beauty problem thus reduces to interpreting the sentence “a fair coin is tossed”: it can mean either that neither result of the toss is favoured, or that the coin toss is not influenced by anthropic information; that is, you can attribute the fairness of the coin either to the prior or to the posterior distribution.

Either P(H)=P(T) or P(H|Mon)=P(T|Mon), but both at the same time is not possible.
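
A quick way to see this, as a Python sketch of my own: impose HA together with 3) and 5), and TA is forced to fail:

```python
# A quick sketch, assuming Python: impose HA together with eqs. 3) and 5),
# then check whether TA can still hold. It cannot.
P_H = 0.5                        # HA: P(H) = P(T) = 1/2
P_Mon_and_H = 1.0 * P_H          # eq. 3: P(Mon|H) = 1
P_Mon_and_T = 0.5 * (1 - P_H)    # eq. 5: P(Mon|T) = P(Tue|T) = 1/2
P_Mon = P_Mon_and_H + P_Mon_and_T
print(P_Mon_and_H / P_Mon, P_Mon_and_T / P_Mon)   # 2/3 vs 1/3: TA fails
```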

If probability were a physical property of the coin, then so would be its fairness. And since the coin's physical behaviour exhibits both kinds of indifference (balance between its sides, and independence from the future), the two probability assignments would then be equivalent.

That this is not the case simply means that fairness is a property in the mind of the observer, one that must be further specified, since the two meanings cannot be conflated.

Anthropic Decision Theory VI: Applying ADT to common anthropic problems

3 Stuart_Armstrong 06 November 2011 11:50AM

A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I'll be presenting its arguments and results in this and previous posts 1 2 3 4 5 6.

Having presented ADT previously, I'll round off this mini-sequence by showing how it behaves with common anthropic problems, such as the Presumptuous Philosopher, Adam and Eve problem, and the Doomsday argument.

The Presumptuous Philosopher

The Presumptuous Philosopher was introduced by Nick Bostrom as a way of pointing out the absurdities in SIA. In the setup, the universe either has a trillion observers, or a trillion trillion trillion observers, and physics is indifferent as to which one is correct. Some physicists are preparing to do an experiment to determine the correct universe, until a presumptuous philosopher runs up to them, claiming that his SIA probability makes the larger one nearly certainly the correct one. In fact, he will accept bets at a trillion trillion to one odds that he is in the larger universe, repeatedly defying even strong experimental evidence with his SIA probability correction.

What does ADT have to say about this problem? Implicitly, when the problem is discussed, the philosopher is understood to be selfish towards any putative other copies of himself (similarly, Sleeping Beauty is often implicitly assumed to be selfless, which may explain the divergence of intuitions that people have on the two problems). Are there necessarily other similar copies? Well, in order to use SIA, the philosopher must believe that there is nothing blocking the creation of presumptuous philosophers in the larger universe; for if there were, the odds would shift away from the larger universe (in the extreme case when only one presumptuous philosopher is allowed in any universe, SIA finds them equiprobable). So the expected number of presumptuous philosophers in the larger universe is a trillion trillion times greater than the expected number in the small universe.
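
As a rough numerical illustration (mine, not from the paper) of where those odds come from: SIA weights each hypothesis by its expected number of observers, so with a trillion versus a trillion trillion trillion observers the posterior odds in favour of the large universe are about 10^24 to 1:

```python
# A rough sketch, assuming Python: SIA weights each hypothesis by its expected
# number of observers, which is where the trillion-trillion-to-one odds come from.
N_SMALL = 10**12   # a trillion observers
N_LARGE = 10**36   # a trillion trillion trillion observers

odds_large_to_small = N_LARGE / N_SMALL
p_large = N_LARGE / (N_LARGE + N_SMALL)

print(f"odds for the large universe: {odds_large_to_small:.0e} : 1")   # 1e+24 : 1
print(f"P(large | SIA) = {p_large}")                                   # ~ 1.0
```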

continue reading »

Anthropic Decision Theory V: Linking and ADT

1 Stuart_Armstrong 05 November 2011 01:31PM

A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I'll be presenting its arguments and results in this, subsequent, and previous posts 1 2 3 4 5 6.

Now that we've seen what the 'correct' decision is for various Sleeping Beauty Problems, let's see a decision theory that reaches the same conclusions.

 

Linked decisions

Identical copies of Sleeping Beauty will make the same decision when faced with the same situation (technically true until quantum and chaotic effects cause a divergence between them, but most decision processes will not be sensitive to random noise like this). Similarly, Sleeping Beauty and the random man on the street will make the same decision when confronted with a twenty pound note: they will pick it up. However, while we could say that the first situation is linked, the second is coincidental: were Sleeping Beauty to refrain from picking up the note, the man on the street would not so refrain, while her copy would.

The above statement brings up subtle issues of causality and counterfactuals, a deep philosophical debate. To sidestep it entirely, let us recast the problem in programming terms, seeing the agent's decision process as a deterministic algorithm. If agent α is an agent that follows an automated decision algorithm A, then if A knows its own source code (by quining for instance), it might have a line saying something like:

Module M: If B is another algorithm, belonging to agent β, identical with A ('yourself'), assume A and B will have identical outputs on identical inputs, and base your decision on this.
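
Here is a toy Python sketch of the idea (mine, not from the paper), with bytecode comparison standing in for the quining or source-code inspection described above; the decision procedures `beauty`, `beauty_copy` and `man_on_street` are hypothetical, illustrating the linked-versus-coincidental distinction from the twenty pound note example:

```python
# A toy sketch, assuming Python: two procedures are treated as linked iff they
# are literally the same algorithm, and so give identical outputs on identical
# inputs; agreement between different algorithms remains coincidental.
def is_linked(a, b):
    return (a.__code__.co_code, a.__code__.co_consts) == (
        b.__code__.co_code, b.__code__.co_consts)

def beauty(note_on_street):         # Sleeping Beauty's decision procedure
    return "pick it up" if note_on_street else "walk on"

def beauty_copy(note_on_street):    # an identical copy of her procedure
    return "pick it up" if note_on_street else "walk on"

def man_on_street(note_on_street):  # same choices, but a different procedure
    if not note_on_street:
        return "walk on"
    return "pick it up"

print(is_linked(beauty, beauty_copy))    # True: their decisions are linked
print(is_linked(beauty, man_on_street))  # False: agreement is merely coincidental
```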

continue reading »

Anthropic Decision Theory IV: Solving Selfish and Average-Utilitarian Sleeping Beauty

0 Stuart_Armstrong 04 November 2011 10:55AM

A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I'll be presenting its arguments and results in this, subsequent, and previous posts 1 2 3 4 5 6.

In the previous post, I looked at a decision problem when Sleeping Beauty was selfless or a (copy-)total utilitarian. Her behaviour was reminiscent of someone following SIA-type odds. Here I'll look at situations where her behaviour is SSA-like.

Altruistic average utilitarian Sleeping Beauty

In the incubator variant, consider the reasoning of an Outside/Total agent who is an average utilitarian (and there are no other agents in the universe apart from the Sleeping Beauties).

"If the various Sleeping Beauties decide to pay £x for the coupon, they will make -£x in the heads world. In the tails world, they will each make £(1-x) each, so an average of £(1-x). This give me an expected utility of £0.5(-x+(1-x))= £(0.5-x), so I would want them to buy the coupon for any price less than £0.5."

And this will then be the behaviour the agents will follow, by consistency. Thus they would be behaving as if they were following SSA odds, and putting equal probability on the heads versus tails world.
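
A minimal Python sketch (mine, not from the paper) of the quoted reasoning, assuming the incubator variant with one Beauty in the heads world and two in the tails world, each offered a coupon paying £1 on tails at price £x:

```python
# A minimal sketch, assuming Python: the quoted average-utilitarian reasoning,
# with one Beauty in the heads world and two Beauties in the tails world.
def expected_average_utility(x):
    heads_world = -x                          # one Beauty buys and loses £x
    tails_world = ((1 - x) + (1 - x)) / 2     # two Beauties, average gain £(1 - x)
    return 0.5 * heads_world + 0.5 * tails_world   # = £(0.5 - x)

print(expected_average_utility(0.4))   # ~ +0.1 -> buy for prices below £0.5
print(expected_average_utility(0.6))   # ~ -0.1 -> refuse above £0.5
```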

continue reading »

Anthropic Decision Theory III: Solving Selfless and Total Utilitarian Sleeping Beauty

3 Stuart_Armstrong 03 November 2011 10:04AM

A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I'll be presenting its arguments and results in this, subsequent, and previous posts 1 2 3 4 5 6.

Consistency

In order to transform the Sleeping Beauty problem into a decision problem, assume that every time she is awoken, she is offered a coupon that pays out £1 if the coin fell tails. She must then decide at what cost she is willing to buy that coupon.

The very first axiom is that of temporal consistency. If your preferences are going to predictably change, then someone will be able to exploit this, by selling you something now that they will buy back for more later, or vice versa. This axiom is implicit in the independence axiom in the von Neumann-Morgenstern axioms of expected utility, where non-independent decisions show inconsistency after partially resolving one of the lotteries. For our purposes, we will define it as:

continue reading »

Anthropic Decision Theory II: Self-Indication, Self-Sampling and decisions

6 Stuart_Armstrong 02 November 2011 10:03AM

A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I'll be presenting its arguments and results in this, subsequent, and previous posts 1 2 3 4 5 6.

In the last post, we saw the Sleeping Beauty problem, and the question was what probability a recently awoken or created Sleeping Beauty should give to the coin falling heads or tails and it being Monday or Tuesday when she is awakened (or whether she is in Room 1 or 2). There are two main schools of thought on this, the Self-Sampling Assumption and the Self-Indication Assumption, both of which give different probabilities for these events.

The Self-Sampling Assumption

The self-sampling assumption (SSA) relies on the insight that Sleeping Beauty, before being put to sleep on Sunday, expects that she will be awakened in the future. Thus her awakening grants her no extra information, and she should continue to give the same credence to the coin flip being heads as she did before, namely 1/2.

In the case where the coin is tails, there will be two copies of Sleeping Beauty, one on Monday and one on Tuesday, and she will not be able to tell, upon awakening, which copy she is. She should assume that both are equally likely. This leads to SSA:

continue reading »

Anthropic decision theory I: Sleeping beauty and selflessness

10 Stuart_Armstrong 01 November 2011 11:41AM

A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I'll be presenting its arguments and results in this and subsequent posts 1 2 3 4 5 6.

Many thanks to Nick Bostrom, Wei Dai, Anders Sandberg, Katja Grace, Carl Shulman, Toby Ord, Anna Salamon, Owen Cotton-barratt, and Eliezer Yudkowsky.

The Sleeping Beauty problem, and the incubator variant

The Sleeping Beauty problem is a major one in anthropics, and my paper establishes anthropic decision theory (ADT) through a careful analysis of it. Therefore we should start with an explanation of what it is.

In the standard setup, Sleeping Beauty is put to sleep on Sunday, and awoken again Monday morning, without being told what day it is. She is put to sleep again at the end of the day. A fair coin was tossed before the experiment began. If that coin showed heads, she is never reawakened. If the coin showed tails, she is fed a one-day amnesia potion (so that she does not remember being awake on Monday) and is reawakened on Tuesday, again without being told what day it is. At the end of Tuesday, she is put to sleep for ever. This is illustrated in the next figure:

continue reading »