I'm glad that someone engages with non-Lewisian halfism but this is clearly wrong on a very basic level. To understand why, let's consider a simpler problem:

Two coins are tossed. Then you are told the state of one of the coins, but you don't know whether it's the first coin or the second. Then you are told whether it was the first coin or the second. What should be your credence that the states of both coins are the same 1. before you were told the state of one of the coins? 2. after you were told the state of one of the coins? 3. after you were told which coin it was?

 

You are creating a related but different and also complicated problem: the Two Child Problem, which is notoriously ambiguous. "Then you are told the state of one of the coins" can have many meanings.

If I ask the experimenter "choose one of the coins randomly and tell me what it is" then I am not able to update my probability. It will still be 1/2 that the coins are the same.

If I ask the experimenter "is there at least one heads?" then I will be able to update. If they say yes I can update to 1/3, if they say no I can update to 1.

 

Frankly, it seems as if Conitzer forgot that credence can change for reasons not related to memory loss: that one can simply receive new evidence and change their credence based on it.

Conitzer's problem can be simplified further by letting Beauty flip a coin herself on Monday and Tuesday.

She wakes up Monday and flips a coin. She wakes up Tuesday and flips a coin. That's it.

After flipping a coin, what should her credence be that the coin flips are the same? 

Do you disagree now that the answer is 1/2?

I think it is clearly 1/2 precisely because there is no new evidence. The violation of the Reflection Principle is secondary. More importantly, something has gone wrong if we think she can flip a coin and update the probability of the coins being the same. 
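Because there are only four equally likely pairs of flips, this can also be checked by exact enumeration. A sketch (Python; "counting awakenings" here is just bookkeeping over the four outcomes, not a claim about which counting convention is philosophically correct):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=2))   # (Monday flip, Tuesday flip), each pair has probability 1/4

# Among awakenings at which Beauty's flip that morning is heads, how often are the two flips the same?
heads_awakenings = [(m == t) for m, t in outcomes for flip in (m, t) if flip == "H"]
print(Fraction(sum(heads_awakenings), len(heads_awakenings)))       # 1/2

# Conditioning on "at least one heads over the two days" would instead give 1/3.
at_least_one_heads = [(m == t) for m, t in outcomes if "H" in (m, t)]
print(Fraction(sum(at_least_one_heads), len(at_least_one_heads)))   # 1/3
```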

 

She didn't know the sequence in advance in the way that she knew, for example, that she is to be awakened on Monday. She made a guess and managed to guess right. The difference is that on a repetition of the probability experiment, she is awakened on Monday in every iteration of it, but the sequence of the tosses matches the one she precommitted to in only a small fraction of all iterations of the experiment.

I agree, but she doesn't get to observe the sequence of tosses in the experiment. She isn't even able to observe that a sequence of tosses happens "at least once" in the experiment. That's what Conitzer shows in his problem.

She can't update her probability based on observing a rare event C (as you have defined it), because she can't observe C in the first place.

A version without amnesia is not exactly the same situation, but something similar can happen. Suppose the experimenter will flip a coin: on heads they will flip one new sequence of 1000 coins, on tails they will flip two new sequences of 1000. I ask the experimenter "randomly choose one of the sequences and tell me the result", and they tell me the result was 1000 heads in a row. A sequence of 1000 heads in a row is more likely to have occurred at least once if they flipped two sequences. But this does not allow me to update my probability of the number of sequences, because I have not learned "there is at least one sequence of 1000 heads."
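The same structure can be simulated directly if the sequences are scaled down so the rare pattern actually shows up; here I use length-3 sequences and the pattern "all heads" as a stand-in for the 1000-heads run (a Python sketch with my own scaled-down parameters):

```python
import random

SEQ_LEN = 3              # stand-in for 1000, so "all heads" actually occurs in a simulation
TARGET = "H" * SEQ_LEN
N = 200_000

two_given_random_reveal = []   # the experimenter reveals one randomly chosen sequence
two_given_at_least_one = []    # the experimenter answers "is at least one sequence all heads?"

for _ in range(N):
    two_sequences = random.random() < 0.5                   # tails -> two sequences, heads -> one
    seqs = ["".join(random.choice("HT") for _ in range(SEQ_LEN))
            for _ in range(2 if two_sequences else 1)]

    # Protocol 1: a randomly chosen sequence is revealed and happens to be all heads.
    if random.choice(seqs) == TARGET:
        two_given_random_reveal.append(two_sequences)

    # Protocol 2: I am told whether all-heads occurred at least once.
    if TARGET in seqs:
        two_given_at_least_one.append(two_sequences)

print("P(two sequences | randomly revealed sequence is all heads) ≈",
      sum(two_given_random_reveal) / len(two_given_random_reveal))   # ≈ 0.50
print("P(two sequences | at least one all-heads sequence)         ≈",
      sum(two_given_at_least_one) / len(two_given_at_least_one))     # ≈ 0.65
```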

In one of your previous posts you said that 'What Beauty actually learns is that "she is awoken at least once"', and in this post you say "Therefore, if the Beauty can potentially observe a rare event at every awakening, for instance, a specific combination, when she observes it, she can construct the Approximate Frequency Argument and update in favor of Tails."

I think this is a mistake, because when you experience Y during Sleeping Beauty, it is not the same thing as learning that "Y at least once." See this example: https://users.cs.duke.edu/~conitzer/devastatingPHILSTUD.pdf

Conitzer's example is that 2 coins are flipped on Sunday. Beauty wakes up day 1 and sees coin 1, then wakes up day 2 and sees coin 2. When she wakes up and sees a coin, what is her credence that the coins are the same?

I think everyone would agree that the probability is 1/2. However, suppose she sees tails. If she learns "at least one tails" then the probability of "coins are the same" would be only 1/3. Therefore, even though she does see tails, she did not learn "at least one tails".

Similarly, if she observes C, that is not the same thing as "C at least once." She learned "C today", which seems like it does not allow updating any probabilities, for all the reasons you have given earlier. So rare events, such as a specific sequence of coin flips Beauty knew in advance, should still not allow probability updates.

There is new information in the first scenario, but how does it allow you to update the probability that the coins are different without thinking of today as randomly selected?

Imagine you are woken up every day, but the color of the room may be different. You are asked the probability that the coins are different.

HH: blue blue
HT: blue red
TH: red blue
TT: red red

Now you wake up and see "blue." That is new information. You now know that there is at least one "blue", and you can eliminate TT. 

However, I think everyone would agree that the probability is still 1/2. It was 1/2 to begin with, and seeing "blue" as opposed to "red", while it is new information, is not relevant to whether the coins are different.
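One way to put numbers on both halves of that claim (a sketch over the four equally likely outcomes; counting each awakening once is just one bookkeeping convention, used here only to show the contrast):

```python
from fractions import Fraction

# Day-1 / day-2 room colors for each pair of coins; you are woken on both days.
rooms = {("H", "H"): ("blue", "blue"), ("H", "T"): ("blue", "red"),
         ("T", "H"): ("red", "blue"),  ("T", "T"): ("red", "red")}

# Relative frequency of "coins different" among awakenings in a blue room.
blue_awakenings = [c1 != c2 for (c1, c2), colors in rooms.items() for color in colors if color == "blue"]
print(Fraction(sum(blue_awakenings), len(blue_awakenings)))      # 1/2

# Conditioning on the bare statement "at least one room is blue" would instead give 2/3.
at_least_one_blue = [c1 != c2 for (c1, c2), colors in rooms.items() if "blue" in colors]
print(Fraction(sum(at_least_one_blue), len(at_least_one_blue)))  # 2/3
```

So the 1/2 survives even though "at least one blue" is true and eliminates TT, which is exactly the sense in which the blue/red information is not relevant to whether the coins differ.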

Back to scenario 1:

HH: wake wake
HT: wake sleep
TH: sleep wake
TT: sleep sleep

Now you wake up. That is new information, and you can eliminate TT. But the question is, how is that relevant to the coins being different? If you are treating "today" as randomly selected from all days that exist in reality, then that would allow you to update. But if you are not treating "today" as randomly selected at all, then by what mechanism can you update?

Just going by intuition, I personally don't think you should update. In this scenario the coin doesn't need to be tossed until the morning: heads they wake you up, tails they don't. So when you wake up, you do get new information, just like in the blue/red example. But since the coins are independent of each other, how can learning about that morning's coin tell you something about the other coin you don't see? Unless you are using a random selection process in which "today" is not primitive.

I find this idea very interesting, especially since it seems to me that it gives different probabilities than most other versions of halfing. I wonder if you agree with me about how it would answer this scenario (due to Conitzer):

Two coins are tossed on Sunday. The possibilities are

HH: wake wake
HT: wake sleep
TH: sleep wake
TT: sleep sleep

When you wake up (which is not guaranteed now), what probability should you give that the coins come up differently?

According to most versions of halfing, it would be 2/3. You could say that when you wake up you learn that you wake up at least once, eliminating TT. Alternatively, you could say that when you wake up the day is selected randomly from all the days you wake up in reality. Either way you get 2/3.

However, what if we say that "today" is not selected at all from our perspective? If "today" wasn't selected at all, it can't possibly tell us anything about the other day. So it would be 1/2 probability that the coins are different.
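To make the two readings concrete (a sketch; the first computation is the standard "eliminate TT" update mentioned above, the second is the no-selection reading, where waking only tells you about today's coin):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=2))                        # four equally likely coin pairs
awake_at_least_once = [o for o in outcomes if o != ("T", "T")]  # TT is the only "never woken" case

# Reading 1: condition on "I wake up at least once", i.e. eliminate TT.
print(Fraction(sum(1 for c1, c2 in awake_at_least_once if c1 != c2),
               len(awake_at_least_once)))                       # 2/3

# Reading 2: waking tells you only that *today's* coin is heads; the other day's
# coin is independent of it, so the coins differ exactly when the other coin is tails.
print(Fraction(sum(1 for other in "HT" if other == "T"), 2))    # 1/2
```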

The weird thing about this is that if we change the situation into:

HH: wake wake
HT: wake sleep
TH: sleep wake
TT: wake wake

Now it seems like we are back to the original Sleeping Beauty problem, where again we would say 1/2 for the probability that the coins are different. How can the probability not change despite TT: sleep sleep turning into TT: wake wake?

And yet, from my own perspective, I could still say that "today" was not selected. So it still gives me no information about whether the other coin is different, and the probability has to stay at 1/2.

I'm talking about the method you're using. It looks like when you wake up and experience y you are treating that as equivalent to "I experience y at least once."

This method is generally incorrect, as shown in the example. Waking up and experiencing y is not necessarily equivalent to "I experience y at least once."

If you yourself believe the method is incorrect when y is "flip heads", why should we believe it is correct when y is something else?

The question is about what information you actually have.

In the linked example, it may seem that you have precisely the information "there is at least one heads." But if you condition on that you get the wrong answer. The explanation is that, in this type of memory loss situation, waking up and experiencing y is not equivalent to "I experience y at least once." When you wake up and experience y you do know that you must experience y on either Monday or Tuesday, but your information is not equivalent to that statement.

If you asked on Sunday "will I experience y at least once?" then the answer would be relevant. But if we nailed down the precise information gained from waking up and experiencing y, it would be irrelevant.

I'm referring to an example from here: https://users.cs.duke.edu/~conitzer/devastatingPHILSTUD.pdf where you do wake up both days.

Your argument seemed similar, but I may be misunderstanding:

"Treating these and other differences as random, the probability of Beauty having at some time the exact memories and experiences she has after being woken this time is twice as great if the coin lands Tails than if the coin lands Heads, since with Tails there are two chances for these experiences to occur rather than only one."

It sounds like you are conditioning on "at least once such experiences occur". That is, if Beauty wakes up and flips a coin, getting heads, and that's the only experience she has so far, she will condition on "at least one heads." This doesn't seem generally correct, as the linked example covers. Doesn't it also mean that, even before the coin flip, she would know exactly how she was going to update her probability afterward, regardless of result?

Perhaps the issue here is that if you wake up and flip heads, that isn't the same thing as if, on Sunday, you asked "will I flip at least one heads?" and got an affirmative answer. The latter is relevant to the number of wakings. The former is not.

I don't understand the reasoning for using irrelevant information.

If you are saying that there is twice the probability of experiencing y "at least once" on tails, doesn't that fall to the same argument Conitzer gave against halfers? His example was that you wake up both days and flip a coin. If you flip heads, what is the probability that both flips are the same? You are twice as likely to experience heads at least once if the coin tosses are different. But it is irrelevant. The probability of "both the same" is still 1/2.
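Spelled out over the four equally likely pairs of flips (a sketch of the arithmetic in the paragraph above):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=2))   # (Monday flip, Tuesday flip)

# "At least one heads" really is twice as likely when the flips differ...
p_h_given_diff = Fraction(sum(1 for m, t in outcomes if m != t and "H" in (m, t)),
                          sum(1 for m, t in outcomes if m != t))   # 1
p_h_given_same = Fraction(sum(1 for m, t in outcomes if m == t and "H" in (m, t)),
                          sum(1 for m, t in outcomes if m == t))   # 1/2
print(p_h_given_diff, p_h_given_same)

# ...yet among the awakenings at which the flip actually is heads, "both the same" still holds half the time.
heads_awakenings = [(m == t) for m, t in outcomes for flip in (m, t) if flip == "H"]
print(Fraction(sum(heads_awakenings), len(heads_awakenings)))      # 1/2
```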

On the other hand, in reality there might be some relevant information (such as noticeable aging, hunger, etc.), but the problem is meant to exclude that.