But then, in the Sleeping Beauty problem, you use a different unspecified prior, where each person produced gets an equal weight, even if this means giving different weights to different states of the world.
I'm really confused. What question are you asking? If you're asking what probability an outsider should assign to the coin coming up heads, the answer's 1/2, if that outsider doesn't have any information about the coin. nyan_sandwich implies this when ey says
(this way she gets $2 half the time instead of $1 half the time for heads).
If you're asking what probability Sleeping Beauty should assign, that depends on what the consequences of making such an assignment are. nyan_sandwich makes this clear, too.
And, finally, if you're asking for an authoritative "correct" subjective probability for Sleeping Beauty to have, I just don't think that notion makes sense, as probability is in the mind. In fact, in this case, if you pushed me I'd say 1/2: as soon as the coin is flipped, it lands, the position is recorded, and Sleeping Beauty's waking up and falling asleep in the future can't go back and change it. Though I'm not sure that reasoning makes sense even here, and I know similar reasoning won't make sense in more complicated cases. In the end it all comes down to how you count, but I'm not sure we have any disagreement about what actually happens during the experiment.
I say (and I think nyan_sandwich would agree), "Don't assign subjective probabilities in situations where it doesn't make a difference." This would be like asking if a tree that fell in a forest made a sound. If you count one way, you get one answer, and if you count another way, you get another. To actually be able to pay off a bet in this situation you need to decide how to count first - that is what differentiates making probability assignments here from other, "standard" situations.
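The two countings at issue can be made concrete with a small sketch (the world/awakening enumeration here is my own illustration, not anything from the thread):

```python
from fractions import Fraction

# Counting equally-weighted coin outcomes: a heads-world and a tails-world.
p_heads_by_worlds = Fraction(1, 2)

# Counting equally-weighted awakenings: heads-Monday, tails-Monday,
# tails-Tuesday -- three in all, only one of which follows heads.
awakenings = ["heads-Monday", "tails-Monday", "tails-Tuesday"]
p_heads_by_awakenings = Fraction(
    sum(1 for a in awakenings if a.startswith("heads")), len(awakenings)
)
# 1/2 vs. 1/3: the "answer" depends on which counting convention you fix first.
```

Both computations are internally consistent; the disagreement is entirely about which convention a bet settles on.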
I expect you disagree with something I've said here, and I'd appreciate it if you'd flesh out where. I don't necessarily expect to change your mind, and I think it's a distinct possibility you could change mine.
nyan_sandwich implies this when ey says
(this way she gets $2 half the time instead of $1 half the time for heads).
That's a good point - this line of reasoning works fine for the original Sleeping Beauty problem, and one can solve it without really worrying what Sleeping Beauty's subjective probabilities are. That is indeed similar to UDT.
Consider, then, the Sleeping Beauty problem with duplication instead of memory-erasure (i.e., a duplicate is made of SB if the coin lands tails). Now you can't add their utilities together anymore. At what probabilit...
(Crossposted from my blog)
I've been developing an approach to anthropic questions that I find less confusing than others, which I call Anthropic Atheism (AA). The name is a snarky reference to the ontologically basic status of observers (souls) in other anthropic theories. I'll have to explain myself.
We'll start with what I call the “Sherlock Holmes Axiom” (SHA), which will form the epistemic background for my approach:
Which I reinterpret as “Reason by eliminating those possibilities inconsistent with your observations. Period.” I use this as a basis of epistemology. Basically, think of all possible world-histories, assign probability to each of them according to whatever principles (e.g., Occam’s razor), eliminate inconsistencies, and renormalize your probabilities. I won’t go into the details, but it turns out that probability theory (e.g., Bayes’ theorem) falls out of this just fine when you translate
P(E|H) as “portion of possible worlds consistent with H that predict E”. So it’s not really any different, but using SHA as our basis, I find certain confusing questions less confusing, and certain unholy temptations less tempting.

With that out of the way, let’s have a look at some confusing questions. First up is the Doomsday Argument. From La Wik:
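As a toy check of this eliminate-and-renormalize recipe (the worlds and prior weights below are invented for the example, not anything from the post), the filtered-worlds posterior agrees with ordinary Bayes:

```python
from fractions import Fraction

# Enumerate (hypothesis, evidence) world-histories with prior weights,
# eliminate the ones inconsistent with the observation, renormalize.
worlds = {
    ("H", "E"): Fraction(3, 10),
    ("H", "not-E"): Fraction(1, 10),
    ("not-H", "E"): Fraction(1, 10),
    ("not-H", "not-E"): Fraction(5, 10),
}

def posterior(observation):
    consistent = {w: p for w, p in worlds.items() if w[1] == observation}
    total = sum(consistent.values())
    return {w: p / total for w, p in consistent.items()}

# Eliminate-and-renormalize on observing E:
p_h_given_e = sum(p for (h, _), p in posterior("E").items() if h == "H")

# Ordinary Bayes gives the same number: P(H|E) = P(E|H) P(H) / P(E).
p_h = Fraction(4, 10)                  # total weight of H-worlds
p_e_given_h = Fraction(3, 10) / p_h    # portion of H-worlds predicting E
p_e = Fraction(4, 10)                  # total weight of E-worlds
assert p_h_given_e == p_e_given_h * p_h / p_e   # both 3/4
```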
The article goes on to claim that “There is a 95% chance of extinction within 9120 years.” Hard to refute, but nevertheless it makes one rather uncomfortable that the mere fact of one’s existence should have predictive consequences.
In response, Nick Bostrom formulated the “Self Indication Assumption”, which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.” Applied to the doomsday argument, it says that you are just as likely to exist in 2014 in a world where humanity grows up to create a glorious everlasting civilization, as one where we wipe ourselves out in the next hundred years, so you can’t update on that mere fact of your existence. This is comforting, as it defuses the doomsday argument.
By contrast, the Doomsday argument is the consequence of the “Self Sampling Assumption”, which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.”
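The SSA/SIA contrast can be put in numbers. A minimal sketch, with invented figures roughly in the spirit of the usual doomsday presentation (a 200-billion-human world vs. a 200-trillion-human world, your birth rank around 100 billion, 50/50 prior):

```python
from fractions import Fraction

sizes = {"doom": 200, "glory": 200_000}   # total observers ever, in billions
rank = 100                                # your birth rank, in billions
prior = Fraction(1, 2)

# SSA: the chance of having your particular birth rank in a world of N
# observers is 1/N (for rank <= N), which favors the smaller world.
ssa = {w: prior * Fraction(1, N) for w, N in sizes.items() if rank <= N}
z = sum(ssa.values())
ssa = {w: p / z for w, p in ssa.items()}   # doom: 1000/1001, the doomsday shift

# SIA: additionally weight each world by its observer count N, which
# exactly cancels the 1/N factor and restores the 1/2 prior.
sia = {w: prior * Fraction(1, N) * N for w, N in sizes.items() if rank <= N}
z = sum(sia.values())
sia = {w: p / z for w, p in sia.items()}   # doom: 1/2, no update
```

The cancellation is why SIA defuses the doomsday argument, and the extra factor of N is exactly what makes the presumptuous philosopher presumptuous.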
Unfortunately for SIA, it implies that “Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.” Surely that should not follow, but clearly it does. So we can formulate another anthropic problem:
This one is called the “presumptuous philosopher”. Clearly the presumptuous philosopher should not get a Nobel prize.
These questions have caused much psychological distress, and been beaten to death in certain corners of the internet, but as far as I know, few people have satisfactory answers. Wei Dai’s UDT might be satisfactory for this, and might be equivalent to my answer, when the dust settles.
So what’s my objection to these schemes, and what’s my scheme?
My objection is aesthetic; I don’t like that SIA and SSA seem to place some kind of ontological specialness on “observers”. This reminds me way too much of souls, which are nonsense. The whole “reference-class” thing rubs me the wrong way as well. Reference classes are useful tools for statistical approximation, not fundamental features of epistemology. So I'm hesitant to accept these theories.
Instead, I take the position that you can never conclude anything from your own existence except that you exist. That is, I eliminate all hypotheses that don’t predict my existence, and leave it at that, in accordance with SHA. No update happens in the Doomsday Argument: both glorious futures and impending doom are consistent with my existence, and their relative probability comes from other reasoning. And the presumptuous philosopher is an idiot, because both theories are consistent with us existing, so again we get no relative update.
By reasoning purely from consistency of possible worlds with observations, SHA gives us a reasonably principled way to just punt on these questions. Let’s see how it does on another anthropic question, the Sleeping Beauty Problem:
SHA says that the coin came up heads in half of the worlds, and no further update happens based on existence. I'm slightly uncomfortable with this, because SHA is cheerfully biting a bullet that has confused many philosophers. However, I see no reason not to bite this bullet; it doesn’t seem to have any particularly controversial implications for actual decision making. If she is paid for each correct guess, for example, she'll say that she thinks the coin came up tails (this way she gets $2 half the time instead of $1 half the time for heads). If she’s paid only on Monday, she’s indifferent between the options, as she should be.
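The paid-per-guess payoff can be checked with a quick simulation (the $1-per-correct-guess setup is the one from the paragraph above; the code itself is just my sketch):

```python
import random

# Monte Carlo check of the betting version of Sleeping Beauty:
# heads -> asked once, tails -> asked twice, $1 per correct guess.
def expected_winnings(guess, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        coin = rng.choice(["heads", "tails"])
        wakings = 1 if coin == "heads" else 2
        if guess == coin:
            total += wakings   # $1 paid at each waking she guesses right
    return total / trials

# Always guessing tails earns about $1 per run of the experiment
# (two correct guesses on the half of runs that come up tails);
# always guessing heads earns about $0.50.
```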
What if we modify the problem slightly, and ask sleeping beauty for her credence that it’s Monday? That is, her credence that “it” “is” Monday. If the coin came up heads, there is only Monday, but if it came up tails, there is a Monday observer and a Tuesday observer. AA/SHA reasons purely from the perspective of possible worlds, and says that Monday is consistent with observations, as is Tuesday, and refuses to speculate further on which “observer” among possible observers she “is”. Again, given an actual decision problem with an actual payoff structure, AA/SHA will quickly reach the correct decision, even while refusing to assign probabilities “between observers”.
It might look like we’ve casually thrown out probability theory when it became inconvenient, but we haven’t; we’ve just refused to answer a meaningless question. The meaninglessness of indexical uncertainty becomes apparent when you stop believing in the specialness of observers. It’s like asking “What’s the probability that the Sun rather than the Earth?”. That the Sun what? The Sun and the Earth both exist, for example, but maybe you meant something else. Want to know which one this here comet is going to hit? Sure, I'll answer that, but these generic “which one” questions are meaningless.
I'm not deeply familiar with UDT, but this really is starting to remind me of it. Perhaps it even is part of UDT. In any case, Anthropic Atheism seems to easily give intuitive answers to anthropic questions. Maybe it breaks down on some edge case, though. If so, I'd like to see it. In the meantime, I don’t believe in observers.
ADDENDUM: As Wei Dai, DanielLC, and Tyrrell_McAllister point out below, it turns out this doesn't actually work. The objection is that by refusing to include the indexical hypothesis, we end up favoring universes with more variety of experiences (because they have a high chance of containing *our* experiences) and sacrificing the ability to predict much of anything. Oops. It was fun while it lasted ;)