Consider, then, the Sleeping Beauty problem with duplication instead of memory-erasure (i.e., a duplicate is made of SB if the coin lands tails). Now you can't add their utilities together anymore. At what probability (descending from 1 to 0) should a newly-woken SB start taking the bet that they're in the Tails world?
OK, if I'm interpreting this right, you mean to say that Sleeping Beauty is put to sleep, and then a coin is flipped. If it comes up tails, she is duplicated; if it comes up heads, nothing additional is done. Then all copies of Sleeping Beauty are woken up. What probability should any particular copy of Sleeping Beauty assign that the coin came up tails? If this is not the question you're asking, please clarify for me. I know you mentioned betting, but let's just base this on log score and say the return is in utils, so that there isn't any ambiguity. Since you're saying they don't add utilities, I'm also going to assume you mean each copy of Sleeping Beauty only cares about herself, locally.
So, given all of that, I don't see how the answer is anything but 1/2. The coin is already flipped, and fell according to the standard laws of physics. Being split or not doesn't do anything to the coin. Since each copy only cares about herself locally, in fact, why would the answer change? You might as well not copy Sleeping Beauty at all in the tails world, because she doesn't care about her copies. Her answer is still 1/2 (unless of course she knew the coin was weighted, etc.).
I mean, think about it this way. Suppose an event X was about to happen. You are put to sleep. If X happens, 10,000 copies of you are made and put into green rooms, and you are put into a red room. If X does not happen, 10,000 copies of you are made and put into red rooms, and you are put into a green room. Then all copies of you wake up. If I was 99.9% sure beforehand that X was going to happen and woke up in a red room, I'd be 99.9% sure that when I exited that room, I'd see 10,000 copies of me leaving green rooms. And if I woke up in a green room, I'd be 99.9% sure that when I exited that room, I'd see 9,999 copies of me leaving green rooms, and 1 copy of me leaving a red room. Copying me doesn't go back in time and change what happened. This reminds me of the discussion on Ultimate Newcomb's Problem, where IIRC some people thought you could change the prime-ness of a number by how you made a choice. That doesn't work there, and it doesn't work here, either.
From the outside, though, there isn't a single right answer: you could count observer-moments in different ways and get different answers. But from the inside there is a right answer, because in real life there's only what actually happens. That's what I was trying to get at.
Now I expect I may have misinterpreted your question? But at least tell me if you think I answered my own question correctly, if it wasn't the same as yours.
You answered the correct question. (yay)
Ok, so you don't think that I can travel back in time to change the probability of a past event? How about this problem: I flip a coin. If it comes up heads, I put a white stone into a bag; if it comes up tails, I put one white stone and one black stone into the bag.
You reach into the bag and pull out a stone. It is white. From this, you infer that you are twice as likely to be in heads-world as in tails-world. Have you gone back in time and changed the coin?
No - you have not affected the coin ...
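The 2:1 figure here is an ordinary Bayesian update over the two possible worlds; a quick sketch in Python (variable names are my own):

```python
from fractions import Fraction

# The two possible worlds and the stones the bag contains in each:
# heads -> {white}; tails -> {white, black}
worlds = {
    "heads": ["white"],
    "tails": ["white", "black"],
}
prior = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}

# Likelihood of drawing a white stone in each world
likelihood = {w: Fraction(worlds[w].count("white"), len(worlds[w])) for w in worlds}

# Bayes: posterior proportional to prior times likelihood
unnorm = {w: prior[w] * likelihood[w] for w in worlds}
total = sum(unnorm.values())
posterior = {w: p / total for w, p in unnorm.items()}
# posterior["heads"] == Fraction(2, 3): heads is twice as likely as tails
```

Conditioning on the white stone eliminates nothing in heads-world and half the draws in tails-world; the coin itself is untouched.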
(Crossposted from my blog)
I've been developing an approach to anthropic questions that I find less confusing than others, which I call Anthropic Atheism (AA). The name is a snarky reference to the ontologically basic status of observers (souls) in other anthropic theories. I'll have to explain myself.
We'll start with what I call the “Sherlock Holmes Axiom” (SHA), which will form the epistemic background for my approach: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
Which I reinterpret as “Reason by eliminating those possibilities inconsistent with your observations. Period.” I use this as a basis of epistemology. Basically, think of all possible world-histories, assign probability to each of them according to whatever principles (e.g. Occam’s razor), eliminate inconsistencies, and renormalize your probabilities. I won’t go into the details, but it turns out that probability theory (e.g. Bayes’ theorem) falls out of this just fine when you translate P(E|H) as “the portion of possible worlds consistent with H that predict E”. So it’s not really any different, but using SHA as our basis, I find certain confusing questions less confusing, and certain unholy temptations less tempting.

With that out of the way, let’s have a look at some confusing questions. First up is the Doomsday Argument. From La Wik, the gist: treat your own birth rank as a random sample from all humans who will ever be born; then with 95% confidence you are not among the first 5%, which puts an upper bound on how many humans are still to come.
The article goes on to claim that “There is a 95% chance of extinction within 9120 years.” Hard to refute, but nevertheless it makes one rather uncomfortable that the mere fact of one’s existence should have predictive consequences.
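For reference, the 9120-year figure falls out of simple arithmetic under the standard assumed inputs (roughly 60 billion humans born so far, population stabilizing at 10 billion, 80-year life expectancy):

```python
# Assumed inputs (the standard ones quoted alongside the argument):
past_births = 60e9            # humans born so far
stable_population = 10e9      # assumed long-run population
life_expectancy = 80          # years

# With 95% confidence your birth rank is not in the first 5% of all humans
# ever born, so the total number of births N satisfies N < past_births / 0.05.
max_total_births = past_births / 0.05
max_future_births = max_total_births - past_births     # 1.14 trillion

births_per_year = stable_population / life_expectancy  # 125 million per year
years_left = max_future_births / births_per_year       # ~9120 years
```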
In response, Nick Bostrom formulated the “Self Indication Assumption”, which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.” Applied to the doomsday argument, it says that you are just as likely to exist in 2014 in a world where humanity grows up to create a glorious everlasting civilization, as one where we wipe ourselves out in the next hundred years, so you can’t update on that mere fact of your existence. This is comforting, as it defuses the doomsday argument.
By contrast, the Doomsday argument is the consequence of the “Self Sampling Assumption”, which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.”
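A toy calculation makes the difference between the two assumptions concrete (the hypotheses and numbers here are invented for illustration):

```python
# Two made-up hypotheses about the total number of humans who will ever
# live, with equal prior probability; your birth rank is some fixed n.
N = {"doom": 2e11, "glory": 2e14}
prior = {"doom": 0.5, "glory": 0.5}

# SSA: you are a random sample from the actual observers, so the
# likelihood of having any particular birth rank under H is 1 / N[H].
ssa = {h: prior[h] / N[h] for h in N}
z = sum(ssa.values())
ssa = {h: p / z for h, p in ssa.items()}   # doom ~ 0.999: the Doomsday shift

# SIA: additionally weight each hypothesis by how many observers it
# contains; the 1/N sampling factor cancels and the prior is untouched.
sia = {h: prior[h] * N[h] / N[h] for h in N}
z = sum(sia.values())
sia = {h: p / z for h, p in sia.items()}   # doom = 0.5: no update
```

SSA's 1/N likelihood penalty is exactly what the Doomsday Argument exploits, and SIA's observer-count weighting is exactly what cancels it.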
Unfortunately for SIA, it implies that “Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.” Surely that should not follow, but clearly it does. So we can formulate another anthropic problem:
This one is called the “presumptuous philosopher”: physicists have narrowed the field down to two rival theories, one of which implies vastly more observers (say, a trillion times as many) than the other, and the philosopher declares that by SIA alone he can settle the matter from his armchair, at odds of a trillion to one. Clearly the presumptuous philosopher should not get a Nobel prize.
These questions have caused much psychological distress, and been beaten to death in certain corners of the internet, but as far as I know, few people have satisfactory answers. Wei Dai’s UDT might be satisfactory for this, and might be equivalent to my answer, when the dust settles.
So what’s my objection to these schemes, and what’s my scheme?
My objection is aesthetic; I don’t like that SIA and SSA seem to place some kind of ontological specialness on “observers”. This reminds me way too much of souls, which are nonsense. The whole “reference-class” thing rubs me the wrong way as well. Reference classes are useful tools for statistical approximation, not fundamental features of epistemology. So I'm hesitant to accept these theories.
Instead, I take the position that you can never conclude anything from your own existence except that you exist. That is, I eliminate all hypotheses that don’t predict my existence, and leave it at that, in accordance with SHA. No update happens in the Doomsday Argument; both glorious futures and impending doom are consistent with my existence, their relative probability comes from other reasoning. And the presumptuous philosopher is an idiot because both theories are consistent with us existing, so again we get no relative update.
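The eliminate-and-renormalize rule is mechanical enough to write down; here is a minimal sketch (the worlds and weights are made up for illustration):

```python
def sha_update(worlds, consistent):
    """Eliminate possible worlds inconsistent with observation; renormalize.

    worlds: dict mapping world -> prior probability
    consistent: predicate that is True iff a world matches what we observed
    """
    kept = {w: p for w, p in worlds.items() if consistent(w)}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

# Made-up worlds: observing wet grass eliminates the dry worlds, and
# conditional probability falls out of the renormalization.
worlds = {"rain+wet": 0.3, "rain+dry": 0.0, "sun+wet": 0.1, "sun+dry": 0.6}
posterior = sha_update(worlds, lambda w: w.endswith("wet"))
# posterior: {"rain+wet": 0.75, "sun+wet": 0.25} (up to float rounding)
```

Note the rule never asks which observer you are within a world; it only asks which worlds are consistent with what was observed.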
By reasoning purely from consistency of possible worlds with observations, SHA gives us a reasonably principled way to just punt on these questions. Let’s see how it does on another anthropic question, the Sleeping Beauty Problem: Beauty is put to sleep on Sunday and a fair coin is flipped. If it comes up heads, she is woken once, on Monday. If it comes up tails, she is woken on Monday and again on Tuesday, with her memory of the Monday waking erased in between. On each waking, she is asked for her credence that the coin came up heads.
SHA says that the coin came up heads in half of the worlds, and no further update happens based on existence. I'm slightly uncomfortable with this, because SHA is cheerfully biting a bullet that has confused many philosophers. However, I see no reason not to bite this bullet; it doesn’t seem to have any particularly controversial implications for actual decision making. If she is paid $1 for each correct guess, for example, she'll say that she thinks the coin came up tails: that earns her $2 in the half of runs where the coin lands tails, whereas guessing heads would earn only $1 in the half where it lands heads. If she’s paid only on Monday, she’s indifferent between the options, as she should be.
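The payoff claim is easy to verify with a quick expected-value calculation (a sketch of the standard betting setup, with $1 per correct guess):

```python
def expected_payoff(guess, pay_only_monday=False):
    """Expected dollars per run of the experiment, $1 per correct guess."""
    heads_paid_wakings = 1                            # heads: woken Monday only
    tails_paid_wakings = 1 if pay_only_monday else 2  # tails: Monday (+ Tuesday)
    ev_heads = heads_paid_wakings if guess == "heads" else 0
    ev_tails = tails_paid_wakings if guess == "tails" else 0
    return 0.5 * ev_heads + 0.5 * ev_tails

# Paid at every waking: guessing tails earns 1.0 vs 0.5 for heads.
# Paid only on Monday: both guesses earn 0.5, so she is indifferent.
```

The decision falls out of the payoff structure alone, with no need to assign a credence "between observers".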
What if we modify the problem slightly, and ask sleeping beauty for her credence that it’s Monday? That is, her credence that “it” “is” Monday. If the coin came up heads, there is only Monday, but if it came up tails, there is a Monday observer and a Tuesday observer. AA/SHA reasons purely from the perspective of possible worlds, and says that Monday is consistent with observations, as is Tuesday, and refuses to speculate further on which “observer” among possible observers she “is”. Again, given an actual decision problem with an actual payoff structure, AA/SHA will quickly reach the correct decision, even while refusing to assign probabilities “between observers”.
I'd like to say that we've casually thrown out probability theory when it became inconvenient, but we haven’t; we've just refused to answer a meaningless question. The meaninglessness of indexical uncertainty becomes apparent when you stop believing in the specialness of observers. It’s like asking “What’s the probability that the Sun rather than the Earth?”. That the Sun what? If you meant which one exists, they both do; but maybe you meant something else. Want to know which one this here comet is going to hit? Sure, I'll answer that, but these generic “which one” questions are meaningless.
Not that I'm familiar with UDT, but this really is starting to remind me of UDT. Perhaps it even is part of UDT. In any case, Anthropic Atheism seems to easily give intuitive answers to anthropic questions. Maybe it breaks down on some edge case, though. If so, I'd like to see it. In the mean time, I don’t believe in observers.
ADDENDUM: As Wei Dai, DanielLC, and Tyrrell_McAllister point out below, it turns out this doesn't actually work. The objection is that by refusing to include the indexical hypothesis, we end up favoring universes with more variety of experiences (because they have a high chance of containing *our* experiences) and sacrificing the ability to predict much of anything. Oops. It was fun while it lasted ;)