by [anonymous]
6 min read · 13th Jan 2014

(Crossposted from my blog)

I've been developing an approach to anthropic questions that I find less confusing than others, which I call Anthropic Atheism (AA). The name is a snarky reference to the ontologically basic status of observers (souls) in other anthropic theories. I'll have to explain myself.

We'll start with what I call the “Sherlock Holmes Axiom” (SHA), which will form the epistemic background for my approach:

How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?

Which I reinterpret as “Reason by eliminating those possibilities inconsistent with your observations. Period.” I use this as a basis of epistemology. Basically, think of all possible world-histories, assign probability to each of them according to whatever principles (e.g. Occam’s razor), eliminate those inconsistent with your observations, and renormalize your probabilities. I won’t go into the details, but it turns out that probability theory (e.g. Bayes’ theorem) falls out of this just fine when you translate P(E|H) as “the portion of possible worlds consistent with H that predict E”. So it’s not really any different, but using SHA as our basis, I find certain confusing questions less confusing, and certain unholy temptations less tempting.
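
Here’s a toy sketch (in Python, with made-up worlds, predictions, and priors) of what that procedure looks like mechanically:

```python
# Toy sketch of SHA-style updating: list candidate world-histories, give each
# a prior by whatever principle you like (e.g. Occam's razor), strike out the
# ones inconsistent with what you actually observed, and renormalize the rest.
# The hypotheses, predictions, and priors below are invented for illustration.

priors = {"world_A": 0.5, "world_B": 0.3, "world_C": 0.2}
predicts = {"world_A": "red", "world_B": "green", "world_C": "red"}

observation = "red"

consistent = {h: p for h, p in priors.items() if predicts[h] == observation}
total = sum(consistent.values())
posterior = {h: p / total for h, p in consistent.items()}

print(posterior)  # {'world_A': 0.714..., 'world_C': 0.285...}
```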

With that out of the way, let’s have a look at some confusing questions. First up is the Doomsday Argument. From La Wik:

Simply put, it says that supposing the humans alive today are in a random place in the whole human history timeline, chances are we are about halfway through it.

The article goes on to claim that “There is a 95% chance of extinction within 9120 years.” Hard to refute, but nevertheless it makes one rather uncomfortable that the mere fact of one’s existence should have predictive consequences.

In response, Nick Bostrom formulated the “Self Indication Assumption”, which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.” Applied to the doomsday argument, it says that you are just as likely to exist in 2014 in a world where humanity grows up to create a glorious everlasting civilization, as one where we wipe ourselves out in the next hundred years, so you can’t update on that mere fact of your existence. This is comforting, as it defuses the doomsday argument.

By contrast, the Doomsday argument is the consequence of the “Self Sampling Assumption”, which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.”

Unfortunately for SIA, it implies that “Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.” Surely that should not follow, but clearly it does. So we can formulate another anthropic problem:

It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations seem to be roughly indifferent between these two theories. The physicists are planning on carrying out a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1…”

This one is called the “presumptuous philosopher”. Clearly the presumptuous philosopher should not get a Nobel prize.

These questions have caused much psychological distress, and been beaten to death in certain corners of the internet, but as far as I know, few people have satisfactory answers. Wei Dai’s UDT might be satisfactory for this, and might be equivalent to my answer, when the dust settles.

So what’s my objection to these schemes, and what’s my scheme?

My objection is aesthetic; I don’t like that SIA and SSA seem to place some kind of ontological specialness on “observers”. This reminds me way too much of souls, which are nonsense. The whole “reference-class” thing rubs me the wrong way as well. Reference classes are useful tools for statistical approximation, not fundamental features of epistemology. So I'm hesitant to accept these theories.

Instead, I take the position that you can never conclude anything from your own existence except that you exist. That is, I eliminate all hypotheses that don’t predict my existence, and leave it at that, in accordance with SHA. No update happens in the Doomsday Argument; both glorious futures and impending doom are consistent with my existence, their relative probability comes from other reasoning. And the presumptuous philosopher is an idiot because both theories are consistent with us existing, so again we get no relative update.

By reasoning purely from consistency of possible worlds with observations, SHA gives us a reasonably principled way to just punt on these questions. Let’s see how it does on another anthropic question, the Sleeping Beauty Problem:

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Beauty will be wakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be wakened and interviewed on Monday only. If the coin comes up tails, she will be wakened and interviewed on Monday and Tuesday. In either case, she will be wakened on Wednesday without interview and the experiment ends.

Any time Sleeping Beauty is wakened and interviewed, she is asked, “What is your belief now for the proposition that the coin landed heads?”

SHA says that the coin came up heads in half of the worlds, and no further update happens based on existence. I'm slightly uncomfortable with this, because SHA is cheerfully biting a bullet that has confused many philosophers. However, I see no reason not to bite this bullet; it doesn’t seem to have any particularly controversial implications for actual decision making. If she is paid for each correct guess, for example, she'll say that she thinks the coin came up tails (this way she gets $2 half the time instead of $1 half the time for heads). If she’s paid only on Monday, she’s indifferent between the options, as she should be.
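
Here’s a quick Monte Carlo sketch of that payoff argument (toy code, comparing the fixed strategies “always guess heads” and “always guess tails” under the two payment schemes just described):

```python
import random

# Average payoff per run of the experiment for the two fixed strategies,
# under the two payment schemes described above: paid $1 per correct answer
# at every interview, or paid only for Monday's answer.

def average_payoff(trials, pay_only_monday=False):
    totals = {"heads": 0.0, "tails": 0.0}
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        interviews = ["Monday"] if coin == "heads" else ["Monday", "Tuesday"]
        for guess in totals:
            for day in interviews:
                if pay_only_monday and day != "Monday":
                    continue
                if guess == coin:
                    totals[guess] += 1
    return {g: round(t / trials, 2) for g, t in totals.items()}

print(average_payoff(100_000))                        # ~{'heads': 0.5, 'tails': 1.0}
print(average_payoff(100_000, pay_only_monday=True))  # ~{'heads': 0.5, 'tails': 0.5}
```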

What if we modify the problem slightly, and ask sleeping beauty for her credence that it’s Monday? That is, her credence that “it” “is” Monday. If the coin came up heads, there is only Monday, but if it came up tails, there is a Monday observer and a Tuesday observer. AA/SHA reasons purely from the perspective of possible worlds, and says that Monday is consistent with observations, as is Tuesday, and refuses to speculate further on which “observer” among possible observers she “is”. Again, given an actual decision problem with an actual payoff structure, AA/SHA will quickly reach the correct decision, even while refusing to assign probabilities “between observers”.

It might look like we've casually thrown out probability theory when it became inconvenient, but we haven’t; we've just refused to answer a meaningless question. The meaninglessness of indexical uncertainty becomes apparent when you stop believing in the specialness of observers. It’s like asking “What’s the probability that the Sun rather than the Earth?”. That the Sun what? The Sun and the Earth both exist, for example, but maybe you meant something else. Want to know which one this here comet is going to hit? Sure I'll answer that, but these generic “which one” questions are meaningless.

Not that I'm familiar with UDT, but this really is starting to remind me of UDT. Perhaps it even is part of UDT. In any case, Anthropic Atheism seems to easily give intuitive answers to anthropic questions. Maybe it breaks down on some edge case, though. If so, I'd like to see it. In the mean time, I don’t believe in observers.

ADDENDUM: As Wei Dai, DanielLC, and Tyrrell_McAllister point out below, it turns out this doesn't actually work. The objection is that by refusing to include the indexical hypothesis, we end up favoring universes with more variety of experiences (because they have a high chance of containing *our* experiences) and sacrificing the ability to predict much of anything. Oops. It was fun while it lasted ;)

Instead, I take the position that you can never conclude anything from your own existence except that you exist. That is, I eliminate all hypotheses that don’t predict my existence, and leave it at that, in accordance with SHA.

This is an idea that I had considered and rejected before settling on UDT.

And the presumptuous philosopher is an idiot because both theories are consistent with us existing, so again we get no relative update.

This is wrong. Recall that both T1 and T2 are theories with finite universes and finite numbers of observers. Also, T1 and T2 are not complete hypotheses which can generate predictions, but actually classes of hypotheses, because in order to generate predictions you need initial conditions in addition to a theory. Now if you take a random hypothesis in the T1 class (i.e., the theory T1 along with some random initial conditions), it's much less likely to predict a universe that contains someone with your exact history of observations compared to a random hypothesis in the T2 class since each T2 universe contains many more observers than a T1 universe. In other words, "updating" on your observations by ruling out hypotheses that don't predict the existence of someone with your observations would cause you to rule out a much greater fraction of T1 hypotheses than T2 hypotheses, thereby causing you to update heavily in the direction of the T2 theory being correct.
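
To see the size of this effect with toy numbers (far smaller than the trillions in the thought experiment, and under the simplifying assumption that each observer's exact observation-history is an independent uniform draw from K possibilities):

```python
# Toy version of the effect described above. Assume, purely for illustration,
# that there are K possible exact observation-histories, each observer's
# history is an independent uniform draw, and T1 / T2 universes contain
# N1 / N2 observers. The chance that a randomly chosen universe of each type
# contains someone with *your* exact history is then:

K  = 10**6    # hypothetical number of possible observation-histories
N1 = 10**3    # observers per T1 universe (toy value)
N2 = 10**5    # observers per T2 universe (toy value)

p_T1_contains_me = 1 - (1 - 1 / K) ** N1
p_T2_contains_me = 1 - (1 - 1 / K) ** N2

# Ruling out universes that don't contain you keeps these fractions of each
# class, so the odds shift toward T2 by roughly their ratio.
print(round(p_T1_contains_me, 4))                      # ~0.001
print(round(p_T2_contains_me, 4))                      # ~0.0952
print(round(p_T2_contains_me / p_T1_contains_me, 1))   # ~95x in favor of T2
```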

[-][anonymous]

it's much less likely to predict a universe that contains someone with your exact history of observations compared to a random hypothesis in the T2 class since each T2 universe contains many more observers than a T1 universe.

Whoops, you are right. I'll think about that

How does UDT handle this, by the way?

How does UDT handle this, by the way?

I wrote a post on how UDT deals with the Presumptuous Philosopher, but it's been a while since I wrote that or last read it, so I can try explaining it again and hopefully offer something new.

UDT deals with decision problems, so let's assume that in T1 and T2 universes, everyone is born a UDT-using adult and is immediately offered a bet on whether they are in T1 or T2, and then they're offered the same bet again a while later after they've made some observations. We ask what initial odds they should demand, and whether they should change the odds after making those observations.

First it should be clear that it makes no sense to change the odds unless there is some way to condition the new odds on different observations (i.e., if some observations were relatively more likely in T1 than T2). If you can't condition the new odds but change your odds anyway, then all other UDT agents do the same and you might as well choose those odds to begin with, before you made any observations.

What about the initial odds? That depends on your values. A bet on whether you're in T1 or T2 can be viewed as a transfer of wealth between T1 worlds and T2 worlds. Suppose everyone is offered a bet where you win $1 if you're in T2, and lose $1 if you're in T1. UDT would reason like this: if I accept the bet, then everyone in both T1 and T2 worlds accepts, so everyone in T1 worlds loses $1 and everyone in T2 worlds gains $1. Is this trade worth it? Suppose the total "measure" (or "reality-fluid") I assign to T1 worlds and T2 worlds are equal and I'm an average utilitarian, then I'd be indifferent because I lose as much average utility in T1 worlds as I gain in T2 worlds. But if I'm a total utilitarian, then I'd accept the bet because there are many more people in a T2 world than in a T1 world and hence a lot more winners than losers.
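
Here is that accounting with toy stand-in numbers (equal measure on the two kinds of world, utility linear in dollars; the population sizes are arbitrary):

```python
# Toy stand-ins for the populations in the thought experiment, with equal
# measure 0.5 on T1-worlds and T2-worlds and utility linear in dollars.
measure_T1 = measure_T2 = 0.5
pop_T1, pop_T2 = 10**3, 10**6

# If everyone accepts the bet, each T1 person loses $1 and each T2 person
# gains $1. Compare the change in average vs. total welfare across worlds.
delta_average = measure_T1 * (-1) + measure_T2 * (+1)
delta_total = measure_T1 * (-1 * pop_T1) + measure_T2 * (+1 * pop_T2)

print(delta_average)  # 0.0      -> the average utilitarian is indifferent
print(delta_total)    # 499500.0 -> the total utilitarian takes the bet
```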

So UDT can give you either SIA-like answers or non-SIA-like answers depending on your values. People seem to have both average-utilitarian-like intuitions and total-utilitarian-like intuitions, depending on what thought experiments you present to them (and who you ask), so according to UDT it's not surprising that they would find SIA intuitive some times and not intuitive other times.

Your next question might be, what if I'm not a utilitarian of any sort, but have selfish values? Well, it's actually not clear what "selfish values" means when talking about UDT agents, or what decision theory can handle selfish values better. I wrote a post about that as well.

[-][anonymous]

A bet on whether you're in T1 or T2 can be viewed as a transfer of wealth between T1 worlds and T2 worlds.

This is a good framing

So UDT can give you either SIA-like answers or non-SIA-like answers depending on your values.

I feel cheated. I guess it could be arbitrary like this, but I'll have to think about it. Grumble grumble. I was hoping for a grand resolution.

I would argue that selfish values should look like a state of information like "I am a person, I like cookies, here is a bet about cookies."

Could you elaborate on the implications of that statement? I'm not following what you're trying to say.

Rather than "I am a person," let's substitute "I am painted green."

Suppose we start out with ten people, none of them painted green.

A coin is flipped. If heads, one person is painted green. If tails, nine people are painted green.

If you observe that you have been painted green, what is your probability that the coin landed heads? Bayes' rule time!

P(heads | green) = P(heads) * P(green | heads) / P(green) = 0.5 * 0.1 / 0.5 = 0.1. Observing that you have been painted green, you conclude that the coin is more likely to be tails. Simple Bayesian updating.

In this simple problem, upon learning that you have been painted green, you give equal weight to each green person, weighted by the prior probability of the coin.
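
Here’s an exact check of that update by brute-force enumeration (assuming the painted people are chosen uniformly at random):

```python
from fractions import Fraction
from itertools import combinations

# Exact enumeration of the example above: 10 people, fair coin, heads -> one
# random person painted green, tails -> nine random people painted green.
# Condition on "person 0 (me) is green" and compute P(heads).

people = range(10)
half = Fraction(1, 2)

def p_me_green(k):
    """Probability that person 0 is among k people chosen uniformly at random."""
    subsets = list(combinations(people, k))
    return Fraction(sum(0 in s for s in subsets), len(subsets))

joint_heads = half * p_me_green(1)   # 1/2 * 1/10
joint_tails = half * p_me_green(9)   # 1/2 * 9/10

print(joint_heads / (joint_heads + joint_tails))  # 1/10 -- tails is favored
```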

If she is paid for each correct guess, for example, she'll say that she thinks the coin came up tails (this way she gets $2 half the time instead of $1 half the time for heads). If she’s paid only on Monday, she’s indifferent between the options, as she should be.

This clicked so hard it almost hurt. Indeed Bayesians should be willing to bet on their beliefs; so the rational belief depends on how specifically the bet is resolved. In other words, what specifically happens to the Sleeping Beauty based on her beliefs? (And if the beliefs have absolutely no consequence, what's the point of getting them right?)

the rational belief depends on how specifically the bet is resolved

No. Bayesianism prescribes believing things in proportion to their likelihood of being true, given the evidence observed; it has nothing to do with the consequences of those beliefs for the believer. Offering odds cannot change the way the coin landed. If I expect a net benefit of a million utilons for opining that the Republicans will win the next election, I will express that opinion, regardless of whether I believe it or not; I will not change my expectations about the electoral outcome.

There is probability 0.5 that she will be woken once and probability 0.5 that she will be woken twice. If the coin comes up tails she will be woken twice and will receive two payouts for correct guesses. It is therefore in her interests to guess that the coin came up tails when her true belief is that P(T)=0.5; it is equivalent to offering a larger payout for guessing tails correctly than for guessing heads correctly.

Suppose Sleeping Beauty secretly brings a coin into the experiment and flips it when she wakes up. There are now six possible combinations of heads and tails, each with its own probability:

HH: 1/4

HT: 1/4

THH: 1/8

THT: 1/8

TTH: 1/8

TTT: 1/8

When she wakes up and flips the coin, she notices it lands on heads. This eliminates two of the possibilities. Now renormalizing their values:

HH: 2/5

HT: 0

THH: 1/5

THT: 1/5

TTH: 1/5

TTT: 0

She can conclude that the coin landed on tails with 60% probability, rather than the normal 50% probability. She could flip the coins more times. Doing so, she will asymptotically approach 2/3 probability that it landed on tails.

Perhaps she gets caught with the coin, and has it taken away. This isn't a problem. She can just look at dust specks, or any other thing she can't predict and that won't be consistent between awakenings. For all intents and purposes, she's using SSA. There's a difference if she's woken so many times that it's likely she'll make exactly the same observations more than once, but that takes her being woken on the order of 10^million times.
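
Here’s a rough simulation of this argument (toy code: Beauty keeps only the runs in which at least one of her own flips came up heads, per the SHA rule above):

```python
import random

# Simulate the argument above: the experimenter's fair coin gives one waking
# (heads) or two (tails); Beauty flips her own fair coin at every waking.
# Per the SHA rule, she keeps only worlds in which some waking shows heads,
# and asks how often those are tails-worlds.

N = 200_000
kept = kept_tails = 0
for _ in range(N):
    experimenter_coin = random.choice("HT")
    wakings = 1 if experimenter_coin == "H" else 2
    her_flips = [random.choice("HT") for _ in range(wakings)]
    if "H" in her_flips:          # consistent with "I saw my coin land heads"
        kept += 1
        kept_tails += (experimenter_coin == "T")

print(round(kept_tails / kept, 3))   # ~0.6, matching the enumeration above
```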

[-][anonymous]

This is very interesting, but I haven't quite grokked it yet. Thank you for what might be a fatal flaw. Upvoted while I think about it.

[-]jpet

That doesn't look right--if she just flipped H, then THT is also eliminated. So the renormalization should be:

HH: 1/2

HT: 0

THH: 1/4

THT: 0

TTH: 1/4

TTT: 0

Which means the coin doesn't actually change anything.

In the THT case, on Monday she flips heads. Thus, if she flips heads, and has no way of knowing whether or not it's Monday, she can't eliminate the possibility of THT.

[-]IainM

I think this is mistaken in that eliminating the HT and TTT possibilities isn't the only update SB can make on seeing heads. Conditioning on a particular sequence of flips, an observation of heads is certain under the HH or THH sequences, but only 50% likely under the THT or TTH sequences, so SB should adjust probabilities accordingly and consequently end up with no new information about the initial flip.

HOWEVER. The above logic relies on the assumption that this is a coherent and useful way to consider probabilities in this kind of anthropic problem, and that's not an assumption I accept. So take with a grain of salt.

I think this is mistaken in that eliminating the HT and TTT possibilities isn't the only update SB can make on seeing heads.

It's the Sherlock Holmes Axiom that the original post was suggesting we use.

Conditioning on a particular sequence of flips, an observation of heads is certain under the HH or THH sequences, but only 50% likely under the THT or TTH sequences, so SB should adjust probabilities accordingly and consequently end up with no new information about the initial flip.

This would be SB deciding that she is randomly selected from the reference class of SBs. In other words, it's SSA, only with a much smaller reference class than I'd suggest using.

If she uses a larger reference class, she'd realize that she's about twice as likely to wake up in a room during the experiment if the coin landed on tails, and would conclude that there's a nearly 2/3 probability of the coin landing on tails.

[-][anonymous]

I haven't figured out how to verbalize this properly yet, but it feels to me like the "THT" and "TTH" entries are problematic — it seems like she should only be able to count one of those options, not both. When you remove one of them, then the first coin has equal probability of coming up heads and tails as we'd expect.

[This comment is no longer endorsed by its author]

In your example about the two physics theories, it seems that you don't really need observers. Indeed, simply replace observers by blue warbles.

Suppose T1 predicts a trillion trillion blue warbles in the universe and T2 predicts a trillion trillion trillion blue warbles (but both theories are agnostic about how and where they occur). Now, you send an expedition to Mars and find that Mars has several billion blue warbles. What does that mean for T1 vs T2? I would say that T2 is more likely, as it assigned a higher prior probability that you'd generally find blue warbles.

I may be wrong, but it seems to me that blue warbles on Mars is entirely symmetrical to observers on Earth.

[-][anonymous]

Not quite. T2 presumably predicts more warbles because it is bigger. If so, encountering some concentration of warbles doesn't distinguish between the two, because they have the same warble-density.

On the other hand, if T2 predicts a billion times the density, then you might use local concentration as evidence, but it could go either way depending on predicted densities.

The thing is... Only one of the potential universes has any observers. Hypothetical observers don't actually observe anything, which means that you cannot reason backwards from the number of observers each universe would in theory have.

What if the super-theory predicted that both universes existed, but was agnostic about which bit of the multiverse you were in? Then you could make a guess based on the numbers (though a very large absolute number of observers would still guess wrong). But that is not how this problem was posed. And given the way it was posed, at most one of the theories can be correct, and it will be observed to be correct by whichever fraction of its inhabitants get to that stage of understanding. The hypothetical inhabitants of the other universe have no bearing on the matter, because they do not exist.

Only one of the potential universes has any observers. Hypothetical observers don't actually observe anything

And hypothetical blue warbles don't actually warble. What's your point?

Instead, I take the position that you can never conclude anything from your own existence except that you exist. That is, I eliminate all hypotheses that don’t predict my existence, and leave it at that, in accordance with SHA.

This sounds a lot like full non-indexical conditioning (FNC). The standard problem with FNC is that it seems to make science impossible. For, many actual theories predict "big universes", in which every subjective experience is almost certainly experienced by some observer somewhere, even if that observer is just a Boltzmann brain.

Prior to all observation, it seems that you should be indifferent among all these theories, or at least among many of them. But then, on FNC, no empirical observation can give weight to one of these theories over the others, because they all equally predict the bare existence of an observer who saw whatever observation you saw. Thus, the entire practice of empirical science fails to affect your beliefs about these theories at all.

This is very well-written, exceptionally clear-headed, and, I'd suggest, Mainworthy. This kind of thinking does indeed seem to be what several have/are converging upon, including, IIRC, Wei Dai, Eliezer, some SPARC attendees who were thrown anthropics to try, possibly Carl Shulman, and presumably many others (e.g. other advocates of UDT and its offspring). Anthropics may well be/become the best example of LW rapidly solving/making major progress on a significant open problem in philosophy and reaching consensus before mainstream philosophy manages to do so.

It really does seem to me that the massive confusion around Doomsday is a result of people who are very smart and even good at reductionism (e.g. even Bos(s)trom, though I've by no means read all or even most of his stuff) lapsing and thinking about anthropics in such a way that they might as well be talking about souls.

Related.

That said, as best I can tell, Eliezer has remained mysteriously silent on Sleeping Beauty and Doomsday, which makes me hesitate slightly to declare them solved. (E.g. I'd expect his endorsement of a solution by now if he agreed and did not feel confused.) And specifically, last I heard, Eliezer held probability theory as above vulgar things like betting or something like that, so the lack of an obvious way to reconcile that view of probability with the dissolution of Sleeping Beauty in this post and the one I linked gives me pause. (This could be a failure of reductive effort on my part, though.)

Agreed about Eliezer thinking similar thoughts. At least, he's thinking thoughts which seem to me to be similar to those in this post. See Building Phenomenological Bridges (article by Robby based on Eliezer's facebook discussion).

That article discusses (among other things) how an AI should form hypotheses about the world it inhabits, given its sense perceptions. The idea "consider all and only those worlds which are consistent with an observer having such-and-such perceptions, and then choose among those based on other considerations" is, I think, common to both these posts.

As Jaynes (I recommend that link to you) says, not assigning prior probabilities doesn't mean you don't have prior probabilities - it just means you have to sneak them in without much examination. In practice, "not having prior probabilities" usually means assigning everything equal prior probability. But it leaves open a trap where people accidentally sneak in whatever prior probabilities they want - I think you fall into this on the Sleeping Beauty problem.

[-][anonymous]

Hmm. Maybe. Can you go into a bit more detail? I'm not seeing it. AFAICT, I'm refusing to assign probability to a meaningless question, and whatever probability I might have assigned to that question has no consequence when you cash out that question to actual meaningful decisions.

the presumptuous philosopher is an idiot because both theories are consistent with us existing, so again we get no relative update.

I interpret "the presumptuous philosopher is an idiot" as a claim that the posterior probabilities of the two theories aren't affected by the number of people produced. Because you exist in each theory, you don't have to update the probability, so the conclusion is really a statement about the prior probability you've snuck in. This prior probability assigns an equal weight to different possible states of the world, no matter how many people they produce.

But then, in the Sleeping Beauty problem, you use a different unspecified prior, where each person produced gets an equal weight, even if this means giving different weights to different states of the world.

The answer to both of these questions ultimately depends on the prior. But your procedure doesn't care about the prior - it leaves the user to sneak in whatever prior is their favorite. Thus, different users will sneak in different priors and get different answers.

[-][anonymous]

The answer to both of these questions ultimately depends on the prior. But your procedure doesn't care about the prior - it leaves the user to sneak in whatever prior is their favorite. Thus, different users will sneak in different priors and get different answers.

Yes, of course, but that's fine; I'm not claiming any particular prior. What I am saying is that the prior is over possible worlds not observer moments, just as it is not over planets. I refuse to assign probabilities between observer moments, and assert that it is entirely unnecessary. If you can show me how I'm nonetheless assigning probability between observer moments by some underhanded scheme, or even where it matters what probabilities I sneak in, go ahead, but I'm still not seeing it.

[-][anonymous]

But then, in the Sleeping Beauty problem, you use a different unspecified prior, where each person produced gets an equal weight, even if this means giving different weights to different states of the world.

I'm really confused. What question are you asking? If you're asking what probability an outsider should assign to the coin coming up heads, the answer's 1/2, if that outsider doesn't have any information about the coin. nyan_sandwich implies this when ey says

(this way she gets $2 half the time instead of $1 half the time for heads).

If you're asking what probability Sleeping Beauty should assign, that depends on what the consequences of making such an assignment is. nyan_sandwich makes this clear, too.

And, finally, if you're asking for an authoritative "correct" subjective probability for Sleeping Beauty to have, I just don't think that notion makes sense, as probability is in the mind. In fact in this case if you pushed me I'd say 1/2 because as soon as the coin is flipped, it lands, the position is recorded, and Sleeping Beauty waking up and falling asleep in the future can't go back and change it. Though I'm not that sure that makes sense even here, and I know similar reasoning won't make sense in more complicated cases. In the end it all comes down to how you count but I'm not sure we have any disagreement on what actually happens during the experiment.

I say (and I think nyan_sandwich would agree), "Don't assign subjective probabilities in situations where it doesn't make a difference." This would be like asking if a tree that fell in a forest made a sound. If you count one way, you get one answer, and if you count another way, you get another. To actually be able to pay off a bet in this situation you need to decide how to count first - that is what differentiates making probability assignments here from other, "standard" situations.

I expect you disagree with something I've said here and I'd appreciate it if you flesh it out. I don't necessarily expect to change your mind and I think it's a distinct possibility you could change mine.

nyan_sandwich implies this when ey says

(this way she gets $2 half the time instead of $1 half the time for heads).

That's a good point - this line of reasoning works fine for the original Sleeping Beauty problem, and one can solve it without really worrying what Sleeping Beauty's subjective probabilities are. That is indeed similar to UDT.

Consider, then, the Sleeping Beauty problem with duplication instead of memory-erasure (i.e., a duplicate is made of SB if the coin lands tails). Now you can't add their utilities together anymore. At what probability (descending from 1 to 0) should a newly-woken SB start taking the bet that they're in the Tails world?

probability is in the mind.

Doesn't mean there's not a correct one.

[-][anonymous]

At what probability (descending from 1 to 0) should a newly-woken SB start taking the bet that they're in the Tails world?

Depends on self-altruism and such concepts. No longer as clear cut. The question comes down to "do you prefer that your copies all get a dollar, or what"

If I need to specify the degree of "self-altruism," suppose that sleeping beauty is not a human, but is instead a reinforcement-learning robot with no altruism module, self- or otherwise.

[-][anonymous]

Consider, then, the Sleeping Beauty problem with duplication instead of memory-erasure (i.e., a duplicate is made of SB if the coin lands tails). Now you can't add their utilities together anymore. At what probability (descending from 1 to 0) should a newly-woken SB start taking the bet that they're in the Tails world?

OK, if I'm interpreting this right, you mean to say that Sleeping Beauty is put to sleep, and then a coin is flipped. If it comes up tails, she is duplicated; if it comes up heads, nothing additional is done. Then, wake all copies of Sleeping Beauty up. What probability should any particular copy of Sleeping Beauty assign that the coin came up tails? If this is not the question you're asking, please clarify for me. I know you mentioned betting but let's just base this on log score and say the return is in utils so that there isn't any ambiguity. Since you're saying they don't add utilities, I'm also going to assume you mean each copy of Sleeping Beauty only cares about herself, locally.

So, given all of that, I don't see how the answer is anything but 1/2. The coin is already flipped, and fell according to the standard laws of physics. Being split or not doesn't do anything to the coin. Since each copy only cares about herself locally, in fact, why would the answer change? You might as well not copy Sleeping Beauty at all in the tails world, because she doesn't care about her copies. Her answer is still 1/2 (unless of course she knew the coin was weighted, etc.).

I mean, think about it this way. Suppose an event X was about to happen. You are put to sleep. If X happens, 10,000 copies of you are made and put into green rooms, and you are put into a red room. If X does not happen, 10,000 copies of you are made and put into red rooms, and you are put into a green room. Then all copies of you wake up. If I was 99.9% sure beforehand that X was going to happen and woke up in a red room, I'd be 99.9% sure that when I exited that room, I'd see 10,000 copies of me leaving green rooms. And if I woke up in a green room, I'd be 99.9% sure that when I exited that room, I'd see 9,999 copies of me leaving green rooms, and 1 copy of me leaving a red room. Copying me doesn't go back in time and change what happened. This reminds me of the discussion on Ultimate Newcomb's Problem, where IIRC some people thought you could change the prime-ness of a number by how you made a choice. That doesn't work there, and it doesn't work here, either.

From the outside though, there isn't a right answer. But, of course, from the inside, yes there is a right answer. From the outside you could count observer moments in a different way and get a different answer, but IRL there's only what actually happens. That's what I was trying to get at.

Now I expect I may have misinterpreted your question? But at least tell me if you think I answered my own question correctly, if it wasn't the same as yours.

You answered the correct question. (yay)

Ok, so you don't think that I can travel back in time to change the probability of a past event? How about this problem: I flip a coin, and if the coin is heads I put a white stone into a bag. But if the coin is tails, I put one white stone and one black stone into the bag.

You reach into the bag and pull out a stone. It is white. From this, you infer that you are twice as likely to be in heads-world as in tails-world. Have you gone back in time and changed the coin?

No - you have not affected the coin at all. So how come you think the coin is more likely heads than tails? Because the coin has affected you.

The paths followed by probability are not the paths of causal influence, but the paths of logical implication, which run in both directions.

[-][anonymous]

The paths followed by probability are not the paths of causal influence, but the paths of logical implication, which run in both directions.

Yep, that was pretty dumb. Thanks for being gentle with me.

However, I still don't understand what's wrong with my conclusion in your version of Sleeping Beauty. Upon waking, Sleeping Beauty (whichever copy of her) doesn't observe anything (colored stones or otherwise) correlated with the result of the coin flip. So it seems she has to stick with her original probability of tails having been flipped, 1/2.

Next, out of curiosity, if you had participated in my red/green thought experiment in real life, how would you anticipate if you woke up in a red room (not how would you bet, because I think IRL you'd probably care about copies of you)? I just can't even physically bring myself to imagine seeing 9,999 copies of me coming out of their respective rooms and telling me they saw red, too, when I had been so confident beforehand that this very situation would not happen. Are you anticipating in the same way as me here?

Finally, let's pull out the anthropic version of your stones in a bag experiment. Let's say someone flips an unbiased coin; if it comes up heads, you are knocked out and wake up in a white room, while if it comes up tails, you are knocked out, then copied, and one of you wakes up in a white room and the other wakes up in a black room. Let's just say the person in each room (or in just the white room if that's the only one involved) is asked to guess whether the coin came up heads or tails. Let's also say, for whatever reason, the person has resolved to, if ey wakes up in the white room, guess heads. If ey wakes up in the black room, ey won't be guessing, ey'll just be right. Now, if we repeat this experiment multiple times, with different people, it will turn out that, looking at all of the different people (/copies) that actually did wake up in white rooms, it turns out that exactly half of them will have guessed right. Right now I'm just talking about watching this experiment many times from the outside. In fact, it doesn't matter with what probability the person resolves to guess heads if ey wakes up in the white room - this result holds (that around half of the guesses from white rooms will be correct, in the long run).

Now, given all of that, here's how I would reason, from the inside of this experiment, if we're doing log scores in utils (if for some reason I didn't care about copies of me, which IRL I would) for a probability of heads. Please tell me if you'd reason differently, and why:

In a black room, duh. So let's say I wake up in a white room. I'd say, well, I only want to maximize my utility. The only way I can be sure to uniquely specify myself, now that I might have been copied, is to say that I am "notsonewuser-in-a-white-room". Saying "notsonewuser" might not cut it anymore. Historically, when I've watched this experiment, "person-in-a-white-room" guesses the coin flip correctly half of the time, no matter what strategy ey has used. So I don't think I can do better than to say 1/2. So I say 1/2 and get -1 util (as opposed to an expected -1.08496... utils which I've seen historically hold up when I look at all the people in white rooms who have said a 2/3 probability of heads).
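
A quick check of those two scores (assuming, as above, that white-room awakenings come half from heads-runs and half from tails-runs):

```python
from math import log2

# Log score (in bits) for a white-room awakening, averaged over the two
# equally common cases: heads-run and tails-run. Compare announcing
# P(heads) = 1/2 versus P(heads) = 2/3.

score_half       = 0.5 * log2(1 / 2) + 0.5 * log2(1 / 2)
score_two_thirds = 0.5 * log2(2 / 3) + 0.5 * log2(1 / 3)

print(score_half)        # -1.0
print(score_two_thirds)  # ~ -1.08496
```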

Now I also need to explain why I think this differs from the obvious situation you brought up (obvious in that the answer was obvious, not in that it wasn't a good point to make, I think it definitely was!). For one thing, looking historically at people who pick out white stones, they have been in heads-world 2/3 of the time. I don't seem to have any other coherent answer for the difference, though, to be honest (and I've already spent hours thinking about this stuff today, and I'm tired). So my reduction's not quite done, but given the points I've made here, I don't think yours is, either. Maybe you can see flaws in my reasoning, though. Please let me know if you do.

EDIT: I think I figured out the difference. In the situation where you are simply reaching into a bag, the event "I pull out a white stone." is well defined. In the situation in which you are cloned, the event "I wake up in a white room." is only well-defined when it is interpreted as "Someone who subjectively experiences being me wakes up in a white room.", and waking up in a black room is not evidence against the truth of this statement, whereas pulling out a black stone is pretty much absolute evidence that you did not pull out a white stone.

[-][anonymous]

But it leaves open a trap where people accidentally sneak in whatever prior probabilities they want - I think you fall into this on the Sleeping Beauty problem.

I see this as explicitly not happening. nyan_sandwich says:

No update happens in the Doomsday Argument; both glorious futures and impending doom are consistent with my existence, their relative probability comes from other reasoning.

"Other reasoning" including whatever prior probabilities were there before.

[-]Jiro

Here's a modified Sleeping Beauty problem. Instead of having Sleeping Beauty awakened 2 times if the coin is tails and 1 time if the coin is heads, recruit two people, Sleeping Beauty and Snow White.

If the coin comes up heads, wake one of them randomly and ask the question, and just let the other one go. If the coin comes up tails, wake both of them and ask them both the question.

This version of the problem eliminates many of the troublesome factors of the original version, yet it's hard to justify why it would have a different answer than the original version. And the answer to this version is obviously that tails has a 2/3 probability.

Now, if you still think this has a different answer from the original problem, here's yet another variation. You have a crowd of people, all of whom go to sleep. If the coin comes up heads you wake one of them and ask the question; if the coin comes up tails, you wake two of them and ask the question. Does the answer change if waking two of them happens with replacement (so you can pick the same person twice in which case you cause the same amnesia that's in the original problem) or without replacement? And if the answer doesn't change between with replacement and without replacement, then you should be able to shrink the size of the crowd down to 1 (thus reducing it to the original problem) while keeping the answer the same.

And if the answer doesn't change between with replacement and without replacement, then you should be able to shrink the size of the crowd down to 1 (thus reducing it to the original problem) while keeping the answer the same.

Not so: if there is a crowd, your being woken is stronger evidence of two people being woken than of one person being woken. In a crowd of 10, you have a 1/10 chance of being woken if one random person is woken, and a 19/100 chance of being woken at least once if two random people (with replacement) are woken. In a crowd of size 1, you have a 100% chance to be woken at least once either way. Same odds == observation provides no evidence.
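
Here are those numbers as a function of crowd size (with replacement, and assuming a fair coin between the one-waking and two-wakings versions):

```python
# How strongly "I was woken at least once" favors the two-wakings experiment,
# as a function of crowd size n (wakings chosen with replacement), assuming a
# fair coin between the one-waking and two-wakings versions.

def p_two_wakings_given_woken(n):
    p_if_one = 1 / n                      # chance I'm the one woken
    p_if_two = 1 - (1 - 1 / n) ** 2       # chance I'm woken at least once
    return p_if_two / (p_if_one + p_if_two)

for n in (10, 3, 2, 1):
    print(n, round(p_two_wakings_given_woken(n), 3))
# 10 0.655   (the 1/10 vs 19/100 figures above give 19/29)
#  3 0.625
#  2 0.6
#  1 0.5     -> with a crowd of one, being woken is no evidence either way
```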

[-]Jiro

Don't compute the odds that two people have been woken, compute the odds that this is a two-wakings experiment. That's also higher than 50% and that (unlike "the odds that two people have been woken") stays higher when you shrink the crowd size.

Imagine that you and I are sitting at a table. Hidden in my lap, I have a jar of beans. We are going to play the traditional game wherein you try to guess the number of beans in the jar. However, you don’t get to see the jar. The rule is that I remove beans from the jar and place them on the table for as long as I like, and then at an arbitrary point ask you how many beans there are in total. That’s all you get to see.

One by one, I remove a dozen beans. As I place the twelfth bean on the table in front of you, I ask: “So how many beans are there total, including those left in the jar?”

“I have no idea,” you reasonably reply.

“Alright, well let’s try to narrow it down,” I say helpfully. “What is the greatest amount of beans I could possibly have in total?”

You reason thusly: “Well, given the Copernican principle, this twelfth bean is equally likely to fall anywhere along the distribution of the total number of beans. Thus, for example, all else held equal, there is a 50% chance that it will be within the last 50% of beans removed from the jar – or the first 50%, for that matter.

“But, obviously, it further follows that there is a 70% chance that it will be in the final 70% of beans, and a 95% chance that it will be within the last 95% of beans you might remove, and so on. In this scenario – if the 11 previous beans represent only 5% of the total – then there should be at most 11x20 total beans, or 220. Thus, I can be 95% confident that there are no more than 220 beans. Of course, the actual possible number asymptotically approaches infinity by this reasoning (say I wanted to be 99% confident?), but 95% confidence is good enough for me! So I’ll take 220 as my upper bound…”

You are wrong. I have over 1,000 beans left in the jar.

Or: you are (technically) right. There is only one bean left in the jar.

Or: any other possibility.

Either way, it seems obvious that your reasoning is completely disconnected from the actual number of beans left in the jar. Given the evidence you’ve actually seen, it seems intuitively that it could just as well be any number (12 or greater).

Where did you go wrong?

The proper Bayesian response to evidence is to pick a particular hypothesis – say, “there are fewer than 220 beans,” which is the hypothesis you just pegged at 95% confidence – and then see whether the given evidence (“he stopped at the 12th bean”) updates you towards or away from it.[1]

It seems clear that this kind of update is not what you have done in reasoning about the beans. Rather, you picked a hypothesis that was merely compatible with the evidence – “there are fewer than 220 beans.” You then found this weird value: the percentage of possible worlds wherein the evidence could possibly appear[2], out of possible worlds where the hypothesis is true (i.e. worlds where there are at least 12 beans, out of worlds with fewer than 220). And this was then conflated with the actual posterior probability of the hypothesis.

It seems to me that the Doomsday Argument is exactly analogous to this situation, except that it’s a sentient 12th bean itself (i.e. a human somewhere in the timeline) that happens to be making the guess.

I am not at all confident that I haven’t failed to address some obvious feature of the original argument. Please rebut.

[1] I’ve just tried to do this, but I’m rubbish at math, especially when it includes tricky (to me) things like ranges and summations. (Doesn’t the result depend entirely on your prior probability that there are (0, 220] beans, which would depend on your implicit upper bound for beans to begin with, if you assume there can’t be infinite beans in my jar?)

[2] Not does appear – remember, I could have stopped on any bean. This chunk of possible worlds includes, e.g. the world where I went all the way to bean 219.
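
As a rough numerical stab at footnote [1]: if (and only if) you grant the Copernican assumption that the asking-point is uniform on 1..N, the answer still swings with the prior over N. The cap of 10,000 beans and the two priors below are arbitrary illustrative choices:

```python
# Posterior probability that N <= 220, given "asked at bean 12", under the
# assumption that the asking-point is uniform on 1..N (so the likelihood of
# stopping at bean 12 is 1/N for any N >= 12). The cap N_MAX and the two
# priors are arbitrary illustrative choices.

N_MAX = 10_000
candidates = range(12, N_MAX + 1)

def posterior_N_leq_220(prior):
    weights = {N: prior(N) / N for N in candidates}
    total = sum(weights.values())
    return sum(w for N, w in weights.items() if N <= 220) / total

flat_prior = lambda N: 1.0        # uniform over 12..N_MAX
log_prior  = lambda N: 1.0 / N    # roughly scale-invariant

print(round(posterior_N_leq_220(flat_prior), 2))  # ~0.44
print(round(posterior_N_leq_220(log_prior), 2))   # ~0.95
# Same evidence, very different answers: the prior is doing the work.
```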

His reasoning would be entirely correct if you had determined the number of beans you draw randomly from between 0 and the total number. His priors were all wrong, and so he failed.

Could we take all possible prior distributions, assign to each some prior that is probably wrong, and then use those prior distributions as theories to use the number of beans as evidence for?

Good point; you're right that his reasoning would be correct if he knew that, e.g., I had used a random number generator to randomly-generate a number between 1 and (total # of beans) and resolved to ask him, only on that numbered bean, to guess the upper bound on the total.

Perhaps to make the bean-game more similar to the original problem, I ought to ask for a guess on the total number after every bean placed, since every bean represents an observer who could be fretting about the Doomsday Argument.

Analogously, it would be misleading to imagine that You the Observer were placed in the human timeline at a single randomly-chosen point by, say, Omega, since every bean (or human) is in fact an observer.

Unfortunately I'm getting muddled and am not clear what consequences this has. Thoughts?

His reasoning would be entirely correct if you had determined the number of beans you draw randomly from between 0 and the total number.

Let's put this a bit more technically: the reasoning would have been correct if the number of beans were a random value drawn from a known (and sufficiently well-behaved) distribution.

When you're saying "oops," you're just saying "oops" about the scheme you propose at the end, right? Because I still don't believe in observers either.

[-][anonymous]

The whole post and concept still have valuable perspective and insight, but there are fatal mistakes that sink Anthropic Atheism in its current incarnation as a general theory of anthropics. I still think the real deal isn't going to have anything like reference classes or "observer moments".

Upvoted for the frank admission of error.

eliminating those possibilities inconsistent with your observations

There's the (/a) rub. When is a hypothesis inconsistent with observations? More generally, what probabilities does a hypothesis assign to observations? If we want our world models to really capture the universe, including a fine-grained self-understanding, they will not look like predicted sequences of observations, which are already high-level phenomena within an "observer". They should be more reductionist, i.e. true to the actual structure of the universe. But then, how do you know when a (hypothetical) universe predicts that you see red vs. green? This "self-location" or "bridging hypothesis" is the whole problem.

[-][anonymous]

More generally, what probabilities does a hypothesis assign to observations?

Hypotheses don't assign probabilities in this model; they only make absolute predictions.

"bridging hypothesis" is the whole problem.

I wouldn't call it the "whole problem", but yeah, bridging is not handled by this model, and is currently an open problem AFAIK.

[-][anonymous]

In my opinion, you did a very good job explaining your viewpoint. De'da would approve, and I do, too. I've agreed with him since reading that post I linked.

I think this should be in Main.

[This comment is no longer endorsed by its author]

I had considered making a post about the Sleeping Beauty Problem, but I'll just latch on to yours. I thought a modification of the original problem would be more interesting: As before, Sleeping Beauty is put to sleep, and if woken up and questioned, is put back to sleep with amnesia. As before, a coinflip determines whether to wake her up once or twice. But the new bit is that instead of being guaranteed to wake up, a die is rolled on each day that she might be woken, to decide whether she actually is. This time, she can in fact use the fact of being awake as evidence about the coinflip, and the strength of the evidence depends on the probability determined by the die. E.g. if she knew she had a 1/3 chance of being woken on each eligible day, the odds are 5/9 : 1/3 (i.e. 5:3) in favor of two days, based on her knowledge that she was woken at least once.
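
A quick simulation of that variant (conditioning only on being woken at least once; the 1/3 per-day chance is as described above):

```python
import random

# Simulate the variant above: a fair coin picks one or two eligible days; on
# each eligible day a 1/3-chance roll decides whether Beauty is actually
# woken. She updates only on "I was woken at least once".

N = 300_000
woken = tails_and_woken = 0
for _ in range(N):
    tails = random.random() < 0.5
    eligible_days = 2 if tails else 1
    if any(random.random() < 1 / 3 for _ in range(eligible_days)):
        woken += 1
        tails_and_woken += tails

print(round(tails_and_woken / woken, 3))   # ~0.625, i.e. odds 5/9 : 1/3 = 5:3
```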

In anthropic terms, it means that our existence is evidence that the world is such that our existence is likely. If your reward structure were such that your reward is multiplied by the number of observers that agree with you (as with multiple copies of yourself), you'd be best off overestimating the number of observers, but I don't see any way that would be useful in practice.

I'm not seeing why atheism is included in the post title.

[-][anonymous]

The name is a snarky reference to the ontologically basic status of observers (souls) in other anthropic theories.

What if we modify the problem slightly, and ask sleeping beauty for her credence that it’s Monday? That is, her credence that “it” “is” Monday. If the coin came up heads, there is only Monday, but if it came up tails, there is a Monday observer and a Tuesday observer. AA/SHA reasons purely from the perspective of possible worlds, and says that Monday is consistent with observations, as is Tuesday, and refuses to speculate further on which “observer” among possible observers she “is”. Again, given an actual decision problem with an actual payoff structure, AA/SHA will quickly reach the correct decision, even while refusing to assign probabilities “between observers”.

As much as I agree that the term "observers" causes needless confusion, I don't think you adequately defend the position above.

Let's say Sleeping Beauty acts under the following reasoning: If you count all the instances where she is woken up, both in the possible universe where the coin came up heads and the one where it came up tails, there are two instances where she wakes up on Monday and one instance where she wakes up on Tuesday. This yields a 2/3 probability that it is Monday if she is woken up, and she should therefore be willing to lay 2:1 odds that it is Monday. Say that the testers agree to pay her 1 dollar if it is Monday and she pays them 2 dollars if it is Tuesday, whenever she wakes up. If the testers then run the experiment 1000 times, the coin will come up heads 500 times, in which case they pay 1 dollar each time, and tails 500 times, in which case they first pay 1 dollar and then receive 2 dollars the next day. In total they pay 1000 dollars and receive 1000 dollars on average, making the bet entirely fair if you base it on a 2/3 probability that it is Monday. So if we use your logic of basing the decision on bets, it works out if you assume 2/3 odds that it is Monday.
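
The same accounting in a few lines (expected results over 1000 runs, with the payouts described above):

```python
# Expected accounting over 1000 runs of the experiment, with the payouts
# described above: Beauty receives $1 at each Monday waking and pays $2 at
# each Tuesday waking (the bet implied by a 2/3 credence in "it is Monday").

runs = 1000
heads_runs = tails_runs = runs // 2      # fair coin, in expectation

beauty_receives = heads_runs * 1 + tails_runs * 1   # one Monday waking per run
beauty_pays     = tails_runs * 2                    # Tuesday wakings only on tails

print(beauty_receives, beauty_pays)   # 1000 1000 -> the bet breaks even
```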

[-]jbay

Yes, I also had this concern when I read that paragraph.

If Sleeping Beauty were paid $1 for a correct guess as to which day it is, she should probably always answer "today is Monday".

If the coin came up heads, she is correct. If the coin came up tails, she will be correct on Monday and wrong on Tuesday. So this guarantees that she earns $1 every time the experiment runs.

If she instead always answers "today is Tuesday", then if the coin came up heads, she earns nothing; if the coin came up tails, she earns nothing on Monday and $1 on Tuesday. So this strategy has an expected revenue of $0.50.

So is it fair to "say that Monday is consistent with observations, as is Tuesday, and refuse to speculate further"? I think that's yielding too much ground.

This particular case isn't a philosophical question about getting information from knowledge of being an observer among all sets of observers. It's a purely pragmatic analysis that says you'll always be woken up on a Monday but not always on a Tuesday, so Monday awakenings are simply more probable. That's the structure of the experiment, and has nothing to do with the anthropic principle.

An analogous experiment that removes the observer is: The experimenter has a bag with only a white ball and a green ball in it. He draws once from the bag randomly, but if he drew the white ball, he draws again. Anyone can tell you that he'll produce a green ball every round, but a white ball only half the time. If it's done behind a screen and you're asked "Was a green ball drawn this round?", the smart bet is yes.