This is the second post in my series on Anthropics. The previous one is Anthropical Motte and Bailey in two versions of Sleeping Beauty. The next one is Anthropical probabilities are fully explained by difference in possible outcomes.

Introduction

Ever since I first heard about anthropics, something has felt off. Be it updating on awakening in Sleeping Beauty, accepting a high probability of doom in the Doomsday Argument, or the premise of Grabby Aliens; following SSA or SIA, the whole pattern of reasoning seemed wrong to me.

That it's cheating. That there is something obviously unlawful going on. That it doesn't look at all like the way cognition engines produce map-territory correspondence.

It was hard to point out what exactly was wrong, though. The discourse seemed to be focused on either accepting SIA or SSA, both of which are obviously absurd and wrong in some cases, or discarding anthropic reasoning altogether - a position towards which I'm quite sympathetic, but which also seems like an overcorrection, as it discards some sound reasoning along the way.

It took me some time to formalize this feeling of wrongness into an actual rule that can be applied to anthropical problems, separating correct reasoning from wrong reasoning. In this post I want to explore this principle and its edge cases, and show how using it can make anthropics add up to normality.

The metaphysics of existence vs having a blue jacket

Let's consider the Blue Jacket Experiment (BJE):

A fair coin is tossed. On Tails two people will be created, wearing blue jackets. On Heads two people will also be created but only one, randomly chosen, will have a blue jacket. You are created and notice that you are wearing a blue jacket. What's the probability that the coin landed Heads?

Here the situation is quite obvious.  As having a blue jacket is twice as likely when the coin is Tails, we can do a simple Bayesian update:
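Spelled out, with P(Blue | Heads) = 1/2 and P(Blue | Tails) = 1 (the notation here is mine; the computation is just the standard Bayes formula):

P(Heads | Blue) = P(Blue | Heads) P(Heads) / (P(Blue | Heads) P(Heads) + P(Blue | Tails) P(Tails)) = (1/2 · 1/2) / (1/2 · 1/2 + 1 · 1/2) = 1/3

and therefore P(Tails | Blue) = 2/3.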

Notice that this is not an Anthropical Motte where we count Tails outcomes twice for the same coin toss. In a repeated experiment guessing Tails when you have a blue jacket gives about 2/3 accuracy. You can actually guess Tails per experiment better than chance this way.

So why doesn't the same principle apply to the Incubator Sleeping Beauty (ISB) problem, as one would naively think? Why can't I notice that I exist, update on it, and guess the result of a coin toss with 2/3 accuracy per experiment?

There are two failure modes here. The first is to keep confusing the Motte with the Bailey and bite the bullet, saying that ISB indeed works exactly like BJE. I hope that my previous post, and all the emphasis on the fact that we are talking about per-experiment accuracy, made that mistake as hard to make as it can get.

The second is to decide that there is something fundamentally special about consciousness or first-person perspective. That there is a metaphysical difference between your existence and having a blue jacket. This line of reasoning leads people to the magical thinking that the universe cares more about some people with specific properties. I hope this post will push back against that failure mode and show that there is no weird metaphysics going on.

But what's the answer then?

Well, the short answer is that in BJE you receive new evidence. You couldn't be confident that you would have a blue jacket, and now you know that you have one. On the other hand, there is no unaccounted-for information in the fact of your existence. But this is confusing for some people. How do I know whether I have already accounted for my existence or not? Maybe I couldn't be confident and should be surprised that I exist? So let's take a step back.

Let's notice that it's logically impossible to correctly guess Tails in Incubator Sleeping Beauty with 2/3 accuracy, the way it is possible in BJE. About 50% of coin tosses are Heads in both experiments. So guessing Tails every time in a repeated experiment can't possibly give you 2/3 accuracy among all the iterations. However, you can get 2/3 accuracy in some subset of all iterations.

In BJE it's the subset of iterations in which you have a Blue Jacket. We can get a subset of iterations among which you can predict Tails with 2/3 accuracy, because the number of all iterations is greater than the number of iterations in which you have a Blue Jacket.

But in ISB there are no iterations in which you do not exist. The number of outcomes in which you are created equals the total number of iterations.

Thus, there is no possibility to get 2/3 accuracy in the subset of iterations where you exist.
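To make the contrast concrete, here is a minimal simulation sketch in the spirit of the Python examples from the previous post (the code below is my own illustration, not taken from that post). It repeats both experiments many times and checks how often guessing Tails is correct within the relevant subset of iterations: the blue-jacket iterations in BJE versus the "you exist" iterations in ISB, which are simply all of them.

import random

def simulate(iterations=100_000):
    bje_subset = 0   # BJE iterations where you have a blue jacket
    bje_tails = 0    # ...and the coin is Tails
    isb_subset = 0   # ISB iterations where you exist (all of them)
    isb_tails = 0    # ...and the coin is Tails

    for _ in range(iterations):
        tails = random.random() < 0.5

        # BJE: on Tails both people get jackets; on Heads you get one with probability 1/2
        has_jacket = True if tails else random.random() < 0.5
        if has_jacket:
            bje_subset += 1
            bje_tails += tails

        # ISB: you are created on both Heads and Tails, so the condition excludes nothing
        you_exist = True
        if you_exist:
            isb_subset += 1
            isb_tails += tails

    print("BJE: P(Tails | blue jacket) =", bje_tails / bje_subset)  # about 2/3
    print("ISB: P(Tails | you exist)   =", isb_tails / isb_subset)  # about 1/2

simulate()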

This is the underlying principle of the Conservation of Expected Evidence. If you couldn't possibly have expected to observe the outcome not-A, you do not get any new information by observing outcome A and there is nothing to update on. You can't expect to observe your own non-existence, but you can expect to observe yourself not having a Blue Jacket. That's why you update in the latter and not the former case.

I think generally it's a good heuristic. But there are still confusing edge cases, related to the way natural language works. For instance, death. You can't expect to observe yourself being dead. But you can expect yourself to die. Is there some metaphysical difference between death and non-existence?

The metaphysics of non-existence vs death

Let's consider the Assassination Experiment (AE):

A fair coin is tossed. On Tails two people will be created. On Heads two people will also be created but then one, randomly chosen, dies half an hour later. You are created and notice that half an hour has passed and you are still alive. What's the probability that the coin landed Heads?

On one hand, the situation is completely analogous to BJE:
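Replacing "has a blue jacket" with "is still alive half an hour later", the same numbers come out: P(alive | Heads) = 1/2 and P(alive | Tails) = 1, so

P(Heads | alive) = (1/2 · 1/2) / (1/2 · 1/2 + 1 · 1/2) = 1/3.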

But doesn't this contradict the Conservation of Expected Evidence?

We can say that when you are still alive you can expect yourself to die, but you can't expect yourself to not exist when you've never existed in the first place, because there is no one to expect anything. But this is still not exactly the actual rule. 

What if you were created unconscious and then killed? Then you couldn't possibly expect anything, could you? What about unconscious states in general, as in Classic Sleeping Beauty? Sometimes we need to include them in our mathematical model - like when we are talking about Beauty's chance to be asleep on a random day - and sometimes we do not - like when we are specifically talking about her awake states and her attempts to guess the result of the coin toss.

Adding all these caveats makes the rule appear complex and artificial. What we want is to talk about the possibility of expectation in principle, based on the simple fact that for a person created in this experiment it's possible not to observe their survival, because the number of people who survive is less than the number of people created.

The true rule has little to do with the specialness of first-person experience. And that's why anthropic theories that focus on it always lead to bizarre conclusions. The true rule is about the causal process that goes on inside reality, or, in our case, inside a specific thought experiment.

No metaphysics, just random sampling

Thankfully there is a simple way to capture this idea: whether there is a random sampling going on or not.

For example, wearing a blue jacket is an outcome of a random sample. The Heads outcome in BJE leads to a random choice of which of the two people gets a blue jacket. Likewise, survival in AE is the outcome of a random sample, regardless of whether people are created conscious or not.

But Heads outcome in Incubator Sleeping Beauty is not. You are not randomly selected among two immaterial souls to be instantiated. You are a sample of one. And as there is no random choice happening, you are not twice as likely to exist when the coin is Tails and there is no new information you get when you are created.

Whether there is random selection going on or not, equals whether you gain new information or not, equals whether you follow or contradict the conservation of expected evidence, when you update on this information. It doesn't matter what kind of evidence we are talking about. Both existence and having a blue jacket follow the same rule.

To demonstrate this, let's change the conditions of BJE a bit to get the Fixed Blue Jacket Experiment (FBJE):

A fair coin is tossed. You will be created wearing a blue jacket regardless of outcome. On Tails another person will also be created, wearing a blue jacket. On Heads a person without a blue jacket will be created. You are created and notice that you are wearing a blue jacket. What's the probability that the coin landed Heads?

Here updating on wearing a blue jacket would violate the Conservation of Expected Evidence: the causal process that gave you the jacket didn't use random sampling, and there is no possible outcome where you do not have a Blue Jacket, so there is no new information in having one. And thus:
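Spelling out the (non-)update: P(Blue | Heads) = P(Blue | Tails) = 1, so the likelihoods cancel and

P(Heads | Blue) = (1 · 1/2) / (1 · 1/2 + 1 · 1/2) = 1/2.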

You may notice that in this regard wearing a blue jacket in BJE is similar to finding yourself in Room 1 in Incubator Sleeping Beauty, and in FBJE it's similar to learning that it's Monday in Classical Sleeping Beauty.

Now let's modify Sleeping Beauty so that the update on existence/awakening is similar to wearing a blue jacket in BJE. Here is Bargain Sleeping Beauty:

You and another person participate in the Sleeping Beauty experiment. Sadly, the funding is limited, so no amnesia drug is provided. Instead, a coin is tossed. On Heads one of you, randomly picked, will be put to sleep and then awakened. The other person, meanwhile, is free to go. On Tails both of you will be put to sleep and then awakened in different rooms. You were put to sleep and now are awakened. What is the probability that the coin landed Heads?

Now there is a random selection process: it's possible for you not to be picked, and thus awakening in the room is relevant evidence.
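Spelling it out: on Heads you are put to sleep and awakened only if you are the one picked, with probability 1/2, while on Tails you are awakened for sure, so

P(Heads | awakened) = (1/2 · 1/2) / (1/2 · 1/2 + 1 · 1/2) = 1/3.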

As a corollary, we can notice that wrongly assuming that random sampling is going on when it's actually not the case leads to wrong conclusions, as it makes us update on irrelevant information and contradicts the Conservation of Expected Evidence.

And this is what is going on with every bad example of anthropic reasoning.

The Doomsday Argument falsely assumes that we are randomly sampled among all the humans who lived or who will ever live. Grabby Aliens - that we are randomly sampled among all the possible sentient civilizations. Thirdism in Sleeping Beauty - that you are randomly sampled among all possible awakened states. Most of the time causality is completely ignored.

Sometimes the sampling assumption of SSA or SIA is not satisfied by the conditions of the experiment, and then they unsurprisingly output crazy results. It's no use to argue which of them is true, or even just better than the other, because they are not universal laws. They are literally just assumptions which occasionally fail to correspond to reality. And that's totally fine: our mathematical models are supposed to fail in circumstances they are not meant to work in.

Do not blindly follow anthropic theories off the cliff, biting all the ridiculous bullets on the way. Check the causal structure, see if there is random sampling going on, and base your conclusions on that. Follow the Law of Conservation of Expected Evidence and you won't be led astray.

The next post in the series is Anthropical probabilities are fully explained by difference in possible outcomes.


One needs to define the probability spaces, and it's appropriate to see if those probability spaces are relevant to something. It's no use to discuss "probability" on the level of a word or surrounding intuitions. One way of formulating relevance is by treating probabilities as details of how a machine that is an agent makes decisions internally, and ask what probability assignments lead to what kinds of outcomes. Or we can set up a prediction market with some scoring rule. But without such sources of desiderata for probability assignment, or some other motivated way of choosing probability spaces and events on them that give meaning to updating of probabilities, the discussion is lost in words (where most philosophy goes to die).

Conservation of expected evidence is a trivial theorem that always holds when the formal context is there. And the formal context should be there in any case, even when lacking motivation, for the discussion to be meaningful at all. Without the formal context, checking if a theorem holds about superficial data seems like a futile thing to do, since the suspicion should just lead to checking if there is a formal context beneath the superficial data and words directly.

Saying that an assumption of some manner of sampling is "wrong" requires explaining what it means to be sampled vs. not sampled vs. sampled in a different way, and I don't see what it could possibly mean, outside of some external process that performs the sampling and keeps the score, for example for the purpose of assigning rewards for a prediction market (but also, agent's own preference is such a computation). Maximizing a scoring computation that involves such sampling would then motivate keeping track of that method of sampling, and of the probability assignments that guide it. But those probability assignments would have no other use, and won't be "correct" or "wrong" in some universal sense that doesn't refer to a particular scoring computation.

One needs to define the probability spaces, and it's appropriate to see if those probability spaces are relevant to something. It's no use to discuss "probability" on the level of a word or surrounding intuitions.

 

Oh, I agree. If Adam Elga had been careful with his reasoning from the start and noticed that saying "centered possible worlds" doesn't allow you to treat non-elementary outcomes as elementary ones, and that you basically need to recreate the whole mathematical apparatus of probability theory from scratch to lawfully attempt what he did, the whole field of anthropics wouldn't have gotten that much traction and accrued this amount of absurdity and bizarreness. But now it seems a little bit too late for this.

The main problem is the fact that people are using probability spaces not relevant to the problems under discussion. Such valid but not sound reasoning is everywhere in anthropics. I'm not sure how to address this problem in the strict formalism of mathematics without the medium of words and surrounding intuition. Math can show us how a model is not coherent, but not how a model is not applicable to the current situation.

Indeed, the short version, the core idea of this whole anthropic sequence, is basically: "Stop using mathematical models not applicable to the problems you are talking about". But it seems that people really do not see why their models wouldn't be applicable and are more likely to believe that participating in an anthropic experiment gives you weird psychic powers. So I'm trying to carefully address these issues one at a time, using words, building high-level intuitions and exploring failure modes.

One way of formulating relevance is by treating probabilities as details of how a machine that is an agent makes decisions internally, and ask what probability assignments lead to what kinds of outcomes. Or we can set up a prediction market with some scoring rule.

I find attempts to ground probability theory in decision making quite backwards. As if we were trying to explain boolean algebra with computers. Granted, these are the applications of the corresponding fields, but we can still meaningfully talk about a mathematical model even when we do not have an application for it. As long as we do not insist that it a priori has to be relevant to a particular problem, of course. Decision theory is the next step, a superstructure on top of probability theory. Different decision makers may be interested in different probabilities, but we can still meaningfully talk about the probability of a fair coin landing Heads even without any utility functions attached to the outcomes. Add a utility function, or a scoring rule, and you get an extra variable entangled in the mix, and it becomes even harder to talk about the probability itself.

Saying that an assumption of some manner of sampling is "wrong" requires explaining what it means to be sampled vs. not sampled vs. sampled in a different way, and I don't see what it could possibly mean, outside of some external process that performs the sampling and keeps the score, for example for the purpose of assigning rewards for a prediction market

I agree that there is still some ambiguity (what is randomness?), but I think it should be generally understandable what I mean here. I'm talking about the causal process that determines the outcomes of the experiment. If this causal process uses random sampling - picks a random element from a set instead of always having a fixed element that it was always going to pick - then it makes sense to update on the corresponding evidence. In terms of markets and keeping score, we can talk about the correct per-experiment probability estimate based on the Law of Large Numbers, the way I did in the previous post with Python code samples repeating the experiment multiple times.

But Heads outcome in Incubator Sleeping Beauty is not. You are not randomly selected among two immaterial souls to be instantiated. You are a sample of one. And as there is no random choice happening, you are not twice as likely to exist when the coin is Tails and there is no new information you get when you are created.

I am twice as likely to exist when the coin is Tails! After all, if the coin is Tails, then there are two of me. I understand how this can lead to a thirder conclusion:

  1. Heads implies one chance for me to exist.
  2. Tails implies two chances for me to exist.
  3. I observe that I exist. This is predicted "twice as much" by the coin being Tails than Heads, so the probability of Tails is 2/3.

However, there is a mistake happening in this reasoning. The correct one is the following:

  1. Heads implies the number of "mes" will be 1.
  2. Tails implies the number of "mes" will be 2.
  3. I observe that I exist. Does this mean that there is 1 of me, or 2 of me? I don't know.

So we can't extract information from my existence, and we're back to normalcy: 1/2 chance of Heads or Tails.

[Edit] I no longer agree with the parts above that are crossed out. Consider two lotteries, one awards only one person, the other awards two people. Only one of these lotteries ends up happening, and you win. It's safe to update on "I won the lottery" and get a higher degree of confidence that the lottery that happened was the one that awards two people, not one. We don't say "well, I don't know if the number of people awarded was 1 or 2, so no evidence here".

The correct rebuttal to the thirder argument above is that the two "chances" for me to exist given Tails share the 0.5 probability that the coin is Tails, so each gets 0.25.
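Writing this accounting out explicitly: P(Heads and I exist) = 1/2, while P(Tails and I am the first person) = P(Tails and I am the second person) = 1/4, so

P(Tails | I exist) = (1/4 + 1/4) / (1/2 + 1/4 + 1/4) = 1/2.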

We can still say that "I am twice as likely to exist on Tails" if we let the words "I", and "exist" do a lot of hidden work: assuming everything goes right with the experiment, I am 100% guaranteed to exist either way.

When in one outcome one person exists and in the other outcome two people exist, it may mean that you are twice as likely to exist in the second outcome (if there is random sampling), and then the thirder reasoning you describe is correct. Or it may mean that there are just "two of you" in the second scenario, but there is always at least one of you, and so you are not more likely to exist in the second scenario.

Consider these two probability experiments:

Experiment 1: Brain fissure

You go to sleep in Room 1. A coin is tossed. On Heads nothing interesting happens and you wake up as usual. On Tails you are split into two people: your brain is removed from the body, the two lobes are separated in the middle and then the missing parts are grown back, therefore creating two versions of the same brain. Then these two brains are inserted into perfectly recreated copies of your original body. Then at random one body is assigned to Room 1 and the other is assigned to Room 2. Both bodies are returned to life in such a manner that it's impossible to notice that something happened.

You wake up. What's the probability that the coin was Heads? You see that you are in Room 1. What is the probability that the coin was Heads now?

Experiment 2: Embryos and incubator

There are two embryos. A coin is tossed. On Heads one embryo is randomly selected and put into an incubator that grows it into a person who will be put to sleep in Room 1. On Tails both embryos are incubated and at random one person is put into Room 1 and the other person is put into Room 2.

You wake up as a person who was incubated this way. What is the probability that the coin was Heads? You see that you are in Room 1. What is the probability that the coin was Heads now?

Do you see the important difference between them?

On the same day I posted my original comment I later realized what I said was wrong, and I'll soon edit it to reflect that.

Regarding your response: I think I have a guess on the important difference you're referring to. They both seem to be equivalent to an Incubator Sleeping Beauty, but see consideration 2 below.

1

I think another useful (at least to me) way of seeing/stating what is happening here is that all of the following sentences are true, in an ISB and your two experiments:

  • The probability (from an external POV) that the coin was Heads or Tails is 1/2.
  • Each individual "me" (however many there are) will experience the coin being Heads or Tails one half of the time.
  • If every "me" always predicts Heads, all of my mes will be correct 1/3 of the time and wrong 2/3 of the time. Each individual me will only be able to notice this if we get together after the experiments to compare notes.

I think this is equivalent to the difference in scoring methods you used in Anthropical Motte and Bailey in two versions of Sleeping Beauty.

2

With the two experiments in your response, the only significant difference I can see is that, in experiment 1, there are two identical copies of me, and in 2, there are two different people. I don't know if you're implying that this changes any probabilities, and I'm not sure that it does. What I can say is that experiment 2 is, AFAICT, equivalent to the Doomsday argument in its setup: two theories on the amount of people that will come to be, with 1:1 prior odds between them, and the question is "should you update on your existing". I have more reflection to make before I can give any firm answer here, but I'm inclined toward "no".

3

I have a feeling that, even though we agree with the final probabilities, we disagree on some of the internal details of how these experiments work. What would you say is the significant difference between the experiments, and does it change the numbers?

Something's not adding up. You said that anthropic paradox is not about first-person perspective or consciousness. But later:

But in ISB there are no iterations in which you do not exist. The number of outcomes in which you are created equals the total number of iterations.

The most immediate question is the definition of "you" in this logic. Why can't thirders define "you" as a potentially existing person? In which case the statement would be false. If you define it as an actually existing person, then which one? It seems to me you are using the word "you" to let the reader imagine themselves being a subject created in ISB, so it would point to the intuitively understood self. But that definition uses the first-person perspective as a fundamental concept. And then later:

But Heads outcome in Incubator Sleeping Beauty is not. You are not randomly selected among two immaterial souls to be instantiated. You are a sample of one. And as there is no random choice happening, you are not twice as likely to exist when the coin is Tails and there is no new information you get when you are created.

So who you are (who the first person is) is fundamental. As is its existence.

From past experience, I know this is not the easiest topic to discuss. So let's use a concrete example:

In BJE, suppose for heads, instead of creating 2 people and then randomly sampling one of them to have a blue jacket, a person is created in the blue jacket, then sometime later another person is created without a blue jacket, so there is no random sampling taking place. Is your analysis going to change? Or answer these questions: 1. Before looking down to check, what is the probability that you are wearing a blue jacket? 2. After seeing the blue jacket, what is the probability that the coin landed Heads?

I'm not saying that there are never any differences between first and third person perspectives in any possible setting. I'm saying that all these differences are explained by different possible outcomes and expected evidence - general principles of probability theory - and do not require any additional metaphysics. My next post will focus more specifically on this idea.

Why can't thirders define "you" as a potentially existing person?

They can in principle. SIA followers may claim that people are indeed randomly sampled from a finite set of immaterial souls to inhabit bodies. But then the burden of proof would be on them to show some evidence for such an extraordinary claim. As long as there is no reason to expect that your existence is a random sample, we shouldn't assume that it's the case.

In BJE, suppose for heads, instead of creating 2 people and then randomly sampling one of them to have a blue jacket, a person is created in the blue jacket, then sometime later another person is created without a blue jacket, so there is no random sampling taking place.

If you are the person that is guaranteed to have a blue jacket, then this is FBJE and indeed the analysis changes, as you can not lawfully update on the fact of having a blue jacket. However, if the causal process creating you didn't particularly care about specifically you having or not having a blue jacket, if it was just two people created, the first always with a blue jacket and the second always without, and, once again, you were not necessarily meant to be the first, then this counts as random sampling and the BJE analysis stands.

I do think SIA and SSA are making extraordinary claims and the burden of proof is on them. I have been arguing for several years that assuming the self is a random sample is wrong. That is not the problem I have with this argument. What I disagree with is that your argument depends on phrases and concepts such as "'your' existence" and "who 'you' are" without even attempting to define what/which one this 'you' refers to. My position is it refers to the self, based on the first-person perspective, which is fundamental,  a primitive concept. So it doesn't require any definition as long as we reason from the perspective of an experiment subject. But your argument holds the position that perspective is not fundamental. So treating 'you', which is the first-person 'I' to the reader, as primitive is not possible. Then how do you define this critical concept? And why is your definition better than SIA's or SSA's? You also have a burden of proof. Because without a clear definition, your argument's conclusion can jump back and forth in limbo. This is illustrated by the example of the modified BJE above. You said:

If it was just two people created, the first always with a blue jacket and the second always without, and, once again, you were not necessarily meant to be the first, then this counts as random sampling and the BJE analysis stands.

Isn't this treating you as a random sample when there is no actual sampling process, i.e. the position you are arguing against?  

And how is this experiment different from your FBJE? In other words, which process enables FBJE to guarantee that 'you' will be the person in the blue jacket regardless of the coin toss? How come there is no way that you could be the person whose jacket depends on the toss? Some fundamental stipulation about what 'you' would be is used here.

BTW the FBJE is not comparable to the Sleeping Beauty problem. In FBJE, by stipulation, you can outright say your blue jacket is not due to the coin landing Tails. But Beauty can't outright say this is Monday.

What I disagree with is that your argument depends on phrases and concepts such as "'your' existence" and "who 'you' are" without even attempting to define what/which one this 'you' refers to.

The thing is, what "you" refers to fully depends on the setting of the experiment, that is, whether there is random sampling going on or not. In FBJE you are a person in a blue jacket, regardless of the coin toss outcome. In BJE you are one of the created people and can either have a blue jacket or not, with probabilities depending on the coin toss. Part of the confusion of anthropics is thinking that "you" always points to the same thing in any experiment setting, and what I'm trying to show is that this is not the case. And this approach is clearly superior to both SSA and SIA, which claim that it always has to be one particular way, biting all the ridiculous bullets and presumptuous cases on the way.

My position is it refers to the self, based on the first-person perspective, which is fundamental,  a primitive concept.

Is it true, though? I agree it's easy to just accept as an axiom that "selfness" is some fundamental property and try to build your ontological theory on this assumption. But the more we learn about the ordered mechanism of the universe, the less probable subjective idealism becomes compared to materialism.

I believe that, on our current level of knowledge, it doesn't really seem plausible that the "first person perspective" is somehow fundamental. In the end it's made from quarks like everything else.

Isn't this treating you as a random sample when there is no actual sampling process, i.e. the position you are arguing against?  

No, this is treating you as a random sample when you are actually randomly sampled. I was coming from the assumption that people have a good intuitive understanding of what counts as a random sample and what doesn't. But I see why this may be confusing in its own right, and I'll make a note for myself to go deeper into the question in one of the future posts. For now I'll just point out that a regular coin toss counts as a random sample between two outcomes, even if it was made a year ago. The same logic applies here.

In other words, which process enables FBJE to guarantee that 'you' will be the person in the blue jacket regardless of the coin toss? How come there is no way that you could be the person whose jacket depends on the toss?

Well, I can come up with some plausible-sounding settings, but this doesn't really matter for the general point I'm making. Whatever the process that guarantees that you in particular will always have the blue jacket, the logic stays the same. And if there is no such process - then we have a different logic. So the question about anthropic probabilities reduces to the question about the causal structure of the experiment and basic probability theory.

BTW the FBJE is not comparable to the Sleeping Beauty problem. In FBJE, by stipulation, you can outright say your blue jacket is not due to the coin landing Tails. But Beauty can't outright say this is Monday.

I didn't say that she necessarily can. I said that if she can, then we have the same setting as with FBJE. Learning that you are awakened on Monday in SB leads to the same update (which is no update at all) as learning that you wear a Blue Jacket in FBJE, because both outcomes were meant to happen regardless of the coin toss outcome.