EDIT: This post has been superseded by this one.

The doomsday argument, in its simplest form, claims that since 2/3 of all humans will be in the final 2/3 of all humans, we should conclude it is more likely we are in the final two thirds of all humans who’ve ever lived, than in the first third. In our current state of quasi-exponential population growth, this would mean that we are likely very close to the final end of humanity. The argument gets somewhat more sophisticated than that, but that's it in a nutshell.
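To make the size of that update concrete, here is a minimal Bayesian sketch. The "doom soon"/"doom late" totals and the birth rank are illustrative assumptions, not figures from the argument itself:

```python
# Illustrative doomsday-style update (toy numbers, purely for illustration).
# Hypotheses: "doom soon" = 200e9 humans ever, "doom late" = 200e12 humans ever.
# Evidence: your birth rank is roughly 100e9 (about the number of humans so far).
prior = {"doom_soon": 0.5, "doom_late": 0.5}        # agnostic prior over the two hypotheses
total = {"doom_soon": 200e9, "doom_late": 200e12}   # total humans ever, under each hypothesis

# Under the self-sampling reasoning the argument uses, any given birth rank is
# equally likely among all humans who ever live, so P(rank | H) = 1 / total[H].
likelihood = {h: 1.0 / total[h] for h in prior}
unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())
print({h: unnorm[h] / z for h in prior})   # ~{'doom_soon': 0.999, 'doom_late': 0.001}
```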

There are many immediate rebuttals that spring to mind - there is something about the doomsday argument that brings out the certainty in most people that it must be wrong. But nearly all those supposed rebuttals are erroneous (see Nick Bostrom's book Anthropic Bias: Observation Selection Effects in Science and Philosophy). Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self-indication assumption (SIA).

The non-intuitive form of SIA simply says that since you exist, it is more likely that your universe contains many observers, rather than few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Even in that form, it may seem counter-intuitive; but I came up with a series of small steps leading from a generally accepted result straight to the SIA. This clinched the argument for me. The starting point is:

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%. But now consider the situation:

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?

There should be no difference from A; since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, your probability estimate should not change when you update on this announcement. The further modifications are then:

C - same as B, except the coin is flipped before you are created (the killing still happens later).

D - same as C, except that you are only made aware of the rules of the set-up after the people to be killed have already been killed.

E - same as C, except the people to be killed are killed before awakening.

F - same as C, except the people to be killed are simply not created in the first place.

I see no justification for changing your odds as you move from A to F; but 99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.
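As a sanity check on the early, uncontroversial steps of this chain, here is a minimal simulation sketch of scenario B, modelling "you" as the occupant of a uniformly random room and conditioning on your survival (whether that modelling carries over unchanged to D through F is exactly what the debate below is about):

```python
# Monte Carlo check of scenario B: 100 rooms, door 1 red, doors 2-100 blue.
# "You" are the occupant of a uniformly random room; we condition on your survival.
import random

blue_given_survival = 0
survived = 0
for _ in range(200_000):
    your_room = random.randrange(1, 101)    # rooms 1..100; room 1 has the red door
    heads = random.random() < 0.5           # heads: the red-doored occupant is killed
    you_die = (your_room == 1) if heads else (your_room != 1)
    if not you_die:
        survived += 1
        blue_given_survival += (your_room != 1)

print(blue_given_survival / survived)       # ≈ 0.99
```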

If you can't see any flaw in the chain either, then you can rest easy, knowing the human race is no more likely to vanish than objective factors indicate (ok, maybe you won't rest that easy, in fact...)

(Apologies if this post is preaching to the choir of flogged dead horses along well beaten tracks: I was unable to keep up with Less Wrong these past few months, so may be going over points already dealt with!)

 

EDIT: Corrected the language in the presentation of the SIA, after SilasBarta's comments.

EDIT2: There are some objections to the transfer from D to C. Thus I suggest sliding in C' and C'' between them; C' is the same as D, except those due to die have the situation explained to them before being killed; C'' is the same as C' except those due to die are told "you will be killed" before having the situation explained to them (and then being killed).

Avoiding doomsday: a "proof" of the self-indication assumption

I upvoted this and I think you proved SIA in a very clever way, but I still don't quite understand why SIA counters the Doomsday argument.

Imagine two universes identical to our own up to the present day. One universe is destined to end in 2010 after a hundred billion humans have existed, the other in 3010 after a hundred trillion humans have existed. I agree that knowing nothing, we would expect a random observer to have a thousand times greater chance of living in the long-lasting universe.

But given that we know this particular random observer is alive in... (read more)

9steven0461
You just did -- early doom and late doom ended up equally probable, where an uncountered Doomsday argument would have said early doom is much more probable (because your living in 2009 is much more probable conditional on early doom than on late doom).
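A minimal sketch of that cancellation with Yvain's numbers (the count of observers who experience 2009 is an assumed placeholder; it cancels out):

```python
# Sketch of steven0461's point with Yvain's numbers.
totals = {"doom_2010": 1e11, "doom_3010": 1e14}   # total observers ever, per universe
n_2009 = 7e9                                      # observers experiencing 2009 (same in both; cancels)

# SIA: prior weight proportional to the total number of observers in each universe.
sia_prior = {u: totals[u] for u in totals}
# Doomsday-style update: P(I find myself in 2009 | universe u) = n_2009 / totals[u].
posterior_unnorm = {u: sia_prior[u] * (n_2009 / totals[u]) for u in totals}
z = sum(posterior_unnorm.values())
print({u: posterior_unnorm[u] / z for u in totals})   # {'doom_2010': 0.5, 'doom_3010': 0.5}
```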
3Scott Alexander
Whoa. Okay, I'm clearly confused. I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction, and worrying about how the SIA doesn't look capable of tilting evidence. I'm not sure why that's the wrong way to look at it, but what you said is definitely right, so I'm making a mistake somewhere. Time to fret over this until it makes sense. PS: Why are people voting this up?!?

I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction

Correct. On SIA, you start out certain that humanity will continue forever due to SIA, and then update on the extremely startling fact that you're in 2009, leaving you with the mere surface facts of the matter. If you start out with your reference class only in 2009 - a rather nontimeless state of affairs - then you end up in the same place as after the update.

2CarlShulman
If civilization lasts forever, there can be many simulations of 2009, so updating on your sense-data can't overcome the extreme initial SIA update.
0Eliezer Yudkowsky
Simulation argument is a separate issue from the Doomsday Argument.
4SilasBarta
What? They have no implications for each other? The possibility of being in a simulation doesn't affect my estimates for the onset of Doomsday? Why is that? Because they have different names?
0Eliezer Yudkowsky
Simulation argument goes through even if Doomsday fails. If almost everyone who experiences 2009 does so inside a simulation, and you can't tell if you're in a simulation or not - assuming that statement is even meaningful - then you're very likely "in" such a simulation (if such a statement is even meaningful). Doomsday is a lot more controversial; it says that even if most people like you are genuinely in 2009, you should assume from the fact that you are one of those people, rather than someone else, that the fraction of the population that experiences being in 2009 is more likely to be a large fraction of the total (because we never go on to create trillions of descendants) than a small fraction of the total (if we do).
1Unknowns
The probability of being in a simulation increases the probability of doom, since people in a simulation have a chance of being turned off, which people in a real world presumably do not have.
0CarlShulman
The regular Simulation Argument concludes with a disjunction (you have logical uncertainty about whether civilizations very strongly convergently fail to produce lots of simulations). SIA prevents us from accepting two of the disjuncts, since the population of observers like us is so much greater if lots of sims are made.
1DanielLC
If you start out certain that humanity will continue forever, won't you conclude that all evidence that you're in 2009 is flawed? Humanity must have been going on for longer than that.
0RobinHanson
Yes this is exactly right.
-1Mitchell_Porter
"On SIA, you start out certain that humanity will continue forever due to SIA" SIA doesn't give you that. SIA just says that people from a universe with a population of n don't mysteriously count as only 1/nth of a person. In itself it tells you nothing about the average population per universe.
1KatjaGrace
If you are in a universe SIA tells you it is most likely the most populated one.
1Mitchell_Porter
If there are a million universes with a population of 1000 each, and one universe with a population of 1000000, you ought to find yourself in one of the universes with a population of 1000.
1KatjaGrace
We agree there (I just meant more likely to be in the 1000000 one than any given 1000 one). If there are any that have infinitely many people (eg go on forever), you are almost certainly in one of those.
0Mitchell_Porter
That still depends on an assumption about the demographics of universes. If there are finitely many universes that are infinitely populated, but infinitely many that are finitely populated, the latter still have a chance to outweigh the former. I concede that if you can have an infinitely populated universe at all, you ought to have infinitely many variations on it, and so infinity ought to win. Actually I think there is some confusion or ambiguity about the meaning of SIA here. In his article Stuart speaks of a non-intuitive and an intuitive formulation of SIA. The intuitive one is that you should consider yourself a random sample. The non-intuitive one is that you should prefer many-observer hypotheses. Stuart's "intuitive" form of SIA, I am used to thinking of as SSA, the self-sampling assumption. I normally assume SSA but our radical ignorance about the actual population of the universe/multiverse makes it problematic to apply. The "non-intuitive SIA" seems to be a principle for choosing among theories about multiverse demographics but I'm not convinced of its validity.
2KatjaGrace
Intuitive SIA = consider yourself a random sample out of all possible people
SSA = consider yourself a random sample from people in each given universe separately
e.g. if there are ten people and half might be you in one universe, and one person who might be you in another:
SIA: a greater proportion of those who might be you are in the first
SSA: a greater proportion of the people in the second might be you
1Vladimir_Nesov
A great principle to live by (aka "taking a stand against cached thought"). We should probably have a post on that.
0wedrifid
It seems to be taking time to cache the thought.
3wedrifid
So it does. I was sufficiently caught up in Yvain's elegant argument that I didn't even notice that it supported the opposite conclusion to that of the introduction. Fortunately that was the only part that stuck in my memory so I still upvoted!
0Stuart_Armstrong
I think I've got a proof somewhere that SIA (combined with the Self Sampling Assumption, ie the general assumption behind the doomsday argument) has no consequences on future events at all. (Apart from future events that are really about the past; ie "will tomorrow's astronomers discover we live in a large universe rather than a small one").

My paper, Past Longevity as Evidence for the Future, in the January 2009 issue of Philosophy of Science, contains a new refutation to the Doomsday Argument, without resort to SIA.

The paper argues that the Carter-Leslie Doomsday Argument conflates future longevity and total longevity. For example, the Doomsday Argument’s Bayesian formalism is stated in terms of total longevity, but plugs in prior probabilities for future longevity. My argument has some similarities to that in Dieks 2007, but does not rely on the Self-Sampling Assumption.

I'm relatively green on the Doomsday debate, but:

The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Isn't this inserting a hidden assumption about what kind of observers we're talking about? What definition of "observer" do you get to use, and why? In order to "observe", all... (read more)

1KatjaGrace
SIA does not require a definition of observer. You need only compare the number of experiences exactly like yours (otherwise you can compare those like yours in some aspects, then update on the other info you have, which would get you to the same place). SSA requires a definition of observers, because it involves asking how many of those are having an experience like yours.
0Stuart_Armstrong
The debate about what constitutes an "observer class" is one of the most subtle in the whole area (see Nick Bostrom's book). Technically, SIA and similar will only work as "given this definition of observers, SIA implies...", but some definitions are more sensible than others. It's obvious you can't separate two observers with the same subjective experiences, but how much of a difference does there need to be before the observers are in different classes? I tend to work with something like "observers who think they are human", or something like that, tweaking the issue of longevity (does someone who lives 60 years count as the same, or twice as much an observer, as the person who lives 30 years?) as needed in the question.
0SilasBarta
Okay, but it's a pretty significant change when you go to "observers who think they are human". Why should you expect a universe with many of that kind of observer? At the very least, you would be conditioning on more than just your own existence, but rather, additional observations about your "suit".
0Stuart_Armstrong
As I said, it's a complicated point. For most of the toy models, "observers who think they are human" is enough, and avoids having to go into these issues.
0SilasBarta
Not unless you can explain why "universes with many observers who think they are human" are more common than "universes with few observers who think they are human". Even when you condition on your own existence, you have no reason to believe that most Everett branches have humans.
1Stuart_Armstrong
Er no - they are not more common, at all. The SIA says that you are more likely to be existing in a universe with many humans, not that these universes are more common.
0SilasBarta
Your TL post said: "The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few" - and you just replaced "observers" with "observers who think they are human", so it seems like the SIA does in fact say that universes with many observers who think they are human are more likely than those with few.
0Stuart_Armstrong
Sorry, sloppy language - I meant "you, being an observer, are more likely to exist in a universe with many observers".
1SilasBarta
So then the full anthrocentric SIA would be, "you, being an observer that believes you are human, are more likely to exist in a universe with many observers who believe they are human". Is that correct? If so, does your proof prove this stronger claim?
0Technologos
Wouldn't the principle be independent of the form of the observer? If we said "universes with many human observers are more likely than universes with few," the logic would apply just as well as with matter-based observers or observers defined as mutual-information-formers.
0SilasBarta
But why is the assumption that universes with human observers are more likely (than those with few) plausible or justifiable? That's a fundamentally different claim!
0Technologos
I agree that it's a different claim, and not the one I was trying to make. I was just noting that however one defines "observer," the SIA would suggest that such observers should be many. Thus, I don't think that the SIA is inserting a hidden assumption about the type of observers we are discussing.
1SilasBarta
Right, but my point was that your definition of observer has a big impact on your SIA's plausibility. Yes, universes with observers in the general sense are more likely, but why universes with more human observers?
0Technologos
Why would being human change the calculus of the SIA? According to its logic, if a universe only has more human observers, there are still more opportunities for me to exist, no?
0SilasBarta
My point was that the SIA(human) is less plausible, meaning you shouldn't base conclusions on it, not that the resulting calculus (conditional on its truth) would be different.
0Technologos
That's what I meant, though: you don't calculate the probability of SIA(human) any differently than you would for any other category of observer.
0[anonymous]
Surely the extremes "update on all available information" and "never update on anything" are each more plausible than any mixture like "update on the observation that I exist, but not on the observation that I'm human".

It seems understressed that the doomsday argument is an argument about max entropy priors, and that any evidence can change this significantly.

Yes, you should expect with p = 2/3 to be in the last 2/3 of people alive. Yes, if you wake up and learn that there have only been tens of billions of people alive but expect most people to live in universes that have more people, you can update again and feel a bit relieved.

However, once you know how to think straight about the subject, you need to be able to update on the rest of the evidence.

If we've never see... (read more)

What bugs me about the doomsday argument is this: it's a stopped clock. In other words, it always gives the same answer regardless of who applies it.

Consider a bacterial colony that starts with a single individual, is going to live for N doublings, and then will die out completely. Each generation, applying the doomsday argument, will conclude that it has a better than 50% chance of being the final generation, because, at any given time, slightly more than half of all colony bacteria that have ever existed currently exist. The doomsday argument tells the bacteria absolutely nothing about the value of N.
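The "slightly more than half" figure can be checked directly; here is a quick sketch, assuming the colony simply doubles each generation from a single cell:

```python
# Check of the colony arithmetic: a colony doubling each generation from one cell.
N = 30                                    # number of doublings before extinction (arbitrary)
gen_sizes = [2**t for t in range(N + 1)]  # generation t has 2**t individuals
total_ever = sum(gen_sizes)               # = 2**(N + 1) - 1

# At any generation t, the fraction of all bacteria ever (so far) that currently exist:
for t in (1, 5, N):
    alive_fraction = 2**t / (2**(t + 1) - 1)
    print(t, alive_fraction)              # always more than 0.5, approaching it from above

# The same ratio at the final generation is the fraction of all individuals ever
# who really are in the last generation - just over half of them.
print(gen_sizes[-1] / total_ever)
```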

8Eliezer Yudkowsky
But they'll be well-calibrated in their expectation - most generations will be wrong, but most individuals will be right.
3cousin_it
Woah, Eliezer defends the doomsday argument on frequentist grounds.
1JamesAndrix
So we might well be rejecting something based on long-standing experience, but be wrong because most of the tests will happen in the future? Makes me want to take up free energy research.
-1[anonymous]
Only because of the assumption that the colony is wiped out suddenly. If, for example, the decline mirrors the rise, about two-thirds will be wrong. ETA: I mean that 2/3 will apply the argument and be wrong. The other 1/3 won't apply the argument because they won't have exponential growth. (Of course they might think some other wrong thing.)
1Stuart_Armstrong
They'll be wrong about the generation part only. The "exponential growth" is needed to move from "we are in the last 2/3 of humanity" to "we are in the last few generations". Deny exponential growth (and SIA), then the first assumption is still correct, but the second is wrong.
0[anonymous]
But that's the important part. It's called the "Doomsday Argument" for a reason: it concludes that doomsday is imminent. Of course the last 2/3 is still going to be 2/3 of the total. So is the first 2/3. Imminent doomsday is the only non-trivial conclusion, and it relies on the assumption that exponential growth will continue right up to a doomsday.
4gjm
The fact that every generation gets the same answer doesn't (of itself) imply that it tells the bacteria nothing. Suppose you have 65536 people and flip a coin 16 [EDITED: for some reason I wrote 65536 there originally] times to decide which of them will get a prize. They can all, equally, do the arithmetic to work out that they have only a 1/65536 chance of winning. Even the one of them who actually wins. The fact that one of them will in fact win despite thinking herself very unlikely to win is not a problem with this. Similarly, all our bacteria will think themselves likely to be living near the end of their colony's lifetime. And most of them will be right. What's the problem?
2Cyan
I think you mean 16 times.
0gjm
Er, yes. I did change my mind a couple of times about what (2^n,n) pair to use, but I wasn't ever planning to have 2^65536 people so I'm not quite sure how my brain broke. Thanks for the correction.

The reason all these problems are so tricky is that they assume there's a "you" (or a "that guy") who has a view of both possible outcomes. But since there aren't the same number of people for both outcomes, it isn't possible to match up each person on one side with one on the other to make such a "you".
Compensating for this should be easy enough, and will make the people-counting parts of the problems explicit, rather than mysterious.

I suspect this is also why the doomsday argument fails. Since it's not possible to define a... (read more)

At case D, your probability changes from 99% to 50%, because only people who survive are ever in the situation of knowing about the situation; in other words there is a 50% chance that only red doored people know, and a 50% chance that only blue doored people know.

After that, the probability remains at 50% all the way through.

The fact that no one has mentioned this in 44 comments is a sign of incredibly strong wishful thinking, simply "wanting" the Doomsday argument to be incorrect.

0Stuart_Armstrong
Then put a situation C' between C and D, in which people who are to be killed will be informed about the situation just before being killed (the survivors are still only told after the fact). Then how does telling these people something just before putting them to death change anything for the survivors?
1Unknowns
In C', the probability of being behind a blue door remains at 99% (as you wished it to), both for whoever is killed, and for the survivor(s). But the reason for this is that everyone finds out all the facts, and the survivor(s) know that even if the coin flip had gone the other way, they would have known the facts, only before being killed, while those who are killed know that they would have known the facts afterward, if the coin flip had gone the other way. Telling the people something just before death changes something for the survivors, because the survivors are told that the other people are told something. This additional knowledge changes the subjective estimate of the survivors (in comparison to what it would be if they were told that the non-survivors are not told anything.) In case D, on the other hand, all the survivors know that only survivors ever know the situation, and so they assign a 50% probability to being behind a blue door.
0prase
I don't see it. In D, you are informed that 100 people were created, separated in two groups, and each of them had then 50% chance of survival. You survived. So calculate the probability and P(red|survival)=P(survival and red)/P(survival)=0.005/0.5=1%. Not 50%.
0Unknowns
This calculation is incorrect because "you" are by definition someone who has survived (in case D, where the non-survivors never know about it); had the coin flip gone the other way, "you" would have been chosen from the other survivors. So you can't update on survival in that way. You do update on survival, but like this: you know there were two groups of people, each of which had a 50% chance of surviving. You survived. So there is a 50% chance you are in one group, and a 50% chance you are in the other.
0prase
had the coin flip gone the other way, "you" would have been chosen from the other survivors

Thanks for the explanation. The disagreement apparently stems from different ideas about over what set of possibilities one takes the uniform distribution. I prefer this reasoning: there is a set of people existing at least at some moment in the history of the universe, and the creator assigns "your" consciousness to one of these people with uniform distribution. But this would allow me to update on survival exactly the way I did. However, the smooth transition would break between E and F. What you describe, as I understand it, is that the assignment is done with uniform distribution not over people ever existing, but over people existing at the moment when they are told the rules (so people who are never told the rules don't count). This seems to me pretty arbitrary and hard to generalise (and also dangerously close to survivorship bias). In the case of SIA, the uniform distribution is extended to cover the set of hypothetically existing people, too. Do I understand it correctly?
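The two sampling rules contrasted here really do produce the two contested answers for case D. A quick simulation sketch (when the rules are explained to whom - the consideration that motivates choosing one rule over the other - is deliberately not modelled):

```python
# The crux of the case-D disagreement as two sampling procedures (a sketch).
import random

def run_trial():
    # Room 1 has the red door; rooms 2-100 are blue.  Heads: the red-doored person
    # is killed; tails: everyone behind a blue door is killed.
    colours = ["red"] + ["blue"] * 99
    heads = random.random() < 0.5
    survivors = [c for c in colours if (c == "blue") == heads]
    me = random.choice(colours)          # sampling rule 1: "I" am a uniformly chosen created person
    i_survive = (me == "blue") == heads
    return survivors, me, i_survive

TRIALS = 100_000
m1_blue = m1_count = m2_blue = 0
for _ in range(TRIALS):
    survivors, me, i_survive = run_trial()
    if i_survive:                        # rule 1: then condition on "I survived"
        m1_count += 1
        m1_blue += (me == "blue")
    m2_blue += (random.choice(survivors) == "blue")   # rule 2: "I" am a random survivor

print("random created person, given survival:", m1_blue / m1_count)  # ≈ 0.99
print("random post-killing survivor:", m2_blue / TRIALS)             # ≈ 0.50
```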
3Unknowns
Right, SIA assumes that you are a random observer from the set of all possible observers, and so it follows that worlds with more real people are more likely to contain you. This is clearly unreasonable, because "you" could not have found yourself to be one of the non-real people. "You" is just a name for whoever finds himself to be real. This is why you should consider yourself a random selection from the real people. In the particular case under consideration, you should consider yourself a random selection from the people who are told the rules. This is because only those people can estimate the probability; in as much as you estimate the probability, you could not possibly have found yourself to be one of those who are not told the rules.
0prase
So, what if the setting is the same as in B or C, except that "you" know that only "you" are told the rules?
0Unknowns
That's a complicated question, because in this case your estimate will depend on your estimate of the reasons why you were selected as the one to know the rules. If you are 100% certain that you were randomly selected out of all the persons, and it could have been a person killed who was told the rules (before he was killed), then your probability of being behind a blue door will be 99%. If you are 100% certain that you were deliberately chosen as a survivor, and if someone else had survived and you had not, the other would have been told the rules and not you, then your probability will be 50%. To the degree that you are uncertain about how the choice was made, your probability will be somewhere between these two values.
-1KatjaGrace
You could have been one of those who didn't learn the rules, you just wouldn't have found out about it. Why doesn't the fact that this didn't happen tell you anything?
0Stuart_Armstrong
What is your feeling in the case where the victims are first told they will be killed, then the situation is explained to them and finally they are killed? Similarly, the survivors are first told they will survive, and then the situation is explained to them.
2Unknowns
This is basically the same as C'. The probability of being behind a blue door remains at 99%, both for those who are killed, and for those who survive. There cannot be a continuous series between the two extremes, since in order to get from one to the other, you have to make some people go from existing in the first case, to not existing in the last case. This implies that they go from knowing something in the first case, to not knowing anything in the last case. If the other people (who always exist) know this fact, then this can affect their subjective probability. If they don't know, then we're talking about an entirely different situation.
0Stuart_Armstrong
PS: Thanks for your assiduous attempts to explain your position, it's very useful.
0Stuart_Armstrong
A rather curious claim, I have to say. There is a group of people, and you are clearly not in their group - in fact the first thing you know, and the first thing they know, is that you are not in the same group. Yet your own subjective probability of being blue-doored depends on what they were told just before being killed. So if an absent minded executioner wanders in and says "maybe I told them, maybe I didn't -I forget" that "I forget" contains the difference between a 99% and a 50% chance of you being blue-doored. To push it still further, if there were to be two experiments, side by side - world C'' and world X'' - with world X'' inverting the proportion of red and blue doors, then this type of reasoning would put you in a curious situation. If everyone were first told: "you are a survivor/victim of world C''/X'' with 99% blue/red doors", and then the situation were explained to them, the above reasoning would imply that you had a 50% chance of being blue-doored whatever world you were in! Unless you can explain why "being in world C''/X'' " is a permissible piece of info to put you in a different class, while "you are a survivor/victim" is not, then I can walk the above paradox back down to A (and its inverse, Z), and get 50% odds in situations where they are clearly not justified.
0Unknowns
I don't understand your duplicate world idea well enough to respond to it yet. Do you mean they are told which world they are in, or just that they are told that there are the two worlds, and whether they survive, but not which world they are in? The basic class idea I am supporting is that in order to count myself as in the same class with someone else, we both have to have access to basically the same probability-affecting information. So I cannot be in the same class with someone who does not exist but might have existed, because he has no access to any information. Similarly, if I am told the situation but he is not, I am not in the same class as him, because I can estimate the probability and he cannot. But the order in which the information is presented should not affect the probability, as long as all of it is presented to everyone. The difference between being a survivor and being a victim (if all are told) clearly does not change your class, because it is not part of the probability-affecting information. As you argued yourself, the probability remains at 99% when you hear this.
0Stuart_Armstrong
Let's simplify this. Take C, and create a bunch of other observers in another set of rooms. These observers will be killed; it is explained to them that they will be killed, and then the rules of the whole setup, and then they are killed. Do you feel these extra observers will change anything from the probability perspective?
0Unknowns
No. But this is not because these observers are told they will be killed, but because their death does not depend on a coin flip, but is part of the rules. We could suppose that they are in rooms with green doors, and after the situation has been explained to them, they know they are in rooms with green doors. But the other observers, whether they are to be killed or not, know that this depends on the coin flip, and they do not know the color of their door, except that it is not green.
1Stuart_Armstrong
Actually, strike that - we haven't reached the limit of useful argument! Consider the following scenario: the number of extra observers (that will get killed anyway) is a trillion. Only the extra observers, and the survivors, will be told the rules of the game. Under your rules, this would mean that the probability of the coin flip is exactly 50-50. Then, you are told you are not an extra observer, and won't be killed. There is a 1/(trillion + 1) chance that you would be told this if the coin had come up heads, and a 99/(trillion + 99) chance if the coin had come up tails. So your posterior odds are now essentially 99% to 1% again. These trillion extra observers have brought you back close to SIA odds again.
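Checking that arithmetic (a sketch; T stands in for the trillion extra observers):

```python
# Posterior odds in the trillion-extra-observers variant (which takes
# heads -> 1 non-extra survivor, tails -> 99 non-extra survivors).
T = 10**12
p_given_heads = 1 / (T + 1)       # you are the 1 non-extra survivor out of T + 1 people told
p_given_tails = 99 / (T + 99)     # you are one of 99 non-extra survivors out of T + 99 told

posterior_tails = 0.5 * p_given_tails / (0.5 * p_given_heads + 0.5 * p_given_tails)
print(posterior_tails)            # ≈ 0.99, i.e. essentially the SIA odds again
```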
1Unknowns
When I said that the extra observers don't change anything, I meant under the assumption that everyone is told the rules at some point, whether he survives or not. If you assume that some people are not told the rules, I agree that extra observers who are told the rules change the probability, basically for the reason that you are giving. What I have maintained consistently here is that if you are told the rules, you should consider yourself a random selection from those who are told the rules, and not from anyone else, and you should calculate the probability on this basis. This gives consistent results, and does not have the consequence you gave in the earlier comment (which assumed that I meant to say that extra observers could not change anything whether or not people to be killed were told the rules.)
0Stuart_Armstrong
I get that - I'm just pointing out that your position is not "indifferent to irrelevant information". In other words, if there are a hundred/million/trillion other observers created, who are ultimately not involved in the whole coloured room dilemma, their existence changes your odds of being red or green-doored, even after you have been told you are not one of them. (SIA is indifferent to irrelevant extra observers).
1Unknowns
Yes, SIA is indifferent to extra observers, precisely because it assumes I was really lucky to exist and might have found myself not to exist, i.e. it assumes I am a random selection from all possible observers, not just real ones. Unfortunately for SIA, no one can ever find himself not to exist.
0Stuart_Armstrong
I think we've reached the limit of productive argument; the SIA, and the negation of the SIA, are both logically coherent (they are essentially just different priors on your subjective experience of being alive). So I won't be able to convince you, if I haven't so far. And I haven't been convinced. But do consider the oddity of your position - you claim that if you were told you would survive, told the rules of the set-up, and then the executioner said to you "you know those people who were killed - who never shared the current subjective experience that you have now, and who are dead - well, before they died, I told them/didn't tell them..." then your probability estimate of your current state would change depending on what he told these dead people. But you similarly claim that if the executioner said the same thing about the extra observers, then your probability estimate would not change, whatever he said to them.
0casebash
The answer in C' depends on your reference class. If your reference class is everyone, then it remains 99%. If your reference class is survivors, then it becomes 50%.
0Stuart_Armstrong
Which shows how odd and arbitrary reference classes are.
0entirelyuseless
I don't think it is arbitrary. I responded to that argument in the comment chain here and still agree with that. (I am the same person as user Unknowns but changed my username some time ago.)

weighted according to the probability of that observer existing

Existence is relative: there is a fact of the matter (or rather: procedure to find out) about which things exist where relative to me, for example in the same room, or in the same world, but this concept breaks down when you ask about "absolute" existence. Absolute existence is inconsistent, as everything goes. Relative existence of yourself is a trivial question with a trivial answer.

(I just wanted to state it simply, even though this argument is a part of a huge standard narrativ... (read more)

1Eliezer Yudkowsky
Wha?
1Vladimir_Nesov
In the sense that "every mathematical structure exists", the concept of "existence" is trivial, as from it follows every "structure", which is after a fashion a definition of inconsistency (and so seems to be fair game for informal use of the term). Of course, "existence" often refers to much more meaningful "existence in the same world", with reasonably constrained senses of "world".
0cousin_it
How do you know that?
0loqi
An ensemble-type definition of existence seems more like an attempt to generalize the term than it does an empirical statement of fact. What would it even mean for a mathematical structure to not exist?

The Wikipedia article on the SIA points out that it is not an assumption, but a theorem or corollary. You have simply shown this fact again. Bostrom probably first named it an assumption, but it is neither an axiom nor an assumption. You can derive it from these assumptions:

  1. I am a random sample
  2. I may never have been born
  3. The pdf for the number of humans is independent of the pdf for my birth order number

I don't see how the SIA refutes the complete DA (Doomsday Argument).

The SIA shows that a universe with more observers in your reference class is more likely. This is the set used when "considering myself as a random observer drawn from the space of all possible observers" - it's not really all possible observers.

How small is this set? Well, if we rely on just the argument given here for SIA, it's very small indeed. Suppose the experimenter stipulates an additional rule: he flips a second coin; if it comes up heads, he creates 10^10 extra copies... (read more)

0[anonymous]
Maybe I'm just really tired, but I seem to have grown a blind spot hiding a logical step that must be present in the argument given for SIA. It doesn't seem to be arguing for the SIA at all, just for the right way of detecting a blue door independent of the number of observers. Consider this variation: there are 150 rooms, 149 of them blue and 1 red. In the blue rooms, 49 cats and 99 human clones are created; in the red room, a human clone is created. The experiment then proceeds in the usual way (flipping the coin and killing inhabitants of rooms of a certain color). The humans will still give a .99 probability of being behind a blue door, and 99 out of 100 equally-probable potential humans will be right. Therefore you are more likely to inhabit a universe shared by an equal number of humans and cats, than a universe containing only humans (the Feline Indication Argument).
0[anonymous]
If you are told that you are in that situation, then you would assign a probability of 50/51 of being behind a blue door, and a 1/51 probability of being behind a red door, because you would not assign any probability to the possibility of being one of the cats. So you will not give a probability of .99 in this case.
0[anonymous]
Fixed, thanks. (I didn't notice at first that I quoted the .99 number.)

As we are discussing SIA, I'd like to remind about counterfactual zombie thought experiment:

Omega comes to you and offers $1, explaining that it decided to do so if and only if it predicts that you won't take the money. What do you do? It looks neutral, since expected gain in both cases is zero. But the decision to take the $1 sounds rather bizarre: if you take the $1, then you don't exist!

Agents self-consistent under reflection are counterfactual zombies, indifferent to whether they are real or not.

This shows that inference "I think therefore I e... (read more)

1Jack
No. It just means you are a simulation. These are very different things. "I think therefore I am" is still deductively valid (and really, do you want to give the predicate calculus that knife in the back?). You might not be what you thought you were but all "I" refers to is the originator of the utterance.
1Vladimir_Nesov
Remember: there was no simulation, only prediction. Distinction with a difference.
0Jack
Then if you take the money Omega was just wrong. Full stop. And in this case if you take the dollar expected gain is a dollar. Or else you need to clarify.
1Vladimir_Nesov
Assuming that you won't actually take the money, what would a plan to take the money mean? It's a kind of retroactive impossibility, where among two options one is impossible not because you can't push that button, but because you won't be there to push it. Usual impossibility is just additional info for the could-should picture of the game, to be updated on, so that you exclude the option from consideration. This kind of impossibility is conceptually trickier.
2Jack
I don't see how my non-existence gets implied. Why isn't a plan to take the money either a plan that will fail to work (your arm won't respond to your brain's commands, you'll die, you'll tunnel to the Moon etc.) or a plan that would imply Omega was wrong and shouldn't have made the offer? My existence is already posited once you've said that Omega has offered me this deal. What happens after that bears on whether or not Omega is correct and what properties I have (i.e. what I am).

There exists (x) & there exists (y) such that Ox & Iy & ($xy <--> N$yx)

where O = is Omega, I = is me, $ = offer one dollar to, N$ = won't take dollar from. I don't see how one can take that, add new information, and conclude ~ there exists (y).
0Stuart_Armstrong
I don't get it, I have to admit. All the experiment seems to be saying is that "if I take the $1, I exist only as a short term simulation in Omega's mind". It says you don't exist as a long-term separate individual, but doesn't say you don't exist in this very moment...
0Vladimir_Nesov
Simulation is a very specific form of prediction (but the most intuitive, when it comes to prediction of difficult decisions). Prediction doesn't imply simulation. At this very moment I predict that you will choose to NOT cut your own hand off with an axe when asked to, but I'm not simulating you.
0Stuart_Armstrong
In that case (I'll return to the whole simulation/prediction issue some other time), I don't follow the logic at all. If Omega offers you that deal, and you take the money, all that you have shown is that Omega is in error. But maybe it's a consequence of advanced decision theory?
-1Vladimir_Nesov
That's the central issue of this paradox: the part of the scenario before you take the money can actually exist, but if you choose to take the money, it follows that it doesn't. The paradox doesn't take for granted that the described scenario does take place, it describes what happens (could happen) from your perspective, in a way in which you'd plan your own actions, not from the external perspective. Think of your thought process in the case where in the end you decide not to take the money: how you consider taking the money, and what that action would mean (that is, what's its effect in the generalized sense of TDT, like the effect of you cooperating in PD on the other player or the effect of one-boxing on contents of the boxes). I suggest that the planned action of taking the money means that you don't exist in that scenario.
5Stuart_Armstrong
I see it, somewhat. But this sounds a lot like "I'm Omega, I am trustworthy and accurate, and I will only speak to you if I've predicted you will not imagine a pink rhinoceros as soon as you hear this sentence". The correct conclusion seems to be that Omega is not what he says he is, rather than "I don't exist".
2Johnicholas
When the problem contains a self-contradiction like this, there is not actually one "obvious" proposition which must be false. One of them must be false, certainly, but it is not possible to derive which one from the problem statement. Compare this problem to another, possibly more symmetrical, problem with self-contradictory premises: http://en.wikipedia.org/wiki/Irresistible_force_paradox
1Eliezer Yudkowsky
The decision diagonal in TDT is a simple computation (at least, it looks simple assuming large complicated black-boxes, like a causal model of reality) and there's no particular reason that equation can only execute in sentient contexts. Faced with Omega in this case, I take the $1 - there is no reason for me not to do so - and conclude that Omega incorrectly executed the equation in the context outside my own mind. Even if we suppose that "cogito ergo sum" presents an extra bit of evidence to me, whereby I truly know that I am the "real" me and not just the simple equation in a nonsentient context, it is still easy enough for Omega to simulate that equation plus the extra (false) bit of info, thereby recorrelating it with me. If Omega really follows the stated algorithm for Omega, then the decision equation never executes in a sentient context. If it executes in a sentient context, then I know Omega wasn't following the stated algorithm. Just like if Omega says "I will offer you this $1 only if 1 = 2" and then offers you the $1.

The doomsday argument makes the assumptions that:

  1. We are randomly selected from all the observers who will ever exist.
  2. The observers increase exponentially, such that roughly 2/3 of all those who have ever lived are alive at any particular generation
  3. They are wiped out by a catastrophic event, rather than slowly dwindling or declining in some other way

(Now those assumptions are a bit dubious - things change if for instance, we develop life extension tech or otherwise increase rate of growth, and a higher than 2/3 proportion will live in future generations (eg if the next generation is... (read more)

1SilasBarta
Actually, it requires that we be selected from a small subset of these observers, such as "humans" or "conscious entities" or, perhaps most appropriate, "beings capable of reflecting on this problem". Well, for the numbers to work out, there would have to be a sharp drop-off before the slow-dwindling, which is roughly as worrisome as a "pure doomsday".
1Stuart_Armstrong
Then what about introducing a C' between C and D: You are told the initial rules. Then, later you are told about the killing, and then, even later, that the killing had already happened and that you were spared. What would you say the odds were there?
2brianm
Thinking this through a bit more, you're right - this really makes no difference. (And in fact, re-reading my post, my reasoning is rather confused - I think I ended up agreeing with the conclusion while also (incorrectly) disagreeing with the argument.)

99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.

Might it make a difference that in scenario F, there is an actual process (namely, the coin toss) which could have given rise to the alternative outcome? Note the lack of any analogous mechanism for "bringing into existence" one out of all the possible worlds. One might maintain that this metaphysical disanalogy also makes an epistemic difference. (Compare cousin_it's... (read more)

0Stuart_Armstrong
This is a standard objection, and one that used to convince me. But I really can't see that F is different from E, and so on down the line. Where exactly does this issue come up? Is it in the change from E to F, or earlier?
0RichardChappell
No, I was suggesting that the difference is between F and SIA.
1Stuart_Armstrong
Ah, I see. This is more a question about the exact meaning of probability; ie the difference between a frequentist approach and a Bayesian "degree of belief". To get a "degree of belief" SIA, extend F to G: here you are simply told that one of two possible universes happened (A and B), in which a certain number of copies of you were created. You should then set your subjective probability to 50%, in the absence of other information. Then you are told the numbers, and need to update your estimate. If your estimates for G differ from F, then you are in the odd position of having started with a 50-50 probability estimate, and then updating - but if you were ever told that the initial 50-50 comes from a coin toss rather than being an arbitrary guess, then you would have to change your estimates! I think this argument extends it to G, and hence to universal SIA.
0RichardChappell
Thanks, that's helpful. Though intuitively, it doesn't seem so unreasonable to treat a credal state due to knowledge of chances differently from one that instead reflects total ignorance. (Even Bayesians want some way to distinguish these, right?)
1JGWeissman
What do you mean by "knowledge of chances"? There is no inherent chance or probability in a coin flip. The result is deterministically determined by the state of the coin, its environment, and how it is flipped. The probability of .5 for heads represents your own ignorance of all these initial conditions and your inability, even if you had all that information, to perform all the computation to reach the logical conclusion of what the result will be.
0RichardChappell
I'm just talking about the difference between, e.g., knowing that a coin is fair, versus not having a clue about the properties of the coin and its propensity to produce various outcomes given minor permutations in initial conditions.
2JGWeissman
By "a coin is fair", do you mean that if we considered all the possible environments in which the coin could be flipped (or some subset we care about), and all the ways the coin could be flipped, then in half the combinations the result will be heads, and in the other half the result will be tails? Why should that matter? In the actual coin flip whose result we care about, the whole system is not "fair", there is one result that it definitely produces, and our probabilities just represent our uncertainty about which one. What if I tell you the coin is not fair, but I don't have any clue which side it favors? Your probability for the result of heads is still .5, and we still reach all the same conclusions.
1RichardChappell
For one thing, it'll change how we update. Suppose the coin lands heads ten times in a row. If we have independent knowledge that it's fair, we'll still assign 0.5 credence to the next toss. Otherwise, if we began in a state of pure ignorance, we might start to suspect that the coin is biased, and so have different expectations.
1JGWeissman
That is true, but in the scenario, you never learn the result of a coin flip to update on. So why does it matter?

Final edit: I now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument. Please ignore my comments below. (Should I delete in such cases?)


I don't understand what precisely is wrong with the following intuitive argument, which contradicts the p=.99 result of SIA:

In scenarios E and F, I first wake up after the other people are killed (or not created) based on the coin flip. No... (read more)

1Unknowns
There's nothing wrong with this argument. In E and F (and also in D in fact), the probability is indeed 50%.
0JamesAndrix
How would you go about betting on that?
1Unknowns
If I were actually in situation A, B, or C, I would expect a 99% chance of a blue door, and in D, E, or F, a 50%, and I would actually bet with this expectation. There is really no practical way to implement this, however, because of the assumption that random events turn out in a certain way, e.g. it is assumed that there is only a 50% chance that I will survive, yet I always do, in order for the case to be the one under consideration.
1JamesAndrix
Omega runs 10,000 trials of scenario F, and puts you in touch with 100 random people still in their room who believe there is a 50% chance they have red doors, and will happily take 10 to 1 bets that they are. You take these bets, collect $1 each from 98 of them, and pay out $10 each to 2. Were their bets rational?
1Unknowns
You assume that the 100 people have been chosen randomly from all the people in the 10,000 trials. This is not valid. The appropriate way for these bets to take place is to choose one random person from one trial, then another random person from another trial, and so on. In this way about 50 of the hundred persons will be behind red doors. The reason for this is that if I know that this setup has taken place 10,000 times, my estimate of the probability that I am behind a blue door will not be the same as if the setup has happened only once. The probability will slowly drift toward 99% as the number of trials increases. In order to prevent this drift, you have to select the persons as stated above.
0JamesAndrix
If you find yourself in such a room, why does your blue door estimate go up with the number of trials you know about? Your coin was still 50-50. How much does it go up for each additional trial? ie what are your odds if omega tells you you're in one of two trials of F?
2Unknowns
The reason is that "I" could be anyone out of the full set of two trials. So: there is a 25% chance that both trials ended with red-doored survivors; a 25% chance that both trials ended with blue-doored survivors; and a 50% chance that one ended with a red door, one with a blue. If both were red, I have a red door (100% chance). If both were blue, I have a blue door (100% chance). But if there was one red and one blue, then there are a total of 100 people, 99 blue and one red, and I could be any of them. So in this case there is a 99% chance I am behind a blue door. Putting these things together, if I calculate correctly, the total probability here (in the case of two trials) is that I have a 25.5% chance of being behind a red door, and a 74.5% chance of being behind a blue door. In a similar way you can show that as you add more trials, your probability will get ever closer to 99% of being behind a blue door.
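A sketch of that calculation, and of the claimed drift toward 99% as trials are added, under the "random survivor across all trials" model being described here (which is itself the contested modelling choice):

```python
# "I" am a random survivor across k independent trials; each trial ends with
# either 1 red survivor or 99 blue survivors, with probability 0.5 each.
from math import comb

def p_red(k):
    # Sum over r = number of trials that ended with a red survivor.
    return sum(comb(k, r) * 0.5**k * r / (r + 99 * (k - r)) for r in range(k + 1))

print(p_red(1))    # 0.5   -- the single-trial answer defended in this thread
print(p_red(2))    # 0.255 -- the 25.5% / 74.5% split computed above
print(p_red(50))   # ≈ 0.01, drifting toward the 1% / 99% answer as trials are added
```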
0JamesAndrix
You could only be in one trial or the other. What if Omega says you're in the second trial, not the first? Or trial 3854 of 10,000?
1Unknowns
"I could be any of them" in the sense that all the factors that influence my estimate of the probability, will influence the estimate of the probability made by all the others. Omega may tell me I am in the second trial, but he could equally tell someone else (or me) that he is in the first trial. There are still 100 persons, 99 behind blue doors and 1 behind red, and in every way which is relevant, I could be any of them. Thinking that the number of my trial makes a difference would be like thinking that if Omega tells me I have brown eyes and someone else has blue, that should change my estimate. Likewise with trial 3854 out of 10,000. Naturally each person is in one of the trials, but the persons trial number does not make a significant contribution to his estimate. So I stand by the previous comments.
0JamesAndrix
These factors should not influence your estimation of the probability, because you could not be any of the people in the other trials, red or blue, because you are only in your trial. (And all of those people should know they can't be you.) The only reason you would take the trials together as an aggregate is if you were betting on it from the outside, and the person you're betting against could be in any of the trials. Omega could tell you the result of the other trials (1 other or 9,999 others), and you'd know exactly how many reds and blues there are, except for your trial. You must assess your trial in the same way you would if it were stand-alone. What if Omega says you are in the most recent trial of 40, because Omega has been running trials every hundred years for 4000 years? You can't be any of those people. (To say nothing of other trials that other omegas might have run.) But you could be any of 99 people if the coin came up heads.
1Unknowns
If Omega does not tell me the result of the other trials, I stand by my point. In effect he has given me no information, and I could be anyone. If Omega does tell me the results of all the other trials, it is not therefore the case that I "must assess my trial in the same way as if it stood alone." That depends on how Omega selected me as the one to estimate the probability. If in fact Omega selected me as a random person from the 40 trials, then I should estimate the probability by estimating the number of persons behind blue door and red doors, and assuming that I could with equal probability have been any of them. This will imply a very high probability of being behind a blue door, but not quite 99%. If he selected me in some other way, and I know it, I will give a different estimate. If I do not know how he selected me, I will give a subjective estimate depending on my estimate of ways that he might have selected me; for example I might assign some probability to his having deliberately selected me as one of the red-doored persons, in order to win if I bet. There is therefore no "right" probability in this situation.
0JamesAndrix
How is it the case that you could be in the year 1509 trial, when it is in fact 2009? (Omega says so.) Is it also possible that you are someone from the quite likely 2109 trial? (And so on into the future.) I was thinking he could tell every created person the results of all the other trials. I agree that if you are selected for something (information revelation, betting, whatever), then information about how you were selected could hint at the color of your door. Information about the results of any other trials tells you nothing about your door.
0Unknowns
If he tells every person the results of all the other trials, I am in effect a random person from all the persons in all the trials, because everyone is treated equally.

Let's suppose there were just 2 trials, in order to simplify the math. Starting with the prior probabilities based on the coin toss, there is a 25% chance of a total of just 2 observers behind red doors, in which case I would have a 100% chance of being behind a red door. There is a 50% chance of 1 observer behind a red door and 99 observers behind blue doors, which would give me a 99% chance of being behind a blue door. There is a 25% chance of 198 observers behind blue doors, which would give me a 100% chance of being behind a blue door. So my total prior probabilities are 25.5% of being behind a red door, and 74.5% of being behind a blue door.

Let's suppose I am told that the other trial resulted in just one observer behind a red door. First we need the prior probability of being told this. If there were two red doors (25% chance), there would be a 100% chance of this. If there were two blue doors (25% chance), there would be a 0% chance of this. If there was a red door and a blue door (50% chance), there would be a 99% chance of this. So the total prior probability of being told that the other trial resulted in a red door is again 74.5%, and the probability of being told that the other trial resulted in a blue door is 25.5%.

One more probability: given that I am behind a red door, what is the probability that I will be told that the other trial resulted in an observer behind a red door? There was originally a 25% chance of two red trials, and a 50% chance of 1 red and 1 blue trial. This implies that given that I am behind a red door, there is a 1/3 chance that I will be told that the other trial resulted in red, and a 2/3 that I will be told that the other trial resulted in blue. (Once again things will change if we run more trials, for similar reasons, because in the 1/3 case, there are 2 obs
0JamesAndrix
Well, you very nearly ruined my weekend. :-) I admit I was blindsided by the possibility that information about the other trials could yield information about your door. I'll have to review the Monty Hall problem.

Using your methods, I got: P(blue given told red) = (.745 being blue prior / .745 told red prior) x (2/3 told red given blue) = .666..., which doesn't match your 11.4%, so something is missing.

In scenario F, if you're not told, why assume that your trial was the only one in the set? You should have some probability that the Omegas would do this more than once.
1Unknowns
Also, I agree that in theory you would have some subjective probability that there were other trials. But this prevents assigning any exact value to the probability, because we can't give any definitively correct answer. So I was assuming that you either know that the event is isolated, or you know that it is not, so that you could assign a definite value.
0JamesAndrix
I'm not sure what it would mean for the event to be isolated. (Not to contradict my previous statement that you have to treat it as a stand-alone event. My position is that it is .99 for any number of trials, though I still need to digest your corrected math.) I'm not sure how different an event could be before you don't need to consider it part of the set you could have found yourself in.

If you're in a set of two red-blue trials, and Omega says there is another set of orange-green trials run the same way and likewise told about the red-blues, then it seems you would need to treat that as a set of 4. If you know you're in a trial with the (99 blue or 1 red) protocol, but there is also a trial with a (2 blue or 1 red) protocol, then those 1 or 2 people will skew your probabilities slightly.

If Omega tells you there is an intelligent species of alien in which male conceptions yield 99 identical twins and female conceptions only 1, with a .50 probability of conceiving female, and in which the young do not know their gender until maturity... then is that also part of the set you could have been in? If not, I'm honestly not sure where to draw the line. If so, then I'd expect we could find so many such situations that apply to how individual humans come to exist now that there may be billions of trials.
1Unknowns
You're correct, I made a serious error in the above calculations. Here are the corrected results:

Prior probability of situation A, namely both trials result in red doors: .25; prior probability of situation B, namely one red and one blue: .50; prior probability of situation C, namely both trials result in blue doors: .25; prior probability of my getting a blue door: .745; prior probability of my getting a red door: .255; prior probability of the other trial getting red: .745; prior probability of the other trial getting blue: .255.

Then the probability of situation A, given that I have a red door, is (Pr(A)/Pr(red)) x Pr(red given A). Pr(red given A) = 1, so the result is Pr(A given red) = .25/.255 = .9803921...

So the probability that I will be told red, given I have red, is not 1/3, but over 98% (namely the same value above)! And so the probability that I will be told blue, given I have red, is of course .01960784, namely the probability of situation B given that I have a red door.

So using Bayes' theorem with the corrected values, the probability of my having a red door, given that I am told the other resulted in red, is (Pr(being red)/Pr(other red)) x Pr(told red given red) = (.255/.745) x .9803921... = .33557..., or approximately 1/3.

You can work out the corresponding calculation (probability of being blue given told red) by starting with the probability of situation C given I have a blue door, and then deriving the probability of B given I have a blue door, and you will see that it matches this one (i.e. it will be approximately 2/3).
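A minimal sketch (added for illustration, not part of the original comment) that just mechanizes the corrected arithmetic, under the same two-trial setup and the same random-observer weighting:

    # Worlds: A = red/red (prob .25), B = one red + one blue (prob .50),
    #         C = blue/blue (prob .25).
    p_world = {"A": 0.25, "B": 0.50, "C": 0.25}
    # P(I am behind a red door | world), treating me as a random observer in it
    p_red_given = {"A": 1.0, "B": 0.01, "C": 0.0}
    # P(the *other* trial is red | world, and I am a random observer in it)
    p_other_red_given = {"A": 1.0, "B": 0.99, "C": 0.0}

    p_red = sum(p_world[w] * p_red_given[w] for w in p_world)              # 0.255
    p_other_red = sum(p_world[w] * p_other_red_given[w] for w in p_world)  # 0.745

    # P(world A | I am red) = P(A) * P(red|A) / P(red)  ->  about 0.98
    p_A_given_red = p_world["A"] * p_red_given["A"] / p_red

    # Given I am red, "told the other is red" happens only in world A,
    # so P(told red | red) equals P(A | red).
    p_told_red_given_red = p_A_given_red

    # P(I am red | told other is red) = P(red) * P(told red|red) / P(told red)
    p_red_given_told_red = p_red * p_told_red_given_red / p_other_red

    print(round(p_A_given_red, 4), round(p_red_given_told_red, 4))  # 0.9804 0.3356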
0DanArmak
Thanks! I think this comment is the best so far for demonstrating the confusion (well, I was confused :-) about the different possible meanings of the phrase "you are an observer chosen from such and such set". Perhaps a more precise and unambiguous phrasing could be used.
0[anonymous]
Clearly the bets would not be rational. This reinforces my feeling that something is deeply wrong with the statement of the problem, or with my understanding of it. It's true that some random survivor is p=.99 likely to be behind a blue door. It does not seem true for me, given that I survive.
-2JamesAndrix
Replace death with the light in the room being shut off.
0DanArmak
That's not applicable to scenarios E and F, which is where I have a problem. The observers there never wake up or are never created (depending on the coin toss), so I can't replace that with a conscious observer and the light going off. Whereas in scenarios A through D, you don't need SIA to reach the (correct) p=.99 conclusion, and you don't even need the existence of observers other than yourself. Just reformulate as: I was moved to a room at random; the inhabitants of some rooms, if any, were killed based on a coin flip; etc.
0JamesAndrix
Do it anyway. Take a scenario in which the light is shut off while you are sleeping, or never turned on. What does waking up with the lights on (or off) tell you about the color of the door? Even in A through D, the dead can't update.
0DanArmak
The state of the lights tells me nothing about the color of the door. Whatever color room I happen to be in, the coin toss will turn my lights on or off with 50% probability. I don't see what you intend me to learn from this example...
1JamesAndrix
That dead or alive you are still most likely behind a blue door. You can use the lights being on as evidence just as well as your being alive.

That in B through D you are already updating based on your continued existence. Beforehand you would expect a 50% chance of dying. Later, if you are alive, then the coin probably came up heads.

In E and F, you wake up, you know the coin flip is in your past, and you know that most 'survivors' of situations like this come out of blue doors. If you play Russian roulette and survive, you can have a much greater than 5/6 confidence that the chamber wasn't loaded. You can be very certain that you have great-grandparents, given only your existence and basic knowledge about the world.
0DanArmak
In E-F this is not correct. Your words "dead or alive" simply don't apply: the dead observers never were alive (and conscious) in these scenarios. They were created and then destroyed without waking up. There is no possible sense in which "I" could be one of them; I am by definition alive now, or at least was alive at some point in the past. Even under the assumptions of the SIA, a universe with potential observers that never actually materialize isn't the same as one with actual observers. I still think that in E-F, I'm equally likely to be behind a blue or a red door.

Correct. The crucial difference is that in B-D I could have died but didn't. In other Everett branches where the coin toss went the other way I did die. So I can talk about the probability of the branch where I survive, and update on the fact that I did survive. But in E-F I could never have died! There is no branch of possibility where any conscious observer has died in E-F. That's why no observer can update on being alive there; they are all alive with p=1.

Yes, because in our world there are people who fail to have grandchildren, and so there are potential grandchildren who don't actually come to exist. But in the world of scenarios E and F there is no one who fails to exist and to leave a "descendant" that is himself five minutes later...
1DanArmak
I now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument. Please ignore my comments below. (Should I delete in such cases?)
2JamesAndrix
I wouldn't delete; if nothing else it serves as a good example of working through the dissonance. Edit: It would also be helpful if you explained, from your own perspective, why you changed your mind.
1wedrifid
I second James's preference, and note that I find it useful as a reader to see an edit note of some sort in comments that are no longer supported.
0DanArmak
I now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument.

I'm not sure about the transition from A to B; it implies that, given that you're alive, the probability of the coin having come up heads was 99%. (I'm not saying it's wrong, just that it's not immediately obvious to me.)

The rest of the steps seem fine, though.

1gjm
Pr(heads|alive) / Pr(tails|alive) = {by Bayes} Pr(alive|heads) / Pr(alive|tails) = {by counting} (99/100) / (1/100) = {by arithmetic} 99, so Pr(heads|alive) = 99/100. Seems reasonable enough to me.
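A quick Monte Carlo check of this calculation (added for illustration): simulate scenario B directly, with the 1-red/99-blue rooms and the fair coin from the post, and condition on being alive.

    import random

    # Scenario B: 100 rooms (1 red, 99 blue); heads kills the red-doored person,
    # tails kills everyone behind a blue door.  Condition on being alive.
    heads_given_alive = blue_given_alive = alive = 0
    for _ in range(100_000):
        my_door = "red" if random.randrange(100) == 0 else "blue"
        coin = random.choice(["heads", "tails"])
        survived = (coin == "heads") == (my_door == "blue")
        if survived:
            alive += 1
            heads_given_alive += coin == "heads"
            blue_given_alive += my_door == "blue"

    print(heads_given_alive / alive, blue_given_alive / alive)  # both ~0.99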
0[anonymous]
At B, if tails comes up (p=0.5) there are no blues - if heads comes up (p=0.5) there are no reds. So, depending only on the coin, with equal probability you will be red or blue. It's not unreasonable that the probability should change - since it initially depended on the number of people who were created, it should later depend on the number of people who were destroyed.
0eirenicon
It doesn't matter how many observers are in either set if all observers in a set experience the same consequences. (I think. This is a tricky one.)
1R0k0

Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self indication assumption (SIA).

What about rejecting the assumption that there will be finitely many humans? In the infinite case, the argument doesn't hold.

1Vladimir_Nesov
But in the finite case it supposedly does. See least convenient possible world.
0wedrifid
Similarly, physics as I know it prohibits an infinite number of humans. This world is inconvenient. Still, I do think R0k0's point would be enough to discourage the absolute claim of exclusivity quoted.
0AngryParsley
This is a bit off-topic, but are you the same person as Roko? If not, you should change your name.

Your justification of the SIA requires a uniform prior over possible universes. (If the coin is biased, the odds are no longer 99:1.) I don't see why the real-world SIA can assume uniformity, or what it even means. Otherwise, good post.

0Stuart_Armstrong
Note the line "weighted according to the probability of that observer existing". Imagine flipping a coin twice. If the coin comes up heads on the first flip, a universe A with one observer is created. If it comes up TH, a universe B with two observers is created, and if it comes up TT, a universe C with four observers is created. From outside, the probabilities are A:1/2, B:1/4, C:1/4. Updating with SIA gives A:1/4, B:1/4, C:1/2. No uniform priors assumed or needed.
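A minimal sketch of that update (added for illustration): weight each universe's outside probability by its number of observers and renormalize.

    # Outside (pre-update) probabilities and observer counts from the example.
    worlds = {"A": (0.5, 1), "B": (0.25, 2), "C": (0.25, 4)}

    weights = {w: p * n for w, (p, n) in worlds.items()}   # SIA: weight by observer count
    total = sum(weights.values())
    posterior = {w: weight / total for w, weight in weights.items()}

    print(posterior)   # {'A': 0.25, 'B': 0.25, 'C': 0.5}

This is what "weighted according to the probability of that observer existing" cashes out to in the discrete case: no uniform prior over universes is needed, only some prior to reweight.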
0jimmy
His prior is uniform because uniform is max entropy. If your prior is less than max entropy, you must have had information to update on. What is your information?
1cousin_it
No, you don't get it. The space of possible universes may be continuous instead of discrete. What's a "uniform" prior over an arbitrary continuous space that has no canonical parameterization? If you say Maxent, why? If you say Jeffreys, why?
0jimmy
It's possible to have uniform distributions on continuous spaces. It just becomes probability density instead of probability mass.

The reason for max entropy is that you want your distribution to match your knowledge. When you know nothing, that's maximum entropy, by definition. If you update on information that you don't have, you probabilistically screw yourself over.

If you have a hard time drawing the space out and assigning the maxent prior, you can still use the indifference principle when asked about the probability of being in a larger universe vs a smaller universe. Consider "antipredictions". Say I ask you "is statement X true?" (you can't update on my psychology, since I flipped a coin to determine whether to change X to !X). The max entropy answer is 50/50, and it's just the indifference principle. If I now tell you that X = "I will not win the lottery if I buy a ticket", and you know nothing about what ball will come up, just that the number of winning numbers is small and the number of non-winning numbers is huge, you decide that it is very likely to be true. We've only updated on which distribution we're even talking about.

If you're too confused to make that jump in a certain case, then don't. Or you could just say that for any possible non-uniformity, it's possible that there's an opposite non-uniformity that cancels it out. What's the direction of the error?

Does that explain any better?
5cousin_it
No, it doesn't. In fact I don't think you even parsed my question. Sorry. Let's simplify the problem: what's your uninformative prior for "proportion of voters who voted for an unknown candidate"? Is it uniform on (0,1), which is what maxent gives? What if I'd asked for your prior for the square of this value instead, masking it with some verbiage to sound natural - would you also reply uniform on (0,1)? Those statements are incompatible. In more complex real-world situations, how exactly do you choose the parameterization of the model to feed into maxent? I see no general way. See this Wikipedia page for more discussion of this problem. In the end it recommends the Jeffreys rule for use in practice, but it's not obviously the final word.
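To illustrate the incompatibility (a sketch added here, not part of the comment): a prior that is uniform in a proportion p is not uniform in p squared, so "uniform by maxent" depends on which parameterization you happened to write down.

    import random

    # Draw p uniformly on (0,1) and look at where p and p**2 land.
    samples = [random.random() for _ in range(100_000)]
    frac_p_below_half = sum(p < 0.5 for p in samples) / len(samples)
    frac_p2_below_half = sum(p * p < 0.5 for p in samples) / len(samples)

    print(frac_p_below_half)    # ~0.50: uniform in p
    print(frac_p2_below_half)   # ~0.71: p**2 is *not* uniform (exact value sqrt(0.5))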
0jimmy
I see what you're saying, but I don't think it matters here. That confusion extends to uncertainty about the nth digit of pi as well; it's nothing new about different universes. If you put a uniform prior on the nth digit of pi, instead of uniform on the square of the nth digit or a Jeffreys prior, why don't you do the same in the case of different universes? What prior do you use?

The point I tried to make in the last comment is that if you're asked any question, you start with the indifference principle, which is uniform in nature, and upon receiving new information (perhaps the possibility that the original phrasing wasn't the 'natural' way to phrase it, or however you solve the confusion), then you can update. Since the problem never mentioned a method of parameterizing a continuous space of possible universes, it makes me wonder how you can object to assigning uniform priors given this parameterization, or even say that he required it.

Changing the topic of our discussion, it seems like your comment is also orthogonal to the claim being presented. He basically said "given this discrete set of two possible universes (with uniform prior), this 'proves' SIA (worded the first way)". Given SIA, you know to update on your existence if you find yourself in a continuous space of possible universes, even if you don't know where to update from.

If continuity-of-consciousness immortality arguments also hold, then it simply doesn't matter whether doomsdays are close - your future will avoid those scenarios.

2PlaidX
It "doesn't matter" only to the extent that you care only about your own experiences, and not the broader consequences of your actions. And even then, it still matters, because if the doomsday argument holds, you should still expect to see a lot of OTHER people die soon.
0JamesAndrix
Not if the world avoiding doomsday is more likely than me, in particular, surviving doomsday. I'd guess most futures in which I live have a lot of people like me living too.

SIA self-rebuttal.

If many different universes exist, and one of them has an infinite number of all possible observers, SIA implies that I must be in it. But if an infinite number of all possible observers exists, the condition that I might not have been born does not apply in that universe, and I can't apply SIA to the Earth's fate. The doomsday argument is back on.

Just taking a wild shot at this one, but I suspect that the mistake is between C and D. In C, you start with an even distribution over all the people in the experiment, and then condition on surviving. In D, your uncertainty gets allocated among the people who have survived the experiment. Once you know the rules, in C, the filter is in your future, and in D, the filter is in your past.

Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you'd probably find yourself afterwards in either case; and the case we're really interested in, the SIA, is the limit when the time before goes to 0.

I just wanted to follow up on this remark I made. There is a subtle an... (read more)

0[anonymous]

The crucial step in your argument is from A to B. Here you are changing your a priori probabilities. Counterintuitively, the probability of dying is not 1/2.

This paradox is known as the Monty Hall Problem: http://en.wikipedia.org/wiki/Monty_Hall_problem

0[anonymous]

The doomsday example, as phrased, simply doesn't work.

Only about 5-10% of the ever-lived population is alive now. Thus, if doomsday happened, only about that percentage would see it within our generation. Not 66%. 5-10%. Maybe 20%, if it happened in 50 years or so. The argument fails on its own merits: it assumes that because 2/3 of the ever-human population will see doomsday, we should expect with 2/3 probability to see doomsday, except that means we should also expect (with p=.67) that only 10% of the ever-human population will see doomsday. This doesn't... (read more)

0[anonymous]

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%. But now consider the situation:

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be kill

... (read more)

SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

"Other things equal" is a huge obstacle for me. Without formalizing "other things equal", this is a piece of advice, not a theorem to be proved. I accept moving from A->F, but I don't see how you've proved SIA in general.

How do I go about obtaining a probability distribution over all possible universes conditioned on nothing?

How do I get a distribution over universes conditioned on "my" existence? And what do I mean by "me" in universes other than this one?

1CronoDAS
Nobody really knows, but some people have proposed Kolmogorov complexity as the basis of such a prior. In short, the longer the computer program required to simulate something, the less probable it is. (The choice of which programming language to use is still a problem, though.)
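A toy sketch of such a prior (added for illustration; the program lengths below are invented, since real Kolmogorov complexities are uncomputable and at best upper-bounded):

    # Hypothetical minimal program lengths (in bits) for three candidate universes.
    # These numbers are made up purely for illustration.
    program_length = {"universe_X": 50, "universe_Y": 55, "universe_Z": 70}

    weights = {u: 2.0 ** -k for u, k in program_length.items()}   # weight ~ 2^-K
    total = sum(weights.values())
    prior = {u: w / total for u, w in weights.items()}

    print(prior)   # shorter programs get exponentially more prior mass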
0cousin_it
That's not the only problem. We don't even know whether our universe is computable, e.g. physical constants can have uncomputable decimal expansions, like Chaitin's Omega encoded into G. Are you really damn confident in assigning this possibility a prior of zero?
0Jonathan_Graehl
It amazes me that people will start with some particular prior over universes, then mention offhand that they also give significant probability to simulation from prior universes nearly unrelated to our own (except insofar as you generically expect simulators to prefer conditions close to their own). Should I then believe that most universes that exist are simulations in infinite containing universes (that have room for all simulations of finite universes)? Yudkowsky's recent "meta crossover" fan fiction touched on this. Simulation is sexy in the same way that creation by gods used to be.

Are there any other bridges that explain our universe in terms of some hidden variable? How about this: leading up to the big crunch, some powerful engineer (or collective) tweaks the final conditions so that another (particular) universe is born after (I vaguely recall Asimov writing this). Does the idea of universes that restart periodically with information leakage between iterations change in any way our prior for universes-in-which-"we"-exist?

In my opinion, I only exist in this particular universe. Other universes in which similar beings exist are different. So p(universe|me) needs to be fleshed out better toward p(universe|something-like-me-in-that-xyz). I guess we all realize that any p(universe|...) we give is incredibly flaky, which is my complaint. At least, if you haven't considered all kinds of schemes for universes inside or caused by other universes, then you have to admit that your estimates could change wildly any time you encounter a new such idea.
0Stuart_Armstrong
I don't need to. I just need to show that if we do get such a distribution (over possible universes, or over some such subset), then SIA updates these probabilities. If we can talk, in any way, about the relative likelihood of universe Y versus J, then SIA has a role to play.

SIA makes perfect sense to me, but I don't see how it negates the doomsday argument at all. Can you explain further?

1R0k0
If the human race ends soon, there will be fewer people. Therefore, assign a lower prior to that. This cancels exactly the contribution from the doomsday argument.
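A minimal sketch of the cancellation (added for illustration, with invented population totals and an invented birth rank): the SSA birth-rank likelihood is proportional to 1/N while the SIA weight is proportional to N, so the two effects cancel.

    # Two hypothetical totals for how many humans will ever live, with equal
    # prior, and an illustrative birth rank for "me" compatible with both.
    priors = {"doom_soon": (0.5, 200e9), "doom_late": (0.5, 200e12)}
    my_rank = 100e9   # made-up rank, roughly "somewhere in history so far"

    posterior = {}
    for h, (p, n_total) in priors.items():
        assert my_rank <= n_total           # rank must be possible under the hypothesis
        ssa_likelihood = 1.0 / n_total      # P(my rank | total), uniform over ranks
        sia_weight = n_total                # SIA: favor worlds with more observers
        posterior[h] = p * ssa_likelihood * sia_weight

    total = sum(posterior.values())
    posterior = {h: v / total for h, v in posterior.items()}
    print(posterior)   # back to 0.5 / 0.5: the doomsday shift is exactly cancelled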
0[anonymous]
And you get a prior arrived at through rationalization. Prior probability is not for grabs.
0PlaidX
Oh, I see. How are we sure it cancels exactly, though?
1R0k0
see Bostrom's paper
3PlaidX
Ah, that makes sense. In retrospect, this is quite simple: if you have a box of ten eggs, numbered 1 through 10, and a box of a thousand eggs, numbered 1 through 1000, and the eggs are all dumped out on the floor and you pick up one labeled EGG 3, it's just as likely to have come from the big box as the small one, since they both have only one egg labeled EGG 3. I don't buy Bostrom's argument against the presumptuous philosopher, though. Does anyone have a better one?

I don't feel like reading through 166 comments, so sorry if this has already been posted.

I did get far enough to find that brianm posted this: "The doomsday assumption makes the assumptions that:

  1. We are randomly selected from all the observers who will ever exist..."

Since we're randomly selecting, let's not look at individual people. Let's look at it like taking marbles from a bag. One marble is red. 99 are blue. A guy flips a coin. If it comes up heads, he takes out the red marble. If it comes up tails, he takes out the blue marbles. You then take one of the remaining marbles out at random. Do I even need to say what the probability of getting a blue marble is?

1JamesAndrix
You have to look at individuals in order to get odds for individuals. Your obvious probability of getting a blue marble is for the group of marbles. But I think we can still look at individual randomly selected marbles. Before the coin flip, let's write numbers on all the marbles, 1 to 100, without regard to color. And let's say we roll a fair 100-sided die and get the number 37. After the flip and extraction of colored marbles, I look in the bag and find that marble 37 is in it. Given that marble 37 survived, what is the probability that it is blue?
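A quick simulation of this marble-37 variant (added for illustration), assuming the numbering is independent of color as described:

    import random

    # 100 marbles: one red, 99 blue, numbered 1..100 independently of color.
    # Heads removes the red marble; tails removes all blue marbles.
    # Condition on marble 37 still being in the bag.
    blue_and_survived = survived = 0
    for _ in range(200_000):
        red_number = random.randint(1, 100)        # which number the red marble got
        coin = random.choice(["heads", "tails"])
        marble_37_is_red = (red_number == 37)
        # marble 37 survives if it is blue and the coin is heads, or red and tails
        if (coin == "heads") == (not marble_37_is_red):
            survived += 1
            blue_and_survived += not marble_37_is_red

    print(blue_and_survived / survived)   # ~0.99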

Edit again: OK, I get it. That was kind of dumb.

I read "2/3 of humans will be in the final 2/3 of humans" combined with the term "doomsday" as meaning that there would be 2/3 of humanity around to actually witness/experience whatever ended humanity. Thus, we should expect to see whatever event does this. This obviously makes no sense. The actual meaning is simply that if you made a line of all the people who will ever live, we're probably in the latter 2/3 of it. Thus, there will likely only be so many more people. Thus, some "doom... (read more)

2Alicorn
It's not necessary that 2/3 of the people who ever live be alive simultaneously. It's only necessary that the last humans not a) all die simultaneously and b) constitute more than 2/3 of all humans ever. You can still have a last 2/3 without it being one giant Armageddon that kills them in one go.
0Psychohistorian
I agree in principle, but I'm curious as to how much one is stretching the term "doomsday." If we never develop true immortality, 100% of all humans will die at some point, and we can be sure we're part of that 100%. I don't think "death" counts as a doomsday event, even if it kills everyone. Similarly, some special virus that kills people 5 minutes before they would otherwise die could kill 100% of the future population, but I wouldn't really think of it as a doomsday virus. Doomsday need not kill everyone in one go, but I don't think it can take centuries (unless it's being limited by the speed of light) and still be properly called a doomsday event. That said, I'm still curious as to what evidence supports any claim of such an event actually happening without narrowing down anything about how or when it will happen.
0Alicorn
Unless I missed something, "doomsday" just means the extinction of the human species.
1prase
Doesn't it refer to the day of the extinction? "Doomsmillenium" doesn't sound nearly as good, I think.
0Alicorn
Sure. But the human species can go extinct on one day without a vast number of humans dying on that day. Maybe it's just one little old lady who took a damn long time to kick the bucket, and then finally she keels over and that's "doomsday".
0prase
That's what Psychohistorian was saying shouldn't be called doomsday, and I tend to agree.
0eirenicon
Yes, and the doomsday argument is not in regards to whether or not doomsday will occur, but when.
-2neq1

The primary reason SIA is wrong is that it counts you as special only after seeing that you exist (i.e., after peeking at the data).

My detailed explanation is here.

-2Mallah

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%.

Sure.

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later

... (read more)
0Academian
No; you need to apply Bayes' theorem here. Intuitively, before the killing you are 99% sure you're behind a blue door, and if you survive you should take it as evidence that "yay!" the coin in fact did not land tails (killing blue). Mathematically, you just have to remember to use your old posteriors as your new priors: P(red|survival) = P(red)·P(survival|red)/P(survival) = 0.01·(0.5)/(0.5) = 0.01. So SIA + Bayesian updating happens to agree with the "quantum measure" heuristic in this case. However, I am with Nick Bostrom in rejecting SIA in favor of his "Observation Equation" derived from "SSSA", precisely because that is what maximizes the total wealth of your reference class (at least when you are not choosing whether to exist or create duplicates).
-5Mallah