I upvoted this and I think you proved SIA in a very clever way, but I still don't quite understand why SIA counters the Doomsday argument.
Imagine two universes identical to our own up to the present day. One universe is destined to end in 2010 after a hundred billion humans have existed, the other in 3010 after a hundred trillion humans have existed. I agree that knowing nothing, we would expect a random observer to have a thousand times greater chance of living in the long-lasting universe.
But given that we know this particular random observer is alive in...
I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction
Correct. On SIA, you start out certain that humanity will continue forever, and then update on the extremely startling fact that you're in 2009, leaving you with the mere surface facts of the matter. If you start out with your reference class only in 2009 - a rather nontimeless state of affairs - then you end up in the same place as after the update.
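In toy numbers (the population figures below are purely illustrative): the SIA weighting toward more-populous futures and the update on your early birth rank cancel exactly, leaving whatever prior the "surface facts" gave you.

```python
# Illustrative hypotheses for the total number of humans who will ever exist.
prior = {2e11: 0.5, 2e14: 0.5}
rank = 1e11                      # roughly "finding yourself in 2009"

# SIA: reweight each hypothesis by how many observers it contains.
sia = {N: p * N for N, p in prior.items()}

# Update on your birth rank: P(rank = r | N) = 1/N for r <= N.
post = {N: w / N for N, w in sia.items() if rank <= N}
total = sum(post.values())
post = {N: w / total for N, w in post.items()}
print(post)   # back to 50/50 -- the two shifts cancel, recovering the original prior
```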
My paper, Past Longevity as Evidence for the Future, in the January 2009 issue of Philosophy of Science, contains a new refutation of the Doomsday Argument, without resort to the SIA.
The paper argues that the Carter-Leslie Doomsday Argument conflates future longevity and total longevity. For example, the Doomsday Argument’s Bayesian formalism is stated in terms of total longevity, but plugs in prior probabilities for future longevity. My argument has some similarities to that in Dieks 2007, but does not rely on the Self-Sampling Assumption.
I'm relatively green on the Doomsday debate, but:
The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).
Isn't this inserting a hidden assumption about what kind of observers we're talking about? What definition of "observer" do you get to use, and why? In order to "observe", all...
It seems understressed that the doomsday argument is an argument about maximum-entropy priors, and that any evidence can change this significantly.
Yes, you should expect with p = 2/3 to be in the last 2/3 of people alive. Yes, if you wake up and learn that only tens of billions of people have ever lived, but you expect most observers to live in universes that contain more people, you can update again and feel a bit relieved.
However, once you know how to think straight about the subject, you need to be able to update on the rest of the evidence.
If we've never see...
What bugs me about the doomsday argument is this: it's a stopped clock. In other words, it always gives the same answer regardless of who applies it.
Consider a bacterial colony that starts with a single individual, is going to live for N doublings, and then will die out completely. Each generation, applying the doomsday argument, will conclude that it has a better than 50% chance of being the final generation, because, at any given time, slightly more than half of all colony bacteria that have ever existed currently exist. The doomsday argument tells the bacteria absolutely nothing about the value of N.
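A quick check of the counting behind this, assuming a colony that simply doubles each generation:

```python
# Each generation g of the (hypothetical) colony has 2**g bacteria; the colony
# started from one individual, so 2**(g+1) - 1 have ever existed by generation g.
for g in range(1, 11):
    alive = 2 ** g
    ever_existed = 2 ** (g + 1) - 1
    print(g, alive / ever_existed)   # always slightly above 0.5, for every generation
```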
The reason all these problems are so tricky is that they assume there's a "you" (or a "that guy") who has a view of both possible outcomes. But since there aren't the same number of people for both outcomes, it isn't possible to match up each person on one side with one on the other to make such a "you".
Compensating for this should be easy enough, and will make the people-counting parts of the problems explicit, rather than mysterious.
I suspect this is also why the doomsday argument fails. Since it's not possible to define a...
In case D, your probability changes from 99% to 50%, because only people who survive are ever in a position to know about the set-up; in other words, there is a 50% chance that only red-doored people know, and a 50% chance that only blue-doored people know.
After that, the probability remains at 50% all the way through.
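For concreteness, here is where the two numbers come from in case D (1 red door, 99 blue): counting yourself as a random survivor within whichever world actually occurred gives 50%, while pooling survivors across both possible worlds (the post's SIA-style counting) gives 99%. A rough simulation of both counts, not a resolution of the dispute:

```python
import random

trials = 100_000
within_world = 0.0      # average over worlds of the blue fraction among that world's survivors
pooled_blue = 0
pooled_survivors = 0    # survivors pooled across all simulated worlds

for _ in range(trials):
    heads = random.random() < 0.5
    blue_survivors = 99 if heads else 0    # heads: the red-doored person is killed
    red_survivors = 0 if heads else 1      # tails: the blue-doored people are killed
    within_world += blue_survivors / (blue_survivors + red_survivors)
    pooled_blue += blue_survivors
    pooled_survivors += blue_survivors + red_survivors

print(within_world / trials)           # ~0.50: a random survivor of whichever world occurred
print(pooled_blue / pooled_survivors)  # ~0.99: a random survivor pooled across both possible worlds
```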
The fact that no one has mentioned this in 44 comments is a sign of incredibly strong wishful thinking - simply "wanting" the Doomsday argument to be incorrect.
weighted according to the probability of that observer existing
Existence is relative: there is a fact of the matter (or rather, a procedure for finding out) about which things exist where relative to me, for example in the same room, or in the same world, but this concept breaks down when you ask about "absolute" existence. Absolute existence is inconsistent, as everything goes. Relative existence of yourself is a trivial question with a trivial answer.
(I just wanted to state it simply, even though this argument is a part of a huge standard narrativ...
The Wikipedia article on the SIA points out that it is not an assumption, but a theorem or corollary. You have simply shown this fact again. Bostrom probably first named it an assumption, but it is neither an axiom nor an assumption. You can derive it from these assumptions:
I don't see how the SIA refutes the complete DA (Doomsday Argument).
The SIA shows that a universe with more observers in your reference class is more likely. This is the set used when "considering myself as a random observer drawn from the space of all possible observers" - it's not really all possible observers.
How small is this set? Well, if we rely on just the argument given here for SIA, it's very small indeed. Suppose the experimenter stipulates an additional rule: he flips a second coin; if it comes up heads, he creates 10^10 extra copies...
As we are discussing SIA, I'd like to bring up the counterfactual zombie thought experiment:
Omega comes to you and offers $1, explaining that it decided to do so if and only if it predicted that you wouldn't take the money. What do you do? It looks neutral, since the expected gain in both cases is zero. But the decision to take the $1 sounds rather bizarre: if you take the $1, then you don't exist!
Agents self-consistent under reflection are counterfactual zombies, indifferent to whether they are real or not.
This shows that inference "I think therefore I e...
The doomsday argument makes the assumptions that:
(Now those assumptions are a bit dubious - things change if, for instance, we develop life-extension tech or otherwise increase the rate of growth, and a higher than 2/3 proportion will live in future generations (e.g. if the next generation is...
99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.
Might it make a difference that in scenario F, there is an actual process (namely, the coin toss) which could have given rise to the alternative outcome? Note the lack of any analogous mechanism for "bringing into existence" one out of all the possible worlds. One might maintain that this metaphysical disanalogy also makes an epistemic difference. (Compare cousin_it's...
Final edit: I now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument. Please ignore my comments below. (Should I delete in such cases?)
I don't understand what precisely is wrong with the following intuitive argument, which contradicts the p=.99 result of SIA:
In scenarios E and F, I first wake up after the other people are killed (or not created) based on the coin flip. No...
I'm not sure about the transition from A to B; it implies that, given that you're alive, the probability of the coin having come up heads was 99%. (I'm not saying it's wrong, just that it's not immediately obvious to me.)
The rest of the steps seem fine, though.
Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self indication assumption (SIA).
What about rejecting the assumption that there will be finitely many humans? In the infinite case, the argument doesn't hold.
Your justification of the SIA requires a uniform prior over possible universes. (If the coin is biased, the odds are no longer 99:1.) I don't see why the real-world SIA can assume uniformity, or what it even means. Otherwise, good post.
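To make the dependence on the fair coin explicit, here is the scenario-B posterior as a function of the coin's bias p (a toy calculation; the sample values of p are only illustrations): the odds of being blue-doored given survival are 99p : (1-p), which is 99:1 only when p = 1/2.

```python
# Posterior probability of being behind a blue door, given that you survived
# the culling in scenario B, as a function of the coin's bias p = P(heads).
def p_blue_given_survival(p_heads, blue=99, red=1):
    # Bayes: P(blue | survive) = P(blue) P(survive | blue) / P(survive),
    # with P(survive | blue) = p_heads and P(survive | red) = 1 - p_heads.
    return (blue * p_heads) / (blue * p_heads + red * (1 - p_heads))

for p in (0.5, 0.9, 0.1):     # sample biases, purely illustrative
    print(p, p_blue_given_survival(p))
# 0.5 -> 0.99,  0.9 -> ~0.999,  0.1 -> ~0.917: only the fair coin gives 99:1 odds
```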
If continuity of consciousness immortality arguments also hold, then it simply doesn't matter whether doomsdays are close - your future will avoid those scenarios.
SIA self-rebuttal.
If many different universes exist, and one of them contains an infinite number of all possible observers, the SIA implies that I must be in it. But if an infinite number of all possible observers exists, the possibility that I might not have been born does no work in that universe, and I can't apply the SIA to the Earth's fate. The Doomsday argument is back on.
Just taking a wild shot at this one, but I suspect that the mistake is between C and D. In C, you start with an even distribution over all the people in the experiment, and then condition on surviving. In D, your uncertainty gets allocated among the people who have survived the experiment. Once you know the rules, in C, the filter is in your future, and in D, the filter is in your past.
Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you'd probably find yourself afterwards in either case; and the case we're really interested in, the SIA, is the limit when the time before goes to 0.
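A sketch of that calculation, counting observer-moments within each world (the time spans are free parameters; the 99/1 split is just the post's doors):

```python
# 100 people exist for a span t_before; then the killing happens, leaving
# either 99 (heads) or 1 (tails) of them alive for a span t_after.
def p_heads_given_after(t_before, t_after, blue=99, red=1):
    n = blue + red
    # In each world, the chance that a randomly chosen observer-moment of yours
    # falls after the killing rather than before it.
    p_after_heads = blue * t_after / (n * t_before + blue * t_after)
    p_after_tails = red * t_after / (n * t_before + red * t_after)
    # Bayes over the fair coin, given that you find yourself after the killing.
    return p_after_heads / (p_after_heads + p_after_tails)

print(p_heads_given_after(1.0, 1.0))    # well above 0.5: favours the world where fewer were killed
print(p_heads_given_after(1e-9, 1.0))   # ~0.5: the effect vanishes as the time before goes to 0
```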
I just wanted to follow up on this remark I made. There is a subtle an...
The crucial step in your argument is the move from A to B. Here you are changing your a priori probabilities. Counterintuitively, the probability of dying is not 1/2.
This paradox is known as the Monty Hall Problem: http://en.wikipedia.org/wiki/Monty_Hall_problem
The doomsday example, as phrased, simply doesn't work.
Only about 5-10% of the ever-lived population is alive now. Thus, if doomsday happened, only about that percentage would see it within our generation. Not 66%. 5-10%. Maybe 20%, if it happened in 50 years or so. The argument fails on its own merits: it assumes that because 2/3 of the ever-lived population will see doomsday, we should expect with 2/3 probability to see doomsday - except that means we should also expect (with p=.67) that only 10% of the ever-lived population will see doomsday. This doesn't...
...A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?
Here, the probability is certainly 99%. But now consider the situation:
B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be kill
SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.
"Other things equal" is a huge obstacle for me. Without formalizing "other things equal", this is a piece of advice, not a theorem to be proved. I accept moving from A->F, but I don't see how you've proved SIA in general.
How do I go about obtaining a probability distribution over all possible universes conditioned on nothing?
How do I get a distribution over universes conditioned on "my" existence? And what do I mean by "me" in universes other than this one?
SIA makes perfect sense to me, but I don't see how it negates the doomsday argument at all. Can you explain further?
I don't feel like reading through 166 comments, so sorry if this has already been posted.
I did get far enough to find that brianm posted this: "The doomsday argument makes the assumptions that:
Since we're randomly selecting, let's not look at individual people. Let's look at it like taking marbles from a bag. One marble is red. 99 are blue. A guy flips a coin. If it comes up heads, he takes out the red marble. If it comes up tails, he takes out the blue marbles. You then take one of the remaining marbles out at random. Do I even need to say what the probability of getting a blue marble is?
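A quick Monte Carlo of the marble game as described (note that the person drawing the marble exists whichever way the coin lands, unlike the room occupants):

```python
import random

trials = 100_000
blue_draws = 0
for _ in range(trials):
    if random.random() < 0.5:        # heads: the single red marble is removed
        remaining = ["blue"] * 99
    else:                            # tails: the 99 blue marbles are removed
        remaining = ["red"]
    blue_draws += random.choice(remaining) == "blue"

print(blue_draws / trials)           # ~0.5, not 0.99
```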
Edit again: OK, I get it. That was kind of dumb.
I read "2/3 of humans will be in the final 2/3 of humans" combined with the term "doomsday" as meaning that there would be 2/3 of humanity around to actually witness/experience whatever ended humanity. Thus, we should expect to see whatever event does this. This obviously makes no sense. The actual meaning is simply that if you made a line of all the people who will ever live, we're probably in the latter 2/3 of it. Thus, there will likely only be so many more people. Thus, some "doom...
A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?
Here, the probability is certainly 99%.
Sure.
...B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later
EDIT: This post has been superseded by this one.
The doomsday argument, in its simplest form, claims that since 2/3 of all humans will be in the final 2/3 of all humans, we should conclude it is more likely that we are in the final two-thirds of all the humans who will ever live than in the first third. In our current state of quasi-exponential population growth, this would mean that we are likely very close to the final end of humanity. The argument gets somewhat more sophisticated than that, but that's it in a nutshell.
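A toy Bayesian rendering of that nutshell (the population figures are illustrative assumptions, not data): two hypotheses about how many humans will ever live, a uniform prior over them, and the observation that your birth rank is around the hundred-billionth.

```python
# Two hypotheses about the total number of humans who will ever live
# (illustrative figures), with a uniform prior over them.
hypotheses = {"doom soon": 2e11, "doom late": 2e14}
prior = {h: 0.5 for h in hypotheses}
rank = 1e11   # roughly our birth rank: about the 100-billionth human

# Likelihood of having this particular birth rank, given N humans in total: 1/N.
posterior = {h: prior[h] / N for h, N in hypotheses.items() if rank <= N}
total = sum(posterior.values())
posterior = {h: w / total for h, w in posterior.items()}
print(posterior)   # "doom soon" comes out ~1000 times more probable than "doom late"
```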
There are many immediate rebuttals that spring to mind - there is something about the doomsday argument that brings out the certainty in most people that it must be wrong. But nearly all those supposed rebuttals are erroneous (see Nick Bostrom's book Anthropic Bias: Observation Selection Effects in Science and Philosophy). Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self indication assumption (SIA).
The non-intuitive form of SIA simply says that since you exist, it is more likely that your universe contains many observers, rather than few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).
Even in that form, it may seem counter-intuitive; but I came up with a series of small steps leading from a generally accepted result straight to the SIA. This clinched the argument for me. The starting point is:
A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?
Here, the probability is certainly 99%. But now consider the situation:
B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?
There should be no difference from A; since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, your probability estimate should not change when you update on having survived.
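A minimal Bayes calculation of that step, with survival at fifty-fifty either way:

```python
# P(blue door) before the announcement, and P(survive | colour) for each colour.
p_blue, p_red = 0.99, 0.01
p_survive_given_blue = 0.5    # a blue-doored person dies only if the coin lands tails
p_survive_given_red = 0.5     # the red-doored person dies only if the coin lands heads

p_blue_given_survive = (p_blue * p_survive_given_blue) / (
    p_blue * p_survive_given_blue + p_red * p_survive_given_red)
print(p_blue_given_survive)   # 0.99 -- surviving carries no information about your door
```

The further modifications are then: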
C - same as B, except the coin is flipped before you are created (the killing still happens later).
D - same as C, except that you are only made aware of the rules of the set-up after the people to be killed have already been killed.
E - same as C, except the people to be killed are killed before awakening.
F - same as C, except the people to be killed are simply not created in the first place.
I see no justification for changing your odds as you move from A to F; but 99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.
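The same 99% can be computed directly for F the SIA way, weighting each possible world by the number of observers it contains (a minimal sketch of the arithmetic above):

```python
# Weight each possible world by its probability times its number of observers,
# then ask what fraction of that weight is behind blue doors.
p_heads = p_tails = 0.5
observers_if_heads = 99   # heads: only the 99 blue-doored people are ever created
observers_if_tails = 1    # tails: only the red-doored person is ever created

blue_weight = p_heads * observers_if_heads
total_weight = p_heads * observers_if_heads + p_tails * observers_if_tails
print(blue_weight / total_weight)   # 0.99
```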
If you can't see any flaw in the chain either, then you can rest easy, knowing the human race is no more likely to vanish than objective factors indicate (ok, maybe you won't rest that easy, in fact...)
(Apologies if this post is preaching to the choir of flogged dead horses along well beaten tracks: I was unable to keep up with Less Wrong these past few months, so may be going over points already dealt with!)
EDIT: Corrected the language in the presentation of the SIA, after SilasBarta's comments.
EDIT2: There are some objections to the transfer from D to C. Thus I suggest sliding in C' and C'' between them; C' is the same as D, except those due to die have the situation explained to them before being killed; C'' is the same as C' except those due to die are told "you will be killed" before having the situation explained to them (and then being killed).