
Avoiding doomsday: a "proof" of the self-indication assumption

Post author: Stuart_Armstrong 23 September 2009 02:54PM

EDIT: This post has been superseded by this one.

The doomsday argument, in its simplest form, claims that since 2/3 of all humans will be in the final 2/3 of all humans, we should conclude it is more likely that we are in the final two-thirds of all humans who will ever have lived than in the first third. Given our current quasi-exponential population growth, this would mean that we are likely very close to the end of humanity. The argument gets somewhat more sophisticated than that, but that's it in a nutshell.
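
For concreteness, here is a minimal numerical sketch of that update in Python (my own illustration; the population totals and birth rank are assumed round numbers, not figures from the argument itself):

    # Two hypotheses about the total number of humans who will ever live, updated
    # on your birth rank, treated as a uniform draw from that total.
    N_short, N_long = 200e9, 200e12      # "doom soon" vs "doom late" totals (assumed)
    prior_short = prior_long = 0.5       # agnostic prior over the two totals
    rank = 100e9                         # assumed birth rank: roughly the 100 billionth human

    # Likelihood of having exactly this rank under each hypothesis:
    like_short = 1 / N_short if rank <= N_short else 0
    like_long = 1 / N_long if rank <= N_long else 0

    post_short = prior_short * like_short
    post_long = prior_long * like_long
    total = post_short + post_long
    print("P(doom soon | rank):", post_short / total)   # ~0.999
    print("P(doom late | rank):", post_long / total)    # ~0.001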

There are many immediate rebuttals that spring to mind - there is something about the doomsday argument that brings out the certainty in most people that it must be wrong. But nearly all of those supposed rebuttals are erroneous (see Nick Bostrom's book Anthropic Bias: Observation Selection Effects in Science and Philosophy). Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self-indication assumption (SIA).

The non-intuitive form of SIA simply says that since you exist, it is more likely that your universe contains many observers, rather than few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Even in that form, it may seem counter-intuitive; but I came up with a series of small steps leading from a generally accepted result straight to the SIA. This clinched the argument for me. The starting point is:

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%. But now consider the situation:

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?

There should be no difference from A; since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, updating on the announcement that the killing has happened should not change your estimate. The further modifications are then:
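
A rough Monte Carlo sketch (my own, assuming "you" are a uniform draw from the hundred people created) bears this out for B: conditioning on having survived leaves the estimate at 99%.

    import random

    def trial():
        your_room = random.randint(1, 100)      # room 1 has the red door
        heads = random.random() < 0.5           # heads: the red-door occupant is killed
        you_survive = (your_room != 1) if heads else (your_room == 1)
        return you_survive, your_room != 1      # (did you survive?, is your door blue?)

    results = [trial() for _ in range(200_000)]
    blue_among_survivors = [blue for survived, blue in results if survived]
    print(sum(blue_among_survivors) / len(blue_among_survivors))   # ≈ 0.99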

C - same as B, except the coin is flipped before you are created (the killing still happens later).

D - same as C, except that you are only made aware of the rules of the set-up after the people to be killed have already been killed.

E - same as C, except the people to be killed are killed before awakening.

F - same as C, except the people to be killed are simply not created in the first place.

I see no justification for changing your odds as you move from A to F; but 99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.
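
To see why the 99% at F is exactly the SIA, it may help to compare the two sampling rules directly; here is a sketch (my own framing, not part of the original argument). Under heads the red-door person is never created, so 99 blue-door people exist; under tails only the red-door person exists. The rival rule of sampling only from the people who actually exist in your world (the self-sampling style of reasoning discussed in the comments below) gives 50% instead.

    import random

    def sia_draw():
        """SIA-style: draw uniformly from the 100 *possible* people, and keep the
        draw only if that person actually gets created given the coin."""
        while True:
            heads = random.random() < 0.5
            person = random.randint(1, 100)              # 1 = red door, 2-100 = blue
            created = (person != 1) if heads else (person == 1)
            if created:
                return person != 1                       # blue-doored?

    def ssa_draw():
        """Rival rule: pick the world by the coin, then draw uniformly from
        the people who actually exist in that world."""
        heads = random.random() < 0.5
        existing = list(range(2, 101)) if heads else [1]
        return random.choice(existing) != 1

    n = 200_000
    print("SIA-style:", sum(sia_draw() for _ in range(n)) / n)   # ≈ 0.99
    print("SSA-style:", sum(ssa_draw() for _ in range(n)) / n)   # ≈ 0.50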

If you can't see any flaw in the chain either, then you can rest easy, knowing the human race is no more likely to vanish than objective factors indicate (ok, maybe you won't rest that easy, in fact...)

(Apologies if this post is preaching to the choir of flogged dead horses along well beaten tracks: I was unable to keep up with Less Wrong these past few months, so may be going over points already dealt with!)

 

EDIT: Corrected the language in the presentation of the SIA, after SilasBarta's comments.

EDIT2: There are some objections to the transfer from D to C. Thus I suggest sliding in C' and C'' between them; C' is the same as D, except those due to die have the situation explained to them before being killed; C'' is the same as C' except those due to die are told "you will be killed" before having the situation explained to them (and then being killed).

Comments (228)

Comment author: jimmy 23 September 2009 07:14:57PM *  4 points [-]

It seems understressed that the doomsday argument is an argument about max-entropy priors, and that any evidence can change this significantly.

Yes, you should expect with p = 2/3 to be in the last 2/3 of people alive. Yes, if you wake up and learn that there have only been tens of billions of people alive but expect most people to live in universes that have more people, you can update again and feel a bit relieved.

However, once you know how to think straight about the subject, you need to be able to update on the rest of the evidence.

If we've never seen an existential threat and would expect to see several before getting wiped out, then we can expect to last longer. However, if we have evidence that there are some big ones coming up, and that we don't know how to handle them, it's time to worry more than the doomsday argument tells you to.

Comment author: RonPisaturo 23 September 2009 05:38:57PM *  4 points [-]

My paper, Past Longevity as Evidence for the Future, in the January 2009 issue of Philosophy of Science, contains a new refutation of the Doomsday Argument, without resorting to SIA.

The paper argues that the Carter-Leslie Doomsday Argument conflates future longevity and total longevity. For example, the Doomsday Argument’s Bayesian formalism is stated in terms of total longevity, but plugs in prior probabilities for future longevity. My argument has some similarities to that in Dieks 2007, but does not rely on the Self-Sampling Assumption.

Comment author: SilasBarta 23 September 2009 03:37:13PM *  4 points [-]

I'm relatively green on the Doomsday debate, but:

The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Isn't this inserting a hidden assumption about what kind of observers we're talking about? What definition of "observer" do you get to use, and why? In order to "observe", all that's necessary is that you form mutual information with another part of the universe, and conscious entities are a tiny sliver of this set in the observed universe. So the SIA already puts a low probability on the data.

I made a similar point before, but apparently there's a flaw in the logic somewhere.

Comment author: KatjaGrace 13 January 2010 06:38:36AM 1 point [-]

SIA does not require a definition of observer. You need only compare the number of experiences exactly like yours (otherwise you can compare those like yours in some aspects, then update on the other info you have, which would get you to the same place).

SSA requires a definition of observers, because it involves asking how many of those are having an experience like yours.

Comment author: Technologos 24 September 2009 06:40:48AM 0 points [-]

Wouldn't the principle be independent of the form of the observer? If we said "universes with many human observers are more likely than universes with few," the logic would apply just as well as with matter-based observers or observers defined as mutual-information-formers.

Comment author: SilasBarta 24 September 2009 04:34:59PM 0 points [-]

If we said "universes with many human observers are more likely than universes with few," the logic would apply just as well as with matter-based observers or observers defined as mutual-information-formers.

But why is the assumption that universes with human observers are more likely (than those with few) plausible or justifiable? That's a fundamentally different claim!

Comment author: Technologos 24 September 2009 09:00:17PM 0 points [-]

I agree that it's a different claim, and not the one I was trying to make. I was just noting that however one defines "observer," the SIA would suggest that such observers should be many. Thus, I don't think that the SIA is inserting a hidden assumption about the type of observers we are discussing.

Comment author: SilasBarta 24 September 2009 09:05:53PM 1 point [-]

Right, but my point was that your definition of observer has a big impact on your SIA's plausibility. Yes, universes with observers in the general sense are more likely, but why universes with more human observers?

Comment author: Technologos 24 September 2009 09:51:56PM 0 points [-]

Why would being human change the calculus of the SIA? According to its logic, if a universe only has more human observers, there are still more opportunities for me to exist, no?

Comment author: SilasBarta 24 September 2009 10:01:42PM 0 points [-]

My point was that the SIA(human) is less plausible, meaning you shouldn't base conclusions on it, not that the resulting calculus (conditional on its truth) would be different.

Comment author: Technologos 25 September 2009 04:59:06AM 0 points [-]

That's what I meant, though: you don't calculate the probability of SIA(human) any differently than you would for any other category of observer.

Comment author: Stuart_Armstrong 24 September 2009 09:38:17AM 0 points [-]

The debate about what constitutes an "observer class" is one of the most subtle in the whole area (see Nick Bostrom's book). Technically, SIA and similar principles will only work as "given this definition of observers, SIA implies...", but some definitions are more sensible than others.

It's obvious you can't separate two observers with the same subjective experiences, but how much of a difference does there need to be before the observers are in different classes?

I tend to work with something like "observers who think they are human", tweaking the issue of longevity (does someone who lives 60 years count as the same observer, or as twice as much of an observer, as someone who lives 30 years?) as needed for the question.

Comment author: SilasBarta 24 September 2009 02:03:08PM 0 points [-]

Okay, but it's a pretty significant change when you go to "observers who think they are human". Why should you expect a universe with many of that kind of observer? At the very least, you would be conditioning on more than just your own existence, but rather, additional observations about your "suit".

Comment author: Stuart_Armstrong 24 September 2009 02:07:27PM 0 points [-]

As I said, it's a complicated point. For most of the toy models, "observers who think they are human" is enough, and avoids having to go into these issues.

Comment author: SilasBarta 24 September 2009 02:14:12PM 0 points [-]

Not unless you can explain why "universes with many observers who think they are human" are more common than "universes with few observers who think they are human". Even when you condition on your own existence, you have no reason to believe that most Everett branches have humans.

Comment author: Stuart_Armstrong 24 September 2009 02:37:08PM 1 point [-]

Er no - they are not more common, at all. The SIA says that you are more likely to be existing in a universe with many humans, not that these universes are more common.

Comment author: SilasBarta 24 September 2009 02:46:35PM *  0 points [-]

Your TL post said:

The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few.

And you just replaced "observers" with "observers who think they are human", so it seems like the SIA does in fact say that universes with many observers who think they are human are more likely than those with few.

Comment author: Stuart_Armstrong 24 September 2009 02:50:18PM 0 points [-]

Sorry, sloppy language - I meant "you, being an observer, are more likely to exist in a universe with many observers".

Comment author: SilasBarta 24 September 2009 03:20:53PM 1 point [-]

So then the full anthropocentric SIA would be, "you, being an observer that believes you are human, are more likely to exist in a universe with many observers who believe they are human".

Is that correct? If so, does your proof prove this stronger claim?

Comment author: Yvain 23 September 2009 07:01:44PM *  7 points [-]

I upvoted this and I think you proved SIA in a very clever way, but I still don't quite understand why SIA counters the Doomsday argument.

Imagine two universes identical to our own up to the present day. One universe is destined to end in 2010 after a hundred billion humans have existed, the other in 3010 after a hundred trillion humans have existed. I agree that knowing nothing, we would expect a random observer to have a thousand times greater chance of living in the long-lasting universe.

But given that we know this particular random observer is alive in 2009, I would think there's an equal chance of them being in either universe, because both universes contain an equal number of people living in 2009. So my knowledge that I'm living in 2009 screens off any information I should be able to get from the SIA about whether the universe ends in 2010 or 3010. Why can you still use the SIA to prevent Doomsday?

[analogy: you have two sets of numbered balls. One is green and numbered from 1 to 10. The other is red and numbered from 1 to 1000. Both sets are mixed together. What's the probability a randomly chosen ball is red? 1000/1010. Now I tell you the ball has number "6" on it. What's the probability it's red? 1/2. In this case, Doomsday argument still applies (any red or green ball will correctly give information about the number of red or green balls) but SIA doesn't (any red or green ball, given that it's a number shared by both red and green, gives no information on whether red or green is larger.)]
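
(A few lines of Python reproduce the arithmetic in the analogy, with the numbers as stated:)

    green = range(1, 11)         # green balls numbered 1-10
    red = range(1, 1001)         # red balls numbered 1-1000

    # Probability a uniformly drawn ball is red:
    print(len(red) / (len(red) + len(green)))        # 1000/1010 ≈ 0.990

    # Given the drawn ball is numbered 6 (a number both colours carry):
    red_sixes = sum(1 for n in red if n == 6)        # 1
    green_sixes = sum(1 for n in green if n == 6)    # 1
    print(red_sixes / (red_sixes + green_sixes))     # 0.5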

Comment author: steven0461 23 September 2009 08:00:58PM 8 points [-]

Why can you still use the SIA to prevent Doomsday?

You just did -- early doom and late doom ended up equally probable, where an uncountered Doomsday argument would have said early doom is much more probable (because your living in 2009 is much more probable conditional on early doom than on late doom).

Comment author: Yvain 23 September 2009 08:56:47PM *  3 points [-]

Whoa.

Okay, I'm clearly confused. I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction, and worrying about how the SIA doesn't look capable of tilting evidence. I'm not sure why that's the wrong way to look at it, but what you said is definitely right, so I'm making a mistake somewhere. Time to fret over this until it makes sense.

PS: Why are people voting this up?!?

Comment author: Eliezer_Yudkowsky 23 September 2009 09:12:17PM 8 points [-]

I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction

Correct. On SIA, you start out certain that humanity will continue forever due to SIA, and then update on the extremely startling fact that you're in 2009, leaving you with the mere surface facts of the matter. If you start out with your reference class only in 2009 - a rather nontimeless state of affairs - then you end up in the same place as after the update.

Comment author: CarlShulman 23 September 2009 09:18:14PM 2 points [-]

If civilization lasts forever, there can be many simulations of 2009, so updating on your sense-data can't overcome the extreme initial SIA update.

Comment author: Eliezer_Yudkowsky 23 September 2009 11:08:27PM 0 points [-]

Simulation argument is a separate issue from the Doomsday Argument.

Comment author: SilasBarta 24 September 2009 04:32:09PM *  4 points [-]

What? They have no implications for each other? The possibility of being in a simulation doesn't affect my estimates for the onset of Doomsday?

Why is that? Because they have different names?

Comment author: Eliezer_Yudkowsky 25 September 2009 08:31:47PM 0 points [-]

Simulation argument goes through even if Doomsday fails. If almost everyone who experiences 2009 does so inside a simulation, and you can't tell if you're in a simulation or not - assuming that statement is even meaningful - then you're very likely "in" such a simulation (if such a statement is even meaningful). Doomsday is a lot more controversial; it says that even if most people like you are genuinely in 2009, you should infer from the fact that you are one of those people, rather than someone else, that the fraction of the population that experiences 2009 is more likely to be a large fraction of the total (because we never go on to create trillions of descendants) than a small fraction of the total (if we do).

Comment author: Unknowns 25 September 2009 08:51:51PM 1 point [-]

The probability of being in a simulation increases the probability of doom, since people in a simulation have a chance of being turned off, which people in a real world presumably do not have.

Comment author: CarlShulman 29 June 2010 12:43:10PM 0 points [-]

The regular Simulation Argument concludes with a disjunction (you have logical uncertainty about whether civilizations very strongly convergently fail to produce lots of simulations). SIA prevents us from accepting two of the disjuncts, since the population of observers like us is so much greater if lots of sims are made.

Comment author: DanielLC 12 April 2011 07:31:17PM 1 point [-]

If you start out certain that humanity will continue forever, won't you conclude that all evidence that you're in 2009 is flawed? Humanity must have been going on for longer than that.

Comment author: Mitchell_Porter 24 September 2009 04:08:25AM -1 points [-]

"On SIA, you start out certain that humanity will continue forever due to SIA"

SIA doesn't give you that. SIA just says that people from a universe with a population of n don't mysteriously count as only 1/nth of a person. In itself it tells you nothing about the average population per universe.

Comment author: KatjaGrace 13 January 2010 06:29:17AM *  1 point [-]

If you are in a universe, SIA tells you it is most likely the most populated one.

Comment author: Mitchell_Porter 13 January 2010 06:41:40AM 1 point [-]

If there are a million universes with a population of 1000 each, and one universe with a population of 1000000, you ought to find yourself in one of the universes with a population of 1000.

Comment author: KatjaGrace 13 January 2010 08:45:06AM 1 point [-]

We agree there (I just meant more likely to be in the 1000000 one than any given 1000 one). If there are any that have infinitely many people (eg go on forever), you are almost certainly in one of those.

Comment author: Mitchell_Porter 13 January 2010 09:00:07AM 0 points [-]

That still depends on an assumption about the demographics of universes. If there are finitely many universes that are infinitely populated, but infinitely many that are finitely populated, the latter still have a chance to outweigh the former. I concede that if you can have an infinitely populated universe at all, you ought to have infinitely many variations on it, and so infinity ought to win.

Actually I think there is some confusion or ambiguity about the meaning of SIA here. In his article Stuart speaks of a non-intuitive and an intuitive formulation of SIA. The intuitive one is that you should consider yourself a random sample. The non-intuitive one is that you should prefer many-observer hypotheses. Stuart's "intuitive" form of SIA, I am used to thinking of as SSA, the self-sampling assumption. I normally assume SSA but our radical ignorance about the actual population of the universe/multiverse makes it problematic to apply. The "non-intuitive SIA" seems to be a principle for choosing among theories about multiverse demographics but I'm not convinced of its validity.

Comment author: KatjaGrace 13 January 2010 09:44:34AM 2 points [-]

Intuitive SIA = consider yourself a random sample out of all possible people

SSA = consider yourself a random sample from people in each given universe separately

e.g. if there are ten people and half might be you in one universe, and one person who might be you in another:
SIA: a greater proportion of those who might be you are in the first
SSA: a greater proportion of the people in the second might be you

Comment author: RobinHanson 24 September 2009 04:44:21PM 0 points [-]

Yes this is exactly right.

Comment author: Vladimir_Nesov 23 September 2009 09:00:58PM 1 point [-]

Okay, I'm clearly confused. Time to think about this until the apparently correct statement you just said makes intuitive sense.

A great principle to live by (aka "taking a stand against cached thought"). We should probably have a post on that.

Comment author: wedrifid 24 September 2009 03:11:48AM 0 points [-]

It seems to be taking time to cache the thought.

Comment author: wedrifid 23 September 2009 08:26:13PM *  2 points [-]

So it does. I was sufficiently caught up in Yvain's elegant argument that I didn't even notice that it supported the opposite conclusion to that of the introduction. Fortunately that was the only part that stuck in my memory, so I still upvoted!

Comment author: Stuart_Armstrong 24 September 2009 09:45:35AM 0 points [-]

I think I've got a proof somewhere that SIA (combined with the Self Sampling Assumption, ie the general assumption behind the doomsday argument) has no consequences on future events at all.

(Apart from future events that are really about the past; ie "will tomorrow's astronomers discover we live in a large universe rather than a small one").

Comment author: Nubulous 24 September 2009 02:32:16PM *  2 points [-]

The reason all these problems are so tricky is that they assume there's a "you" (or a "that guy") who has a view of both possible outcomes. But since there aren't the same number of people for both outcomes, it isn't possible to match up each person on one side with one on the other to make such a "you".
Compensating for this should be easy enough, and will make the people-counting parts of the problems explicit, rather than mysterious.

I suspect this is also why the doomsday argument fails. Since it's not possible to define a set of people who "might have had" either outcome, the argument can't be constructed in the first place.

As usual, apologies if this is already known, obvious or discredited.

Comment author: Vladimir_Nesov 23 September 2009 04:31:33PM *  2 points [-]

weighted according to the probability of that observer existing

Existence is relative: there is a fact of the matter (or rather: procedure to find out) about which things exist where relative to me, for example in the same room, or in the same world, but this concept breaks down when you ask about "absolute" existence. Absolute existence is inconsistent, as everything goes. Relative existence of yourself is a trivial question with a trivial answer.

(I just wanted to state it simply, even though this argument is a part of a huge standard narrative. Of course, a global probability distribution can try to represent this relativity in its conditional forms, but it's a rather contrived thing to do.)

Comment author: Eliezer_Yudkowsky 23 September 2009 06:20:03PM 1 point [-]

Absolute existence is inconsistent

Wha?

Comment author: Vladimir_Nesov 23 September 2009 08:29:15PM *  1 point [-]

In the sense that "every mathematical structure exists", the concept of "existence" is trivial, as from it follows every "structure", which is after a fashion a definition of inconsistency (and so seems to be fair game for informal use of the term). Of course, "existence" often refers to much more meaningful "existence in the same world", with reasonably constrained senses of "world".

Comment author: cousin_it 24 September 2009 08:19:44AM 0 points [-]

"every mathematical structure exists"

How do you know that?

Comment author: loqi 25 September 2009 03:52:33AM 0 points [-]

An ensemble-type definition of existence seems more like an attempt to generalize the term than it does an empirical statement of fact. What would it even mean for a mathematical structure to not exist?

Comment author: Unknowns 24 September 2009 06:38:01AM 3 points [-]

At case D, your probability changes from 99% to 50%, because only people who survive are ever in a position to know about the setup; in other words, there is a 50% chance that only red-doored people know, and a 50% chance that only blue-doored people know.

After that, the probability remains at 50% all the way through.

The fact that no one has mentioned this in 44 comments is a sign of incredibly strong wishful thinking, simply "wanting" the Doomsday argument to be incorrect.

Comment author: Stuart_Armstrong 24 September 2009 02:46:17PM 0 points [-]

Then put a situation C' between C and D, in which people who are to be killed will be informed about the situation just before being killed (the survivors are still only told after the fact).

Then how does telling these people something just before putting them to death change anything for the survivors?

Comment author: Unknowns 24 September 2009 03:19:20PM *  1 point [-]

In C', the probability of being behind a blue door remains at 99% (as you wished it to), both for whoever is killed, and for the survivor(s). But the reason for this is that everyone finds out all the facts, and the survivor(s) know that even if the coin flip had gone the other way, they would have known the facts, only before being killed, while those who are killed know that they would have known the facts afterward, if the coin flip had gone the other way.

Telling the people something just before death changes something for the survivors, because the survivors are told that the other people are told something. This additional knowledge changes the subjective estimate of the survivors (in comparison to what it would be if they were told that the non-survivors are not told anything.)

In case D, on the other hand, all the survivors know that only survivors ever know the situation, and so they assign a 50% probability to being behind a blue door.

Comment author: prase 24 September 2009 07:28:11PM *  0 points [-]

I don't see it. In D, you are informed that 100 people were created, separated into two groups, and each group then had a 50% chance of survival. You survived. So calculate the probability and

P(red | survival) = P(survival and red) / P(survival) = 0.005 / 0.5 = 1%.

Not 50%.
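
(Spelling the same calculation out in Python, under the sampling assumption it relies on - that you are a uniform draw from all 100 people created:)

    p_red, p_blue = 0.01, 0.99
    p_surv_given_red = 0.5      # each person survives on exactly one side of the coin
    p_surv_given_blue = 0.5

    p_surv_and_red = p_red * p_surv_given_red               # 0.005
    p_surv = p_surv_and_red + p_blue * p_surv_given_blue    # 0.5
    print(p_surv_and_red / p_surv)                          # 0.01, i.e. 1%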

Comment author: Unknowns 25 September 2009 06:26:43AM *  0 points [-]

This calculation is incorrect because "you" are by definition someone who has survived (in case D, where the non-survivors never know about it); had the coin flip gone the other way, "you" would have been chosen from the other survivors. So you can't update on survival in that way.

You do update on survival, but like this: you know there were two groups of people, each of which had a 50% chance of surviving. You survived. So there is a 50% chance you are in one group, and a 50% chance you are in the other.

Comment author: prase 25 September 2009 02:54:29PM *  0 points [-]

had the coin flip gone the other way, "you" would have been chosen from the other survivors

Thanks for the explanation. The disagreement apparently stems from different ideas about the set of possibilities over which one takes the uniform distribution.

I prefer such reasoning: There is a set of people existing at least at some moment in the history of the universe, and the creator assigns "your" consciousness to one of these people with uniform distribution. But this would allow me to update on survival exactly the way I did. However, the smooth transition would break between E and F.

What you describe, as I understand, is that the assignment is done with uniform distribution not over people ever existing, but over people existing in the moment when they are told the rules (so people who are never told the rules don't count). This seems to me pretty arbitrary and hard to generalise (and also dangerously close to survivorship bias).

In case of SIA, the uniform distribution is extended to cover the set of hypothetically existing people, too. Do I understand it correctly?

Comment author: Unknowns 25 September 2009 03:24:20PM 3 points [-]

Right, SIA assumes that you are a random observer from the set of all possible observers, and so it follows that worlds with more real people are more likely to contain you.

This is clearly unreasonable, because "you" could not have found yourself to be one of the non-real people. "You" is just a name for whoever finds himself to be real. This is why you should consider yourself a random selection from the real people.

In the particular case under consideration, you should consider yourself a random selection from the people who are told the rules. This is because only those people can estimate the probability; in as much as you estimate the probability, you could not possibly have found yourself to be one of those who are not told the rules.

Comment author: prase 25 September 2009 05:31:31PM *  0 points [-]

So, what if the setting is the same as in B or C, except that "you" know that only "you" are told the rules?

Comment author: Unknowns 25 September 2009 06:45:49PM *  0 points [-]

That's a complicated question, because in this case your estimate will depend on your estimate of the reasons why you were selected as the one to know the rules. If you are 100% certain that you were randomly selected out of all the persons, and it could have been a person killed who was told the rules (before he was killed), then your probability of being behind a blue door will be 99%.

If you are 100% certain that you were deliberately chosen as a survivor, and if someone else had survived and you had not, the other would have been told the rules and not you, then your probability will be 50%.

To the degree that you are uncertain about how the choice was made, your probability will be somewhere between these two values.

Comment author: KatjaGrace 13 January 2010 07:30:11AM -1 points [-]

You could have been one of those who didn't learn the rules, you just wouldn't have found out about it. Why doesn't the fact that this didn't happen tell you anything?

Comment author: Stuart_Armstrong 24 September 2009 06:38:08PM 0 points [-]

What is your feeling in the case where the victims are first told they will be killed, then the situation is explained to them and finally they are killed?

Similarly, the survivors are first told they will survive, and then the situation is explained to them.

Comment author: Unknowns 25 September 2009 06:33:52AM 2 points [-]

This is basically the same as C'. The probability of being behind a blue door remains at 99%, both for those who are killed, and for those who survive.

There cannot be a continuous series between the two extremes, since in order to get from one to the other, you have to make some people go from existing in the first case, to not existing in the last case. This implies that they go from knowing something in the first case, to not knowing anything in the last case. If the other people (who always exist) know this fact, then this can affect their subjective probability. If they don't know, then we're talking about an entirely different situation.

Comment author: Stuart_Armstrong 25 September 2009 07:20:45AM 0 points [-]

PS: Thanks for your assiduous attempts to explain your position, it's very useful.

Comment author: Stuart_Armstrong 25 September 2009 07:19:48AM 0 points [-]

A rather curious claim, I have to say.

There is a group of people, and you are clearly not in their group - in fact the first thing you know, and the first thing they know, is that you are not in the same group.

Yet your own subjective probability of being blue-doored depends on what they were told just before being killed. So if an absent-minded executioner wanders in and says "maybe I told them, maybe I didn't - I forget", that "I forget" contains the difference between a 99% and a 50% chance of you being blue-doored.

To push it still further, if there were to be two experiments, side by side - world C'' and world X'' - with world X'' inverting the proportion of red and blue doors, then this type of reasoning would put you in a curious situation. If everyone were first told: "you are a survivor/victim of world C''/X'' with 99% blue/red doors", and then the situation were explained to them, the above reasoning would imply that you had a 50% chance of being blue-doored whatever world you were in!

Unless you can explain why "being in world C''/X'' " is a permissible piece of info to put you in a different class, while "you are a survivor/victim" is not, then I can walk the above paradox back down to A (and its inverse, Z), and get 50% odds in situations where they are clearly not justified.

Comment author: Unknowns 25 September 2009 03:16:25PM 0 points [-]

I don't understand your duplicate world idea well enough to respond to it yet. Do you mean they are told which world they are in, or just that they are told that there are the two worlds, and whether they survive, but not which world they are in?

The basic class idea I am supporting is that in order to count myself as in the same class with someone else, we both have to have access to basically the same probability-affecting information. So I cannot be in the same class with someone who does not exist but might have existed, because he has no access to any information. Similarly, if I am told the situation but he is not, I am not in the same class as him, because I can estimate the probability and he cannot. But the order in which the information is presented should not affect the probability, as long as all of it is presented to everyone. The difference between being a survivor and being a victim (if all are told) clearly does not change your class, because it is not part of the probability-affecting information. As you argued yourself, the probability remains at 99% when you hear this.

Comment author: Stuart_Armstrong 26 September 2009 11:52:05AM 0 points [-]

Let's simplify this. Take C, and create a bunch of other observers in another set of rooms. These observers will be killed; it is explained to them that they will be killed, and then the rules of the whole setup, and then they are killed.

Do you feel these extra observers will change anything from the probability perspective?

Comment author: Unknowns 26 September 2009 09:11:58PM 0 points [-]

No. But this is not because these observers are told they will be killed, but because their death does not depend on a coin flip, but is part of the rules. We could suppose that they are rooms with green doors, and after the situation has been explained to them, they know they are in rooms with green doors. But the other observers, whether they are to be killed or not, know that this depends on the coin flip, and they do not know the color of their door, except that it is not green.

Comment author: Stuart_Armstrong 27 September 2009 05:40:47PM 1 point [-]

Actually, strike that - we haven't reached the limit of useful argument!

Consider the following scenario: the number of extra observers (that will get killed anyway) is a trillion. Only the extra observers, and the survivors, will be told the rules of the game.

Under your rules, this would mean that your probability for the coin flip is exactly 50-50.

Then, you are told you are not an extra observer, and won't be killed. There is a 1/(trillion + 1) chance that you would be told this if the coin had come up heads, and a 99/(trillion + 99) chance if the coin had come up tails. So your posterior odds are now essentially 99%-1% again. These trillion extra observers have brought you back close to SIA odds.
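
(The arithmetic, sketched in Python with 1e12 standing in for "a trillion":)

    extras = 1e12

    # Within the reference class "people who are told the rules", the coin starts 50-50:
    prior_heads = prior_tails = 0.5

    # Chance of then learning "you are not an extra observer and you survived":
    like_heads = 1 / (extras + 1)      # heads (as above): one survivor among those told
    like_tails = 99 / (extras + 99)    # tails: 99 blue-doored survivors among those told

    post_heads = prior_heads * like_heads
    post_tails = prior_tails * like_tails
    print(post_tails / (post_heads + post_tails))   # ≈ 0.99: back to the SIA answer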

Comment author: Stuart_Armstrong 27 September 2009 05:25:52PM *  0 points [-]

I think we've reached the limit of productive argument; the SIA, and the negation of the SIA, are both logically coherent (they are essentially just different priors on your subjective experience of being alive). So I won't be able to convince you, if I haven't so far. And I haven't been convinced.

But do consider the oddity of your position - you claim that if you were told you would survive, told the rules of the set-up, and then the executioner said to you "you know those people who were killed - who never shared the current subjective experience that you have now, and who are dead - well, before they died, I told them/didn't tell them..." then your probability estimate of your current state would change depending on what he told these dead people.

But you similarly claim that if the executioner said the same thing about the extra observers, then your probability estimate would not change, whatever he said to them.

Comment author: casebash 11 January 2016 10:21:00AM 0 points [-]

The answer in C' depends on your reference class. If your reference class is everyone, then it remains 99%. If your reference class is survivors, then it becomes 50%.

Comment author: Stuart_Armstrong 11 January 2016 11:51:09AM 0 points [-]

Which shows how odd and arbitrary reference classes are.

Comment author: entirelyuseless 11 January 2016 02:50:14PM 0 points [-]

I don't think it is arbitrary. I responded to that argument in the comment chain here and still agree with that. (I am the same person as user Unknowns but changed my username some time ago.)

Comment author: CronoDAS 23 September 2009 04:56:43PM 3 points [-]

What bugs me about the doomsday argument is this: it's a stopped clock. In other words, it always gives the same answer regardless of who applies it.

Consider a bacterial colony that starts with a single individual, is going to live for N doublings, and then will die out completely. Each generation, applying the doomsday argument, will conclude that it has a better than 50% chance of being the final generation, because, at any given time, slightly more than half of all colony bacteria that have ever existed currently exist. The doomsday argument tells the bacteria absolutely nothing about the value of N.

Comment author: Eliezer_Yudkowsky 23 September 2009 06:19:23PM 8 points [-]

But they'll be well-calibrated in their expectation - most generations will be wrong, but most individuals will be right.
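
(A small numerical sketch of this calibration point - my own illustration, using the doubling model from the parent comment with an assumed 20 doublings:)

    N = 20                                   # number of doublings (assumed)
    sizes = [2 ** g for g in range(N + 1)]   # generation g has 2^g members
    total = sum(sizes)
    threshold = total / 3                    # birth ranks above this are in the final 2/3

    individuals_right = 0
    generations_mostly_right = 0
    born_so_far = 0
    for size in sizes:
        in_final_two_thirds = max(0.0, min(size, born_so_far + size - threshold))
        individuals_right += in_final_two_thirds
        if in_final_two_thirds > size / 2:
            generations_mostly_right += 1
        born_so_far += size

    print(individuals_right / total)              # ≈ 2/3 of individuals are right
    print(generations_mostly_right, "of", N + 1)  # but only the last couple of generations are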

Comment author: cousin_it 24 September 2009 08:15:57AM *  3 points [-]

Woah, Eliezer defends the doomsday argument on frequentist grounds.

Comment author: JamesAndrix 24 September 2009 05:46:03AM 1 point [-]

So we might well be rejecting something based on long-standing experience, but be wrong because most of the tests will happen in the future? Makes me want to take up free energy research.

Comment author: brian_jaress 24 September 2009 07:37:17AM *  -1 points [-]

Only because of the assumption that the colony is wiped out suddenly. If, for example, the decline mirrors the rise, about two-thirds will be wrong.

ETA: I mean that 2/3 will apply the argument and be wrong. The other 1/3 won't apply the argument because they won't have exponential growth. (Of course they might think some other wrong thing.)

Comment author: Stuart_Armstrong 24 September 2009 09:49:23AM 1 point [-]

They'll be wrong about the generation part only. The "exponential growth" is needed to move from "we are in the last 2/3 of humanity" to "we are in the last few generations". If you deny exponential growth (and the SIA), the first statement is still correct, but the second is wrong.

Comment author: brian_jaress 24 September 2009 03:22:53PM 0 points [-]

They'll be wrong about the generation part only.

But that's the important part. It's called the "Doomsday Argument" for a reason: it concludes that doomsday is imminent.

Of course the last 2/3 is still going to be 2/3 of the total. So is the first 2/3.

Imminent doomsday is the only non-trivial conclusion, and it relies on the assumption that exponential growth will continue right up to a doomsday.

Comment author: gjm 23 September 2009 06:15:35PM *  4 points [-]

The fact that every generation gets the same answer doesn't (of itself) imply that it tells the bacteria nothing. Suppose you have 65536 people and flip a coin 16 [EDITED: for some reason I wrote 65536 there originally] times to decide which of them will get a prize. They can all, equally, do the arithmetic to work out that they have only a 1/65536 chance of winning. Even the one of them who actually wins. The fact that one of them will in fact win despite thinking herself very unlikely to win is not a problem with this.

Similarly, all our bacteria will think themselves likely to be living near the end of their colony's lifetime. And most of them will be right. What's the problem?

Comment author: Cyan 23 September 2009 06:44:24PM 2 points [-]

flip a coin 65536 times

I think you mean 16 times.

Comment author: gjm 24 September 2009 07:27:01AM 0 points [-]

Er, yes. I did change my mind a couple of times about what (2^n,n) pair to use, but I wasn't ever planning to have 2^65536 people so I'm not quite sure how my brain broke. Thanks for the correction.

Comment author: R0k0 23 September 2009 04:36:17PM 2 points [-]

Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self indication assumption (SIA).

What about rejecting the assumption that there will be finitely many humans? In the infinite case, the argument doesn't hold.

Comment author: Vladimir_Nesov 23 September 2009 04:55:43PM 1 point [-]

But in the finite case it supposedly does. See least convenient possible world.

Comment author: wedrifid 24 September 2009 03:19:40AM 0 points [-]

Similarly, physics as I know it prohibits an infinite number of humans. This world is inconvenient.

Still, I do think R0k0's point would be enough to discourage the absolute claim of exclusivity quoted.

Comment author: AngryParsley 24 September 2009 05:30:58PM 0 points [-]

This is a bit off-topic, but are you the same person as Roko? If not, you should change your name.

Comment author: brianm 24 September 2009 09:47:49AM *  1 point [-]

The doomsday assumption makes the assumptions that:

  1. We are randomly selected from all the observers who will ever exist.
  2. The number of observers increases exponentially, such that at any particular generation, roughly 2/3 of all observers who have ever lived are among the most recent generations
  3. They are wiped out by a catastrophic event, rather than slowly dwindling or declining in some other way

(Now, those assumptions are a bit dubious - things change if, for instance, we develop life-extension tech or otherwise increase the rate of growth, so that a higher than 2/3 proportion will live in future generations (e.g. if the next generation is immortal, they're guaranteed to be the last, and we're much less likely to be among them, depending on how long people are likely to survive after that). Alternatively, growth could plateau or fluctuate around the carrying capacity of a planet if most potential observers never expand beyond it.) However, assuming the assumptions hold, I think the argument is valid.

I don't think your situation alters the argument; it just changes some of the assumptions. At point D, it reverts to the original doomsday scenario, and the odds switch back.

At D, the point you're made aware, you know that you're in the proportion of people who live. Only 50% of the people who ever existed in this scenario learn this, and 99% of them are blue-doors. Only looking at the people at this point is changing the selection criteria - you're only picking from survivors, never from those who are now dead despite the fact that they are real people we could have been. If those could be included in the selection (as they are if you give them the information and ask them before they would have died), the situation would remain as in A-C.

Not creating the losing potential people makes this more explicit. If we're randomly selecting from people who ever exist, we'll only ever pick those who get created, who will be predominantly blue-doored if we run the experiment multiple times.

Comment author: SilasBarta 24 September 2009 04:38:43PM 1 point [-]

The doomsday assumption makes the assumptions that:

We are randomly selected from all the observers who will ever exist.

Actually, it requires that we be selected from a small subset of these observers, such as "humans" or "conscious entities" or, perhaps most appropriate, "beings capable of reflecting on this problem".

They are wiped out by a catastrophic event, rather than slowly dwindling

Well, for the numbers to work out, there would have to be a sharp drop-off before the slow-dwindling, which is roughly as worrisome as a "pure doomsday".

Comment author: Stuart_Armstrong 24 September 2009 11:34:53AM 1 point [-]

At D, the point you're made aware, you know that you're in the proportion of people who live.

Then what about introducing a C' between C and D: You are told the initial rules. Then, later you are told about the killing, and then, even later, that the killing had already happened and that you were spared.

What would you say the odds were there?

Comment author: brianm 24 September 2009 12:55:18PM 2 points [-]

Thinking this through a bit more, you're right - this really makes no difference. (And in fact, re-reading my post, my reasoning is rather confused - I think I ended up agreeing with the conclusion while also (incorrectly) disagreeing with the argument.)

Comment author: tadamsmar 26 May 2010 04:08:27PM *  1 point [-]

The Wikipedia article on the SIA points out that it is not an assumption, but a theorem or corollary. You have simply shown this fact again. Bostrom probably first named it an assumption, but it is neither an axiom nor an assumption. You can derive it from these assumptions:

  1. I am a random sample
  2. I may never have been born
  3. The pdf for the number of humans is independent of the pdf for my birth order number

Comment author: DanArmak 25 September 2009 07:18:33PM *  1 point [-]

I don't see how the SIA refutes the complete DA (Doomsday Argument).

The SIA shows that a universe with more observers in your reference class is more likely. This is the set used when "considering myself as a random observer drawn from the space of all possible observers" - it's not really all possible observers.

How small is this set? Well, if we rely on just the argument given here for SIA, it's very small indeed. Suppose the experimenter stipulates an additional rule: he flips a second coin; if it comes up heads, he creates 10^10 extra copies of you; if tails, he does nothing. However, these extra copies are not created inside rooms at all. You know you're not one of them, because you're in one of the rooms. The outcome of the second coin flip is made known to you. But it clearly doesn't influence your bet on your door's color, even when it increases the number of observers in your universe 10^8 times, and even though these extra observers are complete copies of your life up to this point, who are only placed in a different situation from you in the last second.

Now, the DA can be reformulated: instead of the set of all humans ever to live, consider the set of all humans (or groups of humans) who would never confuse themselves with one another. In this set the SIA doesn't apply (we don't predict that a bigger set is more likely). The DA does apply, because humans from different eras are dissimilar and can be indexed as the DA requires. To illustrate, I expect that if I were taken at any point in my life and instantly placed at some point of Leonardo da Vinci's life, I would very quickly realize something was wrong.

Presumed conclusion: if humanity does not become extinct totally, expect other humans to be more and more similar to yourself as time passes, until you survive only in a universe inhabited by a Huge Number of Clones

It also appears that I should assign very high probability to the chance that a non-Friendly super-intelligent AI destroys the rest of humanity to tile the universe with copies of myself in tiny life-support bubbles. Or with simulators running my life up to then in a loop forever.

Comment deleted 25 September 2009 08:05:27PM *  [-]
Comment deleted 25 September 2009 08:15:57PM [-]
Comment author: Vladimir_Nesov 25 September 2009 09:10:21AM *  1 point [-]

As we are discussing SIA, I'd like to remind about counterfactual zombie thought experiment:

Omega comes to you and offers $1, explaining that it decided to do so if and only if it predicts that you won't take the money. What do you do? It looks neutral, since expected gain in both cases is zero. But the decision to take the $1 sounds rather bizarre: if you take the $1, then you don't exist!

Agents self-consistent under reflection are counterfactual zombies, indifferent to whether they are real or not.

This shows that the inference "I think therefore I exist" is, in general, invalid. You can't update on your own existence (although you can use more specific info as parameters in your strategy).

Rather, you should look at yourself as an implication: "If I exist in this situation, then my actions are as I now decide".

Comment author: Jack 05 October 2009 10:46:37PM 1 point [-]

But the decision to take the $1 sounds rather bizarre: if you take the $1, then you don't exist!

No. It just means you are a simulation. These are very different things. "I think therefore I am" is still deductively valid (and really, do you want to give the predicate calculus that knife in the back?). You might not be what you thought you were but all "I" refers to is the originator of the utterance.

Comment author: Vladimir_Nesov 05 October 2009 11:07:41PM *  1 point [-]

No. It just means you are a simulation.

Remember: there was no simulation, only prediction. Distinction with a difference.

Comment author: Jack 05 October 2009 11:25:24PM 0 points [-]

Then if you take the money, Omega was just wrong. Full stop. And in this case, if you take the dollar, the expected gain is a dollar.

Or else you need to clarify.

Comment author: Vladimir_Nesov 06 October 2009 12:15:31PM 1 point [-]

Then if you take the money Omega was just wrong. Full stop.

Assuming that you won't actually take the money, what would a plan to take the money mean? It's a kind of retroactive impossibility, where among two options one is impossible not because you can't push that button, but because you won't be there to push it. Usual impossibility is just additional info for the could-should picture of the game, to be updated on, so that you exclude the option from consideration. This kind of impossibility is conceptually trickier.

Comment author: Jack 06 October 2009 04:11:59PM *  2 points [-]

I don't see how my non-existence gets implied. Why isn't a plan to take the money either a plan that will fail to work (your arm won't respond to your brain's commands, you'll die, you'll tunnel to the Moon, etc.) or a plan that would imply Omega was wrong and shouldn't have made the offer?

My existence is already posited once you've said that Omega has offered me this deal. What happens after that bears on whether or not Omega is correct and what properties I have (i.e. what I am).

There exists (x) & there exists (y) such that Ox & Iy & ($xy <--> N$yx)

Where O = is Omega, I = is me, $ = offers one dollar to, N$ = won't take the dollar from. I don't see how one can take that, add new information, and conclude ~(there exists (y)).

Comment author: Stuart_Armstrong 25 September 2009 11:34:51AM 0 points [-]

I don't get it, I have to admit. All the experiment seems to be saying is that "if I take the $1, I exist only as a short-term simulation in Omega's mind". It says you don't exist as a long-term separate individual, but doesn't say you don't exist in this very moment...

Comment author: Vladimir_Nesov 25 September 2009 11:38:33AM *  0 points [-]

Simulation is a very specific form of prediction (but the most intuitive, when it comes to prediction of difficult decisions). Prediction doesn't imply simulation. At this very moment I predict that you will choose to NOT cut your own hand off with an axe when asked to, but I'm not simulating you.

Comment author: Stuart_Armstrong 25 September 2009 12:48:23PM 0 points [-]

In that case (I'll return to the whole simulation/prediction issue some other time), I don't follow the logic at all. If Omega offers you that deal, and you take the money, all that you have shown is that Omega is in error.

But maybe it's a consequence of advanced decision theory?

Comment author: Vladimir_Nesov 25 September 2009 01:30:52PM *  0 points [-]

That's the central issue of this paradox: the part of the scenario before you take the money can actually exist, but if you choose to take the money, it follows that it doesn't. The paradox doesn't take for granted that the described scenario does take place, it describes what happens (could happen) from your perspective, in a way in which you'd plan your own actions, not from the external perspective.

Think of your thought process in the case where in the end you decide not to take the money: how you consider taking the money, and what that action would mean (that is, what's its effect in the generalized sense of TDT, like the effect of you cooperating in PD on the other player or the effect of one-boxing on contents of the boxes). I suggest that the planned action of taking the money means that you don't exist in that scenario.

Comment author: Stuart_Armstrong 26 September 2009 11:56:31AM 3 points [-]

I see it, somewhat. But this sounds a lot like "I'm Omega, I am trustworthy and accurate, and I will only speak to you if I've predicted you will not imagine a pink rhinoceros as soon as you hear this sentence".

The correct conclusion seems to be that Omega is not what he says he is, rather than "I don't exist".

Comment author: Eliezer_Yudkowsky 26 September 2009 06:21:58PM 1 point [-]

The decision diagonal in TDT is a simple computation (at least, it looks simple assuming large complicated black-boxes, like a causal model of reality) and there's no particular reason that equation can only execute in sentient contexts. Faced with Omega in this case, I take the $1 - there is no reason for me not to do so - and conclude that Omega incorrectly executed the equation in the context outside my own mind.

Even if we suppose that "cogito ergo sum" presents an extra bit of evidence to me, whereby I truly know that I am the "real" me and not just the simple equation in a nonsentient context, it is still easy enough for Omega to simulate that equation plus the extra (false) bit of info, thereby recorrelating it with me.

If Omega really follows the stated algorithm for Omega, then the decision equation never executes in a sentient context. If it executes in a sentient context, then I know Omega wasn't following the stated algorithm. Just like if Omega says "I will offer you this $1 only if 1 = 2" and then offers you the $1.

Comment author: Johnicholas 26 September 2009 05:16:29PM *  1 point [-]

When the problem contains a self-contradiction like this, there is not actually one "obvious" proposition which must be false. One of them must be false, certainly, but it is not possible to derive which one from the problem statement.

Compare this problem to another, possibly more symmetrical, problem with self-contradictory premises:

http://en.wikipedia.org/wiki/Irresistible_force_paradox

Comment author: RichardChappell 24 September 2009 06:24:20AM 1 point [-]

99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.

Might it make a difference that in scenario F, there is an actual process (namely, the coin toss) which could have given rise to the alternative outcome? Note the lack of any analogous mechanism for "bringing into existence" one out of all the possible worlds. One might maintain that this metaphysical disanalogy also makes an epistemic difference. (Compare cousin_it's questioning of a uniform prior across possible worlds.)

In other words, it seems that one could consistently maintain that self-indication principles only hold with respect to possibilities that were "historically possible", in the sense of being counterfactually dependent on some actual "chancy" event. Not all possible worlds are historically possible in this sense, so some further argument is required to yield the SIA in full generality.

(You may well be able to provide such an argument. I mean this comment more as an invitation than a criticism.)

Comment author: Stuart_Armstrong 24 September 2009 10:10:16AM 0 points [-]

This is a standard objection, and one that used to convince me. But I really can't see that F is different from E, and so on down the line. Where exactly does this issue come up? Is it in the change from E to F, or earlier?

Comment author: RichardChappell 24 September 2009 03:46:56PM 0 points [-]

No, I was suggesting that the difference is between F and SIA.

Comment author: Stuart_Armstrong 24 September 2009 05:58:30PM 1 point [-]

Ah, I see. This is more a question about the exact meaning of probability, i.e. the difference between a frequentist approach and a Bayesian "degree of belief".

To get a "degree of belief" SIA, extend F to G: here you are simply told that one of two possible universes happened (A and B), in which a certain amount of copies of you were created. You should then set your subjective probability to 50%, in the absence of other information. Then you are told the numbers, and need to update your estimate.

If your estimates for G differ from those for F, then you are in the odd position of having started with a 50-50 probability estimate and then updated - yet if you were ever told that the initial 50-50 came from a coin toss rather than being an arbitrary guess, you would have to change your estimates!

I think this argument extends it to G, and hence to universal SIA.

Comment author: RichardChappell 24 September 2009 07:04:05PM 0 points [-]

Thanks, that's helpful. Though intuitively, it doesn't seem so unreasonable to treat a credal state due to knowledge of chances differently from one that instead reflects total ignorance. (Even Bayesians want some way to distinguish these, right?)

Comment author: JGWeissman 24 September 2009 07:16:42PM 1 point [-]

What do you mean by "knowledge of chances"? There is no inherent chance or probability in a coin flip. The result is deterministically determined by the state of the coin, its environment, and how it is flipped. The probability of .5 for heads represents your own ignorance of all these initial conditions and your inability, even if you had all that information, to perform all the computation to reach to logical conclusion of what the result will be.

Comment author: RichardChappell 24 September 2009 07:30:52PM 0 points [-]

I'm just talking about the difference between, e.g., knowing that a coin is fair, versus not having a clue about the properties of the coin and its propensity to produce various outcomes given minor permutations in initial conditions.

Comment author: JGWeissman 24 September 2009 07:46:11PM 2 points [-]

By "a coin is fair", do you mean that if we considered all the possible environments in which the coin could be flipped (or some subset we care about), and all the ways the coin could be flipped, then in half the combinations the result will be heads, and in the other half the result will be tails?

Why should that matter? In the actual coin flip whose result we care about, the whole system is not "fair", there is one result that it definitely produces, and our probabilities just represent our uncertainty about which one.

What if I tell you the coin is not fair, but I don't have any clue which side it favors? Your probability for the result of heads is still .5, and we still reach all the same conclusions.

Comment author: RichardChappell 24 September 2009 08:34:53PM 1 point [-]

For one thing, it'll change how we update. Suppose the coin lands heads ten times in a row. If we have independent knowledge that it's fair, we'll still assign 0.5 credence to the next toss. Otherwise, if we began in a state of pure ignorance, we might start to suspect that the coin is biased, and so have different expectations.
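A quick numerical illustration of the difference (a minimal sketch in Python; the Beta(1,1) prior is just one way to model "pure ignorance" about the bias):

```python
from fractions import Fraction

# Coin known to be fair: the credence for the next toss never moves,
# no matter how many heads we have seen.
p_next_known_fair = Fraction(1, 2)

# Unknown bias with a uniform Beta(1,1) prior: after h heads in n tosses,
# the posterior is Beta(h+1, n-h+1), so P(next is heads) = (h+1)/(n+2)
# (Laplace's rule of succession).
n, h = 10, 10
p_next_ignorance = Fraction(h + 1, n + 2)

print(p_next_known_fair)  # 1/2
print(p_next_ignorance)   # 11/12 -- ten heads in a row now suggests a biased coin
```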

Comment author: JGWeissman 24 September 2009 09:53:29PM 1 point [-]

For one thing, it'll change how we update.

That is true, but in the scenario, you never learn the result of a coin flip to update on. So why does it matter?

Comment author: DanArmak 23 September 2009 07:41:03PM *  1 point [-]

Final edit: I now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument. Please ignore my comments below. (Should I delete in such cases?)


I don't understand what precisely is wrong with the following intuitive argument, which contradicts the p=.99 result of SIA:

In scenarios E and F, I first wake up after the other people are killed (or not created) based on the coin flip. No-one ever wakes up and is killed later. So I am in a blue room if and only if the coin came up heads (and no observer was created in the red room). Therefore P(blue)=P(heads)=0.5, and P(red)=P(tails)=0.5.

Edit: I'm having problems wrapping my head around this logic... Which prevents me from understanding all the LW discussion in recent months about decision theories, since it often considers such scenarios. Could someone give me a pointer please?

Before the coin is flipped and I am placed in a room, clearly I should predict P(heads)=0.5. Afterwards, to shift to P(heads)=0.99 would require updating on the evidence that I am alive. How exactly can I do this if I can't ever update on the evidence that I am dead? (This is the scenario where no-one is ever killed.)

I feel like I need to go back and spell out formally what constitutes legal Bayesian evidence. Is this written out somewhere in a way that permits SIA (my own existence as evidence)? I'm used to considering only evidence to which there could possibly be alternative evidence that I did not in fact observe. Please excuse a rookie as these must be well understood issues.

Comment author: Unknowns 24 September 2009 03:34:28PM 1 point [-]

There's nothing wrong with this argument. In E and F (and also in D in fact), the probability is indeed 50%.

Comment author: JamesAndrix 25 September 2009 04:38:34AM 0 points [-]

How would you go about betting on that?

Comment author: Unknowns 25 September 2009 03:38:10PM 1 point [-]

If I were actually in situation A, B, or C, I would expect a 99% chance of a blue door, and in D, E, or F, a 50%, and I would actually bet with this expectation.

There is really no practical way to implement this, however, because the setup assumes that random events turn out a certain way: supposedly there is only a 50% chance that I survive, and yet I always do, in order for the case to be the one under consideration.

Comment author: JamesAndrix 25 September 2009 05:19:47PM 1 point [-]

Omega runs 10,000 trials of scenario F, and puts you in touch with 100 random people still in their room who believe there is a 50% chance they have red doors, and will happily take 10 to 1 bets that they do.

You take these bets, collect $1 each from 98 of them, and pay out $10 each to 2.

Were their bets rational?
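A rough simulation of this betting setup (my sketch, with a fixed seed; the exact counts will wobble from run to run):

```python
import random

random.seed(0)
survivors = []  # (trial index, door colour) for everyone still in a room

for trial in range(10_000):
    if random.random() < 0.5:                  # heads: the red-doored person is killed
        survivors += [(trial, "blue")] * 99
    else:                                      # tails: the 99 blue-doored people are killed
        survivors += [(trial, "red")]

sample = random.sample(survivors, 100)         # 100 random people still in their rooms
blue = sum(colour == "blue" for _, colour in sample)
print(blue, 100 - blue)                        # roughly 99 blue-doored to 1 red-doored
```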

Comment author: Unknowns 25 September 2009 06:37:42PM *  1 point [-]

You assume that the 100 people have been chosen randomly from all the people in the 10,000 trials. This is not valid. The appropriate way for these bets to take place is to choose one random person from one trial, then another random person from another trial, and so on. In this way about 50 of the hundred persons will be behind red doors.

The reason for this is that if I know that this setup has taken place 10,000 times, my estimate of the probability that I am behind a blue door will not be the same as if the setup has happened only once. The probability will slowly drift toward 99% as the number of trials increases. In order to prevent this drift, you have to select the persons as stated above.

Comment author: JamesAndrix 25 September 2009 07:14:35PM 0 points [-]

If you find yourself in such a room, why does your blue door estimate go up with the number of trials you know about? Your coin was still 50-50.

How much does it go up for each additional trial? I.e., what are your odds if Omega tells you you're in one of two trials of F?

Comment author: Unknowns 25 September 2009 07:37:25PM 2 points [-]

The reason is that "I" could be anyone out of the full set of two trials. So: there is a 25% chance there both trials ended with red-doored survivors; a 25% chance that both trials ended with blue-doored survivors; and a 50% chance that one ended with a red door, one with a blue.

If both were red, I have a red door (100% chance). If both were blue, I have a blue door (100% chance). But if there was one red and one blue, then there are a total of 100 people, 99 blue and one red, and I could be any of them. So in this case there is a 99% chance I am behind a blue door.

Putting these things together, if I calculate correctly, the total probability here (in the case of two trials) is that I have a 25.5% chance of being behind a red door, and a 74.5% chance of being behind a blue door. In a similar way you can show that as you add more trials, your probability will get ever closer to 99% of being behind a blue door.
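The 25.5% figure checks out under the stated assumption that "I" am drawn uniformly from all survivors across both trials; here is a small enumeration (my sketch):

```python
from fractions import Fraction

# Each trial independently ends with 99 blue survivors (heads, prob 1/2)
# or 1 red survivor (tails, prob 1/2).
endings = [(99, 0), (0, 1)]  # (blue survivors, red survivors)

p_red = Fraction(0)
for b1, r1 in endings:
    for b2, r2 in endings:
        blue, red = b1 + b2, r1 + r2
        # each coin combination has probability 1/4; I am a uniformly random survivor
        p_red += Fraction(1, 4) * Fraction(red, blue + red)

print(p_red, float(p_red))  # 51/200 = 0.255, i.e. 25.5% red and 74.5% blue
```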

Comment author: JamesAndrix 25 September 2009 08:25:07PM 0 points [-]

But if there was one red and one blue, then there are a total of 100 people, 99 blue and one red, and I could be any of them. So in this case there is a 99% chance I am behind a blue door.

You could only be in one trial or the other.

What if Omega says you're in the second trial, not the first?

Or trial 3854 of 10,000?

Comment author: Unknowns 25 September 2009 08:33:58PM 1 point [-]

"I could be any of them" in the sense that all the factors that influence my estimate of the probability, will influence the estimate of the probability made by all the others. Omega may tell me I am in the second trial, but he could equally tell someone else (or me) that he is in the first trial. There are still 100 persons, 99 behind blue doors and 1 behind red, and in every way which is relevant, I could be any of them. Thinking that the number of my trial makes a difference would be like thinking that if Omega tells me I have brown eyes and someone else has blue, that should change my estimate.

Likewise with trial 3854 out of 10,000. Naturally each person is in one of the trials, but the person's trial number does not make a significant contribution to his estimate. So I stand by the previous comments.

Comment author: DanArmak 25 September 2009 07:48:59PM 0 points [-]

Thanks! I think this comment is the best so far for demonstrating the confusion (well, I was confused :-) about the different possible meanings of the phrase "you are an observer chosen from such and such set". Perhaps a more precise and unambiguous phrasing could be used.

Comment author: JamesAndrix 24 September 2009 05:08:20AM -2 points [-]

Replace death with the light in the room being shut off.

Comment author: DanArmak 24 September 2009 10:39:26AM 0 points [-]

That's not applicable to scenarios E and F, which is where I have a problem. The observers there never wake up or are never created (depending on the coin toss), so I can't replace that with a conscious observer and the light going off.

Whereas in scenarios A through D, you don't need SIA to reach the (correct) p=.99 conclusion, you don't even need the existence of observers other than yourself. Just reformulate as: I was moved to a room at random; the inhabitants of some rooms, if any, were killed based on a coin flip; etc.

Comment author: JamesAndrix 24 September 2009 03:30:56PM 0 points [-]

I can't replace that with a conscious observer and the light going off.

Do it anyway. Take a scenario in which the light is shut off while you are sleeping, or never turned on. What does waking up with the lights on (or off) tell you about the color of the door?

Even in A thru D, the dead can't update.

Comment author: DanArmak 24 September 2009 07:21:39PM 0 points [-]

The state of the lights tells me nothing about the color of the door. Whatever color room I happen to be in, the coin toss will turn my lights on or off with 50% probability.

I don't see what you intend me to learn from this example...

Comment author: JamesAndrix 25 September 2009 12:53:26PM *  1 point [-]

That dead or alive you are still most likely behind a blue door. You can use the lights being on as evidence just as well as your being alive.

That in B through D you are already updating based on your continued existence.

Beforehand you would expect a 50% chance of dying. Later, if you are alive, then the coin probably came up heads. In E and F, you wake up, you know the coin flip is in your past, and you know that most 'survivors' of situations like this come out from behind blue doors.

If you play Russian roulette and survive, you can have a much greater than 5/6 confidence that the chamber wasn't loaded.

You can be very certain that you have great grandparents, given only your existence and basic knowledge about the world.

Comment author: DanArmak 25 September 2009 05:41:21PM *  0 points [-]

That dead or alive you are still most likely behind a blue door.

In A-D this is correct. I start out probably behind a blue door (p=.99), and dying or not doesn't influence that.

In E-F this is not correct. Your words "dead or alive" simply don't apply: the dead observers never were alive (and conscious) in these scenarios. They were created and then destroyed without waking up. There is no possible sense in which "I" could be one of them; I am by definition alive now, or at least was alive at some point in the past. Even under the assumptions of the SIA, a universe with potential observers that never actually materialize isn't the same as one with actual observers.

I still think that in E-F, I'm equally likely to be behind a blue or a red door.

That in B through D you are already updating based on your continued existence.

Correct. The crucial difference is that in B-D I could have died but didn't. In other Everett branches where the coin toss went the other way I did die. So I can talk about the probability of the branch where I survive, and update on the fact that I did survive.

But in E-F I could never have died! There is no branch of possibility where any conscious observer has died in E-F. That's why no observer can update on being alive there; they are all alive with p=1.

You can be very certain that you have great grandparents, given only your existence and basic knowledge about the world.

Yes, because in our world there are people who fail to have grandchildren, and so there are potential grandchildren who don't actually come to exist.

But in the world of scenarios E and F there is no one who fails to exist and to leave a "descendant" that is himself five minutes later...

Comment author: DanArmak 25 September 2009 06:37:08PM 1 point [-]

I now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument. Please ignore my comments below. (Should I delete in such cases?)

Comment author: JamesAndrix 25 September 2009 06:59:51PM *  2 points [-]

I wouldn't delete, if nothing else it serves as a good example of working through the dissonance.

Edit: It would also be helpful if you explained from your own perspective why you changed your mind.

Comment author: wedrifid 25 September 2009 07:30:11PM 1 point [-]

I second James's preference, and note that as a reader I find it useful to see an edit note of some sort in comments that are no longer supported.

Comment author: CronoDAS 23 September 2009 04:41:35PM *  1 point [-]

I'm not sure about the transition from A to B; it implies that, given that you're alive, the probability of the coin having come up heads was 99%. (I'm not saying it's wrong, just that it's not immediately obvious to me.)

The rest of the steps seem fine, though.

Comment author: gjm 23 September 2009 06:19:05PM 1 point [-]

Pr(heads|alive) / Pr(tails|alive) = {by Bayes} Pr(alive|heads) / Pr(alive|tails) = {by counting} (99/100) / (1/100) = {by arithmetic} 99, so Pr(heads|alive) = 99/100. Seems reasonable enough to me.
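The same number falls out of a direct enumeration of the 200 equally weighted (door, coin) combinations (a sketch under the usual assumption that you are drawn uniformly from the 100 people):

```python
from fractions import Fraction

alive = Fraction(0)
alive_and_heads = Fraction(0)
for door in ["red"] + ["blue"] * 99:
    for coin in ["heads", "tails"]:
        p = Fraction(1, 100) * Fraction(1, 2)
        # heads kills the red-doored person, tails kills the blue-doored ones
        survives = (door == "blue" and coin == "heads") or (door == "red" and coin == "tails")
        if survives:
            alive += p
            if coin == "heads":
                alive_and_heads += p

print(alive_and_heads / alive)  # 99/100
```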

Comment author: eirenicon 23 September 2009 08:49:22PM *  0 points [-]

It doesn't matter how many observers are in either set if all observers in a set experience the same consequences.

(I think. This is a tricky one.)

Comment author: cousin_it 23 September 2009 03:37:34PM *  1 point [-]

Your justification of the SIA requires a uniform prior over possible universes. (If the coin is biased, the odds are no longer 99:1.) I don't see why the real-world SIA can assume uniformity, or what it even means. Otherwise, good post.

Comment author: Stuart_Armstrong 24 September 2009 10:06:33AM 0 points [-]

Note the line "weighted according to the probability of that observer existing".

Imagine flipping a coin twice. If the coin comes up heads first, a universe A with one observer is created. If it comes up TH, a universe B with two observers is created, and if it comes up TT, a universe C with four observers is created.

From outside, the probabilities are A:1/2, B:1/4, C:1/4. Updating with SIA gives A:1/4, B:1/4, C:1/2.

No uniform priors assumed or needed.
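In code form (a sketch of the same arithmetic):

```python
from fractions import Fraction

# (outside probability, number of observers) for each universe
universes = {"A": (Fraction(1, 2), 1), "B": (Fraction(1, 4), 2), "C": (Fraction(1, 4), 4)}

# SIA: weight each universe by its observer count, then renormalize
weights = {name: p * n for name, (p, n) in universes.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

print(posterior)  # {'A': 1/4, 'B': 1/4, 'C': 1/2}
```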

Comment author: jimmy 23 September 2009 07:06:25PM 0 points [-]

His prior is uniform because uniform is max entropy. If your prior is less than max entropy, you must have had information to update on. What is your information?

Comment author: cousin_it 24 September 2009 08:09:24AM *  1 point [-]

No, you don't get it. The space of possible universes may be continuous instead of discrete. What's a "uniform" prior over an arbitrary continuous space that has no canonical parameterization? If you say Maxent, why? If you say Jeffreys, why?

Comment author: jimmy 24 September 2009 04:43:45PM *  0 points [-]

It's possible to have uniform distributions on continuous spaces. It just becomes probability density instead of probability mass.

The reason for max entropy is that you want your distribution to match your knowledge. When you know nothing, that's maximum entropy, by definition. If you update on information that you don't have, you probabilistically screw yourself over.

If you have a hard time drawing the space out and assigning the maxent prior, you can still use the indifference principle when asked about the probability of being in a larger universe vs. a smaller universe.

Consider "antipredictions". Say I ask you "Is statement X true?" (you can't update on my psychology, since I flipped a coin to determine whether to change X to !X). The max-entropy answer is 50/50, and it's just the indifference principle.

If I now tell you that X = "I will not win the lottery if I buy a ticket", and you know nothing about what ball will come up, just that the number of winning numbers is small and the number of non-winning numbers is huge, you decide that X is very likely to be true. We've only updated on which distribution we're even talking about. If you're too confused to make that jump in a certain case, then don't.

Or you could just say that for any possible non-uniformity, it's possible that there's an opposite non-uniformity that cancels it out. What's the direction of the error?

Does that explain any better?

Comment author: cousin_it 24 September 2009 08:09:50PM *  4 points [-]

No, it doesn't. In fact I don't think you even parsed my question. Sorry.

Let's simplify the problem: what's your uninformative prior for "proportion of voters who voted for an unknown candidate"? Is it uniform on (0,1) which is given by maxent? What if I'd asked for your prior of the square of this value instead, masking it with some verbiage to sound natural - would you also reply uniform on (0,1)? Those statements are incompatible. In more complex real world situations, how exactly do you choose the parameterization of the model to feed into maxent? I see no general way. See this Wikipedia page for more discussion of this problem. In the end it recommends the Jeffreys rule for use in practice, but it's not obviously the final word.
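The incompatibility can be made concrete (my sketch): a uniform prior on the proportion p and a uniform prior on p² disagree about something as simple as P(p < 1/2).

```python
import random

# If p ~ Uniform(0,1), then P(p < 0.5) = 0.5 exactly.
# If instead q = p**2 ~ Uniform(0,1), then p = sqrt(q), and
# P(p < 0.5) = P(q < 0.25) = 0.25.  Same "ignorance", different answers.
random.seed(0)
N = 200_000
p_uniform_in_p = sum(random.random() < 0.5 for _ in range(N)) / N
p_uniform_in_p_squared = sum(random.random() ** 0.5 < 0.5 for _ in range(N)) / N

print(p_uniform_in_p)           # ~0.50
print(p_uniform_in_p_squared)   # ~0.25
```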

Comment author: jimmy 26 September 2009 05:58:42PM 0 points [-]

I see what you're saying, but I don't think it matters here. That confusion extends to uncertainty about the nth digit of pi as well; it's nothing new about different universes. If you put a uniform prior on the nth digit of pi, instead of a uniform prior on the square of the nth digit or a Jeffreys prior, why don't you do the same in the case of different universes? What prior do you use?

The point I tried to make in the last comment is that if you're asked any question, you start with the indifference principle, which is uniform in nature, and upon receiving new information (perhaps the realization that the original phrasing wasn't the 'natural' way to phrase it, or however you resolve the confusion) you can update. Since the problem never mentioned a method of parameterizing a continuous space of possible universes, I wonder how you can object to assigning uniform priors given this parameterization, or even say that he required it.

Changing the topic of our discussion, it seems like your comment is also orthogonal to the claim being presented. He basically said "given this discrete set of two possible universes (with a uniform prior), this 'proves' SIA (worded the first way)". Given SIA, you know to update on your existence if you find yourself in a continuous space of possible universes, even if you don't know where to update from.

Comment author: rosyatrandom 23 September 2009 03:14:28PM 1 point [-]

If continuity-of-consciousness immortality arguments also hold, then it simply doesn't matter whether doomsday is close - your future will avoid those scenarios.

Comment author: PlaidX 23 September 2009 03:29:06PM *  2 points [-]

It "doesn't matter" only to the extent that you care only about your own experiences, and not the broader consequences of your actions. And even then, it still matters, because if the doomsday argument holds, you should still expect to see a lot of OTHER people die soon.

Comment author: JamesAndrix 24 September 2009 05:40:39AM *  0 points [-]

you should still expect to see a lot of OTHER people die soon.

Not if the world avoiding doomsday is more likely than me, in particular, surviving doomsday. I'd guess most futures in which I live have a lot of people like me living too.

Comment author: turchin 24 August 2015 04:37:37PM 0 points [-]

SIA self-rebuttal.

If many different universes exist, and one of them contains an infinite number of all possible observers, SIA implies that I must be in it. But if an infinite number of all possible observers exists, the possibility that I might not have been born does not apply in that universe, and I can't apply SIA to the Earth's fate. The doomsday argument is back on.

Comment author: Skeptityke 20 August 2014 06:32:47PM 0 points [-]

Just taking a wild shot at this one, but I suspect that the mistake is between C and D. In C, you start with an even distribution over all the people in the experiment, and then condition on surviving. In D, your uncertainty gets allocated among the people who have survived the experiment. Once you know the rules, in C, the filter is in your future, and in D, the filter is in your past.

Comment author: Mallah 07 April 2010 05:50:08PM *  0 points [-]

Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you'd probably find yourself afterwards in either case; and the case we're really interested in, the SIA, is the limit when the time before goes to 0.

I just wanted to follow up on this remark I made. There is a subtle anthropic selection effect that I didn't include in my original analysis. As we will see, the result I derived applies if the time after is long enough, as in the SIA limit.

Let the amount of time before the killing be T1, and after (until all observers die), T2. So if there were no killing, P(after) = T2/(T2+T1). It is the ratio of the total measure of observer-moments after the killing divided by the total (after + before).

If the 1 red observer is killed (heads), then P(after|heads) = 99 T2 / (99 T2 + 100 T1)

If the 99 blue observers are killed (tails), then P(after|tails) = 1 T2 / (1 T2 + 100 T1)

P(after) = P(after|heads) P(heads) + P(after|tails) P(tails)

For example, if T1 = T2, we get P(after|heads) = 0.497, P(after|tails) = 0.0099, and P(after) = 0.497 (0.5) + 0.0099 (0.5) = 0.254

So here P(tails|after) = P(after|tails) P(tails) / P(after) = 0.0099 (.5) / (0.254) = 0.0195, or about 2%. So here we can be 98% confident to be blue observers if we are after the killing. Note, it is not 99%.

Now, in the relevant-to-SIA limit T2 >> T1, we get P(after|heads) ~ 1, P(after|tails) ~1, and P(after) ~1.

In this limit P(tails|after) = P(after|tails) P(tails) / P(after) ~ P(tails) = 0.5

So the SIA is false.
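To make the two regimes easy to compare, here is the same calculation as a small function (a sketch that just reproduces the arithmetic above; T1 and T2 are the comment's own symbols):

```python
def p_tails_given_after(T1, T2):
    # Observer-moment measure before the killing: 100 * T1 in both branches.
    # After: 99 * T2 if heads (red killed), 1 * T2 if tails (blues killed).
    p_after_heads = 99 * T2 / (99 * T2 + 100 * T1)
    p_after_tails = 1 * T2 / (1 * T2 + 100 * T1)
    p_after = 0.5 * p_after_heads + 0.5 * p_after_tails
    return 0.5 * p_after_tails / p_after

print(p_tails_given_after(1, 1))     # ~0.0195, the 98%-blue case above
print(p_tails_given_after(1, 1e9))   # ~0.5, the T2 >> T1 limit
```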

Comment author: Jonathan_Graehl 23 September 2009 09:20:15PM 0 points [-]

SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

"Other things equal" is a huge obstacle for me. Without formalizing "other things equal", this is a piece of advice, not a theorem to be proved. I accept moving from A->F, but I don't see how you've proved SIA in general.

How do I go about obtaining a probability distribution over all possible universes conditioned on nothing?

How do I get a distribution over universes conditioned on "my" existence? And what do I mean by "me" in universes other than this one?

Comment author: CronoDAS 24 September 2009 04:20:58AM 1 point [-]

How do I go about obtaining a probability distribution over all possible universes conditioned on nothing?

Nobody really knows, but some people have proposed Kolmogorov complexity as the basis of such a prior. In short, the longer the computer program required to simulate something, the less probable it is. (The choice of which programming language to use is still a problem, though.)

Comment author: cousin_it 24 September 2009 08:22:39AM *  0 points [-]

That's not the only problem. We don't even know whether our universe is computable, e.g. physical constants can have uncomputable decimal expansions, like Chaitin's Omega encoded into G. Are you really damn confident in assigning this possibility a prior of zero?

Comment author: Jonathan_Graehl 24 September 2009 08:38:42PM 0 points [-]

It amazes me that people will start with some particular prior over universes, then mention offhand that they also give significant probability to our universe being a simulation run within prior universes nearly unrelated to our own (except insofar as you generically expect simulators to prefer conditions close to their own). Then, should I believe that most universes that exist are simulations in infinite containing universes (that have room for all simulations of finite universes)? Yudkowsky's recent "meta crossover" fan fiction touched on this.

Simulation is sexy in the same way that creation by gods used to be. Are there any other bridges that explain our universe in terms of some hidden variable?

How about this: leading up to the big crunch, some powerful engineer (or collective) tweaks the final conditions so that another (particular) universe is born after (I vaguely recall Asimov writing this). Does the idea of universes that restart periodically with information leakage between iterations change in any way our prior for universes-in-which-"we"-exist?

In my opinion, I only exist in this particular universe. Other universes in which similar beings exist are different. So p(universe|me) needs to be fleshed out better toward p(universe|something-like-me-in-that-xyz).

I guess we all realize that any p(universe|...) we give is incredibly flaky, which is my complaint. At least, if you haven't considered all kinds of schemes for universes inside or caused by other universes, then you have to admit that your estimates could change wildly any time you encounter a new such idea.

Comment author: Stuart_Armstrong 24 September 2009 10:13:20AM 0 points [-]

How do I go about obtaining a probability distribution over all possible universes conditioned on nothing?

I don't need to. I just need to show that if we do get such a distribution (over possible universes, or over some such subset), then SIA updates these probabilities. If we can talk, in any way, about the relative likelihood of universe Y versus J, then SIA has a role to play.

Comment author: PlaidX 23 September 2009 03:12:07PM 0 points [-]

SIA makes perfect sense to me, but I don't see how it negates the doomsday argument at all. Can you explain further?

Comment author: R0k0 23 September 2009 04:42:11PM 1 point [-]

If the human race ends soon, there will be fewer people. Therefore, assign a lower prior to that. This cancels exactly the contribution from the doomsday argument.

Comment author: PlaidX 23 September 2009 04:45:02PM 0 points [-]

Oh, I see. How are we sure it cancels exactly, though?

Comment author: R0k0 24 September 2009 05:58:39PM 1 point [-]

see Bostrom's paper

Comment author: PlaidX 28 September 2009 03:58:40AM 2 points [-]

Ah, that makes sense. In retrospect, this is quite simple:

If you have a box of ten eggs, numbered 1 through 10, and a box of a thousand eggs, numbered 1 through 1000, and the eggs are all dumped out on the floor and you pick up one labeled EGG 3, it's just as likely to have come from the big box as the small one, since they both have only one egg labeled EGG 3.
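A sketch of the egg version in code (my own illustration of the cancellation): the count-proportional weighting in favor of the big box is exactly undone by the small likelihood that the egg you picked up is the one labelled EGG 3.

```python
from fractions import Fraction

boxes = {"small": 10, "big": 1000}
prior = {name: Fraction(1, 2) for name in boxes}                # before any weighting

# SIA-style step: weight each box by how many eggs it contributes to the floor
weighted = {name: prior[name] * n for name, n in boxes.items()}

# Doomsday-style step: likelihood that the egg you picked up is EGG 3
likelihood = {name: Fraction(1, n) for name, n in boxes.items()}

unnorm = {name: weighted[name] * likelihood[name] for name in boxes}
total = sum(unnorm.values())
print({name: w / total for name, w in unnorm.items()})  # both 1/2: the effects cancel
```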

I don't buy Bostrom's argument against the presumptuous philosopher, though. Does anyone have a better one?

Comment author: neq1 12 April 2010 03:17:45PM *  -2 points [-]

The primary reason SIA is wrong is that it counts you as special only after seeing that you exist (i.e., after peeking at the data).

My detailed explanation is here.

Comment author: Mallah 30 March 2010 03:34:33AM -1 points [-]

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%.

Sure.

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?

There should be no difference from A; since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, your probability estimate should not change upon being updated.

Wrong. Your epistemic situation is no longer the same after the announcement.

In a single-run (one-small-world) scenario, the coin has a 50% chance to come up tails or heads. (In an MWI or large universe with similar situations, it would come up both, which changes the results. The MWI predictions match yours but don't back the SIA.) Here I assume the single-run case.

The prior for the coin result is 0.5 for heads, 0.5 for tails.

Before the killing, P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99. So far we agree.

P(red|before) = 0.5 (0.01) + 0.5 (0.01) = 0.01

Afterwards, P'(red|heads) = 0, P'(red|tails) = 1, P'(blue|heads) = 1, P'(blue|tails) = 0.

P(red|after) = 0.5 (0) + 0.5 (1) = 0.5

So after the killing, you should expect either color door to be 50% likely.

This, of course, is exactly what the SIA denies. The SIA is obviously false.

So why does the result seem counterintuitive? Because in practice, and certainly when we evolved and were trained, single-shot situations didn't occur.

So let's look at the MWI case. Heads and tails both occur, but each with 50% of the original measure.

Before the killing, we again have P(heads) =P(tails) = 0.5

and P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99.

Afterwards, P'(red|heads) = 0, P'(red|tails) = 1, P'(blue|heads) = 1, P'(blue|tails) = 0.

Huh? Didn't I say it was different? It sure is, because afterwards, we no longer have P(heads) = P(tails) = 0.5. On the contrary, most of the conscious measure (# of people) now resides behind the blue doors. We now have for the effective probabilities P(heads) = 0.99, P(tails) = 0.01.

P(red|after) = 0.99 (0) + 0.01 (1) = 0.01

Comment author: Academian 06 April 2010 09:07:08PM *  0 points [-]

P(red|after) = 0.5 (0) + 0.5 (1) = 0.5

So after the killing, you should expect either color door to be 50% likely.

No; you need to apply Bayes' theorem here. Intuitively, before the killing you are 99% sure you're behind a blue door, and if you survive you should take it as evidence that "yay!" the coin in fact did not land tails (killing blue). Mathematically, you just have to remember to use your old posteriors as your new priors:

P(red|survival) = P(red)·P(survival|red)/P(survival) = 0.01·(0.5)/(0.5) = 0.01

So SIA + Bayesian updating happens to agree with the "quantum measure" heuristic in this case.
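Numerically (a sketch following the same assumptions, where each person's chance of surviving the coin flip is 1/2 regardless of door colour):

```python
p_red = 0.01                          # prior: 1 red door out of 100
p_survival_given_red = 0.5            # red survives only on tails
p_survival_given_blue = 0.5           # blue survives only on heads
p_survival = p_red * p_survival_given_red + (1 - p_red) * p_survival_given_blue

print(p_red * p_survival_given_red / p_survival)  # 0.01, i.e. still 99% blue
```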

However, I am with Nick Bostrom in rejecting SIA in favor of his "Observation Equation" derived from "SSSA", precisely because that is what maximizes the total wealth of your reference class (at least when you are not choosing whether to exist or to create duplicates).

Comment author: DanielLC 26 December 2009 08:42:52PM -1 points [-]

I don't feel like reading through 166 comments, so sorry if this has already been posted.

I did get far enough to find that brianm posted this: "The doomsday assumption makes the assumptions that: 1. We are randomly selected from all the observers who will ever exist..."

Since we're randomly selecting, let's not look at individual people. Let's look at it like taking marbles from a bag. One marble is red. 99 are blue. A guy flips a coin. If it comes up heads, he takes out the red marble. If it comes up tails, he takes out the blue marbles. You then take one of the remaining marbles out at random. Do I even need to say what the probability of getting a blue marble is?

Comment author: JamesAndrix 31 December 2009 06:32:01PM 1 point [-]

You have to look at individuals in order to get odds for individuals. The obvious probability of drawing a blue marble is a statement about the group of marbles as a whole, not about any particular marble.

But I think we can still look at individual randomly selected marbles.

Before the coin flip, let's write numbers on all the marbles, 1 to 100, without regard to color. And let's say we roll a fair 100-sided die and get the number 37.

After the flip and the extraction of colored marbles, I look in the bag and find that marble 37 is in it. Given that marble 37 survived, what is the probability that it is blue?
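A quick simulation of that question (my sketch; "marble 37" is just a marble fixed before the flip):

```python
import random

random.seed(0)
kept = 0
blue_and_kept = 0

for _ in range(200_000):
    colours = ["red"] + ["blue"] * 99
    random.shuffle(colours)
    marble_37 = colours[36]                                   # the marble numbered beforehand
    removed = "red" if random.random() < 0.5 else "blue"      # heads removes red, tails removes the blues
    if marble_37 != removed:                                  # marble 37 is still in the bag
        kept += 1
        blue_and_kept += (marble_37 == "blue")

print(blue_and_kept / kept)  # ~0.99
```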

Comment author: Psychohistorian 23 September 2009 11:00:32PM *  -1 points [-]

Edit again: OK, I get it. That was kind of dumb.

I read "2/3 of humans will be in the final 2/3 of humans" combined with the term "doomsday" as meaning that there would be 2/3 of humanity around to actually witness/experience whatever ended humanity. Thus, we should expect to see whatever event does this. This obviously makes no sense. The actual meaning is simply that if you made a line of all the people who will ever live, we're probably in the latter 2/3 of it. Thus, there will likely only be so many more people. Thus, some "doomsday" type event will occur before too many more people have existed; it need not affect any particular number of those people, and it need not occur at any particular time.

Comment author: Alicorn 23 September 2009 11:23:30PM 2 points [-]

It's not necessary that 2/3 of the people who ever live be alive simultaneously. It's only necessary that the last humans not both a) all die simultaneously and b) constitute more than 2/3 of all humans ever. You can still have a last 2/3 without it being one giant Armageddon that kills them in one go.

Comment author: Psychohistorian 24 September 2009 12:13:51AM *  0 points [-]

I agree in principle, but I'm curious as to how much one is stretching the term "doomsday." If we never develop true immortality, 100% of all humans will die at some point, and we can be sure we're part of that 100%. I don't think "death" counts as a doomsday event, even if it kills everyone. Similarly, some special virus that kills people 5 minutes before they would otherwise die could kill 100% of the future population, but I wouldn't really think of it as a doomsday virus. Doomsday need not kill everyone in one go, but I don't think it can take centuries (unless it's being limited by the speed of light) and still be properly called a doomsday event.

That said, I'm still curious as to what evidence supports any claim of such an event actually happening without narrowing down anything about how or when it will happen.

Comment author: Alicorn 24 September 2009 12:37:02AM 0 points [-]

Unless I missed something, "doomsday" just means the extinction of the human species.

Comment author: prase 24 September 2009 07:04:48PM 1 point [-]

Doesn't it refer to the day of the extinction? "Doomsmillenium" doesn't sound nearly as good, I think.

Comment author: Alicorn 24 September 2009 10:34:27PM 0 points [-]

Sure. But the human species can go extinct on one day without a vast number of humans dying on that day. Maybe it's just one little old lady who took a damn long time to kick the bucket, and then finally she keels over and that's "doomsday".

Comment author: prase 25 September 2009 02:21:50PM 0 points [-]

That's what Psychohistorian was saying shouldn't be called doomsday, and I tend to agree.

Comment author: eirenicon 24 September 2009 12:59:19AM 0 points [-]

Yes, and the doomsday argument is not in regards to whether or not doomsday will occur, but when.