I recently posted a discussion article on the Doomsday Argument (DA) and Strong Self-Sampling Assumption. See http://lesswrong.com/lw/9im/doomsday_argument_with_strong_selfsampling/
This new post relates to another part of the literature on the Doomsday Argument: the Self-Indication Assumption, or SIA. For those not familiar, the SIA says (roughly) that I would be more likely to exist if the world contains a large number of observers. So, when taking into account the evidence that I exist, I should shift my probability assessments towards models of the world with more observers.
Further, at first glance, it looks like the SIA shift can be arranged to exactly counteract the effect of the DA shift. Consider, for instance, these two hypotheses:
H1. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion observers.
H2. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion trillion observers.
Suppose I had assigned a prior probability ratio p_r = P(H1)/P(H2) before considering either SIA or the DA. Then when I apply the SIA, this ratio will shrink by a factor of a trillion i.e. I've become much more confident in hypothesis H2. But then when I observe I'm roughly the 100 billionth human being, and apply the DA, the ratio expands back by exactly the same factor of a trillion, since this observation is much more likely under H1 than under H2. So my probability ratio returns to p_r. I should not make any predictions about "Doom Soon" unless I already believed them at the outset, for other reasons.
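To make the cancellation concrete, here is a minimal numerical sketch (my own bookkeeping, not anything from the literature). It assumes, as the standard DA does, that my birth rank is uniformly distributed over 1..N under a hypothesis with N total observers, and that SIA weights each hypothesis in proportion to its number of observers; the placeholder prior ratio is arbitrary.

```python
# Minimal sketch of the SIA shift and the DA shift cancelling for H1 vs H2.
N1 = 200e9    # total observers under H1 (200 billion)
N2 = 200e21   # total observers under H2 (200 billion trillion)
p_r = 1.0     # prior ratio P(H1)/P(H2); any placeholder value will do

# SIA: weight each hypothesis by its number of observers,
# so the ratio P(H1)/P(H2) shrinks by a factor of N2/N1 (a trillion).
after_sia = p_r * (N1 / N2)

# DA: the likelihood of my birth rank being ~100 billion is 1/N under each
# hypothesis (for N >= rank), so the likelihood ratio P(E|H1)/P(E|H2) = N2/N1.
after_da = after_sia * (N2 / N1)

print(after_sia)  # p_r / 1e12 -- much more confident in H2
print(after_da)   # back to p_r -- the two shifts cancel exactly
```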
Now I won't discuss here whether the SIA is justified or not; my main concern is whether it actually helps to counteract the Doomsday Argument. And it seems quite clear to me that it doesn't. If we choose to apply the SIA at all, then it will instead overwhelmingly favour a hypothesis like H3 below over either H1 or H2:
H3. Across all of space time, there are infinitely many civilizations of observers, and infinitely many observers in total.
In short, by applying the SIA we wipe out from consideration all the finite-world models, and then only have to look at the infinite ones (e.g. models with an infinite universe, or with infinitely many universes). But now, consider that H3 has two sub-models:
H3.1. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking a suitable limit construction to define the mean) is 200 billion observers.
H3.2. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking the same limit construction) is 200 billion trillion observers.
Notice that while SIA is indifferent between these sub-cases (since both contain the same infinite number of observers), it seems clear that DA still greatly favours H3.1 over H3.2. Whatever our prior ratio r' = P(H3.1)/P(H3.2), DA raises that ratio by a factor of a trillion, and so the combination of SIA and DA also raises that ratio by a factor of a trillion. SIA doesn't stop the shift.
Worse still, the conclusion of the DA has now become far *stronger*, since it seems that the only way for H3.1 to hold is if there is some form of "Universal Doom" scenario. Loosely, pretty much every one of those infinitely many civilizations will have to terminate itself before managing to expand away from its home planet.
Looked at more carefully, there is some probability p_e of a civilization expanding which is consistent with H3.1, but it has to be unimaginably tiny. If the population ratio of an expanded civilization to a non-expanded one is R_e, then H3.1 requires that p_e < 1/R_e. But values of R_e > a trillion look reasonable; indeed values of R_e > 10^24 (a trillion trillion) look plausible, which then forces p_e < 10^-12 and plausibly < 10^-24. The believer in the SIA has to be a really strong Doomer to get this to work!
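As a rough illustration of how tightly H3.1 constrains p_e, here is a toy calculation of my own. It assumes a non-expanded civilization has B = 200 billion observers and an expanded one has R_e times that many, so the mean population per civilization is roughly B(1 + p_e(R_e - 1)); the particular (p_e, R_e) pairs below are just placeholders.

```python
# Toy model: mean observers per civilization as a function of the expansion
# probability p_e and the expansion population ratio R_e.
B = 200e9  # observers in a non-expanded civilization

def mean_population(p_e, R_e, B=B):
    return (1 - p_e) * B + p_e * R_e * B

for R_e, p_e in [(1e12, 1e-12), (1e12, 1e-6), (1e24, 1e-24), (1e24, 1e-12)]:
    ratio = mean_population(p_e, R_e) / B
    print(f"R_e={R_e:.0e}, p_e={p_e:.0e}: mean = {ratio:.2e} x 200 billion")

# Only when p_e is of order 1/R_e or smaller does the mean stay anywhere
# near 200 billion, which is what H3.1 demands.
```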
By contrast, the standard DA doesn't have to be quite so doomerish. It can work with a rather higher probability p_e of expansion and avoiding doom, as long as the world is finite and the total number of actual civilizations is less than 1/p_e. As an example, consider:
H4. There are 1000 civilizations of observers in the world, and each has a probability of 1 in 10000 of expanding beyond its home planet. Conditional on a civilization not expanding, its expected number of observers is 200 billion.
This hypothesis seems to be pretty consistent with our current observations (that we are roughly the 100 billionth human being). It predicts that, with 90% probability, all observers will find themselves on the home planet of their civilization. Since this H4 prediction applies to all observers, we don't actually have to worry about whether we are a "random" observer or not; the prediction still holds. The hypothesis also predicts that, while the prospect of expansion will appear just about attainable for a civilization, it won't in fact happen.
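As a quick sanity check on the 90% figure, here is a one-liner (a sketch assuming the 1000 civilizations in H4 expand independently):

```python
# Probability that none of the 1000 civilizations expands, given each has
# a 1-in-10000 chance of expanding.
p_no_expansion = (1 - 1.0 / 10000) ** 1000
print(p_no_expansion)  # ~0.905, i.e. roughly the 90% quoted above
```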
P.S. With a bit of re-scaling of the numbers, this post also works with observations or observer-moments, not just observers. See my previous post for more on this.
I'm not sure I get this. I think I've grasped the high-level point about UDT (that the epistemic probabilities strictly never update), so if a UDT agent has a Solomonoff prior, they always use that prior to make decisions, regardless of the evidence observed.
However, UDT agents have still got to bet in some cases, and they still observe evidence which can influence their bets. Suppose that each UDT agent is offered a betting slip which pays out 1 utile if the world is consistent with H3.1, and nothing otherwise. Suppose an agent has observed or remembers evidence E. How much in utiles should the agent be prepared to pay for that slip? If she pays x (< 1) utiles, then doesn't that define a form of subjective probability P[H3.1|E] = x? And doesn't that x vary with the evidence?
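Here is the trivial expected-utility bookkeeping I have in mind behind "the price you would pay defines a probability", assuming utility is linear in utiles and the bet is evaluated on its own; prob_h31 just stands for whatever weight the agent's decision procedure effectively gives to H3.1 given her evidence.

```python
# Sketch: the maximum price an agent will pay for a slip paying 1 utile if
# H3.1 holds behaves like a subjective probability P[H3.1 | E].

def expected_gain_from_buying(price, prob_h31):
    # Pay `price` up front; receive 1 utile if H3.1 turns out to hold.
    return prob_h31 * 1.0 - price

def should_buy(price, prob_h31):
    return expected_gain_from_buying(price, prob_h31) > 0

# The break-even price is exactly prob_h31.
print(should_buy(0.3, prob_h31=0.5))  # True
print(should_buy(0.7, prob_h31=0.5))  # False
```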
Let's try to step it through. Suppose in the Solomonoff prior that H3.1 has a probability p31 and H3.2 has a probability p32. Suppose also that the probability of a world containing self-aware agents who have discovered UDT is pu and the probability of an infinite world with such agents is pui.
Suppose now that an agent is aware of its own existence, and has reasoned its way to UDT, but doesn't yet know anything much else about the world; it certainly doesn't know how many observers there have been so far in its own civilization. Let's call this evidence E0.
Should the agent currently pay p31 for the betting slip, or pay something different as her value of P[H3.1 | E0]? If something different, then what? (A first guess is that it is p31/pu; an alternative guess is p31/pui if the agent is effectively applying SIA). Also, at this point, how much should the agent pay for a bet that pays off if the world is infinite: would that be pui/pu or close to 1?
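For what it's worth, here is a toy sketch of where those two guesses would come from, assuming every H3.1-world contains UDT-aware agents (so the H3.1 worlds sit inside both the pu-worlds and the pui-worlds); the numerical priors are placeholders only.

```python
# Two candidate ways of turning the Solomonoff-prior masses into a price
# for the H3.1 betting slip, given evidence E0.
p31 = 0.01  # prior mass on H3.1 (placeholder)
pu  = 0.20  # prior mass on worlds with UDT-aware agents (placeholder)
pui = 0.05  # prior mass on infinite such worlds (placeholder)

first_guess  = p31 / pu    # ordinary conditioning on "such agents exist"
second_guess = p31 / pui   # conditioning only within the infinite worlds,
                           # as an SIA-style weighting would effectively do
print(first_guess, second_guess)
```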
Now suppose the agent learns that she is the 100 billionth observer in her civilization, creating evidence E1. How much should the agent now pay for the betting slip as the value of P[H3.1| E1]? How much should the agent pay for the infinite bet?
Finally, do the answers depend at all on the form of the utility function, and on whether correct bets by other UDT agents add to that utility function? From my understanding of Stuart Armstrong's paper, the form of the utility function does matter in general, but does UDT make any difference here? (If the utility depends on other agents' correct bets, then we need to be especially careful in the case of worlds with infinitely many agents, since we are summing over infinitely many wins or losses).
That analysis uses standard probability theory and decision theory, but those don't work in this sort of situation.
Compare this to Psy-Kosh's non-anthropic problem. Before you are told whether you are a decider, you can see, in the normal way, that it is better to follow the strategy of choosing "nay" rather than "yea" no matter what. If you condition on finding out that you are a decider the same way that you would condition on any piece of evidence, it appears that it would be better to choose "yea", but we can see that som...