I recently posted a discussion article on the Doomsday Argument (DA) and Strong Self-Sampling Assumption. See http://lesswrong.com/lw/9im/doomsday_argument_with_strong_selfsampling/
This new post is related to another part of the literature concerning the Doomsday Argument: the Self-Indication Assumption, or SIA. For those not familiar, the SIA says (roughly) that I would be more likely to exist if the world contains a large number of observers. So, when taking into account the evidence that I exist, this should shift my probability assessments towards models of the world with more observers.
Further, at first glance, it looks like the SIA shift can be arranged to exactly counteract the effect of the DA shift. Consider, for instance, these two hypotheses:
H1. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion observers.
H2. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion trillion observers.
Suppose I had assigned a prior probability ratio p_r = P(H1)/P(H2) before considering either the SIA or the DA. Then when I apply the SIA, this ratio will shrink by a factor of a trillion, i.e. I've become much more confident in hypothesis H2. But then when I observe that I'm roughly the 100 billionth human being and apply the DA, the ratio expands back by exactly the same factor of a trillion, since this observation is much more likely under H1 than under H2. So my probability ratio returns to p_r. I should not make any predictions about "Doom Soon" unless I already believed them at the outset, for other reasons.
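To make the cancellation concrete, here is a minimal numerical sketch (the starting ratio of 1 is arbitrary, and the code is purely illustrative arithmetic):

```python
# Toy check that the SIA boost and the DA shift cancel for H1 vs H2.
N1 = 200e9              # H1: 200 billion observers in total
N2 = 200e9 * 1e12       # H2: 200 billion trillion observers in total

prior_ratio = 1.0       # p_r = P(H1)/P(H2); arbitrary starting value

# SIA: weight each hypothesis in proportion to its total number of observers.
sia_ratio = prior_ratio * (N1 / N2)               # shrinks by a factor of 10^12

# DA: the likelihood of finding yourself at birth rank ~100 billion is
# proportional to 1/N under each hypothesis (uniform sampling over observers).
da_ratio = sia_ratio * ((1.0 / N1) / (1.0 / N2))  # expands by the same 10^12

print(sia_ratio)   # ~1e-12 times the prior ratio
print(da_ratio)    # ~1.0, i.e. back to p_r
```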
Now I won't discuss here whether the SIA is justified or not; my main concern is whether it actually helps to counteract the Doomsday Argument. And it seems quite clear to me that it doesn't. If we choose to apply the SIA at all, then it will instead overwhelmingly favour a hypothesis like H3 below over either H1 or H2:
H3. Across all of space time, there are infinitely many civilizations of observers, and infinitely many observers in total.
In short, by applying the SIA we wipe out from consideration all the finite-world models, and then only have to look at the infinite ones (e.g. models with an infinite universe, or with infinitely many universes). But now, consider that H3 has two sub-models:
H3.1. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking a suitable limit construction to define the mean) is 200 billion observers.
H3.2. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking the same limit construction) is 200 billion trillion observers.
Notice that while SIA is indifferent between these sub-cases (since both contain the same number of observers), it seems clear that DA still greatly favours H3.1 over H3.2. Whatever our prior ratio r' = P(H3.1)/P(H3.2), DA raises that ratio by a trillion, and so the combination of SIA and DA also raises that ratio by a trillion. SIA doesn't stop the shift.
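Again as illustrative arithmetic only (treating SIA as assigning equal weight to the two sub-models, as argued above):

```python
# H3.1 vs H3.2: SIA gives both sub-models the same weight, so only the
# DA likelihood ratio is left. Mean observers per civilization:
M1 = 200e9              # H3.1
M2 = 200e9 * 1e12       # H3.2

prior_ratio = 1.0       # r' = P(H3.1)/P(H3.2); arbitrary starting value
sia_factor = 1.0        # indifferent: both sub-models contain infinitely many observers

# DA: the chance of a birth rank near 100 billion within your own civilization
# is ~1/M under each sub-model, so the likelihood ratio is M2/M1.
posterior_ratio = prior_ratio * sia_factor * (M2 / M1)
print(posterior_ratio)  # ~1e12: the ratio in favour of H3.1 still grows by a trillion
```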
Worse still, the conclusion of the DA has now become far *stronger*, since it seems that the only way for H3.1 to hold is if there is some form of "Universal Doom" scenario. Loosely, pretty much every one of those infinitely-many civilizations will have to terminate itself before managing to expand away from its home planet.
Looked at more carefully, there is some probability p_e of a civilization expanding which is consistent with H3.1, but it has to be unimaginably tiny. If the population ratio of an expanded civilization to a non-expanded one is R_e, then H3.1 requires that p_e < 1/R_e. But values of R_e > a trillion look right; indeed values of R_e > 10^24 (a trillion trillion) look plausible, which then forces p_e < 10^-12 and plausibly p_e < 10^-24. The believer in the SIA has to be a really strong Doomer to get this to work!
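Here is a rough sketch of where the bound p_e < 1/R_e comes from, assuming (as above) that a non-expanded civilization has about 200 billion observers and an expanded one has R_e times that:

```python
# Mean observers per civilization if a fraction p_e of civilizations expand,
# each expanded civilization having R_e times the population of a non-expanded one.
def mean_observers(p_e, R_e, base=200e9):
    return (1 - p_e) * base + p_e * R_e * base

# For the mean to stay near the non-expanded figure (as H3.1 requires),
# the expansion term p_e * R_e must be well below 1, i.e. p_e < 1 / R_e.
print(mean_observers(1e-13, 1e12))   # ~2.2e11: still close to 200 billion
print(mean_observers(1e-6,  1e12))   # ~2e17: vastly larger, incompatible with H3.1
```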
By contrast, the standard DA doesn't have to be quite so doomerish. It can work with a rather higher probability p_e of expansion and avoiding doom, as long as the world is finite and the total number of actual civilizations is less than 1 / p_e. As an example, consider:
H4. There are 1000 civilizations of observers in the world, and each has a probability of 1 in 10000 of expanding beyond its home planet. Conditional on a civilization not expanding, its expected number of observers is 200 billion.
This hypothesis seems pretty consistent with our current observations (that we are roughly the 100 billionth human being). It predicts that, with 90% probability, all observers will find themselves on the home planet of their civilization. Since this H4 prediction applies to all observers, we don't actually have to worry about whether we are a "random" observer or not; the prediction still holds. The hypothesis also predicts that, while the prospect of expansion will appear just about attainable for a civilization, it won't in fact happen.
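The 90% figure is just the chance that none of the 1000 civilizations expands; a quick check, using only H4's stated numbers:

```python
# H4: 1000 civilizations, each expanding with probability 1/10000, independently.
n_civs = 1000
p_expand = 1.0 / 10000

# Probability that no civilization expands, so every observer stays on a home planet.
p_no_expansion = (1 - p_expand) ** n_civs
print(p_no_expansion)   # ~0.905, i.e. roughly 90%
```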
P.S. With a bit of re-scaling of the numbers, this post also works with observations or observer-moments, not just observers. See my previous post for more on this.
Here's something I've thought about as a refinement of SIA:
A universe's prior probability is proportional to its original, non-anthropic probability, multiplied by its efficiency at converting computation-time to observer-time. You get this by imagining running all universes in parallel, giving them computational resources proportional to their (non-anthropic) prior probability (as in Lsearch). You consider yourself to be a random observer simulated in one of these programs. This solves the problem of infinite universes (since efficiency is bounded) while still retaining the advantages of SIA.
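As a toy illustration of the weighting this implies (the priors and efficiencies below are invented purely for the example):

```python
# Sketch of the proposed refinement: weight each universe by its non-anthropic
# prior times its efficiency at converting computation-time into observer-time,
# then normalize to get the anthropic posterior over universes.
universes = {
    "U1": {"prior": 0.7, "efficiency": 1e-6},   # high prior, poor at producing observers
    "U2": {"prior": 0.3, "efficiency": 1e-3},   # lower prior, much more efficient
}

weights = {name: u["prior"] * u["efficiency"] for name, u in universes.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}
print(posterior)   # U2 dominates despite its lower prior; efficiency is bounded, so no infinities
```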
One problem is that our universe appears to be very inefficient at producing consciousness. However this could be compensated for if the universe's prior probability is high enough. Also, I think this system favors the Copenhagen interpretation over MWI, because MWI is extremely inefficient.
Another thought regarding the anthropic principle: you can solve all anthropic questions by just using UDT and maximizing expected utility. That is, you answer the question: "A priori, before I know the laws of the universe, is it better for someone in my situation to do X?". Unfortunately this only works if your utility function knows how to deal with infinite universes, and it leaves lots of questions (such as how to weight many different observers, or whether simulations have moral value) up to the utility function.
On the other hand if you have a good anthropic theory, then you can derive a utility function as E[personal utility | anthropic theory]; that is, what's the utility if you don't know who you are yet? In this case you judge an anthropic theory by P(anthropic theory | random observer experience is yours), using Bayes's rule, and extrapolate your personal utility function to other people using it.
It seems the computation you describe will run for infinite time, and will simulate infinitely many observers, but only finitely many in any given time period. Correct? If so, you still have my SIA problem.
If I am a "random" observer, then for any finite number N, I should expect to be simulated later than N steps into the whole computation. (Well, technically there is no way I could be sampled uniformly at random from a countably-infinite sequence of observers, except through some sort of limit construction; but let's ignore this, and just suppo...