The discussions of anthropic principles that I've seen on LW and, say, Katja Grace's list are limited to SIA and SSA. Both give results that most people find unintuitive (Doomsday for SSA, the presumptuous philosopher for SIA), and the debate between them seems to focus mostly on which bullets are easier to bite. This seems strange to me, because a good universal prior should tell us which worlds we're most likely to be in without any dependence on intuition. What follows is my attempt to derive the correct anthropic principle in such a manner [1]. With some assumptions I find plausible, the result is something neither SIA nor SSA, resembling the former regarding doomsday and the latter regarding the presumptuous philosopher.

So far, I've only done the reasoning informally, and there are admittedly many places it could be fatally flawed. One assumes that one lives in a multiverse consisting of universes corresponding to programs on a universal Turing machine, each with prior probability inversely proportional to 2 to the power of the length of the program. One then updates on one's evidence, namely, one's experience [2]. I'll make a logical leap and assume that (it's as though) a conscious experience corresponds to a certain substring of output. We then take every possible output string ending with that substring, weight them equally (?), and, for each such string, sum the prior probabilities of the programs that produce it as a prefix of their output. That is, each program ends up with posterior probability proportional to its prior probability times the number of times it produces the experience you just had.
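To make that concrete: writing |p| for the length of program p and N_p(e) for the number of times p's output contains the experience-substring e (my own notation, not anything standard), the rule just described amounts to something like

$$P(p \mid e) \;\propto\; 2^{-|p|} \, N_p(e).$$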

Reading the wiki article on Solomonoff Induction as I write this, it looks like applying this process with the prior given by Solomonoff Induction will give you SIA (suggesting a new backronym?). This is where I become even more presumptuous. I take issue with the assumption that we necessarily find ourselves in a halting program. As far as I know, cosmologists don't think our universe is going to halt. Instead, I would suggest that all possible programs are running, unless they've halted. (Maybe the halting ones get restarted after they halt, maybe not.) There are then infinitely many outputs that end with any given conscious experience, and the sum of their prior probabilities may not be finite. So, you either consider only outputs of length ≤ T and take the limit as T goes to infinity, or you multiply the probability of each output by a factor inversely proportional to 2 to the power of the length of the output. The chief disadvantages of the former are that I'm not sure the limit has to exist, and that it assigns infinitesimal probability to any universe that produces your experience only a finite number of times if any universe produces it an infinite number of times; the chief disadvantage of the latter is that I'm not sure why things should be weighted in such a manner.
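In the same (made-up) notation, the two regularizations would look roughly like this. The truncation method counts only occurrences of e within the first T output symbols, N_p(e, T), and hopes the limit exists:

$$P(p \mid e) \;\propto\; \lim_{T \to \infty} \frac{2^{-|p|} \, N_p(e, T)}{\sum_q 2^{-|q|} \, N_q(e, T)},$$

while the length-weighting method discounts each occurrence of e by the length n of the output string it terminates:

$$P(p \mid e) \;\propto\; 2^{-|p|} \sum_{n \,:\, e \text{ ends at position } n} 2^{-n}.$$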

Either way, if you look at the anthropic implications, you get that universes with more people having your experience are more likely, and that universes with more output not corresponding to your experience are less likely. The math is easier using the limit method, which gives each universe a likelihood proportional to the proportion of its output that corresponds to your experience. If we consider programs looking like our universe, and assume the amount of output per time step (that is, one second according to the equations governing that universe) is proportional to the size of the universe, then a universe in which conscious life lasts longer is more likely than one of equal size in which it goes extinct sooner [3], since both produce the same amount of output, but the former produces more conscious experiences. I'm pretty sure this works out just right to neutralize the doomsday argument. On the other hand, a universe that's 3^^^3^^^3 times larger produces 3^^^3^^^3 times as many conscious experiences, but they're distributed among 3^^^3^^^3 times as much output, so it gains no additional likelihood as a consequence of its size.
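Here's a minimal numerical sketch of that comparison under the limit method, with completely made-up sizes and lifetimes (and ordinary large numbers standing in for 3^^^3^^^3):

```python
# Toy illustration of the limit method: a universe's likelihood is taken to be
# proportional to the density of your experience in its output, i.e.
# (number of occurrences of the experience) / (total output produced).
# All quantities below are invented for illustration only.

def relative_likelihood(experience_count, total_output):
    """Unnormalized likelihood of a universe under the limit method."""
    return experience_count / total_output

size = 10**6   # output per time step, assumed proportional to the universe's size
steps = 10**4  # lifetime of the universe in time steps

# Universe A: conscious life lasts the full lifetime, one experience per step.
like_a = relative_likelihood(steps, size * steps)

# Universe B: same size and lifetime, but conscious life goes extinct halfway through.
like_b = relative_likelihood(steps // 2, size * steps)

# Universe C: 1000x larger, so 1000x as many experiences spread over 1000x as much output.
like_c = relative_likelihood(1000 * steps, 1000 * size * steps)

print(like_a / like_b)  # 2.0 -> longer-lasting conscious life is favored (no doomsday penalty)
print(like_a / like_c)  # 1.0 -> sheer size buys no extra likelihood (no presumptuous philosopher)
```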

[1] My actual reasoning process was more chaotic and less reliable than this suggests; I'll elaborate if anyone wants.

[2] Treating a single instant of conscious experience as the only evidence should yield a probability distribution with the correct probability assigned to being a Boltzmann brain, but since in any case we can't complete such a calculation without assuming that our working memory's contents are valid, I'll ignore such possibilities in the examples.

[3] The observant reader will notice that any universe where conscious life ever goes permanently extinct should have infinitesimal probability. I suggest assuming for the sake of the argument that universes go in grand cycles, and that we're discussing whether life ends sooner or later in each cycle.

Comments (2)

Your idea looks a little similar to UDASSA, which doesn't support the doomsday argument. Our current best idea, UDT, doesn't have that problem either.

[anonymous]:

One assumes that one lives in a multiverse consisting of universes corresponding to programs on a universal Turing machine, each with prior probability inversely proportional to 2 to the power of the length of the program.

I have trouble with this kind of "assumptions". What does it mean, exactly, for a universe to correspond to a given program? Correspond how, what's a "universe" for the purposes of establishing this correspondence, and what's a "correspondence"? Why programs, specifically, and not any other kind of (axiomatic definition for a) mathematical structure?

[This comment is no longer endorsed by its author]