Stuart, thanks for this.
If we are using Solomonoff induction, won't the expected population be infinite?
Very crudely, a finite world of size n will have probability about 2^-K(n) in the Solomonoff prior, where K is the Kolmogorov complexity of the binary representation of n. For most values of n this works out at about 1/n x 1/log n x 1/log log n x ..., taking base-2 logarithms and repeating the logs until we hit a small constant. The probability is higher for simple "non-random" values of n like a googolplex or 3^^^3.
Then if the expected population in worlds of size n is proportional to n (again this is very crude), we get an expected population proportional to:
Sigma {n=1 to infinity} n x 2^-K(n)
which is at least
Sigma {n=1 to infinity} 1/log n x 1/log log n x ... (iterating the logs as before)
and that is a divergent sum. So SIA predicts that for any size n, we are almost certainly in a world of size bigger than n and we can't normalize the distribution! Problem.
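The divergence of that crude lower bound can be checked numerically. A minimal sketch (my own illustration, not from the original comment), which computes the n-th term 1/(log2 n x log2 log2 n x ...) by iterating base-2 logs until they drop below a small constant, and then compares partial sums:

```python
import math

def crude_term(n):
    """n-th term of the crude lower bound on Sigma n * 2^-K(n):
    1/(log2 n * log2 log2 n * ...), iterating until a log drops below 2."""
    prod = 1.0
    x = float(n)
    while True:
        x = math.log2(x)
        if x < 2.0:
            break
        prod *= x
    return 1.0 / prod

def partial_sum(N):
    """Partial sum of the crude lower bound up to N."""
    return sum(crude_term(n) for n in range(2, N + 1))

# The partial sums keep growing as N increases -- consistent with divergence
# (a finite computation can't prove divergence, of course; this is just a sanity check).
print(partial_sum(10_000), partial_sum(100_000))
```

Growing partial sums are only suggestive, but the sum is bounded below by a multiple of N / (log N x log log N), which does go to infinity.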
There might be a better story with a less crude treatment, but frankly I'm doubtful. As I understand it, using SIA is equivalent (in your anthropic decision theory) to having an additive utility function which grows in proportion to the population size (or at least to the population of people whose decisions are linked to yours). And so the utility function is unbounded. And unbounded utility functions are a known problem with Solomonoff induction, at least according to Peter de Blanc (http://arxiv.org/abs/0712.4318). So I think the crude treatment is revealing a real problem here.
If we are using Solomonoff induction, won't the expected population be infinite?... There might be a better story with a less crude treatment, but frankly I'm doubtful.
Looking back at this, I've noticed there is a really simple proof that the expected population size is infinite under Solomonoff induction. Consider the "St Petersburg" hypothesis:
Sh == With probability 2^-n, the population size is 2^n, for n = 1, 2, 3, etc.
This Sh is a well-defined, computable hypothesis, so under the Solomonoff prior it receives a non-zero prior probability. But the expected population under Sh is Sigma {n=1 to infinity} 2^-n x 2^n = 1 + 1 + 1 + ..., which diverges; so the overall expected population is infinite too.
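The "St Petersburg" arithmetic is easy to verify: every term of the expectation contributes exactly 1, so the partial sums grow without bound. A minimal check, using exact rational arithmetic:

```python
from fractions import Fraction

def expected_population_partial(N):
    """Partial expected population under Sh: sum over n=1..N of 2^-n * 2^n.
    Each term is exactly 1, so the partial sum is exactly N."""
    return sum(Fraction(1, 2**n) * 2**n for n in range(1, N + 1))

print(expected_population_partial(50))  # -> 50
```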
It's well known that the Self-Indication Assumption (SIA) has problems with infinite populations (one of the reasons I strongly recommend not using the probability as the fundamental object of interest, but instead the decision, as in anthropic decision theory).
SIA also has problems with arbitrarily large finite populations, at least in some cases. What cases are these? Imagine that we had these (non-anthropic) probabilities for various populations:
p0, p1, p2, p3, p4...
Now let us apply the anthropic correction from SIA; before renormalising, we have these weights for different population levels:
0, p1, 2p2, 3p3, 4p4...
To renormalise, we need to divide by the sum 0 + p1 + 2p2 + 3p3 + 4p4... This is actually the expected population! (note: we are using the population as a proxy for the size of the reference class of agents who are subjectively indistinguishable from us; see this post for more details)
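For a concrete finite case, here is the SIA reweighting spelled out on a toy distribution (the numbers are my own illustration), checking that the normalising constant is exactly the non-anthropic expected population:

```python
# Toy example: populations 0..4 with (non-anthropic) probabilities p0..p4.
p = [0.1, 0.2, 0.3, 0.25, 0.15]

# SIA weights: population j gets weight j * p[j] before renormalising.
weights = [j * pj for j, pj in enumerate(p)]

# The normalising constant Z = 0 + p1 + 2p2 + 3p3 + 4p4
# is exactly the non-anthropic expected population.
Z = sum(weights)
expected_pop = sum(j * pj for j, pj in enumerate(p))
assert abs(Z - expected_pop) < 1e-12

# SIA-corrected distribution over population sizes.
sia = [w / Z for w in weights]
print(sia)
```

Note that the SIA-corrected probability of an empty world is always zero, whatever p0 was.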
So using SIA is possible if and only if the (non-anthropic) expected population is finite (and non-zero).
Note that it is possible for the anthropic expected population to be infinite even when the non-anthropic one is finite! For instance, if pj is C/j^3 for some constant C, then the non-anthropic expected population is finite (being the infinite sum of C/j^2). However, once we have done the SIA correction, the SIA-corrected expected population is infinite (being an infinite sum of a constant times 1/j, the harmonic series).
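The C/j^3 example can be checked numerically: the non-anthropic expectation is (up to the constant C) a partial sum of 1/j^2, which converges to pi^2/6, while the SIA-corrected expectation is (up to a constant) a partial sum of 1/j, which grows like log N. A sketch:

```python
import math

def partial(N, power):
    """Partial sum of 1/j^power for j = 1..N."""
    return sum(1.0 / j**power for j in range(1, N + 1))

# Non-anthropic expected population for p_j = C/j^3:
#   sum j * p_j = C * sum 1/j^2  -- bounded above by C * pi^2/6.
# SIA-corrected expected population:
#   proportional to sum j^2 * p_j = C * sum 1/j  -- the harmonic series, divergent.
print(partial(1_000_000, 2), math.pi**2 / 6)  # converging
print(partial(1_000, 1), partial(1_000_000, 1))  # still growing
```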