It's well known that the Self-Indication Assumption (SIA) has problems with infinite populations (one of the reasons I strongly recommend taking the decision, rather than the probability, as the fundamental object of interest, as in anthropic decision theory).

SIA also has problems with arbitrarily large finite populations, at least in some cases. What cases are these? Imagine that we had these (non-anthropic) probabilities for various populations:

p_0, p_1, p_2, p_3, p_4, ...

Now let us apply the anthropic correction from SIA; before renormalising, we have these weights for different population levels:

0, p_1, 2p_2, 3p_3, 4p_4, ...

To renormalise, we need to divide by the sum 0 + p_1 + 2p_2 + 3p_3 + 4p_4 + ... This sum is just the (non-anthropic) expected population! (Note: we are using the population as a proxy for the size of the reference class of agents who are subjectively indistinguishable from us; see this post for more details.)

So using SIA is possible if and only if the (non-anthropic) expected population is finite (and non-zero).

Note that it is possible for the SIA-corrected expected population to be infinite even when the non-anthropic one is finite! For instance, if p_j is C/j^3 for some normalising constant C, then the non-anthropic expected population is finite (being the infinite sum of C/j^2). However, once we have applied the SIA correction, the SIA-corrected expected population is infinite (being the infinite sum of a constant times 1/j).
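To make this concrete, here is a minimal numerical sketch (my own illustration, not from the post): with p_j proportional to 1/j^3 and the populations truncated at a finite maximum, the non-anthropic expected population settles down while the SIA-corrected one keeps growing with the truncation point. The function name and truncation points are just illustrative choices.

```python
# Rough numerical check, assuming p_j proportional to 1/j^3 and truncating at n_max.

def moments(n_max, power=3):
    """Return (non-anthropic expected population, SIA-corrected expected population)
    for p_j proportional to 1/j**power, truncated at j = n_max."""
    weights = [1.0 / j**power for j in range(1, n_max + 1)]
    Z = sum(weights)                                   # normalise the p_j
    p = [w / Z for w in weights]
    expected_pop = sum(j * pj for j, pj in enumerate(p, start=1))
    # SIA reweights each population level j by j, then renormalises.
    sia_weights = [j * pj for j, pj in enumerate(p, start=1)]
    Z_sia = sum(sia_weights)                           # this is the expected population
    sia_p = [w / Z_sia for w in sia_weights]
    sia_expected_pop = sum(j * qj for j, qj in enumerate(sia_p, start=1))
    return expected_pop, sia_expected_pop

for n_max in (10**2, 10**4, 10**6):
    print(n_max, moments(n_max))
# The first number stabilises around zeta(2)/zeta(3) ~ 1.37; the second grows roughly
# like log(n_max)/zeta(2), i.e. the SIA-corrected expected population diverges.
```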


Clearly what we need to do first is to turn this into a Sleeping Beauty variant.

Let C = 6/pi^2, and suppose that we choose to wake Sleeping Beauty up k times with probability C/k^2. Then Sleeping Beauty is put in the awkward position that the expected number of times she wakes up is infinite. When asked "what is the probability that you were only woken up once?", the SSA, of course, suggests that Sleeping Beauty should answer C = 6/pi^2 ≈ 0.61, while the SIA sort of gives up and maybe suggests 0 as a possible answer (if you object to 0 as a probability, recall that we're dealing with an infinite sample space, which you should also refuse to believe in).
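Here is a small numerical sketch (my own, not part of the comment) of why SIA's answer collapses to 0: under SIA each hypothesis of k awakenings gets weight proportional to k x C/k^2 = C/k, which can't be normalised, and if we truncate at a maximum k_max the SIA probability of "woken only once" shrinks towards 0 as k_max grows. The truncation and the function name are illustrative assumptions.

```python
import math

C = 6 / math.pi**2   # normalises P(k awakenings) = C / k^2

def sia_prob_woken_once(k_max):
    """SIA weight of k = 1 divided by the total SIA weight, truncated at k_max."""
    weights = [k * (C / k**2) for k in range(1, k_max + 1)]   # SIA multiplies by k
    return weights[0] / sum(weights)

print(C)                                        # the SSA answer, ~0.608
for k_max in (10, 10**3, 10**6):
    print(k_max, sia_prob_woken_once(k_max))    # 0.34..., 0.13..., 0.07..., heading to 0
```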

I argue that this is a legitimate answer to give. Why? Recall that in the standard Sleeping Beauty problem, there is a way to specifically elicit SIA-based probabilities for Sleeping Beauty. We ask "Would you rather receive $1 if the coin came up heads (and $0 otherwise), or $1 if the coin came up tails?" By answering "tails", Sleeping Beauty earns $1 at each of her two awakenings if the coin came up tails, for a total of $2; by answering "heads", she earns at most $1. We can vary the payoffs to confirm that her best strategy is to act as though the probability the coin came up heads is 1/3.
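A quick expected-value check of this elicitation (my own illustration; the names are made up), assuming the standard setup of one awakening on heads and two on tails, with the bet paid at every awakening:

```python
AWAKENINGS = {"heads": 1, "tails": 2}   # standard Sleeping Beauty setup

def expected_total(answer, payoff=1.0, p_heads=0.5):
    """Expected total winnings over the experiment if she always gives `answer`."""
    prob = p_heads if answer == "heads" else 1 - p_heads
    return prob * AWAKENINGS[answer] * payoff

print(expected_total("heads"))   # 0.5
print(expected_total("tails"))   # 1.0 -- she only switches to "heads" if it pays over
                                 # twice as much, i.e. she acts as if P(heads) = 1/3
```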

Now let's consider the infinite case. We ask "Would you rather receive a googolplex dollars if you are to be woken up only once, or $1 if you are to be woken up more than once?" This is actually a bit awkward because money has nonlinear utility; but it's trivial to see that Sleeping Beauty maximizes her expected winnings by choosing the second option. So she is acting as though the probability she's only woken up once is less than 1/googolplex, and similarly for any other large number. The only probability consistent with this is 0.
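A sketch of that expectation comparison (my own; it assumes, as in the bet above, that the dollar is paid at each awakening, and it truncates the sum at a finite k_max since the true expectation is infinite):

```python
import math

C = 6 / math.pi**2
BIG = 10.0**100          # stand-in for "a googolplex dollars"; any finite payoff works

def e_option_one():
    """Expected winnings of 'BIG dollars if woken exactly once'."""
    return C * BIG       # probability C of exactly one awakening; finite however large BIG is

def e_option_two(k_max):
    """Truncated expectation of '$1 at each awakening if woken more than once':
    sum over k >= 2 of (C/k^2) * k = C * (H_{k_max} - 1)."""
    return C * sum(1.0 / k for k in range(2, k_max + 1))

print(e_option_one())        # ~6.1e99, fixed
print(e_option_two(10**6))   # ~8.1 at this truncation, but it grows like C * ln(k_max),
# so it eventually exceeds any finite first-option payoff: the second option's
# expectation is infinite, which is what drives the argument above.
```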

I'm too lazy to work out how, but it seems very easy to construct an infinite series of bets to offer her such that, if this is the case, she'll get $0 with certainty no matter how many times she is woken up. And if the experiment really does never end, there still never comes a time when she gets to enjoy the money.

Actually, you're right about the infinite series of bets. Let N be the number of times Sleeping Beauty is to be woken up. Suppose (edit: on each day she wakes up) Sleeping Beauty is offered the following bets:

  • $10 if N=1, or $1 otherwise.
  • $10 if N=2, or $0.50 otherwise.
  • $10 if N=3, or $0.25 otherwise.
  • $10 if N=4, or $0.125 otherwise.
  • And so on.

In each individual bet, the second option has an infinite expectation, while the first has a finite expectation. However, if Sleeping Beauty accepts all the first options, she gets $10 every day she wakes up, for a total of $10N; if she accepts all the second options, she gets less than $2 every day she wakes up, for a total of less than $2N. Even though both strategies yield infinite expected money, the second is still clearly inferior.
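A small check of that arithmetic (my own illustration; the function name, the number of bets offered, and the sample values of N are arbitrary), assuming every bet in the list is offered on each day she wakes up:

```python
def totals(N, num_bets=60):
    """Total winnings over the experiment for the two all-in strategies,
    given that she is woken N times and offered num_bets bets each day."""
    first_total, second_total = 0.0, 0.0
    for _day in range(N):
        for k in range(1, num_bets + 1):
            if k == N:
                first_total += 10.0                  # the bet '$10 if N == k' pays off
            else:
                second_total += 1.0 / 2**(k - 1)     # $1, $0.50, $0.25, ... on the others
    return first_total, second_total

for N in (1, 3, 10):
    print(N, totals(N))
# All first options: exactly $10 per day, $10*N in total.
# All second options: just under $2 per day, so under $2*N -- worse for every N,
# even though each second option viewed on its own has infinite expectation over N.
```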

I suspect though, that this is a problem with the infinite nature of the experiment, not with Sleeping Beauty's betting preferences.

That's not what I meant. I meant... ugh I'm really tired right now and can't think straight.

maybe:

The pot starts at $1; each iteration she bets the pot against adding one dollar to it, on N being greater than the number of iterations so far, with (if needed) the extra rule that if she gets woken up an infinite number of times she really does get infinite dollars.

Too sleep-deprived to check if the math actually works out like I think it does.

I don't believe the first point, but I'm not entirely certain you're wrong, so if you think you have such a construction, I'd like to see it.

As for your second point, the number of times that Sleeping Beauty wakes up is always finite, so no matter what, the experiment does end. It's just that, due to the heavy tail of the distribution, the expected value is infinite (see also: St. Petersburg paradox). Of course, we would have to adjust rewards for inflation; also, the optimal strategy changes if the universe (or Sleeping Beauty) has a finite lifespan. So there's a few implementation problems here, yes.

Some neat points here.

This reminds me of some of the discussion in this paper by Weatherson, which makes a number of infinity-related claims against indifference principles for assigning the probability that you are a certain member of a class of subjectively indistinguishable observers.

It seems to me that if we have an infinite population, which includes all possible observers, then SIA merges with SSA. For example, in the presumptuous philosopher case, it would mean that there are two regions of the multiverse: one with a trillion observers and another with a trillion trillion, and it would not be surprising to find ourselves located in the larger one.

SIA in the presumptuous philosopher case becomes absurd only for a finite universe (and no other universes), where only one of the two regions exists. But the absurdity is in the definition of the problem: it is absurd to think that the universe could be provably finite, as that would require some force above the universe which limits its size.

Stuart, thanks for this.

If we are using Solomonoff induction, won't the expected population be infinite?

Very crudely, finite worlds of size n will have probability about 2^-K(n) in the Solomonoff prior, where K is the Kolmogorov complexity of the binary representation of n. This works out at about 1/n x 1/log n x 1/log log n x ... for most values of n, taking base 2 logarithms and repeating the logs until we hit a small constant. The probability is higher for the simple "non-random" values of n like a googolplex or 3^^^3.

Then if the expected population in worlds of size n is proportional to n (again this is very crude), we get an expected population proportional to:

Sigma {n=1 to infinity} n x 2^-K(n)

which is at least

Sigma {n=1 to infinity} 1/log n x 1/log log n x ... (the iterated log terms)

and that is a divergent sum. So SIA predicts that for any size n, we are almost certainly in a world of size bigger than n and we can't normalize the distribution! Problem.
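As a crude check of that divergence claim (my own sketch, not from the comment), here is the partial sum of terms of the form 1/(log n x log log n), stopping the iterated logs at two levels and starting at n = 4 so the inner log stays positive; the partial sums keep climbing without bound:

```python
import math

def log2(x):
    return math.log(x, 2)

def partial_sum(n_max):
    """Sum of 1 / (log2(n) * log2(log2(n))) for n = 4 .. n_max."""
    total = 0.0
    for n in range(4, n_max + 1):      # start at 4 so log2(log2(n)) >= 1
        total += 1.0 / (log2(n) * log2(log2(n)))
    return total

for n_max in (10**3, 10**5, 10**6):
    print(n_max, round(partial_sum(n_max), 1))
# The terms shrink only logarithmically, so the partial sums grow without bound --
# the SIA-weighted sum over world sizes cannot be normalised.
```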

There might be a better story with a less crude treatment, but frankly I'm doubtful. As I understand it, using SIA is equivalent (in your anthropic decision theory) to having an additive utility function which grows in proportion to the population size (or at least to the population of people whose decisions are linked to yours). And so the utility function is unbounded. And unbounded utility functions are a known problem with Solomonoff induction, at least according to Peter de Blanc (http://arxiv.org/abs/0712.4318). So I think the crude treatment is revealing a real problem here.

If we are using Solomonoff induction, won't the expected population be infinite?... There might be a better story with a less crude treatment, but frankly I'm doubtful.

Looking back at this, I've noticed there is a really simple proof that the expected population size is infinite under Solomonoff induction. Consider the "St Petersburg" hypothesis:

Sh == with probability 2^-n, the population size is 2^n, for n = 1, 2, 3, ...

This Sh is a well-defined, computable hypothesis, so under the Solomonoff prior it receives a non-zero prior probability p > 0. This means that, under the Solomonoff prior, we have:

E[Population Size] = p x E[Population Size | Sh] + (1-p) x E[Population Size | ~Sh]

Assuming the second term is >= 0 (for example, that no hypothesis gives a negative population size), this means that E[Population Size] >= p x E[Population Size | Sh].

But E[Population Size | Sh] is infinite, so under the Solomonoff prior, E[Population Size] is also infinite.
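The conditional expectation is infinite because every term contributes exactly 1; a trivial numerical restatement of this (my own, truncating at a finite depth):

```python
def truncated_expectation(n_max):
    """E[Population Size | Sh], truncated: each 2^-n * 2^n term contributes exactly 1."""
    return sum(2.0**-n * 2.0**n for n in range(1, n_max + 1))

for n_max in (10, 100, 1000):
    print(n_max, truncated_expectation(n_max))   # 10.0, 100.0, 1000.0 -- grows without bound
```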

This shows that SIA is incompatible with Solomonoff induction, as it stands. The only way to achieve compatibility is to use an approximation to Solomonoff induction which rules out hypotheses like Sh, e.g. by imposing a hard upper bound on population size. But what is the rational justification for that?

Wow, someone who's read my paper! :-) It is because of considerations like the ones you mention that I'm tempted to require bounded utilities. Or unbounded utilities but only finitely many choices to be faced (which is equivalent to a bounded utility). It's the combination - unbounded utility, unboundedly many options - that is the problem.

I'm interested in how you'd apply the bound.

One approach is just to impose an arbitrary cut-off on all worlds above a certain large size (ignore everything bigger than 3^^^3 galaxies, say), and then scale utility with population all the way up to the cut-off. That would give a bounded utility function, and an effect very like SIA. Most of your decisions would be weighted towards the assumption that you are living in one of the largest worlds, with size just below the cut-off. If you'd cut off at 4^^^^4 galaxies, you'd assume you were in one of those worlds instead. However, since there don't seem to be many decisions that are critically affected by whether we are one of 3^^^3 or one of 4^^^^4 galaxies, this probably works.

Another approach is to use a bounded utility function of more self-centered construction. Let's suppose you care a lot about yourself and your family, a good measure about your friends and colleagues, a little bit (rather dilutely) about anyone else on Earth now, and rather vaguely about future generations of people. But not much at all about alien civilizations, future AIs etc. In that case your utility for a world of 3^^^3 alien civilizations is clearly not going to be much bigger than your utility for a world containing only the Earth, Sun and nearby planets (plus maybe a few nearby stars to admire at night). And so your decisions won't be heavily weighted towards such big worlds. A betting coupon which cost a cent, and paid off a million dollars if your planet was the only inhabited one in the universe would look like a very good deal. This then looks more like SSA reasoning than SIA.

This last approach looks more consistent to me, and more in-line with the utility functions humans actually have, rather than the ones we might wish them to have.

If you sum up C/j^2, that's finite, with the value given by Riemann's zeta function. Zeta of 2 is around 1.645.

It's C/j that goes up to infinity, logarithmically.

[This comment is no longer endorsed by its author]

If you sum up C/j^2, that's finite, with the value given by Riemann's zeta function. Zeta of 2 is around 1.645.

...which is pi^2/6, making the sum equal to 1, which it should be, as a probability measure.

It's C/j that goes up to infinity, logarithmically.

And that sum is the expectation value.

Oh freddled gruntbuggly I misread the OP. DUM de DUM.