Clearly what we need to do first is to turn this into a Sleeping Beauty variant.
Let C = 6/pi^2, so that the probabilities C/k^2 sum to 1, and suppose that we choose to wake Sleeping Beauty up k times with probability C/k^2. Then Sleeping Beauty is put in the awkward position that the expected number of times she wakes up, C(1 + 1/2 + 1/3 + ...), is infinite. When asked "what is the probability that you were only woken up once?", the SSA, of course, suggests that Sleeping Beauty should answer C = 6/pi^2, while the SIA sort of gives up, with 0 perhaps the best answer it can offer. (If you object to 0 as a probability, recall that we're dealing with an infinite sample space, which you should also refuse to believe in.)
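A quick numerical sketch of this setup (my own illustration, not part of the original argument): the probabilities C/k^2 sum to 1, while the truncated expected number of wakings keeps growing like C ln(n).

```python
import math

# With P(k wakings) = C/k^2 and C = 6/pi^2, the probabilities sum to 1,
# but the expected number of wakings, C * sum(1/k), diverges.
C = 6 / math.pi**2

def total_prob(n):
    """Partial sum of P(k wakings) for k = 1..n."""
    return sum(C / k**2 for k in range(1, n + 1))

def expected_wakings(n):
    """Truncated expected number of wakings, k = 1..n."""
    return sum(k * C / k**2 for k in range(1, n + 1))

print(total_prob(10**6))        # approaches 1
print(expected_wakings(10**2))  # keeps growing...
print(expected_wakings(10**4))
print(expected_wakings(10**6))  # ...roughly like C * ln(n): no finite limit
```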
I argue that this is a legitimate answer to give. Why? Recall that in the standard Sleeping Beauty problem (heads: woken once; tails: woken twice), there is a way to specifically elicit SIA-based probabilities from Sleeping Beauty. At each waking we ask "Would you rather receive $1 if the coin came up heads (and $0 otherwise), or $1 if the coin came up tails?" By answering "tails", Sleeping Beauty earns $1 at each of her two wakings, for a total of $2; by answering "heads", she earns $1 once. We can vary the payoffs to confirm that her best strategy is to act as though the probability the coin came up heads is 1/3.
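This betting argument can be checked directly. A minimal sketch, assuming the usual convention (heads: one waking, tails: two) and that the chosen bet pays out at each waking where the named outcome holds:

```python
# Expected total winnings over the whole experiment, for a fair coin.
# Betting "tails" dominates at even payoffs; varying the payoff shows
# indifference exactly when heads pays twice as much per waking,
# i.e. she acts as though P(heads) = 1/3.
def expected_total(bet_on, heads_payoff=1.0, tails_payoff=1.0):
    if bet_on == "heads":
        return 0.5 * 1 * heads_payoff   # heads: one waking, paid once
    else:
        return 0.5 * 2 * tails_payoff   # tails: two wakings, paid twice

print(expected_total("heads"))                    # 0.5
print(expected_total("tails"))                    # 1.0
print(expected_total("heads", heads_payoff=2.0))  # 1.0 -- indifference point
```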
Now let's consider the infinite case. We ask "Would you rather receive a googolplex dollars if you are to be woken up only once, or $1 (at each waking) if you are to be woken up more than once?" This is actually a bit awkward because money has nonlinear utility; but note that the second option pays C(1/2 + 1/3 + 1/4 + ...) in expectation, which diverges, while the first pays a googolplex times C, which is finite. So Sleeping Beauty maximizes her expected winnings by choosing the second option. She is therefore acting as though the probability she's only woken up once is less than 1/googolplex, and similarly for any other large number. The only probability consistent with this is 0.
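The comparison can be sketched numerically. A googolplex is far too large to loop to, so the stand-in payoff B below is my own toy substitution; the point is that option 2's truncated expectation grows without bound, so it eventually overtakes any finite B:

```python
import math

# Option 1 pays $B if woken exactly once (probability C).
# Option 2 pays $1 per waking if woken more than once.
# B is a small stand-in for a googolplex; the crossover exists for any B.
C = 6 / math.pi**2
B = 10.0

ev_option_1 = B * C  # finite, however large B is

ev_option_2 = 0.0
k = 2
while ev_option_2 <= ev_option_1:
    ev_option_2 += k * (C / k**2)  # k wakings, $1 each, probability C/k^2
    k += 1
print(f"option 2 overtakes option 1 after truncating at k = {k - 1}")
```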
I'm too lazy to work out the details, but it seems easy to construct an infinite series of such bets so that, if she bets in accordance with these probabilities, she ends up with $0 with certainty no matter how many times she is woken. And even if the experiment really does never end, there still never comes a time when she gets to enjoy the money.
It's well known that the Self-Indication Assumption (SIA) has problems with infinite populations (one of the reasons I strongly recommend taking not the probability but the decision as the fundamental object of interest, as in anthropic decision theory).
SIA also has problems with arbitrarily large finite populations, at least in some cases. What cases are these? Imagine that we had these (non-anthropic) probabilities for various populations:
p_0, p_1, p_2, p_3, p_4, ...
Now let us apply the anthropic correction from SIA; before renormalising, we have these weights for different population levels:
0, p_1, 2p_2, 3p_3, 4p_4, ...
To renormalise, we need to divide by the sum 0 + p_1 + 2p_2 + 3p_3 + 4p_4 + ... But this sum is exactly the expected population! (Note: we are using the population as a proxy for the size of the reference class of agents who are subjectively indistinguishable from us; see this post for more details.)
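A small sketch of this correction (my own illustration): the constant needed to renormalise the SIA weights j * p_j is precisely the expected population.

```python
# p maps population size j -> non-anthropic probability.
# Returns the SIA-corrected distribution, or None if the normaliser
# (the expected population) is zero.
def sia_distribution(p):
    expected_pop = sum(j * pj for j, pj in p.items())
    if expected_pop == 0:
        return None
    return {j: j * pj / expected_pop for j, pj in p.items()}

p = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}  # expected population = 1.7
print(sia_distribution(p))
# population 0 gets weight 0; larger populations are boosted in proportion to j
```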
So using SIA is possible if and only if the (non-anthropic) expected population is finite (and non-zero).
Note that it is possible for the SIA-corrected expected population to be infinite! For instance, if p_j is C/j^3 for some constant C, then the non-anthropic expected population is finite (being the infinite sum of j · C/j^3 = C/j^2). However, once we have done the SIA correction, we can see that the SIA-corrected expected population is infinite (being the infinite sum of some constant times 1/j).
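A numerical check of this example (my own sketch, truncating the infinite sums): the non-anthropic expected population levels off, while the SIA-corrected one keeps growing like ln(n).

```python
# p_j = C/j^3, with C chosen so the (truncated) probabilities sum to 1.
C = 1 / sum(1 / j**3 for j in range(1, 10**6))

def non_anthropic_expectation(n):
    # sum of j * C/j^3 = C * sum(1/j^2): converges
    return sum(j * C / j**3 for j in range(1, n + 1))

def sia_expectation(n):
    # SIA weight for j is proportional to j * p_j; renormalise, then take
    # the expectation of j under the corrected distribution: ~ sum(1/j)
    weights = [j * C / j**3 for j in range(1, n + 1)]
    z = sum(weights)
    return sum(j * w / z for j, w in zip(range(1, n + 1), weights))

for n in (10**2, 10**4, 10**6):
    print(n, non_anthropic_expectation(n), sia_expectation(n))
# first column of values levels off; second keeps growing like ln(n)
```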