I don't believe the first point, but I'm not entirely certain you're wrong, so if you think you have such a construction, I'd like to see it.
As for your second point, the number of times that Sleeping Beauty wakes up is always finite, so no matter what, the experiment does end. It's just that, due to the heavy tail of the distribution, the expected value is infinite (see also: the St. Petersburg paradox). Of course, we would have to adjust rewards for inflation; also, the optimal strategy changes if the universe (or Sleeping Beauty) has a finite lifespan. So there are a few implementation problems here, yes.
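To make the "always finite, yet infinite expectation" point concrete, here is a small sketch of the St. Petersburg structure (my own illustration, not part of the original setup): every single realisation of the game terminates with a finite payoff, but each possible outcome contributes the same fixed amount to the expectation, so the expected value diverges.

```python
import random

def st_petersburg_round(rng):
    """Flip a fair coin until heads; the payoff doubles on each tails.
    Any single round terminates (with probability 1) with a finite payoff."""
    payoff = 1
    while rng.random() < 0.5:  # tails with probability 1/2
        payoff *= 2
    return payoff

def partial_expectation(n_terms):
    """Expected payoff truncated to the first n_terms outcomes.
    Outcome k (k tails then a head) has probability 2**-(k+1) and payoff 2**k,
    so each term contributes exactly 1/2, and the full sum diverges."""
    return sum((0.5 ** (k + 1)) * (2 ** k) for k in range(n_terms))

# Any simulated round is finite, yet partial_expectation(n) grows linearly in n.
```
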
It's well known that the Self-Indication Assumption (SIA) has problems with infinite populations (one of the reasons I strongly recommend not using the probability as the fundamental object of interest, but instead the decision, as in anthropic decision theory).
SIA also has problems with arbitrarily large finite populations, at least in some cases. What cases are these? Imagine that we had these (non-anthropic) probabilities for various populations:
p_0, p_1, p_2, p_3, p_4, ...
Now let us apply the anthropic correction from SIA; before renormalising, we have these weights for different population levels:
0, p_1, 2p_2, 3p_3, 4p_4, ...
To renormalise, we need to divide by the sum 0 + p_1 + 2p_2 + 3p_3 + 4p_4 + ... This is actually the expected population! (note: we are using the population as a proxy for the size of the reference class of agents who are subjectively indistinguishable from us; see this post for more details)
So using SIA is possible if and only if the (non-anthropic) expected population is finite (and non-zero).
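As a concrete sketch of the correction above (my notation; the example probabilities are hypothetical), the SIA weights are j·p_j, and the normaliser that comes out is exactly the expected population:

```python
def sia_correct(p):
    """p[j] is the non-anthropic probability of population j.
    Returns the SIA-corrected distribution; undefined when the
    expected population (the normaliser) is zero."""
    weights = [j * pj for j, pj in enumerate(p)]  # 0, p_1, 2p_2, 3p_3, ...
    expected_population = sum(weights)            # the normaliser from above
    if expected_population == 0:
        raise ValueError("expected population is zero; SIA correction undefined")
    return [w / expected_population for w in weights]

# Hypothetical example: populations 0..3 with probabilities 0.4, 0.3, 0.2, 0.1.
# The normaliser is 0*0.4 + 1*0.3 + 2*0.2 + 3*0.1 = 1.0, the expected population.
sia = sia_correct([0.4, 0.3, 0.2, 0.1])
```

Note that the population-0 world always gets SIA weight zero: no agent in our reference class exists there to do the updating.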
Note that it is possible for the SIA-corrected expected population to be infinite! For instance, if p_j is C/j^3 for some constant C, then the non-anthropic expected population is finite (being the infinite sum of the terms C/j^2). However, once we have done the SIA correction, the SIA-corrected expected population is infinite (being, up to a constant, the infinite sum of 1/j, which diverges like the harmonic series).
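This last example can be checked numerically with truncated sums (the constant C cancels out of both ratios, so I drop it): as the truncation point grows, the non-anthropic expected population settles down, while the SIA-corrected one keeps growing like the harmonic series.

```python
def truncated_expectations(n):
    """Truncate at population n with unnormalised p_j = 1/j**3 (j >= 1).
    Returns (non-anthropic expected population, SIA-corrected expected
    population), each computed from the truncated, renormalised sums."""
    p = [1 / j**3 for j in range(1, n + 1)]
    total_p = sum(p)
    # Non-anthropic E[pop] = (sum of j * p_j) / (sum of p_j): converges,
    # since the numerator is a truncated sum of 1/j^2.
    jp = sum(j * pj for j, pj in enumerate(p, start=1))
    non_anthropic = jp / total_p
    # SIA weights are j * p_j, so SIA E[pop] = (sum of j^2 * p_j) / (sum of j * p_j):
    # the numerator is a truncated sum of 1/j, which diverges.
    sia = sum(j * j * pj for j, pj in enumerate(p, start=1)) / jp
    return non_anthropic, sia
```

Raising the cutoff from 1,000 to 100,000 barely moves the first number but adds several units to the second, which is exactly the finite-vs-infinite split described above.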