If your utility function depends on the size of the universe---for example, if you are an "average utilitarian" who averages across all of the possible holes in reality where there might be an observer moment---then you may not run into these problems. When you learn that the universe supports TREE(100) computational steps, you downgrade the value of the life you lived so far to something negligible. But the value of the universe that lies ahead of you is still 1, just the same as ever.
(The instantiation I have in mind is UDASSA; I was prompted to post this by the recent discussion of Pascal's mugging.)
I could totally imagine this having some other wacky behavior though...
True. Of course, average utilitarianism seems wacky to me as such, in particular if physics allows you to influence the size of the universe: it might lead you to choose a small universe if you have a better chance of filling it with people having fun.
Followup to: Pascal's Mugging: Tiny probabilities of vast utilities; The Lifespan Dilemma
This is Pascal's Mugging: Someone comes to you and says, "Give me five dollars, and I'll use my powers from outside the matrix to grant you 4^^^^4 years of fun." And they're lying, of course, but under a Solomonoff prior, the probability that they're not, though surely very small, isn't going to be less than one in 3^^^3; and so if you shut up and multiply, it's clear that the expected utility of paying up outweighs the expected utility of anything sensible you might be doing with those five dollars, and therefore—
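The shut-up-and-multiply arithmetic can be sketched with stand-in numbers. (Python integers are arbitrary-precision, but 3^^^3 and 4^^^^4 are far too large to write down, so the magnitudes below are illustrative placeholders, not the real values.)

```python
from fractions import Fraction

# Stand-ins: the real numbers (3^^^3, 4^^^^4) are incomprehensibly larger,
# but the structure of the comparison is the same.
p_mugger_honest = Fraction(1, 10**100)  # Solomonoff lower bound (stand-in)
utility_if_honest = 10**1000            # utils from the promised fun (stand-in)
utility_of_5_dollars = 1                # utils from any sensible use of $5

eu_pay = p_mugger_honest * utility_if_honest  # = 10^900 utils
eu_keep = utility_of_5_dollars                # = 1 util

print(eu_pay > eu_keep)  # True: with an unbounded utility function, paying dominates
```

However small you make the probability, an unbounded utility function lets the promised payoff grow faster, so the product always comes out in the mugger's favor.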
Well, fortunately, if you're afraid that your utility-maximizing AI will end up paying all its money to the first clever mugger to come along and ask: not to worry! It will do so only if it can't think of anything better to do with five dollars, after all. So to avoid being mugged, all it has to do is think of a harebrained scheme for spending $5 that has more than a one-in-4^^^^4 chance of providing 5^^^^5 years of fun. Problem solved.
If, however, you would like there to be a chance greater than one-in-hell that your AI ends up doing something actually useful, you'll need to do something else. And the simplest answer is to adopt a bounded utility function: any positive singularity gives at least 50 utils, a billion years gives 80 utils, a googol years gives 99 utils, a googolplex years gives 99.9 utils, and 4^^^^4 years of fun give 100 utils (minus epsilon).
This will, indeed, solve the problem. Probability of getting mugged: used to be one (minus epsilon, of course); has now been brought down to zero. That's right: zero.
(Plus epsilon.)
But let's suppose that the impossible happens, and the universe turns out to be able to support TREE(100) years of fun, and we've already lived out 4^^^^4 of them, and the AI has long since folded up operations and faded out of existence because humanity has become sufficiently sane that we no longer need it—
And lo, someone comes to you and says, "Alas, you're not really experiencing 4^^^^4 years of fun here; you're really a mere billion-year-old living in a very convincing simulation. Give me five dollars, and I'll use my powers from outside the matrix to extend your lifespan to a googol years."
And they're lying, of course — but it has been a long time indeed since you last faced a choice that could make a difference of nineteen whole utils...
*
If you truly have a bounded utility function, you must agree that in this situation, paying up is exactly what you'd want to do. Even though it means that you will not experience 4^^^^4 years of fun, even conditional on the universe being capable of supporting TREE(100) of them.
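To see why, here is the second mugging as a toy expected-utility calculation. Every specific number below (the honesty probability, the util-cost of $5, the size of epsilon) is invented for illustration; the structural point is only that the bounded function has squeezed the gap between 4^^^^4 years and anything longer below epsilon, while nineteen whole utils sit on the table.

```python
# Utilities from the post's bounded function (epsilon values invented).
u_if_sim_claim_true = 80.0   # actually a billion-year-old in a simulation
u_if_pay_and_true = 99.0     # lifespan extended to a googol years
cost_of_5_dollars = 1e-15    # in utils: tiny, since you're already near the cap

p_honest = 1e-9              # invented probability that the mugger is truthful

# If you pay: with prob p you gain (99 - 80) utils; otherwise you just lose $5.
eu_pay = p_honest * (u_if_pay_and_true - u_if_sim_claim_true) \
         - (1 - p_honest) * cost_of_5_dollars
eu_refuse = 0.0

print(eu_pay > eu_refuse)  # True: a shot at nineteen utils dwarfs any cost near the cap
```

Notice the asymmetry: near the cap, refusing can protect at most epsilon utils, but the mugger's story, if true, is worth nineteen; so any honesty probability that isn't itself epsilon-sized tips the calculation toward paying.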
[ETA: To clarify, by "4^^^^4", I really mean any number so large that your utility function assigns (100 - epsilon) utils to it. It's possible to have a utility function where this is only true for infinite numbers which are so incredibly infinite that, given a particular formal language, their definition is so long and complicated that no mere human-sized mind could comprehend it. See this comment thread for discussion of bounded utility functions that assign significant weight to very large lifetimes.]