Followup to: Pascal's Mugging: Tiny probabilities of vast utilities; The Lifespan Dilemma
This is Pascal's Mugging: Someone comes to you and says, "Give me five dollars, and I'll use my powers from outside the matrix to grant you 4^^^^4 years of fun." And they're lying, of course, but under a Solomonoff prior, the probability that they're not, though surely very small, isn't going to be less than one in 3^^^3; and so if you shut up and multiply, it's clear that the expected utility of paying up outweighs the expected utility of anything sensible you might be doing with those five dollars, and therefore—
Well, fortunately, if you're afraid that your utility-maximizing AI will end up paying all its money to the first clever mugger to come along and ask: not to worry! It will do so only if it can't think of anything better to do with five dollars, after all. So to avoid being mugged, all it has to do is think of a harebrained scheme for spending $5 that has more than a one-in-4^^^4 chance of providing 5^^^^5 years of fun. Problem solved.
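To make the shut-up-and-multiply arithmetic concrete, here is a toy sketch in Python. The probabilities and payoffs are small illustrative stand-ins (numbers like 3^^^3 and 4^^^^4 are far too large to compute with directly), not values derived from any actual Solomonoff prior:

```python
# Toy expected-utility comparison with stand-in numbers in place of
# 3^^^3, 4^^^^4, etc. All probabilities and payoffs below are
# illustrative assumptions.

def expected_utils(probability: float, payoff_utils: float) -> float:
    """Shut up and multiply: expected utility of a gamble."""
    return probability * payoff_utils

# With an unbounded utility function (say, utils linear in years of fun):
mundane = expected_utils(probability=1.0, payoff_utils=5.0)           # spend $5 sensibly
mugger = expected_utils(probability=1e-30, payoff_utils=1e80)         # stand-in for "one in 3^^^3" of "4^^^^4 years"
harebrained = expected_utils(probability=1e-40, payoff_utils=1e200)   # even smaller chance, even vaster payoff

assert mugger > mundane       # the mugger beats anything sensible...
assert harebrained > mugger   # ...but loses to a sufficiently harebrained scheme
```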
If, however, you would like there to be a chance greater than one-in-hell that your AI ends up doing something actually useful, you'll need to do something else. And the simplest answer is to adopt a bounded utility function: any positive singularity gives at least 50 utils, a billion years gives 80 utils, a googol years gives 99 utils, a googolplex years gives 99.9 utils, and 4^^^^4 years of fun give 100 utils (minus epsilon).
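For concreteness, here is a crude sketch of such a bounded utility function. Only the anchor values (50, 80, 99, 99.9, 100 minus epsilon) come from the paragraph above; the flat steps in between and the particular size of epsilon are arbitrary assumptions:

```python
# A crude sketch of the bounded utility function described above. Lifespans
# are passed as log10(years) so that a googolplex years (log10 = 1e100)
# still fits in an ordinary float.

UTILITY_BOUND = 100.0
EPSILON = 1e-12  # assumed gap between "4^^^^4 years" and the bound

# (log10(years), utils) anchors taken from the post
ANCHORS = [
    (9.0, 80.0),    # a billion years of fun
    (100.0, 99.0),  # a googol years
    (1e100, 99.9),  # a googolplex years
]

def bounded_utility(log10_years: float) -> float:
    """Utils assigned to a positive outcome lasting 10**log10_years years."""
    utils = 50.0  # any positive singularity is worth at least 50 utils
    for anchor_log10_years, anchor_utils in ANCHORS:
        if log10_years >= anchor_log10_years:
            utils = anchor_utils
    if log10_years > 1e100:  # anything beyond a googolplex, e.g. 4^^^^4 years
        utils = UTILITY_BOUND - EPSILON
    return utils

assert bounded_utility(9.0) == 80.0
assert bounded_utility(1e100) == 99.9
assert bounded_utility(float("inf")) < UTILITY_BOUND  # never reaches the bound
```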
This will, indeed, solve the problem. Probability of getting mugged: used to be one (minus epsilon, of course); has now been brought down to zero. That's right: zero.
(Plus epsilon.)
But let's suppose that the impossible happens, and the universe turns out to be able to support TREE(100) years of fun, and we've already lived out 4^^^^4 of them, and the AI has long since folded up operations and faded out of existence because humanity has become sufficiently sane that we no longer need it—
And lo, someone comes to you and says, "Alas, you're not really experiencing 4^^^^4 years of fun here; you're really a mere billion-year-old living in a very convincing simulation. Give me five dollars, and I'll use my powers from outside the matrix to extend your lifespan to a googol years."
And they're lying, of course — but it has been a long time indeed since you last faced a choice that could make a difference of nineteen whole utils...
*
If you truly have a bounded utility function, you must agree that in this situation, paying up is exactly what you'd want to do. Even though it means that you will not experience 4^^^^4 years of fun, even conditional on the universe being capable of supporting TREE(100) of them.
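Here is a toy version of that calculation; the probability that the mugger is honest and the size of epsilon are purely illustrative assumptions:

```python
# Why the bounded maximizer pays up. All specific numbers here (the credence
# in the mugger's story, how close to the 100-util bound "4^^^^4 years" sits)
# are illustrative assumptions.

P_MUGGER_HONEST = 1e-9     # credence that the simulation story is true
UTILS_AT_BILLION = 80.0    # the post's value for a billion years of fun
UTILS_AT_GOOGOL = 99.0     # the post's value for a googol years of fun
EPSILON = 1e-12            # assumed gap between 4^^^^4 years and the bound

# If the mugger is lying, you were already within epsilon of the bound, so
# handing over five dollars costs you at most epsilon utils.
cost_if_lying = EPSILON

# If the mugger is honest, paying moves you from a billion years to a googol
# years of fun: the nineteen whole utils mentioned above.
gain_if_honest = UTILS_AT_GOOGOL - UTILS_AT_BILLION

expected_gain = (P_MUGGER_HONEST * gain_if_honest
                 - (1 - P_MUGGER_HONEST) * cost_if_lying)
assert expected_gain > 0   # so the bounded agent hands over the five dollars
```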
[ETA: To clarify, by "4^^^^4", I really mean any number so large that your utility function assigns (100 - epsilon) utils to it. It's possible to have a utility function where this is only true for infinite numbers which are so incredibly infinite that, given a particular formal language, their definition is so long and complicated that no mere human-sized mind could comprehend it. See this comment thread for discussion of bounded utility functions that assign significant weight to very large lifetimes.]
"If you truly have a bounded utility function"
A bounded utility function that doesn't assign any significant marginal weight, e.g. 0.00000000001, to TREE(100), "largest number I can formulate in a year," or infinite years of fun.
It can only apply significant marginal weight for so long. Just wait until it gets to 100-epsilon utils, and then mug.
Marginal weight at infinite years is interesting. That would likely mean that, after a certain amount of fun, you just put all your resources into trying to get infinite fun.