An elementary question that has probably been discussed for 300 years, but I don't know the right keywords to google for it.
How, theoretically, do you deal (in decision theory/AI alignment) with a "noncompact" utility function, i.e. one whose supremum is never attained? E.g. suppose your set of actions is parameterized by t in (0, 1], with U(t) = t for t < 1 and U(1) = 0. Which t should the agent choose?
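To restate the example in symbols (nothing new, just why "pick the best action" fails here):

\[
U(t) = \begin{cases} t & \text{if } t < 1,\\ 0 & \text{if } t = 1, \end{cases}
\qquad \sup_{t \in (0,1]} U(t) = 1,
\]

but the supremum is not attained: for any t < 1 there is a t' with t < t' < 1 and U(t') > U(t), and U(1) = 0 is worse still, so there is no maximizer for the agent to pick.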
Or consider: the agent gains utility f(t) from expending a resource at time t, where f is a (sufficiently fast-growing) increasing function. When should the agent expend the resource?
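Written out (same point, with the growth condition left abstract): since f is increasing,

\[
f(t + \varepsilon) > f(t) \quad \text{for every } t \text{ and every } \varepsilon > 0,
\]

so "expend at time t" is strictly dominated by "expend at time t + ε" for every t, while "never expend" yields nothing at all. No plan is optimal.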
I guess the obvious answer is "such a utility function cannot exist, because the agent obviously does something, and whatever it does reveals its true utility function", but it seems difficult to hard-code compactness into a utility function in a way that doesn't make the agent stupid.
Basic answer: there are no infinities in the real world. Whatever resource t you're talking about is actually finite; find the limit and your problem goes away (that problem, at least; many others remain if you do the math on utility rather than on resources/universe-states and only then transform to utility).
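In symbols (reading "find the limit" as: the resource/time actually runs out at some finite T): the choice set becomes t in (0, T], and an increasing utility attains its maximum on it at the endpoint,

\[
\arg\max_{t \in (0, T]} f(t) = T,
\]

so the agent just expends the resource at the deadline.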
Suppose the function U(t) increases fast enough: e.g. if the probability of reaching time t is exp(-t), then let U(t) be exp(2t), or whatever.
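Spelling that out: with the survival probability included, the expected utility of waiting until time t is

\[
e^{-t} \cdot e^{2t} = e^{t},
\]

which still grows without bound in t, so the agent has no optimal waiting time even though any particular run only lasts a finite time.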
I don't think the question can be dismissed that easily.