An elementary question that has probably been discussed for 300 years, but I don't know the right keywords to google for it.
How, theoretically, do you deal with (in decision theory/AI alignment) a "noncompact" utility function? E.g. suppose your set of actions is parameterized by t in (0, 1], with U(t) = t for t < 1 and U(1) = 0. Which t should the agent choose?
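Just to spell out the pathology in this example:

$$
U(t) = \begin{cases} t & t < 1,\\ 0 & t = 1,\end{cases} \qquad \sup_{t \in (0,1]} U(t) = 1,
$$

but the supremum is not attained: U(1) = 0 < U(1/2), and every t < 1 is beaten by t' = (t+1)/2, since U(t') = (t+1)/2 > t = U(t). Every action is strictly dominated by another action, so "the argmax of U" is empty.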
Another example: the agent gains utility f(t) from expending a resource at time t, where f(t) is a (sufficiently fast-growing) increasing function, but never expending the resource gives zero utility. Whenever the agent plans to act, waiting a little longer is strictly better, so when does it expend the resource? It's the same problem as above, with infinity playing the role of 1.
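One way to make the "same problem" remark concrete (the particular reparameterization here is just an illustrative choice, not part of the original setup): map finite expenditure times t >= 0 to s = t/(1+t) in [0, 1), and let s = 1 stand for "never expend". Then

$$
V(s) = \begin{cases} f\!\left(\dfrac{s}{1-s}\right) & s < 1,\\ 0 & s = 1,\end{cases}
$$

is increasing on [0, 1) (since f and s/(1-s) are both increasing) and drops to 0 at s = 1, which is exactly the shape of U above.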
I guess the obvious answer is "such a utility function cannot exist, because the agent necessarily does something, and whatever it does reveals its true utility function", but it seems like it would be difficult to hard-code a utility function to be compact (i.e. such that the maximum is actually attained) in a way that doesn't cause the agent to be stupid.
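As a toy illustration of why hard-coding compactness is awkward (the discretization below is my own strawman fix, not something from the question): restrict the agent to a finite grid of actions so that an argmax exists, and note that whatever resolution you hard-code, an agent with a slightly finer grid does strictly better.

```python
# Toy sketch (assumption: we force a maximum to exist by restricting
# the agent to the finite action grid {1/n, 2/n, ..., 1}).

def U(t: float) -> float:
    """Utility from the question: U(t) = t for t < 1, and U(1) = 0."""
    return t if t < 1 else 0.0

def best_on_grid(n: int) -> float:
    """Argmax of U over the hard-coded grid of n actions in (0, 1]."""
    return max((k / n for k in range(1, n + 1)), key=U)

# Whatever resolution n we pick, a finer grid yields strictly more
# utility, so the hard-coded cutoff is arbitrary.
for n in (10, 100, 1000):
    t = best_on_grid(n)
    print(f"n={n}: chooses t={t}, utility={U(t)}")
```

Any other hard cutoff (say, "treat every t above 0.999 as the best action") restores a maximum in the same way, but only by baking in an arbitrary constant that a less constrained agent beats.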
(It's not a particularly weird utility function, as the resource example above shows. In any case, an adversarial agent can always create this situation.)