An elementary question that has probably been discussed for 300 years, but I don't know the right keyword to google for it.
How, theoretically, do you deal (in decision theory/AI alignment) with a "noncompact" utility function? E.g. suppose your set of actions is parameterized by t in (0, 1], with U(t) = t for t < 1 and U(1) = 0. Which t should the agent choose?
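To spell out why this is awkward (my restatement, not part of the original question): the supremum of U over (0, 1] is 1, but no action attains it, so "pick the utility-maximizing t" has no answer.

```latex
U(t) =
\begin{cases}
  t, & 0 < t < 1,\\
  0, & t = 1,
\end{cases}
\qquad
\sup_{t \in (0,1]} U(t) = 1,
\qquad
U(t) < 1 \ \text{for all } t \in (0,1].
```

For any proposed choice t < 1, the choice t' = (1 + t)/2 is strictly better, so no action is optimal.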
E.g. consider: the agent gains utility f(t) from expending a resource at time t, where f is a (sufficiently fast-growing) increasing function. When should the agent expend the resource?
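A toy sketch of that scenario (the choice f(t) = 2^t and the one-step-lookahead agent are my own hypothetical stand-ins): an agent that only acts once waiting stops helping never acts at all.

```python
# Toy model (hypothetical): f(t) = 2**t is the reward for expending the
# resource at step t. An agent that acts only when one more step of waiting
# wouldn't pay never finds such a step, so it procrastinates forever.

def f(t):
    return 2 ** t              # fast-growing, strictly increasing reward

def naive_agent(horizon):
    """Return the step at which the agent expends the resource, or None."""
    for t in range(horizon):
        if f(t) >= f(t + 1):   # act only if waiting one more step doesn't help
            return t
    return None                # never triggered: the agent keeps waiting

print(naive_agent(1000))       # None -- no finite t looks optimal
```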
I guess the obvious answer is "such a utility function cannot exist, because the agent obviously does something, and that demonstrates what the agent's true utility function is", but it seems like it would be difficult to hard-code a utility function to be compact in a way that doesn't cause the agent to be stupid.
What you do is pick a Turing machine that you think halts eventually, but takes a really long time to do so, and then expend the resource when that TM halts.
After all, a computer with an N-bit program can't delay longer than BB(N). In practice, if most of the bits aren't logically correlated with long-running Turing machines, it can't run nearly that long. This inevitably becomes a game of logical uncertainty about which Turing machines halt.
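A minimal sketch of this idea, under assumptions of my own (the toy `long_but_halting` counter stands in for whatever long-running-but-halting TM the agent actually commits to): the agent steps the chosen computation alongside the world and expends the resource only when it halts.

```python
# Sketch (hypothetical names): the agent commits to a computation it is
# confident halts, advances it one step per world step, and expends the
# resource only when that computation finishes.

def long_but_halting():
    """Stand-in for a Turing machine we believe halts after a very long time.
    Here: a trivial counter; a real agent would pick something far slower
    whose halting it can be fairly sure of."""
    i = 0
    while i < 10**6:
        i += 1
        yield                  # one 'step' of the chosen TM per world step

def agent():
    clock = long_but_halting()
    t = 0
    while True:
        try:
            next(clock)        # the chosen TM hasn't halted yet: keep waiting
            t += 1
        except StopIteration:
            return t           # the TM halted: expend the resource now

print(agent())                 # 1000000 steps with this toy clock
```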
If the computer has a finite amount of memory, say n bits, and can't build more, this puts a 2^n bound on how long it can wait. If it can build more, it will.
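The 2^n bound is just the pigeonhole argument: a deterministic machine whose whole state fits in n bits either acts within 2^n steps or revisits a state and loops forever. A tiny illustration (the `timer` machine is a made-up example):

```python
import itertools

def max_delay(step, n_bits, start=0):
    """Run a deterministic machine whose entire state fits in n_bits.
    Return the step at which it acts, or None if it revisits a state
    (and therefore loops forever without acting)."""
    seen = set()
    state = start
    for t in itertools.count():
        if state is None:      # convention: None means "act now"
            return t
        if state in seen:      # pigeonhole: repeated state -> infinite loop
            return None
        seen.add(state)
        state = step(state)

# A 4-bit "timer": counts up and acts when it overflows.
N_BITS = 4
timer = lambda s: s + 1 if s + 1 < 2**N_BITS else None
print(max_delay(timer, N_BITS))   # 16 == 2**4: the longest delay 4 bits can buy
```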
The point is that it needs to pick some long-running computation that it can be fairly sure halts eventually.
This gets into details about exactly how the AI is handling logical uncertainty.