Suppose, for a moment, that somebody has written the Utility Function. It takes some Universe State as its input, runs it through a Morality Modeling Language, and outputs a number indicating the desirability of that state relative to some baseline and, more importantly, relative to any other Universe States we might care to compare it to.
Can I feed the Utility Function the state of my computer right now, as it is executing a program I have written? And is a universe in which my program halts superior to one in which my program wastes energy executing an endless loop?
If you're inclined to argue that's not what the Utility Function is supposed to be evaluating, I have to ask what, exactly, it -is- supposed to be evaluating. We can reframe the question in terms of the series of keys I press as I write the program, if predicting those is an easier problem than predicting what my computer is going to do.
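To make the difficulty concrete: if the Utility Function's score distinguishes the universe where my program halts from the one where it loops forever, then computing that score amounts to deciding the halting problem. A minimal sketch of that reduction, assuming a hypothetical `utility` callable and a `baseline` score (both names, and the state encoding, are illustrative, not anything anyone has actually proposed):

```python
def halts(program_source, program_input, utility, baseline):
    """Illustrative reduction: treat an assumed Utility Function as a
    halting oracle.

    `utility` and `baseline` are hypothetical. The premise is that
    universes in which the computation eventually halts score above the
    baseline, while endless loops score at or below it.
    """
    # A stand-in encoding of "my computer, right now, running this
    # program on this input" as a Universe State.
    universe_state = {
        "machine": "my computer",
        "program": program_source,
        "input": program_input,
    }
    # If the score is computable and tracks halting as assumed, reading
    # it off answers the halting problem, which no computable function
    # can do for arbitrary programs.
    return utility(universe_state) > baseline
```

Nothing hinges on the particular encoding; the point of the sketch is only that the score, whatever it is, smuggles in an answer to an undecidable question.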
OrphanWilde appears to be talking about morality, not decision theory. The moral Utility Function of utilitarianism is not necessarily the decision-theoretic utility function of any agent, unless you happen to have a morally perfect agent lying around, so your procedure would not work.