Suppose, for a moment, that somebody has written the Utility Function. It takes, as its input, some Universe State, runs it through a Morality Modeling Language, and outputs a number indicating the desirability of that state relative to some baseline and, more importantly, relative to other Universe States we might care to compare it to.
Can I feed the Utility Function the state of my computer right now, as it is executing a program I have written? And is a universe in which my program halts superior to one in which my program wastes energy executing an endless loop?
If you're inclined to argue that's not what the Utility Function is supposed to be evaluating, I have to ask: what, exactly, *is* it supposed to be evaluating? We can reframe the question in terms of the series of keys I press as I write the program, if that is an easier problem than predicting what my computer is going to do.
If the set of Universe States is finite, then yes, there will be a computable utility function for any VNM-rational preferences (the program can be just a lookup table).
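A minimal sketch of the finite case, with hypothetical state names: the "program" computing the utility function is nothing more than a dictionary lookup, and preference comparison is just numeric comparison.

```python
# Hypothetical finite set of Universe States: the utility function
# is a lookup table, so it is trivially computable.
UTILITY_TABLE = {
    "program_halts": 1.0,
    "program_loops_forever": 0.0,
    "heat_death": -1.0,
}

def utility(state: str) -> float:
    """Desirability of a Universe State relative to the baseline (0.0)."""
    return UTILITY_TABLE[state]

def prefer(state_a: str, state_b: str) -> str:
    """Compare two Universe States; ties go to the first argument."""
    return state_a if utility(state_a) >= utility(state_b) else state_b
```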
If the set of possible Universe States is countably infinite, and you can meaningfully encode every universe state as a finite string, then no, not every utility function is computable. Counterexample: number the possible universes and assign a utility of 1 to every universe whose number describes a halting Turing machine, and 0 to every universe whose number describes a non-halting Turing machine.
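The counterexample can be restated as a reduction sketch: any program computing that utility function would double as a halting decider, which cannot exist. Both helpers below are hypothetical; `computable_utility` is exactly the program we are assuming, for contradiction, to exist.

```
# Pseudocode sketch of the reduction (for contradiction):
# assume computable_utility computes the utility function above.
function halts(machine_description):
    n = encode_as_universe(machine_description)   # hypothetical encoding
    return computable_utility(n) == 1             # 1 iff the machine halts
# No program can decide halting, so computable_utility cannot exist.
```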
If the set of possible Universe States is uncountably infinite, or you cannot meaningfully encode every universe state as a finite string, then no, the utility function might not be remotely computable: there are only countably many programs, so almost all functions on such a domain are not computed by any program at all.
What does the Morality Modeling Language do? If you design it so that it can describe only computable utility functions, then, trivially, it describes only computable utility functions!
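One way to see this is by construction. Here is a toy sketch of such a language, with hypothetical primitives: every expression is a finite tree, evaluation is structural recursion over that tree, so every describable utility function is total and computable by design.

```python
# A toy "Morality Modeling Language": its only primitives are constants,
# feature reads, addition, and scaling, so every expression denotes a
# total (hence computable) utility function. The primitives are
# hypothetical; the point is that nothing non-terminating is expressible.
from dataclasses import dataclass
from typing import Union

@dataclass
class Const:
    value: float

@dataclass
class Feature:
    index: int  # read the index-th feature of the encoded state

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

@dataclass
class Scale:
    factor: float
    inner: "Expr"

Expr = Union[Const, Feature, Add, Scale]

def evaluate(expr: Expr, state: list) -> float:
    """Structural recursion over a finite expression tree: always halts."""
    if isinstance(expr, Const):
        return expr.value
    if isinstance(expr, Feature):
        return state[expr.index]
    if isinstance(expr, Add):
        return evaluate(expr.left, state) + evaluate(expr.right, state)
    if isinstance(expr, Scale):
        return expr.factor * evaluate(expr.inner, state)
    raise TypeError(expr)
```

For example, the expression `Add(Const(1.0), Scale(2.0, Feature(0)))` scores a state by one plus twice its first feature, and no expression in this language can loop forever.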
Oops, not totally correct: even if every utility function the language describes is computable, the probabilities in the lotteries over which the VNM preferences are defined could still be uncomputable.