I find it quite plausible that this would ensure it (60-80% credence?), but it's not obvious. In particular, the way you normally prove that there's a utility function is that you construct it, and the construction relies on the continuity axiom.
Without the continuity axiom, maybe you can prove some representation theorem using something satisfying the axioms for the reals ... but it looks hard.
It seems that you do have that result, see here: http://link.springer.com/article/10.1007%2FBF01766393
However, this seems to require a strengthening of the independence axiom, so that the implication goes in the opposite direction in some cases (see axiom 6, page 71).
In a previous post, I left a somewhat cryptic comment on the continuity/Archimedean axiom of von Neumann-Morgenstern (vNM) expected utility.
Here I'll explain briefly what I meant by it. Let's drop that axiom and see what could happen. First of all, we could have a utility function taking non-standard real values. This allows some things to be infinitely more important than others. A simple illustration is lexicographic ordering; e.g. my utility function consists of the amount of euros I end up owning, with the amount of sex I get serving as a tie-breaker.
There is nothing wrong with such a function! First, because in practice it functions as a standard utility function (I'm unlikely to be able to indulge in sex in a way that has absolutely no costs or opportunity costs, so the amount of euros will always predominate). Second, because even if it does make a difference... it's still expected utility maximisation, just a non-standard version.
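One way to see both points at once is to model the lexicographic preference as a tuple-valued utility, with expected utility taken componentwise; Python's tuple comparison is already lexicographic. A minimal sketch (the lotteries and payoffs are made up for illustration) showing that the ordering is still expected-utility-like, yet violates the continuity axiom:

```python
from fractions import Fraction

# Outcomes are (euros, sex) pairs. Lexicographic comparison of the
# tuples means any amount of euros outweighs any amount of sex.
def expected_utility(lottery):
    """Componentwise expected utility of a lottery {outcome: probability}."""
    eu_euros = sum(p * o[0] for o, p in lottery.items())
    eu_sex = sum(p * o[1] for o, p in lottery.items())
    return (eu_euros, eu_sex)  # Python compares tuples lexicographically

A = {(1, 0): Fraction(1)}   # one euro, no sex
B = {(0, 1): Fraction(1)}   # no euros, some sex
C = {(0, 0): Fraction(1)}   # nothing

# The ordering behaves like expected utility: A > B > C.
assert expected_utility(A) > expected_utility(B) > expected_utility(C)

# Continuity would demand some p in (0,1) with p*A + (1-p)*C ~ B.
# But every mixture with p > 0 still beats B (it has positive expected
# euros), and at p = 0 it is strictly worse -- so no such p exists.
for p in [Fraction(k, 10) for k in range(1, 10)]:
    mix = {(1, 0): p, (0, 0): 1 - p}
    assert expected_utility(mix) > expected_utility(B)
```

The maximisation machinery is untouched; only the value space changed from the reals to a non-Archimedean ordered set.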
But worse things can happen if you drop the axiom. Consider this decision criterion: I will act so that, at some point, there will have been a chance of me becoming heavyweight champion of the world. This is compatible with all the other vNM axioms, but is obviously not what we want from a decision criterion. In the real world, such a criterion is vacuous (there is a non-zero chance of me becoming heavyweight champion of the world right now), but it certainly could apply in many toy models.
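To see how degenerate this criterion is, here is a toy sketch (the policies and probabilities are invented): it cares only about whether the chance of the championship is non-zero, so it cannot rank a serious attempt above a one-in-a-trillion fluke, and in a world where every policy carries some tiny chance, it approves everything.

```python
def contender_approves(champ_prob):
    """'I could have been a contender' criterion: a policy is acceptable
    iff it gives *some* chance, however small, of the championship."""
    return champ_prob > 0

# Hypothetical policies and their chance of yielding the championship:
policies = {
    "train hard for years": 0.3,
    "enter one open tournament": 1e-9,
    "do nothing": 1e-12,  # freak sequence of events
}

# The criterion approves all of them and distinguishes none of them.
assert all(contender_approves(p) for p in policies.values())
# Only a genuinely zero-chance policy is rejected:
assert not contender_approves(0.0)
```

An expected-utility maximiser with a continuous utility function would instead weigh these probabilities against each other, which is exactly the discrimination the criterion above throws away.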
That's why I said that the continuity axiom is protecting us from "I could have been a contender (and that's all that matters)" type reasoning, not so much from "some things are infinitely important (compared to others)".
Also notice that the quantum many-worlds version of the above criterion - "I will act so that the measure of type X universes is non-zero" - does not sound quite as stupid, especially if you bring in anthropics.