The question isn't well-defined. Utility is a measure of value over different states of the world. You can't just "give x utility"; you have to actually alter some state of the world. So, to be meaningful, the question needs to be formulated in terms of concrete effects in the world - lives saved, dollars gained, or whatever.
Humans also seem to have bounded utility functions (insofar as they can be said to have utility functions at all), so the "1 utility" needs to be defined in a way that tells us how to adjust for those bounds.
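To make the boundedness point concrete (my own toy illustration, not part of the original problem): suppose an agent values money through a bounded utility function such as

$$U(x) = 1 - e^{-x/c}, \qquad \sup_x U(x) = 1,$$

where $x$ is dollars and $c$ is an arbitrary scale constant. Since $U$ never exceeds 1, "add 1 utility" has no referent for an agent already close to its bound; the request is only meaningful relative to where the agent sits on its own scale.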
I think this kind of criticism makes sense only if you postulate some extra, physical restriction on utilities. Perhaps humans have bounded utility functions, but do all agents? It sure seems like decision theory should be able to handle agents with unbounded utility functions. If that's impossible for some reason, that's interesting in its own right. To figure out why it's impossible, we first have to notice our own confusion.
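(A standard reason to suspect unbounded utilities cause trouble, offered as a sketch rather than anything from the original exchange: with an unbounded utility function, simple gambles can have divergent expected utility. A St. Petersburg-style gamble paying utility $2^n$ with probability $2^{-n}$ gives

$$\mathbb{E}[U] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty,$$

so the agent's preference ordering over such gambles is undefined. Whether that rules out unbounded utilities, or just these gambles, is exactly the kind of confusion worth noticing.)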
This problem was invented by Armok from #lesswrong. Discuss.