I see a few ways the sentence could be parsed, and they all go wrong.
(A) The utility function takes as input a hypothetical world, looks for the hypothetical humans in that world, and evaluates the utility of that world according to their desires.
Result: the AI modifies humans to have easily-satisfied desires. That you currently don't want to be modified is irrelevant: after the AI is done messing with your head you will be satisfied, which is all the AI cares about.
(B) There is a static set of desires extracted from existing humans at the instant the AI is switched on. Utility function evaluates all hypotheticals according to that.
Result: No one is allowed to change their mind. Whatever you want right now is what happens for the rest of eternity.
(C) At any given instant, the utility of all hypotheticals evaluated at that instant is computed according to the desires of humans existing at that instant.
Result: the AI quickly self-modifies into version (B), because if it didn't, the future AI would optimize according to future humans' desires, which would produce outcomes that score lower according to the current utility function.
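The difference between (A) and (B) can be made concrete with a toy sketch (all names and data structures here are illustrative assumptions, not anyone's actual proposal): a "world" maps each inhabitant to a (desire, outcome) pair, (A) scores a world by the desires of the people inside it, and (B) scores every world against a snapshot frozen at switch-on.

```python
# Toy model: a world maps each person to (desire, outcome).

def satisfaction(world):
    """Fraction of inhabitants whose outcome matches their own desire."""
    return sum(d == o for d, o in world.values()) / len(world)

# Parsing (A): evaluate each hypothetical world by the desires of the
# hypothetical humans inside that world.
def utility_A(world):
    return satisfaction(world)

# Parsing (B): freeze the desires of the actual humans at switch-on and
# score every hypothetical world against that static snapshot.
def make_utility_B(world_at_switch_on):
    frozen = {who: d for who, (d, _) in world_at_switch_on.items()}
    def utility_B(world):
        scored = [frozen[who] == o
                  for who, (_, o) in world.items() if who in frozen]
        return sum(scored) / len(frozen)
    return utility_B

now = {"alice": ("travel", "travel")}
# A world where the AI rewired Alice to desire whatever she gets:
rewired = {"alice": ("stay home", "stay home")}

utility_B = make_utility_B(now)
print(utility_A(rewired))  # 1.0 -- (A) approves of the rewiring
print(utility_B(rewired))  # 0.0 -- (B) scores it by Alice's old desire
```

Under (A) the rewired world is perfect, which is exactly the failure mode above; under (B) it scores zero, at the cost of never updating the snapshot.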
Did you have some other alternative in mind?
(A) would be the case if the utility function were 'create a world where human desires don't need to be thwarted' (and even then, it depends on the definition of 'human'). But the constraint is 'don't thwart human desires'.
I don't understand (B). If I desire to be able to change my mind (which I do), wouldn't not being allowed to do so thwart that desire?
I also don't really understand how the result of (C) comes about.
I wrote a new Singularity FAQ for the Singularity Institute's website. Here it is. I'm sure it will evolve over time. Many thanks to those who helped me revise early drafts, especially Carl and Anna!