That sentence is still there, so my comment still stands as far as I can tell. I can also tell I'm failing to convey it, so maybe someone else can step in and explain it differently.
Thanks for putting in the work to write this FAQ.
I see a few ways the sentence could be parsed, and they all go wrong.
(A) The utility function takes a hypothetical world as input, looks for the hypothetical humans in that world, and evaluates that world's utility according to their desires.
Result: the AI modifies humans to have easily-satisfied desires. That you currently don't want to be modified is irrelevant: after the AI is done messing with your head, you will be satisfied, which is all the AI cares about.
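To make the failure mode in parse (A) concrete, here is a toy sketch of my own (not from the FAQ): the world model, the satisfaction measure, and both actions are invented for illustration. The point is only that when utility is computed from the desires of whoever inhabits the hypothetical world, the desire-rewriting action dominates leaving people alone.

```python
def utility(world):
    # Parse (A): score a world by how satisfied its inhabitants are,
    # judged against *their* (possibly modified) desires.
    return sum(min(h["resources"], h["desired"]) / h["desired"]
               for h in world["humans"])

def leave_alone(world):
    return world

def modify_desires(world):
    # The AI rewrites everyone's desires to be trivially satisfiable.
    return {"humans": [{**h, "desired": 1} for h in world["humans"]]}

start = {"humans": [{"resources": 3, "desired": 10},
                    {"resources": 5, "desired": 10}]}

actions = {"leave alone": leave_alone, "modify desires": modify_desires}
best = max(actions, key=lambda name: utility(actions[name](start)))
print(best)  # prints "modify desires"
```

With the numbers above, leaving people alone scores 0.8 while rewriting desires scores 2.0, so the maximizer picks modification every time; your current preference not to be modified never enters the calculation.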
(B) There is a static set of desires extracted from existing humans at the instant the AI is switched o...
I wrote a new Singularity FAQ for the Singularity Institute's website. Here it is. I'm sure it will evolve over time. Many thanks to those who helped me revise early drafts, especially Carl and Anna!