If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Does the story actually say the Superhappies really know humanity's utility function better? As in, does an omniscient narrator tell us, or is it a Superhappy or one of the crew who says this? That changes a lot for me. Of course the Superhappies would believe they know our utility function better than we do, just as the humans assumed they knew what was better for the Babyeaters.
Similarly, the Superhappies are moral, by their own idea of morality. They were perfectly willing to use force (not physical, but force nonetheless) to push humans toward their point of view. They threatened humanity and were willing to forcibly change human children, even if the adults could continue to feel pain. While humans also employ threats and force to change behavior, in most cases we would be hard-pressed to call that "moral."
From a meta-perspective, I'd find it odd if Yudkowsky wrote it that way. He's not careless enough to make that mistake, and as far as I know he thinks humanity's utility function goes beyond mere bliss.
The only way I can see the Superhappies' solution being acceptable is if you don't think jokes or fiction (or other sorts of art involving "deception") are something humans value as part of their utility function. Which I personally would find very hard to understand.
Um, that's the opposite of how utility functions work. They don't have sacred components. You can and should trade off one component for a larger gain in another component. That's exactly what the Superhappies were offering.
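A toy sketch of what "trading off components" means, in case it helps. The component names and weights here are invented for illustration, not taken from the story; the only point is that when total utility is a weighted sum rather than a set of sacred (lexically prior) values, a loss in one component can be worth a bigger gain in another:

```python
# Toy utility function as a weighted sum of two components.
# Component names and weights are made up for this example.

def utility(bliss: float, truth: float, w_bliss: float = 1.0, w_truth: float = 2.0) -> float:
    """Total utility: no component is sacred, so components can trade off."""
    return w_bliss * bliss + w_truth * truth

# Status quo: moderate bliss, full commitment to truth / no deception.
before = utility(bliss=5.0, truth=10.0)    # 1*5 + 2*10 = 25

# Proposed deal: give up a little truth for a much larger gain in bliss.
after = utility(bliss=20.0, truth=8.0)     # 1*20 + 2*8 = 36

# One component went down, but total utility went up.
print(before, after, after > before)       # 25.0 36.0 True
```

If you instead treat some component as sacred, you're no longer describing an ordinary utility function but a lexical ordering, which is a different (and much stronger) claim.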