What's the fundamental difference between those two cases? I don't see it, do you?
One fundamental difference is that I don't care about Felix's further happiness. After some point I may even resent it, which would give his additional happiness negative utility to me.
Another difference is that happiness may be best represented as a percentage with an upper bound, e.g. 100% happy, rather than as an integer you can keep increasing without end.
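A minimal way to make that contrast concrete (my notation, just a sketch): let $h_i$ be person $i$'s happiness and compare

$$U_{\text{bounded}} = \sum_i h_i,\ \ h_i \in [0,1] \qquad \text{vs.} \qquad U_{\text{unbounded}} = \sum_i h_i,\ \ h_i \in [0,\infty).$$

In the bounded version no one can contribute more than 1 to the total, so Felix can never outweigh even two ordinary people; in the unbounded version a single $h_{\text{Felix}}$ can dominate the whole sum.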
I think Felix's case may be an interesting additional scenario to consider, in order to be sure that AIs don't fall victim to it (e.g. by creating a superintelligence and making it super-happy, at the expense of normal human happiness). But it's not the same scenario as the specks.
Happiness, as a state of mind in humans, seems to me to be less about how strong the "orgasms" are than about how frequently they occur without lessening the probability that they will continue to occur. So what problems might there be with maximizing the total future happy seconds experienced by humans, including emulations thereof (other than specifying the concepts of 'human' and 'happiness' to a computer with sufficient accuracy)?
I think doing so would extrapolate to increasing population and longevity up to resource constraints, improving average happiness uptime up to diminishing returns, and mitigating existential risk, which together seem to me to be the crux of people's intuitions about the Felix and Wireheading problems.
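One hypothetical way to write down that objective (my notation, not a standard formulation): with population $N$, lifespans $T_i$, and an indicator for whether person $i$ is happy at time $t$,

$$U = \mathbb{E}\!\left[\sum_{i=1}^{N} \int_{0}^{T_i} \mathbf{1}[\text{happy}_i(t)]\,dt\right].$$

A maximizer can push $U$ up along three axes: population $N$, longevity $T_i$, and uptime (the fraction of each life spent happy). Uptime is capped at 1, so once it saturates, the remaining gains come from adding people, extending lives within resource constraints, and raising the probability (via existential-risk mitigation) that those happy seconds actually get experienced.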
I laughed: SMBC comic.