Are these utility maximizers just so social and empathetic they want everybody to be happy?
You could imagine a perfect global utility maximizer arising through the self-modification of existing beings, or being built by beings who desire such a maximizer.
Why would they want that in the first place? Prosocial emotions (e.g. those arising from cooperation and kin-selection instincts plus altruistic memes) could be a starting point.
Another possible path is philosophical self-reflection. A self-modelling agent could model its utility as resulting from the valuation of mental states, e.g. a hedonist who reflects on what he values and concludes that what matters is the (un-)pleasantness of his brain states.
From there, you only need a few philosophical assumptions to generalize:
1) Mental states are time-local; the psychological present lasts perhaps three seconds at most.
2) Our selves are not immutable metaphysical entities, but physical system states that are transformed considerably over a lifetime (from fetus to toddler to preteen to adult, and perhaps to mentally disabled).
3) Other beings share the crucial system properties (brains with (un-)pleasantness); we even have common ancestors passing on the blueprints.
4) Hypothetically, though improbably, any being could be transformed into any other being in a gradual process by speculative technology (e.g. nanotechnology could transform me into you, or a human into a chimp, or into a pig, etc.) without breaking life functions.
5) An agent might decide that it shouldn't matter how a system state came about, only what properties the system state has; e.g. it shouldn't matter to me whether you are a future version of me transformed by speculative technology starting from my current state, but only what properties your system states have (e.g. their (un-)pleasantness).
I'm not claiming this is enough to beat everyday psychological egoism, but it could be enough for a philosopher-system to desire self-modification or the creation of an artificial global utility maximizer.
When someone complains that utilitarianism[1] leads to the dust speck paradox or the trolley problem, I tell them that's a feature, not a bug. I'm not ready to say that respecting the utility monster is also a feature of utilitarianism, but it is what most people everywhere have always done. A model that doesn't allow for utility monsters can't model human behavior, and certainly shouldn't provoke indignant responses from philosophers who keep right on respecting their own utility monsters.
The utility monster is a creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined. Most people consider sacrificing everyone else's small utilities for the benefit of this monster to be repugnant.
Let's suppose the utility monster is a utility monster because it has a more highly developed brain capable of making finer discriminations, higher-level abstractions, and more associations than all the lesser minds around it. Does that make it less repugnant? (If so, I lose you here. I invite you to post a comment explaining why utility-monster-by-smartness is an exception.) Suppose we have one utility monster and one million others. Everything we do, we do for the one utility monster. Repugnant?
Multiply by nine billion. We now have nine billion utility monsters and 9×10^15 others. Still repugnant?
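As a toy sketch of why the aggregate comes out this way (the million-to-one efficiency ratio and the function names are illustrative assumptions of mine, not part of the scenario): with linear utilities, an aggregate maximizer hands every resource to whichever agent converts resources into utility most efficiently.

```python
# Toy model: one utility monster among a million ordinary agents.
# The numbers are illustrative assumptions, chosen to mirror the
# one-monster-per-million-others scenario above.

def total_utility(allocation, per_unit_utility):
    """Aggregate utility: sum of (resources given * utility per unit resource)."""
    return sum(r * u for r, u in zip(allocation, per_unit_utility))

per_unit = [1_000_000] + [1] * 1_000_000  # monster first, then a million others
resources = 1_000_000                     # total divisible resources

# With linear utilities, the aggregate maximum gives everything to the
# agent with the highest per-unit utility -- the monster.
best = max(range(len(per_unit)), key=lambda i: per_unit[i])
monster_take_all = [resources if i == best else 0 for i in range(len(per_unit))]

print(total_utility(monster_take_all, per_unit))  # 10**12
even_split = [resources / len(per_unit)] * len(per_unit)
print(total_utility(even_split, per_unit))        # ~2 * 10**6
```

Any rule that maximizes the plain sum will pick the first allocation; that is exactly the repugnance being pointed at.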
Yet these same enlightened, democratic societies whose philosophers decry the utility monster give approximately zero weight to the well-being of non-humans. We might try not to drive a species extinct, but when contemplating a new hydroelectric dam, nobody adds up the disutility to all the squirrels in the valley to be flooded.
If you believe the utility monster is a problem with utilitarianism, how do you take into account the well-being of squirrels? How about ants? Worms? Bacteria? You've gone to 10^15 others just with ants.[2] Maybe 10^20 with nematodes.
"But humans are different!" our anti-utilitarian complains. "They're so much more intelligent and emotionally complex than nematodes that it would be repugnant to wipe out all humans to save any number of nematodes."
Well, that's what a real utility monster looks like.
The same people who believe this then turn around and say there's a problem with utilitarianism because (when unpacked into a plausible real-life example) it might kill all the nematodes to save one human. Given their beliefs, they should complain about the opposite "problem": For a sufficient number of nematodes, an instantiation of utilitarianism might say not to kill all the nematodes to save one human.
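To make "a sufficient number" concrete (the weights here are illustrative assumptions, not the author's): if the utility function assigns $u_h$ to a human life and some positive $u_n$ to a nematode, sparing the nematodes wins exactly when

$$N \cdot u_n > u_h \quad\Longleftrightarrow\quad N > \frac{u_h}{u_n}.$$

Even if a human counts for $10^9$ nematode-units, the roughly $10^{20}$ nematodes clear that threshold by eleven orders of magnitude.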
1. I use the term in a very general way, meaning any action selection system that uses a utility function—which in practice means any rational, deterministic action selection system in which action preferences are well-ordered.
2. This recent attempt to estimate the numbers of living beings of various kinds gives some figures. The web has many pages claiming there are 10^15 ants, but I haven't found a citation of any original source.
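As a minimal illustration of what footnote 1 means in code (the actions and weights below are placeholders of mine): any real-valued utility function makes preferences over a finite action set totally ordered, so "pick the best action" is always well-defined.

```python
from typing import Callable, Iterable, TypeVar

A = TypeVar("A")

def select_action(actions: Iterable[A], utility: Callable[[A], float]) -> A:
    """Deterministic action selection: return the action with the highest
    utility (ties broken arbitrarily by max's iteration order)."""
    return max(actions, key=utility)

# Placeholder weights: one human = 10**9 nematode-units, 10**20 nematodes exist.
utilities = {"save_one_human": 1.0, "spare_the_nematodes": 1e20 / 1e9}
print(select_action(utilities, utilities.get))  # spare_the_nematodes
```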