I agree that externalities are the first reason that comes to mind. But when I try to modify the thought experiments to control for this, my preferences remain the same.
For instance, if I imagine someone with rather introverted ambitions (say, someone who wants to collect and modify cars, or beat lots of difficult videogames) versus someone with unambitious but harmless preferences (such as looking at porn all day), I still prefer the ambitious person. Incidentally, I'm not saying it's bad that there are people who want to look at porn (or who want to use recreational drugs, for that matter); I'm just saying it's bad that there are people who want to devote their entire life to it and do nothing more ambitious.
To test my ideals even further (and to make sure my intuitions were not biased by the fact that porn and drugs are low-status activities), I imagined two people who both wanted to just look at porn all day. The difference was that one wanted to compare and contrast the porn he watched and develop theories about the patterns he found, while the other just wanted to passively absorb it without really thinking. I preferred the Intellectual Porn Watcher to the Absorber.
Call that desire what you will: perhaps "altruism", or "bettering the world". It's the desire that, on the margin, more art, knowledge, and other things-considered-valuable-to-us be created.
I think the strongest reason to value certain identities over others is that otherwise, the most efficient way to create things-considered-valuable-to-us is to change who "us" is: once we get good at AI or genetic engineering, kill everyone and replace them with creatures who value things that are easier to manufacture than art and knowledge. Or, if we have an aversion to killing, just sterilize everyone and make sure all future creatures born are of this type. The fact that this seems absurdly evil indicates to me that we do value identity over utility to some extent.
Hm. That's actually a pretty good answer. I too find I would prefer the Intellectual Porn Watcher to the Absorber. I will note, however, that the preference is rather weak: if you gave me $10 (or however much) in exchange for letting the Absorber exist rather than the Intellectual Porn Watcher, I'd take it, even for relatively low amounts of money. (I'm not quite sure what the cutoff is, but it's low.) On the other hand, I think I'd be willing to give up a fair bit of money to have the Ambitious Intellectual exist rather than the Druggie...
When someone complains that utilitarianism¹ leads to the dust speck paradox or the trolley problem, I tell them that's a feature, not a bug. I'm not ready to say that respecting the utility monster is also a feature of utilitarianism, but it is what most people everywhere have always done. A model that doesn't allow for utility monsters can't model human behavior, and certainly shouldn't provoke indignant responses from philosophers who keep right on respecting their own utility monsters.
The utility monster is a creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined. Most people consider sacrificing everyone else's small utilities for the benefits of this monster to be repugnant.
Let's suppose the utility monster is a utility monster because it has a more highly-developed brain capable of making finer discriminations, higher-level abstractions, and more associations than all the lesser minds around it. Does that make it less repugnant? (If so, I lose you here. I invite you to post a comment explaining why utility-monster-by-smartness is an exception.) Suppose we have one utility monster and one million others. Everything we do, we do for the one utility monster. Repugnant?
Multiply by nine billion. We now have nine billion utility monsters and 9×10¹⁵ others. Still repugnant?
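The scaled-up count follows directly from the two paragraphs above; this snippet just redoes that arithmetic (one monster per one million others, multiplied through by nine billion):

```python
# Arithmetic check for the scaled-up scenario described above.
monsters = 9 * 10**9          # nine billion utility monsters
others_per_monster = 10**6    # one million others per monster
total_others = monsters * others_per_monster
print(total_others == 9 * 10**15)  # True
```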
Yet these same enlightened, democratic societies whose philosophers decry the utility monster give approximately zero weight to the well-being of non-humans. We might try not to drive a species extinct, but when contemplating a new hydroelectric dam, nobody adds up the disutility to all the squirrels in the valley to be flooded.
If you believe the utility monster is a problem with utilitarianism, how do you take into account the well-being of squirrels? How about ants? Worms? Bacteria? You've gone to 10¹⁵ others just with ants.² Maybe 10²⁰ with nematodes.
"But humans are different!" our anti-utilitarian complains. "They're so much more intelligent and emotionally complex than nematodes that it would be repugnant to wipe out all humans to save any number of nematodes."
Well, that's what a real utility monster looks like.
The same people who believe this then turn around and say there's a problem with utilitarianism because (when unpacked into a plausible real-life example) it might kill all the nematodes to save one human. Given their beliefs, they should complain about the opposite "problem": For a sufficient number of nematodes, an instantiation of utilitarianism might say not to kill all the nematodes to save one human.
1. I use the term in a very general way, meaning any action selection system that uses a utility function—which in practice means any rational, deterministic action selection system in which action preferences are well-ordered.
2. This recent attempt to estimate the number of different living beings of different kinds gives some numbers. The web has many pages claiming there are 10¹⁵ ants, but I haven't found a citation of any original source.
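Footnote 1's definition can be sketched in a few lines; the actions and utility numbers below are invented purely for illustration:

```python
# A minimal sketch of an action selection system driven by a utility
# function. Because every action maps to a real number, preferences
# over actions are well-ordered, as footnote 1 requires.

def select_action(actions, utility):
    """Deterministically pick the action with the highest utility."""
    return max(actions, key=utility)

# Hypothetical utility function over three made-up actions.
u = {"paint": 3.0, "nap": 1.5, "file_taxes": 2.0}
print(select_action(u, u.get))  # prints "paint"
```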