The value of saved vs. new vs. cloned lives is a worthwhile question to introspect on (and yes, it's only one example).
I'd get more satisfaction out of saving a group of people by defeating the cause directly - safely killing or capturing the kidnappers rather than paying the ransom. I'd rather save everyone at risk by defeating the entire threat, permanently; saving only a small fraction of a group threatened by a single cause is less satisfying. But even in what you'd expect to be a nearly-linear region (you can certainly save a few people from starvation today), I'd be more than half as satisfied helping one identifiable person, whose outcome I could monitor, as I would be helping two (out of an ocean of a billion).

Further, in those "drop in a bucket" cases, I'd expect some desire to save people from diverse threats, as long as the loss of efficiency wasn't too great for the thrill of novelty to justify. That desire would be in tension with conserving research/decision effort (just save one more life in the way already researched, prepared, and tested), with consistency, and with the desire for complete victory (though I postulated that my maximal impact was too small for that; becoming part of an alliance that achieves complete victory would be nice).
Part of the value of saving existing lives is that I feel a sense of security knowing that I and people like me are fighting the kinds of threats that might someday affect me - a reflexive feeling of having allies in the world who might help me. That feeling comes not from anonymous charity (which would be irrational), but from my being the type of person who, having resources to spare, helps where they're needed more.
But I'm convinced by mathematical arguments that utility should be additive. If the value of N things in the real world is not N times the value of one thing, I handle that in how I assign utility to world states. I want to use additive utility, and as far as I can tell I'm immune to arguments based on nonlinearity in the number of objects.
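To make that concrete, here's a minimal sketch (assuming the additivity argument in question is the usual expected-utility one; the particular $f$ below is only an illustration, not a function I endorse):

$$EU(\text{lottery}) = \sum_i p_i \, U(s_i)$$
$$U(\text{a world with } N \text{ lives saved}) = f(N), \qquad \text{e.g. } f(N) = \log(1+N)$$

The additivity that the mathematical arguments force is over probabilistic outcomes; any diminishing returns in $N$ live inside $f$, so nonlinearity over objects never has to touch the expected-utility sum.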
I'm guessing mostly at the meme level.
It seems pretty obvious, doesn't it? Utilitarianism makes a carrier believe that they should act to maximize social welfare, and that more people believing utilitarianism would help toward that goal, so carriers think they should try to propagate the meme. Also, many egoists may believe that utilitarians are more willing to contribute to the production of public goods, which the egoists can free-ride on, so they tend not to argue publicly against utilitarianism; this further contributes to its propagation.
Your just-so story is more complicated than you seem to think. It involves an equilibrium of at least two memes: an evangelical utilitarianism that damages the host but propagates itself, plus a cryptic egoism that presumably benefits the host but can't propagate successfully (so it must repeatedly arise by spontaneous generation).
I could critique your story on grounds of plausibility (which strategy do crypto-egoists suggest to their own children?) but instead I will ask why someone infected by the evangelical utilitarianism meme would argue as you suggested in the great-grandparent:
Isn't it more likely that someone who realized they had been subverted by a selfish meme would try to self-modify?