It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics. The version that seems most popular goes something like this:
- Everybody has a preference function assigning real values (utilons) to states of reality
- The preference function is a given and shouldn't be manipulated
- People act to maximize their expected number of utilons; that's how we find out about their preference functions
- People are happier when they get more utilons
- We should give everybody as many utilons as we can
There are a few obvious problems here that I won't bother with today:
- Any positive affine transformation of a preference function leaves what is essentially the same preference function, but the choice of scale matters when we try to aggregate across people. If we multiply one person's preference function values by 3^^^3, they get to decide everything in every utilitarian scenario
- The problem of total vs. average number of utilons
- People don't really act consistently with the "maximizing expected number of utilons" model
- Time discounting is a horrible mess, especially since we're hyperbolic discounters and therefore time-inconsistent by definition
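The first problem above can be made concrete with a toy sketch (the names, numbers, and `aggregate_choice` helper are invented for illustration): rescaling one person's utilities leaves their own preference ordering untouched, but flips the aggregate decision.

```python
# Two people choose between outcomes A and B by summing utilons.
# Each individual's ranking is invariant under positive scaling,
# but the sum-based aggregate decision is not.

def aggregate_choice(utilities):
    """Pick the outcome with the highest summed utility."""
    outcomes = utilities[0].keys()
    return max(outcomes, key=lambda o: sum(u[o] for u in utilities))

alice = {"A": 1.0, "B": 0.0}   # Alice prefers A
bob   = {"A": 0.0, "B": 2.0}   # Bob prefers B, somewhat more strongly

print(aggregate_choice([alice, bob]))         # B wins: sum 1.0 vs 2.0

# Multiply Alice's utilities by 10: same preferences for her,
# but now she decides the aggregate outcome.
alice_scaled = {k: 10 * v for k, v in alice.items()}
print(aggregate_choice([alice_scaled, bob]))  # A wins: sum 10.0 vs 2.0
```

Nothing about Alice changed except an arbitrary unit choice, which is exactly why interpersonal aggregation needs some normalization convention that utilitarianism itself doesn't supply.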
But my main problem is that there's very little evidence that getting utilons actually increases anybody's happiness significantly. The correlation might well be positive, but it's very weak. Giving people what they want is just not going to make them happy, and not giving them what they want is not going to make them unhappy. This makes perfect evolutionary sense: an organism that's content with what it has will fail in competition with one that always wants more, no matter how much it has. And an organism that's so depressed it just gives up will fail in competition with one that tries to function as best it can in its shabby circumstances. We all had extremely successful and extremely unsuccessful cases among our ancestors, and the only reason they are on our family tree is that they went for just a bit more, or respectively made do with whatever little they could get.
The modern economy is wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors had - and it doesn't leave people that much happier. It seems to me that the only realistic way to significantly increase global happiness is to directly hack the happiness function in the brain - to make people happy with what they have. If there's a ceiling in our brains - some number of utilons at which we would stay happy - it's there only because reaching it almost never happened in our evolutionary history.
There might be some drugs, or activities, or memes that increase happiness without dealing with utilons. Shouldn't we be focusing on those instead?
Making explicit something implicit in steven0461's comment: the term "delta function" has a technical meaning, and it doesn't have anything to do with what you're describing. You might therefore prefer to avoid using that term in this context.
(The "delta function" is a mathematical object that isn't really even a function; handwavily it has f(x)=0 when x isn't 0, f(x) is infinite when x is 0, and the total area under the graph of f is 1. This turns out to be a very useful gadget in some areas of mathematics, and one can turn the handwaving into actual mathematics at some cost in complexity. When handwaving rather than mathematics is the point, one sometimes hears "delta function" used informally to denote anything that starts very small, rapidly becomes very large, and then rapidly becomes very small again. Traffic at a web site when it gets a mention in some major media outlet, say. That's the "Dirac delta" Steven mentioned; the "Kronecker delta" is a function of two variables that's 1 when they're equal and 0 when they aren't, although most of the time when it's used it's actually denoting something hairier than that. This isn't the place for more details.)
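For reference, the two objects being contrasted can be written out; this is just the standard textbook notation for the handwaved descriptions above:

```latex
% Dirac delta: defined by its behavior under the integral sign,
% not by pointwise values.
\int_{-\infty}^{\infty} \delta(x)\, dx = 1,
\qquad
\int_{-\infty}^{\infty} f(x)\,\delta(x - a)\, dx = f(a)

% Kronecker delta: an ordinary function of two discrete indices.
\delta_{ij} =
\begin{cases}
  1 & \text{if } i = j,\\
  0 & \text{if } i \neq j
\end{cases}
```

The second integral property (the "sifting" property) is the real definition of the Dirac delta: it only makes rigorous sense as a distribution acting on test functions, which is the "actual mathematics at some cost in complexity" mentioned above.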