It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics. The version that seems most popular goes something like this:
- Everybody has a preference function assigning real values (utilons) to states of reality
- The preference function is a given and shouldn't be manipulated
- People try to act to maximize their number of utilons; that's how we find out about their preference functions
- People are happier when they get more utilons
- We should give everybody as many utilons as we can
There are a few obvious problems here, which I won't bother with today:
- Any positive affine transformation of a preference function yields what is essentially the same preference function, but the scaling matters when we try to aggregate across people. If we multiply one person's preference function values by 3^^^3, they get to decide everything in every utilitarian scenario
- Problem of total vs average number of utilons
- People don't really act consistently with "maximizing expected number of utilons" model
- Time discounting is a horrible mess, especially since we discount hyperbolically and are therefore time-inconsistent by definition
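The aggregation problem in the first bullet can be shown with a toy calculation (the names, outcomes, and numbers here are all invented for illustration):

```python
# Summing utilities across people is not invariant under rescaling one
# person's utility function, even though such rescaling leaves that
# person's own preferences unchanged.

def best_outcome(utilities):
    """Pick the outcome with the highest total utility across people."""
    outcomes = utilities[0].keys()
    return max(outcomes, key=lambda o: sum(u[o] for u in utilities))

alice = {"park": 3.0, "cinema": 1.0}
bob   = {"park": 1.0, "cinema": 2.0}

print(best_outcome([alice, bob]))  # park (total 4 vs 3)

# Multiply Bob's utilities by a large constant -- a positive affine
# transformation that preserves Bob's own ranking of outcomes --
# and Bob now decides everything.
bob_scaled = {o: 1_000_000 * v for o, v in bob.items()}
print(best_outcome([alice, bob_scaled]))  # cinema
```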
But my main problem is that there's very little evidence that getting utilons actually increases anybody's happiness significantly. The correlation might well be positive, but it's very weak. Giving people what they want is just not going to make them happy, and not giving them what they want is not going to make them unhappy. This makes perfect evolutionary sense: an organism that's content with what it has will fail in competition with one that always wants more, no matter how much it has. And an organism that's so depressed it just gives up will fail in competition with one that tries to function as best it can in its shabby circumstances. We all had extremely successful and extremely unsuccessful cases among our ancestors, and the only reason they are on our family tree is that they went for just a bit more, or respectively settled for whatever little they could get.
The modern economy is just wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors did - and it doesn't leave people that much happier. It seems to me that the only realistic way to significantly increase global happiness is to hack the happiness function in the brain directly - to make people happy with what they have. If there's a limit in our brains, some number of utilons at which we stay happy, it's there only because that number was almost never reached in our evolutionary history.
There might be some drugs, or activities, or memes that increase happiness without dealing with utilons. Shouldn't we be focusing on those instead?
It seems that it is possible to compare the happiness of two different people; i.e., I can say that giving the cake to Mary would give her twice as much happiness as it would give Fred. I think that's all you need to counter your first objection. You'd need something much more formal if you were actually trying to calculate it out rather than use it as a principle, but as far as I know no one does that.
This is a big problem. I personally solve it by not using utilitarianism on situations that create or remove people. This is an inelegant hack, but it works.
This is why I said I am a descriptive emotivist but a normative utilitarian. The fact that people don't act in accordance with a system doesn't mean the system isn't moral. I'd be pretty dubious of any moral system that said people were actually doing everything right.
Yeah, tell me about it. Right now I'm thinking that a perfectly rational person has no intrinsic discount, but ends up with a very hefty effective discount because she can't make future plans with high confidence. For example, investing all my money now and donating the sum plus interest to charity in a thousand years only works if I'm sure both the banking system and human suffering will last a millennium.
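The hyperbolic-discounting inconsistency mentioned above can be sketched numerically (the rewards and the discount parameter k are made up):

```python
# Hyperbolic discounting values a reward at r / (1 + k*t). Unlike
# exponential discounting, this produces preference reversals: from a
# distance the larger-later reward wins, but up close the smaller-sooner
# one does -- which is what "inconsistent by definition" means here.

def hyperbolic_value(reward, delay, k=1.0):
    """Present value of a reward received after `delay` time units."""
    return reward / (1.0 + k * delay)

small_soon  = 10.0   # smaller reward, available at time t
large_later = 15.0   # larger reward, available at time t + 1

# Viewed from far away (t = 10), the larger-later reward is preferred...
print(hyperbolic_value(small_soon, 10) < hyperbolic_value(large_later, 11))  # True

# ...but up close (t = 0) the preference reverses.
print(hyperbolic_value(small_soon, 0) > hyperbolic_value(large_later, 1))    # True
```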
"Utilons don't make people happier" is a weird way of putting things. It sounds to me a lot like "meters don't make something longer." If you're adding meters to something, and it's not getting longer, you're using the word "meter" wrong.
I don't know much about academic consequentialism, but I'd be surprised if someone hadn't come up with the idea of the utilon x second, i.e., adding a time dimension and trying to maximize utilon x seconds. If giving someone a new car only makes them happier for the first few weeks, then that only provides so many utilon x seconds. If getting married makes you happier for the rest of your life, well, that provides more utilon x seconds. If you want to know whether you should invest your effort in getting people more cars or getting them into relationships, you'll want to take that into account.
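A rough numeric sketch of the utilon x second idea (the decay curves and all numbers are invented; time is measured in days for convenience):

```python
# Integrate a happiness boost that decays exponentially over time.
# A big boost with a short half-life can total fewer utilon x seconds
# than a modest boost that lasts.

import math

def utilon_time(boost, half_life, horizon, step=1.0):
    """Numerically integrate an exponentially decaying happiness boost."""
    total, t = 0.0, 0.0
    while t < horizon:
        total += boost * math.exp(-t * math.log(2) / half_life) * step
        t += step
    return total

# Hypothetical: a new car gives a +10 boost fading with a 30-day half-life;
# a relationship gives a +2 boost fading with a 10-year half-life.
new_car      = utilon_time(boost=10.0, half_life=30.0,   horizon=3650.0)
relationship = utilon_time(boost=2.0,  half_life=3650.0, horizon=3650.0)

print(relationship > new_car)  # True
```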
Probably an intelligent theory of utilon x seconds would end up looking completely different from modern consumer culture. Probably anyone who applied it would also be much much happier than a modern consumer. If people can't calculate what does and doesn't provide them with utilon x seconds, they either need to learn to do so, ask someone who has learned to do so to help manage their life, or resign themselves to being less than maximally happy.
I have a feeling this is very different from the way economists think about utility, but that's not necessarily a bad thing.