It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics. The version that seems most popular goes something like this:
- Everybody has a preference function assigning real values (utilons) to states of reality
- The preference function is a given and shouldn't be manipulated
- People act so as to maximize their number of utilons; that's how we find out about their preference functions
- People are happier when they get more utilons
- We should give everybody as many utilons as we can
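Written out, that model is something like this (a minimal sketch in my own notation, nothing canonical):

```latex
% Each person i has a preference function over states of reality:
u_i : S \to \mathbb{R}
% Each person acts so as to maximize their expected utilons:
a_i^{*} = \arg\max_{a}\; \mathbb{E}\!\left[\, u_i(s) \mid a \,\right]
% And the utilitarian goal is to maximize the total (or the average):
\max_{s \in S}\; \sum_{i=1}^{n} u_i(s)
\qquad\text{or}\qquad
\max_{s \in S}\; \frac{1}{n}\sum_{i=1}^{n} u_i(s)
```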
There are a few obvious problems here that I won't bother with today:
- Any positive affine transformation of a preference function yields what is essentially the same preference function, but the choice of representation matters when we try to aggregate them. If we multiply one person's preference function by 3^^^3, they get to decide everything in every utilitarian calculation (see the sketch after this list)
- The problem of total vs. average utilons
- People don't actually act consistently with the "maximize expected utilons" model
- Time discounting is a horrible mess, especially since we discount hyperbolically and so are time-inconsistent by definition (see the worked example below)
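To make the first problem concrete, a quick worked example (standard expected-utility facts, not specific to any one formulation):

```latex
% A positive affine transformation u'(x) = a u(x) + b with a > 0 encodes
% the same preferences: it preserves every expected-utility comparison,
\mathbb{E}[u'(A)] > \mathbb{E}[u'(B)]
\;\iff\; a\,\mathbb{E}[u(A)] + b > a\,\mathbb{E}[u(B)] + b
\;\iff\; \mathbb{E}[u(A)] > \mathbb{E}[u(B)]
% But the aggregate is not invariant under rescaling a single term:
% if u_1' = 3\uparrow\uparrow\uparrow 3 \cdot u_1, then in the sum
u_1'(s) + u_2(s) + \dots + u_n(s)
% person 1's term swamps everyone else's, so person 1 decides every outcome.
```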
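And to unpack the last problem: hyperbolic discounting produces preference reversals that exponential discounting cannot. A standard illustration, with k = 1 per day as an arbitrary choice:

```latex
% Hyperbolic discount factor:
D(t) = \frac{1}{1 + kt}, \qquad k = 1 \text{ per day}
% Choosing today between \$100 now and \$110 tomorrow:
100 \cdot D(0) = 100 \;>\; 110 \cdot D(1) = 55
% The same pair pushed 30 days into the future:
100 \cdot D(30) \approx 3.23 \;<\; 110 \cdot D(31) \approx 3.44
% So the agent plans to wait for the \$110, then reverses course when day
% 30 actually arrives. Under exponential discounting D(t) = \delta^t the
% ratio D(t+1)/D(t) is constant, so no such reversal is possible.
```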
But my main problem is that there's very little evidence that getting utilons actually increases anybody's happiness significantly. The correlation might very well be positive, but it's very weak. Giving people what they want is just not going to make them happy, and not giving them what they want is not going to make them unhappy. This makes perfect evolutionary sense: an organism that's content with what it has will lose out in competition with one that always wants more, no matter how much it already has. And an organism that's so depressed it just gives up will lose out in competition with one that functions as best it can in its shabby circumstances. We all have extremely successful and extremely unsuccessful cases among our ancestors, and the only reason they're on our family tree is that they went for just a bit more, or made do with whatever little they could get, respectively.
The modern economy is wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors had - and it doesn't leave people all that much happier. It seems to me that the only realistic way to significantly increase global happiness is to hack the happiness function in the brain directly, by making people happy with what they have. If there is some threshold in our brains, some number of utilons at which we would stay happy, it sits as high as it does only because reaching it almost never happened in our evolutionary history.
There might be drugs, or activities, or memes that increase happiness without dealing in utilons at all. Shouldn't we be focusing on those instead?
Conchis, take a look at Krister Bykvist's paper, "The Good, the Bad and the Ethically Neutral" for a convincing argument that Broome should embrace a form of consequentialism.
(As an aside, the paper contains this delightful line: "My advice to Broome is to be less sadistic.")
Thanks for the link.
As far as I can tell, Bykvist seems to be making an argument about where the critical level should be set within a critical-level utilitarian framework rather than providing an explicit argument for that framework. (Indeed, the framework is one that Broome appears to accept already.)
The thing is, if you accept critical-level utilitarianism you've already given up the intuition of neutrality, and I'm still wondering whether that's actually necessary. In particular, I remain somewhat attracted to a modified version of Dasgupta's "r...