We should give everybody as many utilons as we can
Not at all. We're all just trying to maximize our own utilons. My utility function has a term in it for other people's happiness. Maybe it has a term for other people's utilons (I'm not sure about that one, though). But when I say I want to maximize utility, I'm just maximizing one utility function: mine. Consideration for others is already factored in.
In fact I think you're confusing two different topics: decision theory and ethics. Decision theory tells us how to get more of what we want (inclu...
This post seems to reflect a conflation of "utilons" with "wealth", as well as a conflation of "utilons" with happiness.
We have orders of magnitude more wealth per person than our ancestors. We are not particularly good at turning wealth into happiness. This says very, very little about how good we are at achieving any goals that we have that are unrelated to happiness. For example, the world is far less dangerous than it used to be. Even taking into account two world wars, people living in the twentieth century we...
It seems that it is possible to compare the happiness of two different people; e.g., I can say that giving the cake to Mary would give her twice as much happiness as it would give Fred. I think that's all you need to counter your first objection. You'd need something much more formal if you were actually trying to calculate it out rather than use it as a principle, but as far as I know no one does this.
This is a big problem. I personally solve it by not using utilitarianism on situations that create or remove people. This is an inelegant hack, but it works.
It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics.
I'd be interested to know if that's true. I don't accept utilitarianism as a basis for ethics. Alicorn's recent post suggests she doesn't either. I think quite a few rationalists are also libertarian leaning and several critiques of utilitarianism come from libertarian philosophies.
The modern economy is wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors did - and it doesn't leave people that much happier.
Current research suggests it does:
The facts about income and happiness turn out to be much simpler than first realized:
1) Rich people are happier than poor people.
2) Richer countries are happier than poorer countries.
3) As countries get richer, they tend to get happier.
But my main problem is that there's very little evidence that getting utilons actually increases anybody's happiness significantly.
If you give someone more utilons, and they do not get happier, you're doing it wrong by definition. Conversely, someone cannot get happier without acquiring more utilons by definition.
You've rejected a straw man. You're probably right to reject said straw man, but it doesn't relate to utilitarianism.
This reminds me of a talk by Peter Railton I attended several years ago. He described happiness as a kind of delta function: we are as happy as our difference from our set point, but we drift back to our set point if we don't keep getting new input. Increasing one's set point will make one "happier" in the way you seem to be using the word, and it's probably possible (we already treat depressed people, who have unhealthily low set points and are resistant to more customary forms of experiencing positive change in pleasure).
I think it's pretty clear we should have a term in our social utility function that gives value to complexity (of the universe, of society, of the environment, of our minds). That makes me more than just a preference utilitarian. It's an absolute objective value. It may even, with interpretation, be sufficient by itself.
This is why I said I am a descriptive emotivist but a normative utilitarian. The fact that people don't act in accordance with a system doesn't mean the system isn't moral. I'd be pretty dubious of any moral system that said people were actually doing everything right.
Yeah, tell me about it. Right now I'm thinking that a perfectly rational person has no intrinsic time discount, but ends up with a very hefty effective discount because she can't make future plans with high confidence. For example, investing all my money now and donating the principal plus interest to charity in a thousand years only works if I'm sure both the banking system and human suffering will last a millennium.
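This can be sketched numerically: even with zero pure time preference, a small annual chance that the plan fails produces a hefty effective discount over a millennium. All the numbers below (interest rate, survival probability) are made up purely for illustration:

```python
# Hypothetical illustration: a small annual probability of institutional
# failure (bank collapse, the charity disappearing, etc.) acts like a
# discount rate, even for an agent with no intrinsic time preference.
# All numbers here are invented for illustration.

def expected_value(principal, years, interest=0.05, p_survive=0.99):
    """Expected payoff of investing `principal` for `years`, where each
    year there is a (1 - p_survive) chance the plan fails entirely."""
    return principal * ((1 + interest) * p_survive) ** years

# Naive compounding vs. survival-adjusted value of $1 over 1000 years:
naive = 1.0 * 1.05 ** 1000
adjusted = expected_value(1.0, 1000)
print(f"naive: {naive:.3e}, survival-adjusted: {adjusted:.3e}")
```

With these made-up numbers, the survival-adjusted value is tens of thousands of times smaller than the naive compounded sum, which is the "hefty discount" in question.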
"Utilons don't make people happier" is a weird way of putting things. It sounds to me a lot like "meters don't make something longer." If you're adding meters to something, and it's not getting longer, you're using the word "meter" wrong.
I don't know much about academic consequentialism, but I'd be surprised if someone hadn't come up with the idea of the utilon x second, i.e., adding a time dimension and trying to maximize utilon x seconds. If giving someone a new car only makes them happier for the first few weeks, then that only provides so many utilon x seconds. If getting married makes you happier for the rest of your life, well, that provides more utilon x seconds. If you want to know whether you should invest your effort in getting people more cars or getting them into relationships, you'll want to take that into account.
Probably an intelligent theory of utilon x seconds would end up looking completely different from modern consumer culture. Probably anyone who applied it would also be much much happier than a modern consumer. If people can't calculate what does and doesn't provide them with utilon x seconds, they either need to learn to do so, ask someone who has learned to do so to help manage their life, or resign themselves to being less than maximally happy.
I have a feeling this is very different from the way economists think about utility, but that's not necessarily a bad thing.
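The cars-vs-relationships comparison above can be sketched as a toy calculation: integrate each intervention's happiness boost over time as it decays back to the set point. The decay curves and numbers are invented purely for illustration:

```python
# Toy sketch of maximizing utilon x seconds: sum each intervention's
# happiness boost over time as it decays exponentially back to the set
# point. All boosts and half-lives are invented for illustration.

def utilon_seconds(boost, half_life_weeks, horizon_weeks=52 * 40):
    """Total utilon x weeks from an intervention whose initial happiness
    boost halves every `half_life_weeks`, summed over `horizon_weeks`."""
    total = 0.0
    level = boost
    decay = 0.5 ** (1 / half_life_weeks)  # per-week decay factor
    for _ in range(horizon_weeks):
        total += level
        level *= decay
    return total

new_car = utilon_seconds(boost=10.0, half_life_weeks=3)     # fades fast
marriage = utilon_seconds(boost=2.0, half_life_weeks=1000)  # fades slowly
print(new_car, marriage)
```

Under these invented numbers, the low-intensity but slow-fading boost accumulates far more utilon x seconds than the intense but short-lived one, which is exactly the point about cars versus relationships.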
It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics. The version that seems most popular goes something like this:
There are a few obvious problems here that I won't bother with today:
But my main problem is that there's very little evidence that getting utilons actually increases anybody's happiness significantly. The correlation might very well be positive, but it's just very weak. Giving people what they want is just not going to make them happy, and not giving them what they want is not going to make them unhappy. This makes perfect evolutionary sense - an organism that's content with what it has will fail in competition with one that always wants more, no matter how much it has. An organism that's so depressed it just gives up will fail in competition with one that tries to function as best it can in its shabby circumstances. We all have both extremely successful and extremely unsuccessful cases among our ancestors, and the only reason they are on our family tree is that they kept striving for a bit more, or respectively made do with whatever little they could get.
The modern economy is wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors did - and it doesn't leave people that much happier. It seems to me that the only realistic way to significantly increase global happiness is to hack the happiness function in the brain directly - to make people happy with what they have. If there's a limit in our brains, some number of utilons at which we stay happy, it's there only because reaching it almost never happened in our evolutionary history.
There might be some drugs, or activities, or memes that increase happiness without dealing with utilons. Shouldn't we be focusing on those instead?