Lukas_Gloor comments on Humans are utility monsters - Less Wrong
The crucial question is how we want to value the creation of new sentience (aka population ethics). It has been proven impossible to come up with intuitive solutions to it, i.e. solutions that satisfy some seemingly very conservative adequacy conditions.
The view you outline as an alternative to total hedonistic utilitarianism is often left underdetermined, which hides some underlying difficulties.
In Practical Ethics, Peter Singer advocated a position he called "prior-existence preference utilitarianism". He considered it wrong to kill existing people, but not wrong to refrain from creating new people, even if their lives would be worth living. This position is awkward because it leaves you no way of saying that a very happy life (one where almost all preferences are going to be fulfilled) is better than a merely decent life that is worth living. If the very happy life were better, and the decent life is (by hypothesis) equal in value to non-creation, then denying that creating the very happy life is preferable to non-existence leads to intransitivity.
If I prefer, but only to a very tiny degree, having a child with a decent life over having one with an awesome life, would it be better if I had the child with the decent life?
In addition, nearly everyone would consider it bad to create lives that are miserable. But if the good parts of a decent life can make up for the bad parts in it, why doesn't a life consisting solely of good parts constitute something that is important to create? (This point applies most forcefully for those who adhere to a reductionist/dissolved view on personal identity.)
One way out of the dilemma is what Singer called the "moral ledger model of preferences". He proposed an analogy between preferences and debts. It is good if existing debts are paid, but there is nothing good about creating new debts just so they can be paid later. In fact, debts are potentially bad because they may remain unfulfilled, so all things being equal, we should try to avoid making debts. The creation of new sentience (in form of "preference-bundles" or newly created utility functions) would, according to this view, be at most neutral (if all the preferences will be perfectly fulfilled), and otherwise negative to the extent that preferences get frustrated.
Singer himself rejected this view because it would imply that voluntary human extinction is a good outcome. However, something about the "prior-existence" alternative he offered seems outright flawed, which is arguably a much bigger problem than a view being merely counterintuitive.
Average utilitarianism (which can be either hedonistic or about preferences / utility functions) is another way to avoid the repugnant conclusion. However, average utilitarianism comes with its own conclusions that most consider unacceptable. If the average life in the universe turns out to be absolutely miserable, is it a good thing if I bring into existence a child whose life will be slightly less miserable? Or conversely, if the average life is free of suffering and full of the most intense happiness possible, would I be acting catastrophically wrongly if I brought into existence many beings who constantly experience the peak of current human happiness (and never have unfulfilled preferences either), simply because doing so would lower the overall average?
Another point against average utilitarianism is that it seems odd that the value of creating a new life should depend on what the rest of the universe looks like. All the conscious experiences remain the same, after all, so where does this "let's just take the average!" come from?
More repugnant still, naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.
This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die. This neatly resolves the question you asked Eliezer earlier in the thread, "If you prefer no monster to a happy monster why don't you kill the monster." The answer is that once the monster is created it always exists in a timeless sense. The only way for there to be "no monster" is for it to never exist in the first place.
That still leaves the most repugnant conclusion of naive average utilitarianism, namely that if the average utility is ultranegative (i.e., everyone is tortured 24/7), creating someone with slightly less negative utility (i.e., tortured only 23/7) is better than creating nobody.
In my view average utilitarianism is a failed attempt to capture a basic intuition, namely that a small population of high-utility people is sometimes better than a large one of low-utility people, even if the large population's total utility is higher. "Take the average utility of the population" sounds at first like an easy and mathematically rigorous way to express that intuition, but it runs into problems once you figure out "munchkin" ways to manipulate the average, like adding moderately miserable people to a super-miserable world.
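The "munchkin" manipulation is easy to make concrete. A minimal sketch, with purely illustrative utility numbers:

```python
def average_utility(population):
    """Average utilitarianism: the value of a world is its mean utility."""
    return sum(population) / len(population)

miserable_world = [-100, -100, -100]          # everyone deeply miserable
larger_world = miserable_world + [-10] * 10   # add moderately miserable people

# The larger world contains strictly more total suffering...
assert sum(larger_world) < sum(miserable_world)
# ...yet average utilitarianism ranks it as the better world.
assert average_utility(larger_world) > average_utility(miserable_world)
```

Nothing about anyone's experiences improved; the ranking flipped purely because the denominator grew.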
In my view we should keep the basic intuition (especially the timeless interpretation of it), but figure out a way to express it that isn't as horrible as AU.
In that view, does someone already count as part of the average even before they are born?
I would think so. Of course, that's not to say we know that they count... my confidence that someone who doesn't exist once existed is likely much higher, all else being equal, than my confidence that someone who doesn't exist is going to exist.
This should in no way be understood as endorsing the more general formulation.
Yes and no. Yes in that the timeless view is timeless in both directions. No in that for decisionmaking we can only take into account predictions of the future and not the future itself.
For intuitive purposes, consider the current political issues of climate change and economic bubbles. It might be the case that we who are now alive could have better quality of life if we used up the natural resources and if we had the government propagate a massive economic bubble that wouldn't burst until after we died. If we don't value the welfare of possible future generations, we should do those things. If we do value the welfare of possible future generations, we should not do those things.
For technical purposes, suppose we have an AIXI-bot with a utility function that values human welfare. Examination of the AIXI definition makes it clear that the utility function is evaluated over the (predicted) total future. (Entertaining speculation: If the utility function was additive, such an optimizer might kill off those of us using more than our share of resources to ensure we stay within Earth's carrying capacity, making it able to support a billion years of humanity; or it might enslave us to build space colonies capable of supporting unimaginable throngs of future happier humans.)
For philosophical purposes, there's an important sense in which my brainstates change so much over the years that I can meaningfully, if not literally, say "I'm not the same person I was a decade ago", and expect that the same will be true a decade from now. So if I want to value my future self, there's a sense in which I necessarily must value the welfare of some only-partly-known set of possible future persons.
Presumably, only if they get born. Although that's tweakable.
If I kill someone in their sleep so they don't experience death, and nobody else is affected by it (maybe it's a hobo or something), is that okay under the timeless view because their prior utility still "counts"?
If we're talking preference utilitarianism, in the "timeless sense" you have drastically reduced the utility of the person, since the person (while still living) would have preferred not to be so killed; and you went against that preference.
It's because their prior utility (their preference not to be killed) counts, that killing someone is drastically different from them not being born in the first place.
The obvious way to avoid this is to weight each person by their measure, e.g. the amount of time they spend alive.
I think total utilitarianism already does that.
Yes, that's my point (maybe my tenses were wrong). This answer (the weighting) was meant to answer teageegeepea's question of how exactly the timeless view handles the situation.
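The measure-weighting idea above could be sketched like this. The function name and the numbers are hypothetical, and "measure" is taken to be years lived, as suggested:

```python
def weighted_average(people):
    """Timeless average, weighted by measure.

    people: list of (utility_rate, years_lived) pairs.
    Everyone who ever lives counts, weighted by time spent alive.
    """
    total = sum(rate * years for rate, years in people)
    measure = sum(years for _, years in people)
    return total / measure

# Two people: one lives 10 years at utility-rate 2/year,
# one lives 40 years at rate 1/year.
print(weighted_average([(2.0, 10), (1.0, 40)]))  # (20 + 40) / 50 = 1.2
```

Under this weighting, a short life moves the average less than a long one, which is one way of cashing out "each person counts by their measure".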
No, because they'll be deprived of any future utility they might have otherwise received by remaining alive.
So if a person is born, has 50 utility of experiences and is then killed, the timeless view says the population had one person of 50 utility added to it by their birth.
By contrast, if they were born, have 50 utility of experiences, avoid being killed, and then have an additional 60 utility of experiences before they die of old age, the timeless view says the population had one person of 110 utility added to it by their birth.
Obviously, all other things being equal, adding someone with 110 utility is better than adding someone with 50, so killing is still bad.
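The timeless accounting in the example above amounts to very simple arithmetic (numbers taken from the example):

```python
def timeless_value(lifetime_utilities):
    """Timeless total view: everyone who is ever born counts once,
    with the utility they actually accumulate over their whole life."""
    return sum(lifetime_utilities)

killed_early = timeless_value([50])       # born, gains 50 utility, then killed
full_life = timeless_value([50 + 60])     # same person living to old age

# Killing forecloses the additional 60 utility, so it is still bad.
assert full_life - killed_early == 60
```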
In real life, this would tend to make the remaining people less happy.