What your article basically shows is that to keep utilitarianism consistent with our moral intuition we have to introduce a fudge factor that favors people (such as us) who are or were alive. Having made this explicit, we should next ask whether this preference is morally justified. For me, however, it doesn't seem all that far from someone saying "I'm a utilitarian but my intuition strongly tells me that people with characteristic X are more important than everyone else so I'm going to amend utilitarianism by giving greater weight to the welfare of X-men." Then again, since the "Repugnant Conclusion" has never seemed repugnant to me, I'm probably an atypical utilitarian.
For me, however, it doesn't seem all that far from someone saying "I'm a utilitarian but my intuition strongly tells me that people with characteristic X are more important than everyone else so I'm going to amend utilitarianism by giving greater weight to the welfare of X-men."
There is a huge difference between discriminatory favoritism and valuing continued life over adding new people.
In discriminatory favoritism, people have a property that makes them morally valuable (e.g., the ability to have preferences, or to feel pleasure and pain). They also have an additional property that does not affect their morally valuable property in any significant way (e.g., skin color or family relations). Discriminatory favoritism holds that this additional property makes the welfare of these people less important, even though the additional property does not affect the morally valuable property in any way.
By contrast, in the case of valuing continued life over creating new people, the additional property (nonexistence) that the new people have does have a significant effect on their morally significant property. Last I checked, never having existed had a large effect on your ability to have preferences and your ability to feel pleasure and pain. If the person did exist in the past, or will exist in the future, that changes things; but if they never existed, don't exist, and never will exist, then I think that is significant. Arguing that it shouldn't be is like arguing you shouldn't break a rock because "if the rock could think, it wouldn't want you to."
We can illustrate it further by thinking about individual preferences instead of people. If I become addicted to heroin I will have a huge desire to take heroin, far stronger than all the desires I have now. This does not make me want to be addicted to heroin. At all. I do not care in the slightest that the heroin-addicted me would have a strong desire for heroin. Because that desire does not exist, and I intend to keep it that way. And I see nothing immoral about that.
No matter how slow the rate of disutility accumulation, the infinite time after the end of all sentience makes it dominate everything else.
That's true, but note that if e.g. 20 billion people have died up to this point, then that penalty of -20 billion gets applied equally to every possible future state, so it won't alter the relative ordering of those states. So the fact that we're getting an infinite amount of disutility from people who are already dead isn't a problem.
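(A minimal sketch of that invariance, with made-up utility numbers, in case it helps: shifting every state's total by the same constant can't change which state comes out on top.)

```python
# Toy illustration (invented numbers): a penalty applied identically to every
# future state shifts all totals by the same constant, so the ranking of
# states is unchanged.
future_states = {"A": 500.0, "B": 350.0, "C": 720.0}  # hypothetical utilities
penalty_for_past_deaths = -20e9  # same penalty applied to every state

ranked_before = sorted(future_states, key=future_states.get, reverse=True)
ranked_after = sorted(future_states,
                      key=lambda s: future_states[s] + penalty_for_past_deaths,
                      reverse=True)
assert ranked_before == ranked_after  # ordering is preserved
print(ranked_before)  # ['C', 'A', 'B'] either way
```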
Though now that you point it out, it is a problem that, under this model, creating a person who you don't expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that's ethics for you. :)
If I understand you correctly, then your solution is that the utility function actually changes every time someone is created, so before that person is created, you don't care about their death.
That's an interesting idea, but it wasn't what I had in mind. As you point out, there are some pretty bad problems with that model.
Though now that you point it out, it is a problem that, under this model, creating a person who you don't expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that's ethics for you. :)
Oddly enough, right before I noticed this thread I posted a question about this on the Stupid Questions Thread.
My question, however, was whether this problem applies to all forms of negative preferences utilitarianism. I don't know what the answer is. I wonder if SisterY or one of the other antinatalists who frequents LW does.
What amount of disutility does creating a new person generate in Negative Preference Utilitarian ethics?
I need to elaborate in order to explain exactly what question I am asking: I've been studying various forms of ethics, and when I was studying Negative Preference Utilitarianism (or anti-natalism, as I believe it's often also called) I came across what seems like a huge, titanic flaw that seems to destroy the entire system.
The flaw is this: The goal of negative preference utilitarianism is to prevent the existence of unsatisfied preferences. This means that negative preference utilitarians are opposed to having children, as doing so will create more unsatisfied preferences. And they are opposed to people dying under normal circumstances, because someone's death will prevent them from satisfying their existing preferences.
So what happens when you create someone who is going to die, and who has an unbounded utility function? The amount of preferences they have is essentially infinite. Does that mean that if such a person is created it is impossible to do any more harm, since an infinite amount of unsatisfied preferences has just been created? Does that mean that we should be willing to torture everyone on Earth for a thousand years if doing so will prevent the creation of such a person?
The problem doesn't go away if you assume humans have bounded utility functions. Suppose we have a bounded utility function, so that living an infinite number of years, or a googolplex of years, is equivalent to living a mere hundred billion years for us. That still means that creating someone who will live a normal 70-year lifespan is a titanic harm, a harm that everyone alive on Earth today should be willing to die to prevent, as it would create 99,999,999,930 years' worth of unsatisfied preferences!
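(For anyone wondering where that figure comes from, here's the arithmetic, assuming the bound caps value at exactly one hundred billion years:)

```python
# Hypothetical bounded ceiling of one hundred billion years, minus a normal lifespan.
bounded_ceiling_years = 100_000_000_000
actual_lifespan_years = 70
print(f"{bounded_ceiling_years - actual_lifespan_years:,}")  # 99,999,999,930
```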
My question is, how do negative preference utilitarians deal with this? The ones I've encountered online make an effort to avoid having children, but they don't devote every waking minute of their lives to it. And I don't think akrasia is the cause, because I've heard some of them admit that it would be acceptable to have a child if doing so reduced the preference frustration/suffering of a very large amount of existing people.
So with that introduction out of the way, my questions, on a basic level, are:
How much suffering/preference frustration would an antinatalist be willing to inflict on existing people in order to prevent a birth? How much suffering/preference frustration would a birth have to stop in order for it to be justified? For simplicity's sake, let's assume the child who is born has a normal middle class life in a 1st world country with no exceptional bodily or mental health problems.
How exactly would they go about calculating the answer to question 1?
There has to be some answer to this question, there wouldn't be whole communities of anti-natalists online if their ideology could be defeated with a simple logic problem.
Indeed.
It is also worth noting that average utilitarianism has its own share of problems: killing off anyone with below-maximum utility is an improvement.
Stuart Armstrong's proposed aggregation function has essentially the same problem: while it disincentivizes reducing the number of people, it doesn't disincentivize it much at any significant population level.
BTW: all flavors of utilitarianism suffer from the fact that there is no known satisfactory way of comparing the utility of different people. Without interpersonal utility comparison, the point is moot.
It is also worth noting that average utilitarianism has its own share of problems: killing off anyone with below-maximum utility is an improvement.
No it isn't. This can be demonstrated fairly simply. Imagine a population consisting of 100 people. 99 of those people have great lives, 1 of those people has a mediocre one.
At the time you are considering killing the person with the mediocre life, he has accumulated 25 utility. If you let him live he will accumulate 5 more utility. The 99 people with great lives will each accumulate 100 utility over the course of their lifetimes.
If you kill the guy now, average utility will be 99.25. If you let him live and accumulate 5 more utility, average utility will be 99.3. A small, but definite, improvement.
I think the mistake you're making is that after you kill the person you divide by 99 instead of 100. But that's absurd, why would someone stop counting as part of the average just because they're dead? Once someone is added to the population they count as part of it forever.
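(Here's a minimal sketch of that arithmetic, using the numbers from the example and assuming, as I argue, that the dead stay in the denominator:)

```python
# 99 people with great lives, 1 person with a mediocre one.
great_lives = [100.0] * 99        # lifetime utility of each of the 99
mediocre_so_far = 25.0            # what he has accumulated so far
mediocre_if_spared = 25.0 + 5.0   # what he ends up with if you let him live

# Dead people still count as part of the population, so always divide by 100.
avg_if_killed = (sum(great_lives) + mediocre_so_far) / 100
avg_if_spared = (sum(great_lives) + mediocre_if_spared) / 100

print(avg_if_killed)  # 99.25
print(avg_if_spared)  # 99.3 -- letting him live is the better outcome
```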
It is also worth noting that average utilitarianism has its own share of problems: killing off anyone with below-maximum utility is an improvement.
It's true that some sort of normalization assumption is needed to compare VNM utility between agents. But that doesn't defeat utilitarianism, it just shows that you need to include a meta-moral obligation to make such an assumption (and to make sure that assumption is consistent with common human moral intuitions about how such assumptions should be made).
As it happens, I do interpersonal utility comparisons all the time in my day-to-day life, using the mental capacity commonly referred to as "empathy." The normalizing assumption I seem to be making is to assume that other people's minds are similar to mine, and to match their utility to mine on a one-to-one basis, doing tweaks as necessary if I observe that they value different things than I do.
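(If it helps, here's a rough sketch of one way that normalizing assumption could be made concrete; the numbers and the min-max rescaling are just my illustration, not a standard method:)

```python
# Hypothetical raw utilities, each reported on that person's own arbitrary scale.
alice_raw = {"worst": -10.0, "best": 40.0, "status_quo": 20.0}
bob_raw   = {"worst": 0.0, "best": 1000.0, "status_quo": 250.0}

def normalize(raw):
    """Rescale so worst -> 0 and best -> 1, putting both people on one scale."""
    span = raw["best"] - raw["worst"]
    return {k: (v - raw["worst"]) / span for k, v in raw.items()}

alice, bob = normalize(alice_raw), normalize(bob_raw)
print(alice["status_quo"])  # 0.6
print(bob["status_quo"])    # 0.25 -- on this assumption, Bob is currently worse off
```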
This is interesting. I wonder what a CEV-implementing AI would do with such cases. There seems to be a point where you're inevitably going to hit the bottom of it. And in a way, this is at the same time going to be a self-fulfilling prophecy, because once you start identifying with this new image/goal of yours, it becomes your terminal value. Maybe you'd have to do separate evaluations of the preferences of all agent-moments and then formalise a distinction between "changing view based on valid input" and "changing view because of a failure of goal-preservation". I'm not entirely sure whether such a distinction will hold up in the end.
I wonder what a CEV-implementing AI would do with such cases.
Even if it does turn out that my current conception of personal identity isn't the same as my old one, but is rather a similar concept I adopted after realizing my values were incoherent, the AI might still find that the CEVs of my past and present selves concur. This is because, if I truly did adopt a new concept of identity because of its similarity to my old one, this suggests I possess some sort of meta-value that values taking my incoherent values and replacing them with coherent ones that are as similar as possible to the originals. If this is the case the AI would extrapolate that meta-value and give me a nice new coherent sense of personal identity, like the one I currently possess.
Of course, if I am right and my current conception of personal identity is based on my simply figuring out what I meant all along by "identity," then the AI would just extrapolate that.
Something you wrote in a comment further above:
This also entails accepting the Sadistic Conclusion, but that is an unavoidable part of all types of Negative Utilitarianism, whether they are of the normal variety, or the weird "sometimes negative sometimes positive depending on the context" variety I employ.
I don't think so; neither negative preference nor negative hedonistic utilitarianism implies the Sadistic Conclusion. Granted, negative utilitarians would prefer to add a small population of beings with terrible lives over a very large population of beings with lives that are almost ideal, but this would not be a proper instance of the Sadistic Conclusion. See the formulation:
The Sadistic Conclusion: In some circumstances, it would be better with respect to utility to add some unhappy people to the world (people with negative utility), rather than creating a larger number of happy people (people with positive utility).
Now, according to classical utilitarianism, the large number of happy beings would each be of "positive utility". However, given the evaluation function of the negative view, their utility is neutral if their lives were perfect, and worse than neutral if their lives contain suffering. The Sadistic Conclusion is avoided, although only persuasively so if you find the axiology of the negative view convincing. Otherwise, you're still left with an outcome that seems counterintuitive, but this seems to be much less worrisome than having something that seems to be messed up even on the theoretical level. You say you're okay with the Sadistic Conclusion because there are no alternatives, but I would assume that, if you did not yet know there are no alternatives (that you'd want to go with), you would have a strong inclination to count it as a serious deficiency of your stated view.
Addressing the comment right above now:
How much harm should a negative preference utilitarian be willing to inflict on existing people to prevent a new person from being born?
Negative utilitarians try to minimize the total amount of preference-frustrations, or suffering. Whether this is going to happen to a new person that you'll bring into existence, or whether it is going to happen to a person that already exists, does not make a difference. (No presence-bias, as I said above.) So a negative preference utilitarian should be indifferent between killing an existing person and bringing a new person (fully developed, with memories and life-goals) into existence if this later person is going to die / be killed soon as well. (Also note that being killed is only a problem if you have a preference to go on living, and that even then, it might not be the thing considered worst that could happen to someone.)
This implies that the preferences of existing people may actually lead to it being the best action to bring new people into existence. If humans have a terminal value of having children, then these preferences of course count as well, and if the children are guaranteed perfect lives, you should bring them all into existence. You should even bring them into existence if some of them are going to suffer horribly, as long as the existing people's preferences would, altogether, contain even more frustrations.
A similar question I have is, if a creature with an unbounded utility function is created, does that mean that infinite wrong has been done, since such a creature essentially has infinite unsatisfied preferences? How does negative preference utilitarianism address this?
You will need some way of normalizing all preferences, setting the difference between "everything fulfilled" and "everything frustrated" equal for beings of the same "type". Then the question is whether all sentient beings fall under the same type, or whether you want to discount according to intensity of sentience, or some measure of agency or something like that. I have not yet defined my intuitions here, but I think I'd go for something having to do with sentience.
Granted, negative utilitarians would prefer to add a small population of beings with terrible lives over a very large population of beings with lives that are almost ideal, but this would not be a proper instance of the Sadistic Conclusion. See the formulation:
When I read the formulation of the Sadistic Conclusion I interpreted "people with positive utility" to mean either a person whose life contained no suffering, or a person whose satisfied preferences/happiness outweighed their suffering. So I would consider adding a small population of terrible lives instead of a large population of almost ideal lives to be the Sadistic Conclusion.
If I understand you correctly, you are saying that negative utilitarianism technically avoids the Sadistic Conclusion because it considers a life with any suffering at all to be a life of negative utility, regardless of how many positive things that life also contains. In other words, it avoids the SC because its criteria for what makes a life positive or negative are different from the criteria Arrhenius used when he first formulated the SC. I suppose that is true. However, NU does not avoid the (allegedly) unpleasant scenario Arrhenius wanted to avoid (adding a tortured life instead of a large number of very positive lives).
Negative utilitarians try to minimize the total amount of preference-frustrations, or suffering....(Also note that being killed is only a problem if you have a preference to go on living, and that even then, it might not be the thing considered worst that could happen to someone.)
Right, but if someone has a preference to live forever does that mean that infinite harm has been done if they die? In which case you might as well do whatever afterwards, since infinite harm has already occurred? Should you torture everyone on Earth for decades to prevent such a person from being added? That seems weird.
The best solution I can currently think of is to compare different alternatives, rather than try to measure things in absolute terms. So if a person who would have lived to 80 dies at 75, that generates 5 years of unsatisfied preferences, not infinity, even if the person would have preferred to live forever. But that doesn't solve the problem of adding people who wouldn't have existed otherwise.
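(A minimal sketch of that comparative accounting, with made-up numbers: the harm is the gap between the alternatives actually on the table, not the gap to the person's unbounded ideal.)

```python
# Harm measured against the best available alternative, not against "living forever".
def comparative_harm(years_in_outcome, years_in_best_alternative):
    return years_in_best_alternative - years_in_outcome

print(comparative_harm(75, 80))  # 5 years of unsatisfied preferences, not infinity
```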
What I'm trying to say is, people have an awful lot of preferences, and generally only manage to satisfy a small fraction of them before they die. So how many unsatisfied preferences should adding a new person count as creating? How big a disutility is it compared to other disutilities, like thwarting existing preferences and inflicting pain on people?
A couple of possibilities occur to me off the top of my head. One would be to find the difference in satisfaction between the new people and the old people, and then compare it to the difference in satisfaction between the old people and the counterfactual old people in the universe where the new people were never added.
Another possibility would be to set some sort of critical level based on the maximum level of utility it is possible to give the new people with our society's current level of resources, without inflicting greater disutilities on others than the utility you give to the new people. Then weigh the difference between the new people's actual utility and their "critical possible utility," and compare that to the dissatisfaction the existing people would suffer if the new people are not added.
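(To make that second possibility concrete, here's a rough sketch with invented numbers; `critical_possible_utility` stands in for the hypothetical "critical level" described above.)

```python
# Invented numbers for illustration only.
critical_possible_utility = 80.0   # best utility society could give the new person
                                   # without inflicting greater disutility on others
new_person_actual_utility = 55.0   # what the new person is actually expected to get
frustration_if_not_added = 20.0    # existing people's dissatisfaction if no one is added

shortfall = critical_possible_utility - new_person_actual_utility  # 25.0
if shortfall > frustration_if_not_added:
    print("Don't add the person")  # the shortfall outweighs the frustration avoided
else:
    print("Adding the person is permissible")
```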
Do either of these possibilities sound plausible to you, or do you have another idea?
I consider nearly all arguments of the form "X is not a coherent concept, therefore we ought not to care about it" to be invalid.
I agree, I'm not saying you ought not care about it. My reasoning is different: I claim that people's intuitive notion of personal identity is nonsense, in a similar way as the concept of free will is nonsense. There is no numerically identical thing existing over time, because there is no way such a notion could make sense in the first place. Now, once someone realises this, he/she can either choose to group all the consciousness-moments together that trigger an intuitive notion of "same person" and care about that, even though it is now different from what they thought it was, or they can conclude that actually, now that they know it is something else, they don't really care about it at all.
I think your view is entirely coherent, by the way. I agree that a reductionist account of personal identity still leaves room for preferences, and if you care about preferences as opposed to experience-moments, you can keep a meaningful and morally important notion of personal identity via preferences (although this would be an empirical issue -- you could imagine beings without future-related preferences).
I guess the relevance for personal identity on the question of hedonism or preferences for me comes from a boost in intuitiveness of the hedonistic view after having internalized empty individualism.
It seems obvious to me that they're both relevant.
I'm 100% sure that there is something I mean by "suffering", and that it matters. I'm only maybe 10-20% sure that I'd also want to care about preferences if I knew everything there is to know.
Now, once someone realises this, he/she can either choose to group all the consciousness-moments together that trigger an intuitive notion of "same person" and care about that, even though it is now different from what they thought it was
I don't know if your analysis is right or not, but I can tell you that that isn't what it felt like I was doing when I was developing my concepts of personal identity and preferences. What it felt like I was doing was elucidating a concept I already cared about, and figuring out exactly what I meant when I said "same person" and "personal identity." When I thought about what such concepts mean I felt a thrill of discovery, like I was learning something new about myself I had never articulated before.
It might be that you are right and that my feelings are illusory, that what I was really doing was realizing a concept I cared about was incoherent and casting about until I found a concept that was similar, but coherent. But I can tell you that's not what it felt like.
EDIT: Let me make an analogy. Ancient people had some weird ideas about the concept of "strength." They thought that it was somehow separate from the body of a person, and could be transferred by magic, or by eating a strong person or animal. Now, of course, we understand that that is not how strength works. It is caused by the complex interaction of a system of muscles, bones, tendons, and nerves, and you can't transfer that complex system from one entity to another without changing many of the properties of the entity you're sending it to.
Now, considering that fact, would you say that ancient people didn't want anything coherent when they said they wanted to be strong? I don't think so. They were mistaken about some aspects about how strength works, but they were working from a coherent concept. Once they understood how strength worked better they didn't consider their previous desire for strength to be wrong.
I see personal identity as somewhat analogous to that. We had some weird ideas about it in the past, like that it was detached from physical matter. But I think that people have always cared about how they are going to change from one moment to the next, and had concrete preferences about it. And I think when I refined my concepts of personal identity I was making preferences I already had more explicit, not swapping out some incoherent preferences and replacing them with similar coherent ones.
I'm 100% sure that there is something I mean by "suffering", and that it matters. I'm only maybe 10-20% sure that I'd also want to care about preferences if I knew everything there is to know.
I am 100% certain that there are things I want to do that will make me suffer (learning unpleasant truths for instance), but that I want to do anyway, because that is what I prefer to do.
Suffering seems relevant to me too. But I have to admit, sometimes when something is making me suffer, what dominates my thoughts is not a desire for it to stop, but rather annoyance that this suffering is disrupting my train of thought and making it hard for me to think and get the goals I have set for myself accomplished. And I'm not talking about mild suffering, the example in particular that I am thinking of is throwing up two days after having my entire abdomen cut open and sewn back together.
I did not mention it because I didn't want to belabor my view, but no, I wouldn't. I think that one of the important Ideals that people seem to value is that a smaller population of people with highly satisfied preferences is better than a larger population with lives barely worth living, even if the total amount of preference satisfaction is higher in the large population.
It seems to me like your view is underdetermined in regard to population ethics. You introduce empirical considerations about which types of preferences people happen to have in order to block normative conclusions. What if people actually do want to bite the bullet, would that make it okay to do it? Suppose there were ten people, and they would be okay with getting tortured, adding a billion tortured people, plus adding a sufficiently large number of people with preferences more-satisfied-than-not. Would this ever be ok according to your view? If not, you seem to not intrinsically value the creation of satisfied preferences.
I agree with your analysis of "selfish".
If not, you seem to not intrinsically value the creation of satisfied preferences.
You're right that I do not intrinsically value the creation of all satisfied preferences. This is where my version of Moore's Ideal Utilitarianism comes in. What I value is the creation of people with satisfied preferences if doing so also fulfills certain moral ideals I (and most other people, I think) have about how the world ought to be. In cases where the creation of a person with satisfied preferences would not fulfill those ideals I am essentially a negative preference utilitarian, I treat the creation of a person who doesn't fulfill those ideals the same way a negative preference utilitarian would.
I differ from Moore in that I think the only way to fulfill an ideal is to create (or not create) a person with certain preferences and satisfy those preferences. I don't think, like he did, that you can (for example) increase the beauty in the world by creating pretty objects no one ever sees.
I think a good analogy would again be Parfit's concept of global preferences. If I read a book, and am filled with a mild preference to read more books with the same characters, such a desire is in line with my global preferences, so it is good for it to be created. By contrast, being addicted to heroin would fill me with a strong preference to use heroin. This preference is not in line with my global preferences, so I would be willing to hurt myself to avoid creating it.
Suppose there were ten people, and they would be okay with getting tortured, adding a billion tortured people, plus adding a sufficiently large number of people with preferences more-satisfied-than-not.
I have moral ideals about many things, which include how many people there should be, their overall level of welfare, and most importantly, what sort of preferences they ought to have. It seems likely to me that the torture-plus-new-people scenario would violate those ideals, so I probably wouldn't go along with it.
To give an example where creating the wrong type of preference would be a negative, I would oppose the creation of a sociopath or a paperclip maximizer, even if their life would have more satisfied preferences than not. Such a creature would not be in line with my ideals about what sort of creatures should exist. I would even be willing to harm myself or others, to some extent, to prevent their creation.
This brings up a major question I have about negative preference utilitarianism, which I wonder if you could answer since you seem to have thought more about the subject of negative utilitarianism than I have. How much harm should a negative preference utilitarian be willing to inflict on existing people to prevent a new person from being born? For instance, suppose you had a choice between torturing every person on Earth for the rest of their lives, or creating one new person who will live the life of a rich 1st world person with a high happiness set point? Surely you wouldn't torture everyone on Earth? A hedonist negative utilitarian wouldn't of course, but we're talking about negative preference utilitarianism.
A similar question I have is, if a creature with an unbounded utility function is created, does that mean that infinite wrong has been done, since such a creature essentially has infinite unsatisfied preferences? How does negative preference utilitarianism address this?
The best thing I can come up with is to give the creation of such a creature a utility penalty equal to "however much utility the creature accumulates over its lifetime, minus x," where x is a moderately sized number. However, it occurs to me that someone who's thought more about the subject than me might have figured out something better.
Doesn't many-worlds solve this neatly? Thinking of it as 99.9999999% of the mes sacrificing ourselves so that the other 0.00000001% can live a ridiculously long time makes sense to me. The problem comes when you favor this-you over all the other instances of yourself.
Or maybe there's a reason I stay away from this kind of thing. <shrug>
Whatever Omega is doing that might kill you might not be tied to the mechanism that divides universes. It might be that the choice is between huge chance of all of the yous in every universe where you're offered this choice dying, vs. tiny chance they'll all survive.
Also, I'm pretty sure that Eliezer's argument is intended to test our intuitions in an environment without extraneous factors like MWI. Bringing MWI into the problem is sort of like asking if there's some sort of way to warn everyone off the tracks so no one dies in the Trolley Problem.
But not in any absolute sense, just because this is consistent with your moral intuition.
Not relevant because we are considering bringing these people into existence at which point they will be able to experience pain and pleasure.
Imagine you know that one week from now someone will force you to take heroin and you will become addicted. At this point you will be able to have an OK life if given a regular amount of the drug but will live in permanent torture if you never get any more of the substance. Would you pay $1 today for the ability to consume heroin in the future?
Yes, but I would argue that the fact that they can't actually do that yet makes a difference.
Yes, if I was actually going to be addicted. But it was a bad thing that I was addicted in the first place, not a good thing. What I meant when I said I "do not care in the slightest" was that the strength of that desire was not a good reason to get addicted to heroin. I didn't mean that I wouldn't try to satisfy that desire if I had no choice but to create it.
Similarly, in the case of adding lots of people with short lives, the fact that they would have desires and experience pain and pleasure if they existed is not a good reason to create them. But it is a good reason to try to help them extend their lives, and lead better ones, if you have no choice but to create them.
Thinking about it further, I realized that you were wrong in your initial assertion that "we have to introduce a fudge factor that favors people (such as us) who are or were alive." The types of "fudge factors" that are being discussed here do not, in fact do that.
To illustrate this, imagine Omega presents you with the following two choices:
1. Everyone who currently exists receives a small amount of additional utility. Also, in the future the number of births in the world will vastly increase, and the lifespan and level of utility per person will vastly decrease. The end result will be the Repugnant Conclusion for all future people, but existing people will not be harmed; in fact they will benefit from it.
2. Everyone who currently exists loses a small amount of their utility. In the future far fewer people will be born than in Option 1, but they will live immensely long lifespans full of happiness. Total utility is somewhat smaller than in Option 1, but concentrated in a smaller number of people.
Someone using the fudge factor Kaj proposes in the OP would choose 2, even though it harms every single existing person in order to benefit people who don't exist yet. It is not biased towards existing persons.
I basically view adding people to the world in the same light as I view adding desires to my brain. If a desire is ego-syntonic (i.e. a desire to read a particularly good book) then I want it to be added and will pay to make sure it is. If a desire is ego-dystonic (like using heroin) I want it to not be added and will pay to make sure it isn't. Similarly, if adding a person makes the world more like my ideal world (i.e. a world full of people with long eudaemonic lives) then I want that person to be added. If it makes it less like my ideal world (i.e. Repugnant Conclusion) I don't want that person to be added and will make sacrifices to stop it (for instance, I will spend money on contraceptives instead of candy).
As long as the people we are considering adding are prevented from ever having existed, I don't think they have been harmed in the same way that discriminating against an existing person for some reason like skin color or gender harms someone, and I see nothing wrong with stopping people from being created if it makes the world more ideal.
Of course, needless to say, if we fail and these people are created anyway, we have just as much moral obligation towards them as we would towards any preexisting person.