In response to The Lifespan Dilemma
Comment author: CronoDAS 15 September 2009 01:54:51AM *  15 points [-]

My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of "barely worth living". In order to voluntarily create a new person, what we need is a life that is worth celebrating or worth birthing, one that contains more good than ill and more happiness than sorrow - otherwise we should reject the step where we choose to birth that person. Once someone is alive, on the other hand, we're obliged to take care of them in a way that we wouldn't be obliged to create them in the first place - and they may choose not to commit suicide, even if their life contains more sorrow than happiness. If we would be saddened to hear the news that such a person existed, we shouldn't kill them, but we should not voluntarily create such a person in an otherwise happy world. So each time we voluntarily add another person to Parfit's world, we have a little celebration and say with honest joy "Whoopee!", not "Damn, now it's too late to uncreate them."

And then the rest of the Repugnant Conclusion - that it's better to have a million lives very worth celebrating, than a billion lives slightly worth celebrating - is just "repugnant" because of standard scope insensitivity. The brain fails to multiply a billion small birth celebrations to end up with a larger total celebration of life than a million big celebrations. Alternatively, average utilitarians - I suspect I am one - may just reject the very first step, in which the average quality of life goes down.

This tends to imply the Sadistic Conclusion: that it is better to create some lives that aren't worth living than it is to create a large number of lives that are barely worth living.

Average utilitarianism also tends to choke horribly under other circumstances. Consider a population whose average welfare is negative. If you then add a bunch of people whose welfare is slightly less negative than the average, you improve average welfare, but you've still just created a bunch of people who would prefer not to have existed. That can't be good.
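To make the arithmetic concrete, here's a toy sketch (all the welfare numbers are invented purely for illustration):

```python
def average(pop):
    return sum(pop) / len(pop)

existing = [-10.0] * 100   # a population whose average welfare is -10
newcomers = [-5.0] * 100   # each newcomer's welfare is negative, but above the average

before = average(existing)              # -10.0
after = average(existing + newcomers)   # -7.5: the average "improved"

# Yet every person we added has negative welfare
# and would prefer not to have existed.
```

The average goes up even though every added life is one its owner would rather not have lived, which is exactly the choking point.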

There are several "impossibility" theorems that show it is impossible to come up with a way to order populations that satisfies all of a group of intuitively appealing conditions.

Comment author: Ghatanathoah 29 May 2014 01:51:15AM 0 points [-]

This tends to imply the Sadistic Conclusion: that it is better to create some lives that aren't worth living than it is to create a large number of lives that are barely worth living.

I think that the Sadistic Conclusion is correct. I argue here that it is far more in line with typical human moral intuitions than the repugnant one.

There are several "impossibility" theorems that show it is impossible to come up with a way to order populations that satisfies all of a group of intuitively appealing conditions.

If you take the underlying principle of the Sadistic Conclusion, but change the concrete example to something smaller scale and less melodramatic than "Create lives not worth living to stop the addition of lives barely worth living," you will find that it is very intuitively appealing.

For instance, if you ask people whether they should practice responsible family planning or spend money combating overpopulation, they agree. But (if we assume that the time and money spent on these efforts could have been devoted to something more fun) this is the same principle. The only difference is that instead of creating a new life not worth living, we are subtracting an equivalent amount of utility from existing people.

Comment author: TheOtherDave 06 May 2014 01:03:06PM 0 points [-]

It's worth noting that the question of what is a better way of evaluating such prospects is distinct from the question of how I in fact evaluate them. I am not claiming that having multiple incommensurable metrics for evaluating the value of lived experience is a good design, merely that it seems to be the way my brain works.

Given the way my brain works, I suspect repeating a typical day as you posit would add disvalue, for reasons similar to #2.

Would it be better if I instead evaluated it as per #1? Yeah, probably.

Still better would be if I had a metric for evaluating events such that #1 and #2 converged on the same answer.

Comment author: Ghatanathoah 07 May 2014 03:02:45AM *  0 points [-]

It's worth noting that the question of what is a better way of evaluating such prospects is distinct from the question of how I in fact evaluate them.

Good point. What I meant was closer to "which method of evaluation does the best job of capturing how you intuitively assign value" rather than which way is better in some sort of objective sense. For me #1 seems to describe how I assign value and disvalue to repeating copies better than #2 does, but I'm far from certain.

So I think that from my point of view, Omega offering to extend the length of a repeated event so it contains a more even mixture of good and bad is the same as Omega offering to not repeat a bad event and repeat a good event instead. Both options contain zero value; I would rather Omega leave me alone and let me go do new things. But they're better than him repeating a bad event.

Comment author: TheOtherDave 03 May 2014 08:56:16PM 0 points [-]

Yup, that makes sense, but doesn't seem to describe my own experience.

For my own part, I think the parts of my psyche that judge the kinds of negative scenarios we're talking about use a different kind of evaluation than the parts that judge the kinds of positive scenarios we're talking about.

I seem to treat the "bad stuff" as bad for its own sake... avoiding torture feels worth doing, period end of sentence. But the "good stuff" feels more contingent, more instrumental, feels more like it's worth doing only because it leads to... something. This is consistent with my experience of these sorts of thought experiments more generally... it's easier for me to imagine "pure" negative value (e.g., torture, suffering, etc. in isolation) than "pure" positive value (e.g., joy, love, happiness, satisfaction in isolation). It's hard for me to imagine some concrete thing that I would actually trade for a year of torture, for example, though in principle it seems like some such thing ought to exist.

And it makes some sense that there would be a connection between how instrumental something feels, and how I think about the prospect of repeating it. If torture feels bad for its own sake, then when I contemplate repetitions of the same torture, it makes sense that I would "add up the badness" in my head... and if good stuff doesn't feel good for its own sake, it makes sense that I wouldn't "add up the goodness" in my head in the same way.

WRT #4, what I'm saying is that copying the good moments feels essentially valueless to me, while copying the bad moments has negative value. So I'm being offered a choice between "bad thing + valueless thing" and "bad thing", and I don't seem to care. (That said, I'd probably choose the former, cuz hey, I might be wrong.)

Comment author: Ghatanathoah 06 May 2014 03:43:43AM 0 points [-]

I think I understand your viewpoint. I do have an additional question, though, which is what you think about how to evaluate moments that have a combination of good and bad.

For instance, let's suppose you have the best day ever, except that you had a mild pain in your leg for most of the day. All the awesome stuff you did during the day more than made up for that mild pain, though.

Now let's suppose you are offered the prospect of having a copy of you repeat that day exactly. We both agree that doing this would add no additional value; the question is whether it would be valueless or add disvalue.

There are two possible ways I see to evaluate this:

  1. You could add up all the events of the day and decide they contain more good than bad; therefore this was a "good" day. "Good" things have no value when repeated, so you would assign zero value to having a copy relive this day. You would not pay to have it happen, but you also wouldn't exert a great effort to stop it.

  2. You could assign value to the events first before adding them up, assigning zero value to all the good things and a slight negative value to the pain in your leg. Therefore you would assign negative value to having a copy relive this day and would pay to stop it from happening.

To me (1) seems to be an intuitively better way of evaluating the prospect of a copy reliving the day than (2). It also lines up with my intuition that it wouldn't be bad news if MWI was true. But I wonder if you would think differently?
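The difference between (1) and (2) is just the order in which the summing and the "repetition discount" are applied. Here's a toy sketch, with made-up hedonic values for the repeated day:

```python
# Hypothetical values for the moments of the day: mostly good, one mild pain.
moments = [5.0, 3.0, 4.0, -1.0]

def whole_day_first(moments):
    # (1) Sum the day first. A net-good day repeated is worth zero;
    # only a net-bad day would retain its disvalue when repeated.
    return min(sum(moments), 0.0)

def moments_first(moments):
    # (2) Discount each repeated moment first: good moments go to zero,
    # bad moments keep their disvalue.
    return sum(min(m, 0.0) for m in moments)

whole_day_first(moments)   # 0.0  -> indifferent to the copy reliving the day
moments_first(moments)     # -1.0 -> would pay to stop it
```

The two rules agree on a day that is bad on net; they only come apart on net-good days that contain some bad moments, which is exactly the case in question.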

Comment author: TheOtherDave 02 May 2014 05:00:55PM 0 points [-]

I agree with you that my preferences aren't inconsistent, I just value repetition differently for +v and -v events.

For my own part, I share your #1 and #2, don't share your #3 (that is, I'd rather Omega not reproduce the bad stuff, but if they're going to do so, it makes no real difference to me whether they reproduce the good stuff as well), and share your indifference in #4.

Comment author: Ghatanathoah 02 May 2014 11:45:21PM *  0 points [-]

For my own part, I share your #1 and #2, don't share your #3 (that is, I'd rather Omega not reproduce the bad stuff, but if they're going to do so, it makes no real difference to me whether they reproduce the good stuff as well)

One thing that makes me inclined towards #3 is the possibility that the multiverse is constantly reproducing my life over and over again, good and bad. I do not think that I would consider it devastatingly bad news if it turns out that the Many-Worlds interpretation is correct.

If I really believed that repeated bad experiences could never be compensated for by repeated good ones, I would consider the Many Worlds Interpretation to be the worst news ever, since there are tons of me out in the multiverse having a mix of good and bad experiences, but the good ones "don't count" because they already happened somewhere else. But I don't consider it bad news. I don't think that if there were a machine that could stop the multiverse from splitting, I would pay to have it built.

One way to explain my preferences in this regard would be that I believe that repeated "good stuff" can compensate for repeated "bad stuff," but that it can't compensate for losing brand new "good stuff" or experiencing brand new "bad stuff."

However, I am not certain about this. There may be some other explanation for my preferences. Another possibility that I think is likely is that repeated "good stuff" only loses its value for copies that have a strong causal connection to the current me. Other mes who exist somewhere out in the multiverse have no connection to this version of me whatsoever, so my positive experiences don't detract from their identical ones. But copies that I pay to have created (or pay to prevent) are connected to me in such a fashion, so I (and they) do feel that their repeated experiences are less valuable.

This second explanation seems a strong contender as well, since I already have other moral intuitions in regards to causal connection (for instance, if there was a Matrioshka brain full of quintillions of environmentalists in a part of the multiverse so far off they will never interact with us, I would not consider their preferences to be relevant when forming environmental policy, but I would consider the preferences of environmentalists here on Earth right now to be relevant). This relates to that "separability" concept we discussed a while ago.

Or maybe both of these explanations are true. I'm not sure.


Also, I'm curious, why are you indifferent in case 4? I think I might not have explained it clearly. What I was going for was Omega saying "I'm making a copy of you in a bad time of your life. I can either not do it at all, or extend the copy's lifespan so that it is now a copy of a portion of your life that had both good and bad moments. Both options cost $10." I am saying that I think I might be indifferent about what I spend $10 on in that case.

Comment author: TheOtherDave 12 December 2011 02:53:59AM 1 point [-]

Heh. This is another case where I'd like to know up and down votes rather than their sum.

Anyway, to answer your question: I have no idea what I would say after a year of torture, but speaking right now: I have at least some interest in avoiding a year's worth of torture for an observer, so given the option I'd rather you didn't do it. So, no, I wouldn't say the same thing.

But that doesn't seem to depend on the fact that the observer in question is a simulation of me from a year ago.

Comment author: Ghatanathoah 02 May 2014 03:56:50AM 0 points [-]

I don't see anything inconsistent about believing that a good life loses value with repetition, but a bad life does not lose disvalue. It's consistent with the Value of Boredom, which I thoroughly endorse.

Now, there's a similar question where I think my thoughts on the subject might get a little weird. Imagine you have some period of your life that started out bad, but then turned around and then became good later so that in the end that period of life was positive on the net. I have the following preferences in regards to duplicating it:

  1. I would not pay to have a simulation that perfectly relived that portion of my life.

  2. If Omega threatened to simulate the bad first portion of that period of life, but not the good parts that turned it around later, I would pay him not to.

  3. If Omega threatened to simulate the bad first portion of that period of life, but not the good parts that turned it around later, I would probably pay him to extend the length of the simulation so that it also encompassed the compensating good part of that period of life.

  4. If the cost of 2 and 3 was identical, I think I would probably be indifferent. I would not care whether the simulation never occurred, or if it was extended.

So it seems like I think that repeated good experiences can sometimes "make up for" repeated bad ones, at least if they occur in the same instance of simulation. But all they can do is change the value I give to the simulation from "negative" to "zero." They can't make it positive.

These preferences I have do strike me as kind of weird. But on the other hand, the whole situation is kind of weird, so maybe any preferences I have about it will end up seeming weird no matter what they are.

Comment author: Viliam_Bur 14 March 2014 10:11:34AM *  10 points [-]

Having children is easy. Any idiot can do it; many of them do; some of them have a dozen children.

Is it more beneficial for a society when smart people have children? Yes, it is... but good luck explaining why without saying something politically offensive.

Are people with better genes and higher IQ inherently more worthy? Nice try, Hitler! Do smart people provide better education and other support for their children? This should be solved by social engineering; we should provide better schools for everyone, maybe give everyone free books, etc.

If you are not allowed to specifically praise smart people (and only smart people!) for having more children, then having children cannot provide the same status for smart people as their careers can. A smart person in IT can proudly say they do something that 99% of people don't understand. They don't even have to say it; everyone already knows. A smart parent with well-mannered and educated smart children... is still perceived on the same level as an average parent with the same number of average children. You can do better work, but most people won't recognize it, so it will not give you status. There is no "best parent in the city" award you could show everyone; no official ladder to climb.

You cannot say "X is better at being a parent than Y" without saying "children of X are better than children of Y". And the latter is very offensive. "My child is better than your child" is more offensive than "my understanding of quantum physics is better than your understanding of quantum physics".

Comment author: Ghatanathoah 15 March 2014 02:21:16AM 1 point [-]

It seems like there's an easy way around this problem. Praise people who are responsible and financially well-off for having more kids. These traits are correlated with good genes and IQ, so it'll have the same effect.

It seems like we already do this to some extent. I hear others condemning people who are irresponsible and low-income for having too many children fairly frequently. It's just that we fail to extend this behavior in the other direction, to praising responsible people for having children.

I'm not sure why this is. It could be for one of the reasons listed in the OP. Or it could just be because the tendency to praise and the tendency to condemn are not correlated.

Comment author: Kaj_Sotala 26 January 2014 07:32:59AM 1 point [-]

Which means that antinatalism/negative preference utilitarianism would be willing to inflict massive suffering on existing people to prevent the birth of one person who would have a better life than anyone on Earth has ever had up to this point, but still die with a lot of unfulfilled desires.

Is that really how preference utilitarianism works? I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment. Otherwise total preference utilitarianism would seem to reduce to negative preference utilitarianism as well, since presumably the unsatisfied preferences would always outnumber the satisfied ones.

Yes, but from a preference utilitarian standpoint it doesn't need to actually be possible to live forever. It just has to be something that you want.

I'm confused. How is wanting to live forever in a situation where you don't think that living forever is possible, different from any other unsatisfiable preference?

If the disutility they assign to having children is big enough they should still spend every waking hour doing something about it. What if some maniac kidnaps them and forces them to have a child? The odds of that happening are incredibly small, but they certainly aren't zero. If they really assign such a giant negative to having a child they should try to guard even against tiny possibilities like that.

That doesn't sound right. The disutility is huge, yes, but the probability is so low that focusing your efforts on practically anything with a non-negligible chance of preventing further births would be expected to prevent many times more disutility. Like supporting projects aimed at promoting family planning and contraception in developing countries, pro-choice policies and attitudes in your own country, rape prevention efforts to the extent that you think rape causes unwanted pregnancies that are nonetheless carried to term, anti-natalism in general (if you think you can do it in a way that avoids the PR disaster for NU in general), even general economic growth if you believe that the connection between richer countries and smaller families is a causal and linear one. Worrying about vanishingly low-probability scenarios, when that worry takes up cognitive cycles and thus reduces your chances of doing things that could have an even bigger impact, does not maximize expected utility.
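A hedged sketch of the expected-value point, with entirely invented numbers (the disutility assigned to one birth, the kidnapping probability, and the advocacy payoff are all assumptions, not anything from the thread):

```python
BIRTH_DISUTILITY = 1e9       # assumed disutility per birth, arbitrary units

# Guarding against the maniac-kidnapper scenario:
kidnap_prob = 1e-9           # assumed probability of the forced-pregnancy case
ev_guarding = kidnap_prob * BIRTH_DISUTILITY        # ~1 unit averted in expectation

# Spending the same time and money on family-planning advocacy:
births_prevented = 10.0      # assumed effect of the advocacy
ev_advocacy = births_prevented * BIRTH_DISUTILITY   # 1e10 units averted

# Even granting a huge per-birth disutility, the high-probability
# intervention dominates by many orders of magnitude.
assert ev_advocacy > ev_guarding
```

The per-birth disutility cancels out of the comparison entirely: however large it is, it multiplies both sides, so only the expected number of births prevented matters.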

I think, for instance, that a total utilitarian could at least say something like "no less than a thousand rush hour frustrations, no more than a million."

I don't know. At least I personally find it very difficult to compare experiences of such differing magnitudes. Someone could come up with a number, but that feels like trying to play baseball with verbal probabilities - the number that they name might not have anything to do with what they'd actually choose in that situation.

Comment author: Ghatanathoah 27 January 2014 10:00:56PM *  -1 points [-]

I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment

I don't think that would be the case. The main intuitive advantage negative preference utilitarianism has over negative hedonic utilitarianism is that it considers death to be a bad thing, because it results in unsatisfied preferences. If it only counted immediate consciously held goals it might consider death a good thing, since it would prevent an agent from developing additional unsatisfied preferences in the future.

However, you are probably onto something by suggesting some method of limiting which unsatisfied preferences count as negative. "What a person is thinking about at any given moment" has the problems I pointed out earlier, but another formulation could well work better.

Otherwise total preference utilitarianism would seem to reduce to negative preference utilitarianism as well, since presumably the unsatisfied preferences would always outnumber the satisfied ones.

I believe Total Preference Utilitarianism typically avoids this by regarding the creation of most types of unsatisfied preferences as neutral rather than negative. While there are some preferences whose dissatisfaction typically counts as negative, such as the preference not to be tortured, most preference creations are neutral. I believe that under TPU, if a person spends the majority of their life not preferring to be dead, then their life is considered positive no matter how many unsatisfied preferences they have.

At least I personally find it very difficult to compare experiences of such differing magnitudes.

I feel like I could try to get some sort of ballpark by figuring out how much I'm willing to pay to avoid each thing. For instance, if I had an agonizing migraine I knew would last all evening, and had a choice between paying for an instant cure pill or a device that would magically let me avoid traffic for the next two months, I'd probably put up with the migraine.

I'd be hesitant to generalize across the whole population, however, because I've noticed that I don't seem to mind pain as much as other people, but find boredom far more frustrating than average.

Comment author: James_Miller 24 January 2014 01:47:11PM 0 points [-]

I basically view adding people to the world in the same light as I view adding desires to my brain.

Interesting way to view it. I guess I see a set of all possible types of sentient minds with my goal being to make the universe as nice as possible for some weighted average of the set.

Comment author: Ghatanathoah 24 January 2014 08:30:22PM *  0 points [-]

I guess I see a set of all possible types of sentient minds with my goal being to make the universe as nice as possible for some weighted average of the set.

I used to think that way, but it resulted in what I considered to be too many counterintuitive conclusions. The biggest one, which I absolutely refuse to accept, is that we ought to kill the entire human race and use the resources doing that would free up to replace humanity with creatures whose desires are easier to satisfy. Paperclip maximizers or wireheads, for instance. Humans have such picky, complicated goals, after all... I consider this conclusion roughly a trillion times more repugnant than the original Repugnant Conclusion.

Naturally, I also reject the individual form of this conclusion, which is that we should kill people who want to read great books, climb mountains, run marathons, etc. and replace them with people who just want to laze around. If I was given a choice between having an ambitious child with a good life, or an unambitious child with a great life, I would pick the ambitious one, even though the total amount of welfare in the world would be smaller for it. And as long as the unambitious child doesn't exist, never existed, and never will exist I see nothing wrong with this type of favoritism.

Comment author: Kaj_Sotala 24 January 2014 06:33:24PM *  2 points [-]

(To the extent that I'm negative utilitarian, I'm a hedonistic negative utilitarian, so I can't speak for the preference NUs, but...)

So what happens when you create someone who is going to die, and has an unbounded utility function?

Note that every utilitarian system breaks once you introduce even the possibility of infinities. E.g. a hedonistic total utilitarian will similarly run into the problem that, if you assume that a child has the potential to live for an infinite amount of time, then the child can be expected to experience both an infinite amount of pleasure and an infinite amount of suffering. Infinity minus infinity is undefined, so hedonistic total utilitarianism would be incapable of assigning a value to the act of having a child. Now saving lives is in this sense equivalent to having a child, so the value of every action that has even a remote chance of saving someone's life becomes undefined as well...
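In IEEE-754 arithmetic this failure mode is literal: infinity minus infinity is not a number, so any decision rule built on that difference returns no answer. A minimal sketch:

```python
import math

# Unbounded lifespan -> unbounded expected pleasure and suffering.
expected_pleasure = math.inf
expected_suffering = math.inf

# The total utilitarian's valuation of creating the child:
value_of_having_child = expected_pleasure - expected_suffering

math.isnan(value_of_having_child)  # True: the calculation assigns no value at all
```

Note this is just an analogy for the mathematical point, not a claim that utilities are floats; any formalism with genuine infinities hits the same undefined difference.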

A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters.

The ones I've encountered online make an effort to avoid having children, but they don't devote every waking minute of their lives to it.

I take it you mean to say that they don't spend all of their waking hours convincing other people not to have children, since it doesn't take that much effort to avoid having children yourself. One possible answer is that loudly advocating "you shouldn't have children, it's literally infinitely bad" is a horrible PR strategy that will just get your movement discredited, and e.g. talking about NU in the abstract and letting people piece together the full implications themselves may be more effective.

Also, are they all transhumanists? For the typical person (or possibly even typical philosopher), infinite lifespans being a plausible possibility might not even occur as something that needs to be taken into account.

How much suffering/preference frustration would an antinatalist be willing to inflict on existing people in order to prevent a birth? How much suffering/preference frustration would a birth have to stop in order for it to be justified? For simplicity's sake, let's assume the child who is born has a normal middle class life in a 1st world country with no exceptional bodily or mental health problems.

Does any utilitarian system have a good answer to questions like these? If you ask a total utilitarian something like "how much morning rush-hour frustration would you be willing to inflict to people in order to prevent an hour of intense torture, and how exactly did you go about calculating the answer to that question", you're probably not going to get a very satisfying answer, either.

Comment author: Ghatanathoah 24 January 2014 08:08:18PM 0 points [-]

A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters.

Yes, and that is my precise point. Even if we assume a bounded utility function for human preferences, I think it's reasonable to assume that the bound is pretty huge. Which means that antinatalism/negative preference utilitarianism would be willing to inflict massive suffering on existing people to prevent the birth of one person who would have a better life than anyone on Earth has ever had up to this point, but still die with a lot of unfulfilled desires. I find this massively counter-intuitive and want to know how the antinatalist community addresses this.

I take it you mean to say that they don't spend all of their waking hours convincing other people not to have children, since it doesn't take that much effort to avoid having children yourself.

If the disutility they assign to having children is big enough they should still spend every waking hour doing something about it. What if some maniac kidnaps them and forces them to have a child? The odds of that happening are incredibly small, but they certainly aren't zero. If they really assign such a giant negative to having a child they should try to guard even against tiny possibilities like that.

Also, are they all transhumanists? For the typical person (or possibly even typical philosopher), infinite lifespans being a plausible possibility might not even occur as something that needs to be taken into account.

Yes, but from a preference utilitarian standpoint it doesn't need to actually be possible to live forever. It just has to be something that you want.

Does any utilitarian system have a good answer to questions like these? If you ask a total utilitarian something like "how much morning rush-hour frustration would you be willing to inflict to people in order to prevent an hour of intense torture, and how exactly did you go about calculating the answer to that question", you're probably not going to get a very satisfying answer, either.

Well, of course I'm not expecting an exact answer. But a ballpark would be nice. Something like "no more than x, no less than y." I think, for instance, that a total utilitarian could at least say something like "no less than a thousand rush hour frustrations, no more than a million."

Comment author: RomeoStevens 23 January 2014 09:50:20PM *  1 point [-]

Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering. Death is bad and causes negative experiences. I want to solve death before we have more kids, but I recognize this isn't realistic. It's worth pointing out that negative utilitarianism is incoherent. Prioritarianism makes slightly more sense.

Comment author: Ghatanathoah 24 January 2014 04:31:43AM 1 point [-]

Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering.

If I understand you correctly, the problem with doing this with negative utilitarianism is that it suggests we should painlessly kill everyone ASAP. The advantage of negative preference utilitarianism is that it avoids this because people have a preference to keep on living that killing would thwart.

It's worth pointing out that negative utilitarianism is incoherent.

Why? For the reason I pointed out, or for a different one? I'm not a negative utilitarian personally, but I think a few aspects of it have promise and would like to see them sorted out.
