Has anyone here ever addressed the question of why we should prefer

(1) Life Extension: Extend the life of an existing person by 100 years
to
(2) Replacement: Create a new person who will live for 100 years?


I've seen some discussion of how the utility of potential people fits into a utilitarian calculus. Eliezer has raised the Repugnant Conclusion, in which a world of 1,000,000 people with 1 util each is preferable to a world of 1,000 people with 100 utils each (a conclusion total utilitarianism endorses, since 1,000,000 total utils beats 100,000). He rejected it, he said, because he's an average utilitarian.

Fine. But in my thought experiment, average utility remains unchanged. So an average utilitarian should be indifferent between Life Extension and Replacement, right? Or is the harm done by depriving an existing person of life greater in magnitude than the benefit of creating a new life of equivalent utility? If so, why?

Or does the transhumanist feel indifferent between Life Extension and Replacement, but believe that efforts toward radical life extension have a much greater expected value than efforts to increase the birth rate?

 

(EDITED to make the thought experiment cleaner. Originally the options were: (1) Life Extension: Extend the life of an existing person for 800 years, and (2) Replacement: Create 10 new people who will each live for 80 years. But that version didn't maintain equal average utility.)


*Optional addendum: Gustaf Arrhenius is a philosopher who has written a lot about this subject; I found him via this comment by utilitymonster. Here's his 2008 paper, "Life Extension versus Replacement," which explores an amendment to utilitarianism that would allow us to prefer Life Extension. Essentially, we begin by comparing potential outcomes according to overall utility, as usual, but we then penalize outcomes if they make any existing people worse off.

So even though the overall utility of Life Extension is the same as Replacement, the latter is worse, because the existing person is worse off than he would have been in Life Extension. By contrast, the potential new person is not worse off in Life Extension, because in that scenario he doesn't exist, and non-existent people can't be harmed. Arrhenius goes through a whole list of problems with this moral theory, however, and by the end of the paper we aren't left with anything workable that would prioritize Life Extension over Replacement.
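A minimal sketch of that general shape in code (my illustration, not Arrhenius's actual formalism; the names, numbers, and penalty weight are made up):

```python
# Compare outcomes by total utility, then penalize an outcome to the extent that
# it leaves any *existing* person worse off than the alternative would have.
# People who never exist in an outcome generate no penalty there.
def adjusted_utility(outcome, alternative, existing_people, penalty_weight=1.0):
    """outcome / alternative: dicts mapping person -> lifetime utility."""
    total = sum(outcome.values())
    shortfall = sum(
        max(alternative.get(p, 0) - outcome.get(p, 0), 0)
        for p in existing_people
    )
    return total - penalty_weight * shortfall

life_extension = {"Alice": 180}            # Alice gets 100 extra years
replacement = {"Alice": 80, "Bob": 100}    # Alice dies on schedule; Bob is created

existing = ["Alice"]                       # Bob doesn't exist yet, so he can't be harmed
print(adjusted_utility(life_extension, replacement, existing))  # 180 - 0   = 180.0
print(adjusted_utility(replacement, life_extension, existing))  # 180 - 100 = 80.0
```

Equal totals, but Replacement is penalized because the existing person fares worse in it, while Life Extension takes no penalty on behalf of the never-created person.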

 


Here's his 2008 paper, "Life Extension versus Replacement," which explores an amendment to utilitarianism that would allow us to prefer Life Extension

I feel like the thing that should allow us to prefer life extension is the thing that makes people search for amendments to utilitarianism that would allow us to prefer life extension.

9Julia_Galef
When our intuitions in a particular case contradict the moral theory we thought we held, we need some justification for amending the moral theory other than "I want to."
4Luke_A_Somers
I think the point is, utilitarianism is very, very flexible, and whatever it is about us that tells us to prefer life extension should already be there -- the only question is, how do we formalize that?
0TheOtherDave
Presumably that depends on how we came to think we held that moral theory in the first place. If I assert moral theory X because it does the best job of reflecting my moral intuitions, for example, then when I discover that my moral intuitions in a particular case contradict X, it makes sense to amend X to better reflect my moral intuitions. That said, I certainly agree that if I assert X for some reason unrelated to my moral intuitions, then modifying X based on my moral intuitions is a very questionable move. It sounds like you're presuming that the latter is generally the case when people assert utilitarianism?
6Julia_Galef
Preferring utilitarianism is a moral intuition, just like preferring Life Extension. The former's a general intuition, the latter's an intuition about a specific case. So it's not a priori clear which intuition to modify (general or specific) when the two conflict.
2TheOtherDave
I don't agree that preferring utilitarianism is necessarily a moral intuition, though I agree that it can be. Suppose I have moral intuitions about various (real and hypothetical) situations that lead me to make certain judgments about those situations. Call the ordered set of situations S and the ordered set of judgments J. Suppose you come along and articulate a formal moral theory T which also (and independently) produces J when evaluated in the context of S. In this case, I wouldn't call my preference for T a moral intuition at all. I'm simply choosing T over its competitors because it better predicts my observations of the world; the fact that those observations are about moral judgments is beside the point. If I subsequently make judgment Jn about situation Sn, and then evaluate T in the context of Sn and get Jn' instead, there's no particular reason for me to change my judgment of Sn (assuming I even could). I would only do that if I had substituted T for my moral intuitions... but I haven't done that. I've merely observed that evaluating T does a good job of predicting my moral intuitions (despite failing in the case of Sn). If you come along with an alternate theory T2 that gets the same results T did except that it predicts Jn given Sn, I might prefer T2 to T for the same reason I previously preferred T to its competitors. This, too, would not be a moral intuition.
-1[anonymous]
Well, if you view moral theories as if they were scientific hypotheses, you could reason in the following way: if a moral theory/hypothesis makes a counterintuitive prediction, you could 1) reject your intuition, 2) reject the hypothesis ("I want to"), or 3) revise your hypothesis. It would be practical if one could actually try out a moral theory, but I don't see how one could go about doing that...
5Julia_Galef
Right -- I don't claim any of my moral intuitions to be true or correct; I'm an error theorist, when it comes down to it. But I do want my intuitions to be consistent with each other. So if I have the intuition that utility is the only thing I value for its own sake, and I have the intuition that Life Extension is better than Replacement, then something's gotta give.

I'm not comfortable spending my time and mental resources on these utilitarian puzzles until I am shown a method (or even a good reason to believe there is such a method) for interpersonal utility comparison. If such a method has already been discussed on Less Wrong, I would appreciate a link to it. Otherwise, why engage in metaphysical speculation of this kind?

6steven0461
This is most obviously a problem for preference utilitarians. The same preference ordering can be represented by different utility functions, so it's not clear which one to pick. But utilitarians needn't be preference utilitarians. They can instead maximize some other measure of quality of life. For example, lifetime hiccups would be easy to compare interpersonally. And if utility can be any measure of quality of life, then interpersonal utility comparison isn't the sort of question you get to refuse to answer. Whenever you make a decision that affects multiple people, and you take their interests into account, you're implicitly doing an interpersonal utility comparison. It's not like you can tell reality it's philosophically mistaken in posing the dilemma.
5Jayson_Virissimo
I don't think this will work; it sweeps the difficult part under the rug. When you identify utility with a particular measure of welfare (for example, lifetime hiccups), there really is no good reason to think we all get the same amount of (dis)satisfaction from a single hiccup. Some would be extremely distressed by a hiccup, some would be only slightly bothered, and others would laugh because they think hiccups are funny. If people actually do get different amounts of (dis)satisfaction from the units of our chosen measure of welfare (which seems to me very likely), then even if we minimize (I'm assuming hiccups are supposed to be bad) the total (or average) number of lifetime hiccups between us, we still don't have very good reason to think that this state of affairs really provides "the greatest amount of good for the greatest number" like Bentham and Mill were hoping for.
0steven0461
The assumption wasn't that minimizing hiccups maximizes satisfaction, but that it's hiccups rather than satisfaction that matters. Obviously we both agree this assumption is false. We seem to have some source of information telling us lifetime hiccups are the wrong utility function. Why not ask this source what is the right utility function?
4Jayson_Virissimo
We could settle this dispute on the basis of mere intuition if our intuitions didn't conflict so often. But they do, so we can't.
2[anonymous]
As a first rough approximation, one could compare fMRIs of people's pleasure or pain centers. But no, I largely agree with you. If one chooses the numbers so that the average utility of both scenarios is the same, then I don't see any reason to prefer one to the other. If instead one is trying to make some practical claim, it seems clear that in the near future humanity overwhelmingly prefers making new life to researching life extension.

As a first rough approximation, one could compare fMRIs of people's pleasure or pain centers.

Hedons are not utilons. If they were, wireheading (or entering the experience machine) would be utility-maximizing.

1[anonymous]
Oh. Right.
0[anonymous]
In order for this to be true, it would have to be sustainable enough that the pleasure gain outweighs the potential pleasure loss from a possibly longer life without wireheading/experience machine. For utilitarians, externalities of one person's wireheading affecting other lives would have to be considered as well.
0endoself
1. Create an upload of Jayson Virissimo (for the purpose of getting more time to think). 2. Explain to him, in full detail, the mental states of two people. 3. Ask him how he would choose if he could either cause the first person to exist with probability p or the second person to exist with probability q, in terms of p and q.
5Jayson_Virissimo
At best, this is a meta-method, rather than a method for interpersonal utility comparisons, since I still don't know which method my uploaded-self would use when choosing between the alternatives. At worst, this would only tell us how much utility my uploaded-self gets from (probably) causing a person to exist with a particular mental state and is not actually an interpersonal utility comparison between the two persons.
0torekp
In some senses of "utility", your uploaded-self's utility rankings of "create person A" and "create person B" are strongly dependent on his estimates of how much A's life has utility for A, and B's has for B. At least if you have a typical level of empathy. But then, this just reinforces your meta-method point. However ... dig deeper on empathy, and I think it will lead you to steven0461's point.
0endoself
This is at least useful for creating thought experiments where different ideas have different observable consequences, showing that this isn't meaningless speculation. We have reason to care about the definition of 'utility function' that is used to describe decisions, since those are, by definition, how we decide. Hedonic or preferential functions are only useful insofar as our decision utilities take them into account.

A currently living person doesn't want to die, but a potentially living person doesn't yet want to live, so there's an asymmetry between the two scenarios.

Is that still true in Timeless Decision Theory?

0tondwalkar
At the moment, I'd prefer never having existed to death. This might change later if I gain meaningful accomplishments, but I'm not sure how likely that is.
3Julia_Galef
I agree, and that's why my intuition pushes me towards Life Extension. But how does that fact fit into utilitarianism? And if you're diverging from utilitarianism, what are you replacing it with?
4[anonymous]
That birth doesn't create any utility for the person being born (since it can't be said to satisfy their preferences), but death creates disutility for the person who dies. Birth can still create utility for people besides the one being born, but then the same applies to death and disutility. All else being equal, this makes death outweigh birth.
1endoself
To make this more precise think about what you would do if you had to choose between Life Extension and Replacement for a group of people, none of whom yet exist. I think the intuition in favour of Life Extension is the same, but I am not sure (I also find it very likely that I am actually indifferent ceteris paribus, for some value of 'actually' and sufficiently large values of 'paribus').
1Lightwave
Current people would prefer to live for as long as possible, but should they, really? What if they prefer it in the same sense that some prefer dust specks over torture? How can you justify extension as opposed to replacement apart from current people just wanting it?
0Peter Wildeford
I thought everything in utilitarianism was justified by what people want, as in what maximizes their utility... How is the fact that people want extension as opposed to replacement not a justification?
0Lightwave
What maximizes their utility might not be what they (currently) want, e.g. a drug addict might want more drugs, but you probably wouldn't argue that just giving him more drugs maximizes his utility. There's a general problem that people can change what they want as they think more about it, become less biased/irrational, etc, so you have to somehow capture that. You can't just give everyone what they, at that current instant, want.
0Peter Wildeford
But wouldn't more life maximize the individual's utility generally? It's not like people are mistaken about the value of living longer. I get your argument, but the fact that people want to live longer (and would still want to even after becoming ideally rational and fully informed) means that the asymmetry is still there.
0Lightwave
Let me try to explain it this way: let's say you create a model of (the brain of) a new person on a computer, but you don't run the brain yet. Can you say the person hasn't been "born" yet? Are we morally obliged to run his brain (so that he can live)? Compare this to a person who is in a coma. He currently has no preferences; he would've preferred to live longer, if he were awake, but the same thing applies to the brain in the computer that's not running. Additionally, it seems life extensionists should also commit to the resurrection of everyone who's ever lived, since they also wanted to continue living, and it could be said that being "dead" is just a temporary state.
0Peter Wildeford
I'm going to get hazy here, but I think the following answers are at least consistent: Yes. No. They are not equivalent, because the person in the coma did live. Yes, I do think life extensionists are committed to this. I think this is why they endorse Cryonics.
0Lightwave
Well, it seems it comes down to the above being something like a terminal value (if those even exist). I personally can't see how it's justified that a certain mind that happened (by chance) to exist at some point in time is more morally significant than other minds that would equally like to be alive but never had the chance to be created. It's just arbitrary.
0Peter Wildeford
Upon further reflection, I think I was much too hasty in my discussion here. You said that "Compare this to a person who is in a coma. He currently has no preferences". How do we know the person in the coma has no preferences? I'm going to agree that if the person has no preferences, then there is nothing normatively significant about that person. This means we don't have to turn the robot on, we don't have to resurrect dead people, we don't have to oppose all abortion, and we don't have to have as much procreative sex as possible. On this further reflection, I'm confused as to what your objection is or how it makes life extension and replacement even. As the original comment says, life extension satisfies existing preferences whereas replacement does not, because no such preferences exist.

I am an average utilitarian with one modification: Once a person exists, they are always counted in the number of people I average over, even if they're dead. For instance, a world where 10 people are born and each gets 50 utility has 10×50/10 = 50 utility. A world where 20 people are born, then 10 of them die and the rest get 50 utility each, has (10×50 + 10×0)/20 = 25 utility. AFAICT, this method has several advantages:

  1. It avoids the repugnant conclusion.
  2. It avoids the usual argument against average utilitarianism, namely that it advocates killing off people experiencing low (positive) utility.
  3. It favors life extension over replacement, which fits both my intuitions and my interests. It also captures the badness of death in general.
  4. A society that subscribed to it would revive cryopreserved people.
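In code, the proposal reads something like this (a toy sketch using the comment's own numbers; assigning the dead zero utility is the comment's assumption, not a general claim):

```python
# Modified average utilitarianism as described above: average over everyone
# who has EVER existed, so the dead stay in the denominator.
def modified_average(utilities):
    """utilities: one lifetime-utility entry per person who has ever existed."""
    return sum(utilities) / len(utilities)

world_1 = [50] * 10               # 10 people born, each gets 50 utility
world_2 = [50] * 10 + [0] * 10    # 20 people born; 10 die with 0 utility

print(modified_average(world_1))  # 50.0
print(modified_average(world_2))  # 25.0 -- the dead still drag the average down
```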
14Larks

This doesn't seem to be monotonic in Pareto improvements.

Suppose I had the choice of someone popping into existence for 10 years on a distant planet, living a worthwhile life, and then disappearing. They would prefer this to happen, and so might everyone else in the universe; however, if others' utilities were sufficiently high, this person's existence might lower the average utility of the world.
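To put illustrative numbers on this (mine, not Larks'): suppose 100 people already exist at utility 90 each, and the new person would get utility 60 from their ten worthwhile years. Everyone, including the new person, prefers that they exist, yet the measure falls:

$$\frac{100 \times 90}{100} = 90 \quad\longrightarrow\quad \frac{100 \times 90 + 60}{101} \approx 89.7$$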

4Normal_Anomaly
That is... a pretty solid criticism. Half of the reason I posted this was to have people tear holes in it. I'm looking for some way of modeling utilitarianism that adequately expresses the badness of death and supports resurrecting the dead, but maybe this isn't it. Perhaps a big negative penalty for deaths or "time spent dead," though that seems inelegant. EDIT: Looking at this again later, I'm not sure what counts as a Pareto improvement. Someone popping into existence, living happily for one day, and then disappearing would not be a good thing according to (my current conception of) my values. That implies there's some length of time or amount of happiness experienced necessary for a life to be worth creating.
0jhuffman
Isn't there something a little bit broken about trying to find a utility system that will produce the conclusions you presently hold? How would you ever know if your intuitions were wrong?
4Normal_Anomaly
What basis do I have for a utility system besides my moral intuitions? If my intuitions are inconsistent, I'll notice that because every system I formulate will be inconsistent. (Currently, I think that if my intuitions are inconsistent the best fix will be accepting the repugnant conclusion, which I would be relatively okay with.)
0jhuffman
I understand what you are saying. But when I start with a conclusion, what I find myself doing is rationalizing. Even if my reasons are logically consistent I am suspicious of any product based on this process.
2Normal_Anomaly
If it helps, the thought process that produced the great^4-grandparent was something like this: "Total utilitarianism leads to the repugnant conclusion; average leads to killing unhappy people. If there was some middle ground between these two broken concepts... hm, what if people who were alive and are now dead count as having zero utility, versus the utility they could be experiencing? That makes sense, and it's mathematically elegant. And it weighs preserving and restoring life over creating it! This is starting to look like a good approximation of my values. Better post it on LW and see if it stands up to scrutiny."
8KatieHartman
It seems that you could use this to argue that nobody ever ought to be born unless we can ensure that they'll never die (assuming they stay dead, as people tend to do now).
3Normal_Anomaly
I bite this bullet to an extent, but I don't think the argument is that strong. If someone has a better-than-average life before they die, they can still raise the average, especially if everyone else dies too. I'm not sure how to model that easily; I'm thinking of something like: the utility of a world is the integral of all the utilities of everyone in it (all the utility anyone ever experiences), divided by the number of people who ever existed. In this framework, I think it would be permissible to create a mortal person in some circumstances, but they might be too rare to be plausible.
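One way to write down the formulation being gestured at (my rendering, where $u_i(t)$ is the momentary utility of person $i$, integrated over their whole existence):

$$U_{\text{world}} \;=\; \frac{\sum_{i \in E} \int u_i(t)\,dt}{|E|}, \qquad E = \{\text{everyone who ever exists}\}$$

The dead stop contributing to the numerator but remain in the denominator, which is what makes death (and non-revival) costly on this view.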
3[anonymous]
I like this. Captures everything nicely. Em-ghettos and death both suck. It is good to have a firm basis to argue against them.

This actually reminds me of a movie trailer I saw the other day, for a movie called In Time. (Note: I am not at all endorsing it or saying you should see it. Apparently, it sucks! lol)

General premise of the sci-fi world: people live normally until 25. Then you stop aging and get a glowy little clock on your arm that counts down how much time you have left to live. "Time" is pretty much their version of money. You work for time. You trade time for goods, etc. Rich people live forever; poor people die very young. (Pretty much imagine if overdrawing your bank account once means that you die.)

Anyway, when I saw this preview, being the geek I am, I thought: "That doesn't make sense!"

The reason it doesn't make sense has to do with the extension vs. replacement argument. Until the age of at least 16, and more generally 22-ish, people are a drain on society rather than a benefit to it. The economic cost of maintaining a child is not equal to the output of a child. (I'm obviously not talking about love, the fulfillment of the parents, etc.)

This society's idea is that people of working age would be required to provide the economic cost for their life. However what would act... (read more)

Response a) My life gets better with each year I live. I learn new things and make new friends. Two people who live 12 years each will not have the same amount of happiness as I will have had by my 24th birthday. I see no reason why the same should not hold for even longer lifespans.

Response b) I privilege people that already exist over people who do not exist. A person living 800 years is more valuable to me EVEN if you say the same amount of happiness happens in both cases. I care about existing people being happy, and about not creating sad people, but I don't particularly care about creating new happy entities unless it's necessary for the perpetuation of humanity, which is something I value.

Response c) The personal response: I value my own happiness significantly higher than that of other people. One year of my own life is worth more to me than one year of someone else's life. If my decision were between creating 10 people as happy as I am and making myself 10 times happier, I would make myself 10 times happier.

Finally, you don't seem to realize what is meant by caring about average utility. In your scenario, the TOTAL years lived remains the same in both cases, but the AVERAGE utility goes far down in the second case. 80 years per person is a lot less than 800 years per person.

2Logos01
Not only that, but there is a decent claim to be made -- within certain bounds -- that ten people who live only 100 years each are less preferable to a utilitarian than one person who lives 1,000 years, so long as we accept the notion that deaths cause others to experience negative utility. The same number of years are lived, but even without attempting to average utility, the 10x100 scenario has 9 additional negative-utility events that the 1x1,000 scenario does not.
6Prismattic
Implied assumption: death causes more disutility to others than birth causes utility to others. Might be true, but ought to be included explicitly in any such calculation.
0Logos01
True.
1Julia_Galef
Thanks -- I fixed the setup.
3[anonymous]
Please don't do that. OP's comment doesn't make any sense now.
1Julia_Galef
Ah, true! I edited it again to include the original setup, so that people will know what Logos01 and drethelin are referring to.

First thought: I accept the repugnant conclusion because I am a hard utilitarian. I also take the deals in the lifespan dilemma, because my intuition that the epsilon chances of survival "wouldn't be worth it" is due to scope insensitivity.

Second: I attach much more disutility to death than utility to birth, for two reasons, one good and one bad. The bad reason is that I selfishly do not want to die. The good reason, which I have not seen mentioned, is that the past is not likely to repeat itself. Memories of the past have utility in themselves! History is just lines on paper, sometimes with videos, sometimes not, but it doesn't compare to actual experience! Experience and memory matter. Discounting them is an error in utilitarian reasoning.

0jhuffman
The exact circumstances and memories of a person's life will not repeat, but that's just as good an argument for creating new people, who will also have unique memories that otherwise would not happen. While some remarkable memories from the past would be in some ways special to me if I could trace any sort of cultural lineage through them, memories from closely intertwined lives would interest me less than other memories that would be completely novel to me.
1Grognor
You're right. But here's the thing. I should have said it in my original comment, but the argument holds because learning from history is important, and, as we've all shown, that's REALLY HARD to do when everyone keeps dying. And I also strongly value the will to awesomeness, striving to be better and better (even before I read Tsuyoku Naritai), and I expect that people start at 0 and increase faster than linearly over time. In other words, the utility is still greater for the people who are still alive.

I'm perfectly prepared to bite this bullet. Extending the life of an existing person a hundred years and creating a new person who will live for a hundred years are both good deeds; they create approximately equal amounts of utility, and I believe we should try to do both.

1torekp
I agree. Note that this is independent of utilitarianism per se.

I already exist. I prefer to adopt a ruleset that will favor me continuing to exist. Adopting a theory that does not put disutility on me being replaced with a different human would be very disingenuous of me. Advocating the creation of an authority that does not put disutility on me being replaced with a different human would also be disingenuous.

To spread your moral theory, you need the support of people who live, not people who may live. Thus, your moral theory must favor their interests.

[edit] Is this metautilitarianism?

1jhuffman
I am rich because I own many slaves. I prefer to adopt a ruleset that will favor me by continuing to provide me with slaves. ... etc.
-1FeepingCreature
Which is not necessarily a bad choice for you! Very few people are trying to genuinely choose the most good for the most people; they're trying to improve their group status by signalling social supportiveness. There's no point to that if your group will be replaced; even suicide bombers require the promise of life after death or rewards for their family.

In the Replacement scenario, twice as many deaths occur. Since the expectation of approaching death is unpleasant, and death itself is unpleasant for relatives and friends, doubling the number of deaths induces additional disutility, ceteris paribus.

I don't see how this is a paradox at all.

Scenario (1) creates 100 years of utility, minus the death of one person. Scenario (2) creates 100 years of utility, plus the birth of one person, minus the death of two people. We can set them equal to each other and solve for the variables: you should prefer scenario (1) to scenario (2) iff the negative utility caused by a death is greater than the utility caused by a birth. Imagine that a child was born, and then immediately died ten minutes later. Is this a net positive or negative utility? I vote negative ... (read more)

2[anonymous]
I'm not sure about the Children of Men example: a birth in that situation is only important in that it implies MORE possible births. If it doesn't, I still say that a death outweighs a birth. But here's another extremely inconvenient possible world: People aren't 'born' in the normal sense -- instead they are 'fluctuated' into existence as full-grown adults. Instead of normal 'death', people simply dissolve painlessly after a given amount of time. Nobody is aware that at some point in the future they will 'die', and whenever someone does, all currently existing people have their memories instantly modified to remove any trace of them. I still prefer option (1) in this scenario, but I'm much less confident of it.
0Ghatanathoah
This scenario is way, way worse than the real world we live in. It's bad enough that some of my friends and loved ones are dead. I don't want to lose my memories of them too. The social connections people form with others are one of the most important aspects of their lives. If you kill someone and destroy all their connections at the same time, you've harmed them far more badly than if you had just killed them. Plus, there's also the practical fact that if you are unaware of when you will "dissolve," it will be impossible for you to plan your life to properly maximize your own utility. What if you had the choice between going to a good movie today and a great movie next week, and were going to dissolve tomorrow? If you didn't know that you were going to dissolve, you'd pick the great movie next week, and would die having had less fun than you otherwise could have had. I'd prefer option 1 in this scenario, and in any other, because the title of the OP is a misnomer: people can't be replaced. The idea that you are "replacing" someone if you create a new person after they die implies that people are not valuable, that they are merely containers for holding what is really valuable (happiness, utility, etc.), and that it does not matter if a container is destroyed as long as you can make a new one to transfer its contents into. I completely disagree with this approach. Utility is valuable because people are valuable, not the other way around. A world with lower utility where fewer people have died is better than a world of higher utility with more death.

I prefer 1 to 2 because I'm currently alive, and so 1 has a more direct benefit for me than 2. I don't know if I have any stronger reasons; I don't think I need any, though.

I really need to fix my blog archive, but I discussed this in the post at the top of this page.

0Julia_Galef
Thanks -- but if I'm reading your post correctly, your arguments hinge on the utility experienced in Life Extension being greater than that in Replacement. Is that right? If I stipulate that the utility is equal, would your answer change?
3steven0461
If utility per life year is equal, and total life years are equal, then total utility is equal and total utilitarianism is indifferent. But for the question to be relevant for decision-making purposes, you have to keep constant not utility itself, but various inputs to utility, such as wealth. Nobody is facing the problem of how to distribute a fixed utility budget. (And then after that, of course, you can analyze how those inputs themselves would vary as a result of life extension.) I object to the phrasing "utility experienced". Utility isn't something you experience; it's a statement about a regularity in someone's preference ordering -- in this case, mine.

I think it comes down to how you value relationships. I don't want my family replaced, so replacing one of them with someone in a similarly valuable mental state might be equal in terms of their mental state, but because you've broken a relationship I value, the total utility has dropped. Other than this, I'm not sure I can see a relevant difference between extension and replacement.

I assume everyone is familiar with the following argument:

Premise: You are not indifferent about the utility of people who will come to exist, if they definitely will exist. Conclusion: You can't be in general indifferent between people existing and not existing.

World A: Person has 10 utility
World B: Person does not exist
World C: Person has 20 utility

By hypothesis, you're not indifferent between A and C. Hence, by transitivity, you can't be indifferent both between A and B and between B and C.
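Spelling out the transitivity step (my rendering, writing $\sim$ for indifference):

$$A \sim B \;\text{ and }\; B \sim C \;\Rightarrow\; A \sim C$$

Since by hypothesis $A \not\sim C$ (10 utils vs. 20 utils), at least one of $A \sim B$ or $B \sim C$ must fail; that is, you can't be indifferent to the person's existence in both cases.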

Ignoring the fact that replacements tend to be expensive, I'd consider them of equal utility if I believed in personal identity. I don't, so not only are they equally good, they are, for all intents and purposes, the same choice.

0[anonymous]
Downvoted for using such an ill-defined term as "personal identity" without additional specification.
0DanielLC
I don't think there's any fundamental connection between past and future iterations of the same person. You die and are replaced by someone else every moment. Extending your life and replacing you are the same thing.
5orthonormal
I don't need to posit any metaphysical principle; my best model of the universe (at a certain granularity) includes "agents" composed of different mind-states across different times, with very similar architecture and goals, connected by memory to one another and coordinating their actions.
0DanielLC
Exactly what changes if you remove the "agents", and just have mind-states that happen to have similar architecture and goals?
3orthonormal
At present, when mind-copying technology doesn't exist, there's an extremely strong connection exhibited by the mind-states that occupy a given cranium at different times, much stronger than that exhibited by any two mind-states that occupy different crania. (This shouldn't be taken naively -- I and my past self might disagree on many propositions that my current self and you would agree on -- but there's still an architectural commonality between my present and past mind-states that's unmistakably stronger than that between mine and yours.) Essentially, grouping together mind-states into agents in this way carves reality at its proper joints, especially for purposes of deciding on actions now that will satisfy my current goals for future world-states.
0DanielLC
So does specifying rubes and bleggs. This is what I mean by there being nothing fundamentally separating them. It might matter whether it's red or blue, or whether it's a cube or an egg, but it can't possibly matter whether it's a rube or a blegg, because it isn't a rube or a blegg.
5orthonormal
At present, there aren't any truly intermediate cases, so "agents with an identity over time" are useful concepts to include in our models; if all red objects in a domain are cubic and contain vanadium, "rube" becomes a useful concept. In futures where mind-copying and mind-engineering become plentiful, this regularity will no longer be the case, and our decision theories will need to incorporate more exotic kinds of "agents" in order to be successful. I'm not talking about agents being fundamental- they aren't- just that they're tremendously useful components of certain approximations, like the wings of the airplane in a simulator. Even if a concept isn't fundamental, that doesn't mean you should exclude it from every model. Check instead to see whether it pays rent.
-3DanielLC
My point isn't that it's a useless concept. It's that it would be silly to consider it morally important.
6Vladimir_Nesov
You argued that a concept "isn't fundamental", because in principle it's possible to construct things gradually escaping the current natural category, and therefore it's morally unimportant. Can you give an example of a morally important category?
2orthonormal
Sorry, but my moral valuations aren't up for grabs. I'm not perfectly selfish, but neither am I perfectly altruistic; I care more about the welfare of agents more like me, and particularly about the welfare of agents who happen to remember having been me. That valuation has been drummed into my brain pretty thoroughly by evolution, and it may well survive in any extrapolation. But at this point, I think we've passed the productive stage of this particular discussion.
1Curiouskid
like memory?
1DanielLC
There is nothing morally important about remembering being someone. There's no reason there has to be the same probability of being you and being one of the people you remember being. Memory exists, but it's not relevant. Read The Anthropic Trilemma. I agree with the third horn.
0jacob_cannell
I find this odd because it sounds like the exact opposite of the patternist view of identity, where memory is all that is relevant. Would you not mind then if some process erased all of your memories? Or replaced them completely with the memories of someone else?
0DanielLC
It's the lack of the patternist view of identity. I have no view of identity, so I disagree. It would be likely to cause problems, but beyond that, no. I don't see why losing your memory would be intrinsically bad. I think the main thing I'm against is that any of this is fundamental enough to have any effect on anthropics. Erasing your memory and replacing it with someone else's who's still alive won't make it half as likely to be you, just because there's only a 50% chance of going from past him to you. Erasing your memory every day won't make it tens of thousands of times as likely to be one of them, on the basis that now you're tens of thousands of people. You could, in principle, have memory mentioned in your utility function, but it's not like it's the end of the world if someone dies. I mean that in the sense that existence ceases for them or something like that. You could still consider it bad enough to warrant the phrase "it's like the end of the world".
0[anonymous]
Don't know if I would call a mind-state a person; persons usually respond to things, think, and so on, and a mind-state can't do any of that. It's somewhat like saying "a movie is made of little separate movies" when it's actually made of separate frames. And death implies the end of a person, not of a mind-state. It might be a bit silly of me to make all this fuss about definitions, but it's already a quite messy subject; let's not make it any messier.
-1DanielLC
Fine, there's no fundamental connection between separate mind-states. Personhood can be defined (mostly), but it's not fundamentally important whether or not two given mind-states are connected by a person. All that matters is the mind-states, whether you're talking about morality or anthropics.
0[anonymous]
All this is of course very speculative, but couldn't you just reduce mind-states into sub-mind-states? If you look at split-brain patients, where the corpus callosum has been cut, the two hemispheres behave/report in some situations as if they were two different people; it seems (at least to me) that there are no such irreducible quanta as "brain-states" either. My point is that you could make the same argument: it's not fundamentally important whether or not two given sub-mind-states are connected by a mind-state. All that matters is the sub-mind-states.
-2DanielLC
It seems to me that my qualia are all experienced together, or at least the ones that I'm aware of. As such, there is more than just sub-mind-states. There is a fundamental difference. For what it's worth, I don't consider this difference morally relevant, but it's there.

I guess that my own response to the repugnant conclusion tends to be along the lines that mere duplication does not add value, and the more people there are, the closer the inevitable redundancy will bring you to essentially adding duplicates of people you already have. At least as things are at present, giving an existing person an extra hundred years seems like it will involve less redundancy than adding yet another person with a hundred year lifespan to the many we already have and are constantly adding.

We can deal with this with a thought experiment that engages our intuitions more clearly, since it doesn't involve futuristic technology: Is it okay to kill a fifteen year old person who is destined to live a good life if doing so will allow you to replace them with someone who will live a life that is as good, or better, as the fifteen year old's remaining years would have been? What if the fifteen year old in question is disabled, so their life is a little more difficult, but still worth living, while their replacement would be an able person? Would i... (read more)

This is a really interesting issue which I suspect will only get more important over time. I largely agree with Xachariah, but I see a greater dependency on personal preference.

Another way of looking at the problem is to consider individual preferences. Imagine a radical sustainable future where everyone gets to choose between an extended life with no children or a normal life with 1 child (or 2 per couple). I'd be really interested in polls on that choice. Personally I'd choose extension over children. I also suspect that polls may reveal a significa... (read more)

As you phrased it, life extension and replacement seem roughly similar to me. I don't feel the need to modify my utilitarianism to strongly prefer life extension. There are some differences, though:

  • Perhaps the later life years are less pleasant than the earlier ones? You're less physically able, more cynical, less open to new ideas? Or perhaps the later life years are more pleasant than the earlier ones? You've had the time to get deeply into subjects and achieve mastery, you could have some very strong old friendships, you have a better model of th
... (read more)

This presumes that extending the life of an existing person by 100 years precludes the creation of a new person with a lifespan of 100 years. We will be motivated to prefer the former scenario because it is difficult for us to feel its relevance to the latter.

I currently route around this by being an ethical egoist, though I admit that I still have a lot to learn when it comes to metaethics. (And I'm not just leaving it at "I still have a lot to learn", either -- I'm taking active steps to learn more, and I'm not just signalling that, and I'm not just signalling that I'm not signalling that, etc.)

0[anonymous]

Why does one have to be better than the other?

4Julia_Galef
One doesn't have to be better than the other. That's what's in dispute. I think making this comparison is important philosophically, because of the implications our answer has for other utilitarian dilemmas, but it's also important practically, in shaping our decisions about how to allocate our efforts to better the world.
-2[anonymous]

But in my thought experiment, average utility remains unchanged.

The average utility, counting only those two people, is unchanged (as long as we assume that life from 0-100 is as pleasurable as life from 100-200). But firstly, the utility of other humans should be taken into account: the loved ones of the person already living, the likely pleasure given to others by younger people in comparison to older people, the expected resources consumed, etc.

But perhaps your thought experiment supposes that these expected utility calculations all happen to be equal ... (read more)

[This comment is no longer endorsed by its author]