Distinction between "creating/preventing future lives" and "improving future lives that are already expected to exist"?
I'm writing something (mostly for myself right now) about how, if you're somewhat of a utilitarian, a very wide range of population ethics principles (total utilitarianism, average utilitarianism, and critical-level utilitarianism with any critical level) will imply that the population size of some countries is strongly non-neutral, in the sense that changing the number of people in those countries is worth a surprisingly large reduction in average income (>2% income reduction for a 1% population increase/decrease).
Part of what I wrote used an assumption that is shared by all the utilitarian population ethics principles I know of: if you prevent the birth of someone with utility X and cause the birth of someone else with utility Y (with Y > X), that's just as good as causing a not-yet-born person to have utility Y instead of X. In fact, population ethics is not needed to make this comparison, since neither outcome changes the population size. But it's not too far-fetched to think that the two situations are different: in the first, the Y-utility person is a different person from the X-utility person, while in the second they could be argued to be the same person. Good arguments have been made that the second outcome actually produces a different person too, because very small things, like which egg/sperm you came from, can change your identity (Parfit's Nonidentity Problem). So I think my assumption is reasonable, but I'm concerned that I don't know what the best arguments against it are.
What are the most well-known utilitarian or non-utilitarian consequentialist theories that make a distinction between "different future people" and "the same future person"? Is there a consistent way to make this distinction "fuzzy", so that an event like being conceived by a different sperm is less "identity-changing" than being born on the other side of the world to completely different parents?
Comments (5)
There aren't any. It's an inherently deontological idea.
Suppose Alice or Bob can exist, but not both. Under deontology, you could talk about which one of them will exist if you do nothing, and ask if it's a good idea to change it. You might decide that you can't play god and you have to leave it, that you should make sure the one with a better life comes into existence, or that since they're not born yet neither of them have any rights and you can decide whichever you like.
Under consequentialism, it's a meaningless question. There is one universe with Alice. There is one with Bob. You must choose which you value more. Choosing not to act is a choice.
If Alice and Bob have the same utility, then you should be indifferent. If you consider preventing the birth of Alice with utility X and causing the birth of Bob with utility Y, that's the same as swapping Alice (utility X) for Bob (utility X), then increasing Bob's utility from X to Y. The net change in total utility is 0 + (Y − X) = Y − X.
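The decomposition in the comment above can be sketched in a few lines. This is just an illustration with made-up utility numbers, not part of the original argument; the point is that the direct replacement and the two-step decomposition give the same net change.

```python
# Sketch of the decomposition: replacing Alice (utility x) with Bob
# (utility y) equals a neutral equal-utility swap plus an improvement.
# The values of x and y are arbitrary illustrative numbers.

def replace_directly(x, y):
    """Net change: prevent Alice (utility x), create Bob (utility y)."""
    return -x + y

def replace_in_two_steps(x, y):
    """Same outcome, decomposed into a swap and an improvement."""
    swap = -x + x          # Alice (x) out, Bob (x) in: net 0
    improvement = y - x    # then Bob improved from x to y
    return swap + improvement

x, y = 3.0, 5.0
assert replace_directly(x, y) == replace_in_two_steps(x, y) == y - x
```

Under this accounting, the identity of the person carrying the utility drops out entirely, which is exactly the assumption the original question is probing.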
Is there a separate name for "consequentialism over world histories" in comparison to "consequentialism over world states" ?
What I mean is, say you have a scenario where you can kill off person A and replace him with a happier person B. As I understand the terms, deontology might say "don't do it, killing people is bad". Consequentialism over world states would say "do it, utility will increase" (maybe with provisos that no one notices or remembers the killing). Consequentialism over world histories would say "the utility contribution of the final state is higher with the happy person in it, but the killing event subtracts utility and makes a net negative, so don't do it".
I don't know if there's a name for it. In general, consequentialism is over the entire timeline. You could value events that happen in a specific order, or value events that happen earlier, etc. I don't like the idea of judging based on things like that, but that's just part of my general dislike of judging based on things that cannot be subjectively experienced. (You can subjectively experience the memory of things happening in a certain order, but each instant of remembering is itself instantaneous, and you'd have no way of knowing if the instants happened in a different order, or even if some of them didn't happen.)
It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob. I was talking about preventing the existence of Alice to make way for Bob. Alice is not dying. I am removing the potential for her to exist. But potential is just an abstraction. There is not some platonic potential of Alice floating out in space that I just killed.
Due to loss aversion, losing the potential for Alice may seem worse than gaining the potential for Bob, but this isn't something that can be justified on consequentialist grounds.
Yes, that makes the most sense.
No no, I understand that you're not talking about killing people off and replacing them; I was just trying (unsuccessfully) to give the clearest example I could.
And I agree with your consequentialist analysis of indifference between the creation of Alice and Bob if they have the same utility ... unless "playing god events" have negative utility.
If you already have a way to compare utilities of different moral agents, you should examine whether that method holds up whether or not differences in identity arise. You could of course identify the moral identity of a person by how they impact the global utility function. However, the change in utility relative to a person's own values need not map one-to-one onto the global function. Not handling the conversion would be like assuming that $ and £ contribute equally to wealth at their face numerical values. If I have the choice to create Clippy the paperclip maximiser or Roger the rubberband maximiser, there is probably some amount of paperclip-utility to Clippy that corresponds to a given amount of rubberband-utility to Roger. But I have a hard time imagining how I would come to know that amount.
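The currency analogy above can be made concrete with a small sketch. The exchange rates here are entirely invented for illustration; the comment's actual point is that nothing tells us what these rates should be.

```python
# Hypothetical sketch: comparing agent-relative utilities requires an
# exchange rate into a common scale, like converting currencies to a
# reference currency before adding them. The rates are made up.

GLOBAL_UTILS_PER_UNIT = {
    "clippy_paperclips": 0.5,   # assumed: 1 paperclip-util = 0.5 global utils
    "roger_rubberbands": 2.0,   # assumed: 1 rubberband-util = 2.0 global utils
}

def to_global(agent_scale, amount):
    """Convert an agent-relative utility amount to the global scale."""
    return GLOBAL_UTILS_PER_UNIT[agent_scale] * amount

# Under these invented rates, 8 paperclip-utils to Clippy are worth the
# same globally as 2 rubberband-utils to Roger.
assert to_global("clippy_paperclips", 8) == to_global("roger_rubberbands", 2)
```

Adding raw agent-relative numbers without such a conversion is the "$ equals £ at face value" mistake; the philosophical difficulty is that, unlike currencies, there is no market that fixes the rates.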