endoself comments on Non-personal preferences of never-existed people - Less Wrong Discussion
I want to bring people into existence to satisfy my own preferences. Of course, everything I want tautologically satisfies my own preferences, but I decide that bringing people into existence is good because of the value of their lives, not because they would have chosen to exist.
Good
I'd somewhat disagree with you (at least in the strong, repugnant-conclusion form of your argument), but this is a much more defensible argument than ones that implicitly rely on the preferences of non-existent people.
What do you mean by this? When I talked about the value of people's lives, I was referring to people's lives insofar as they have value, not implying that all lives inherently have value just by existing.
I was referring to this type of argument: http://en.wikipedia.org/wiki/Repugnant_conclusion and making unwarranted assumptions about how you would handle these cases.
Oh, the original repugnant conclusion. I thought you were just drawing an analogy to it. Anyway, I think that people only find this conclusion repugnant because of scope insensitivity.
I find it repugnant because I find it repugnant. Any population ethic that is utilitarian is as good as any other; mine is of a type that rejects the repugnant conclusion. Average utilitarianism, to pick one example, is not scope insensitive, but rejects the RC (I personally think you need to be a bit more sophisticated).
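The contrast drawn here can be sketched numerically. Below is a toy comparison (population sizes and per-person utility levels are illustrative assumptions, not values from this discussion) showing that a total view prefers the vast, barely-worth-living population while an average view does not:

```python
# Toy model: total vs. average utilitarianism on the repugnant conclusion.
# All numbers are illustrative assumptions.

def total_utility(population, per_person_utility):
    """Sum of utility over a uniform population."""
    return population * per_person_utility

def average_utility(population, per_person_utility):
    """Average utility; with a uniform population this is just the per-person level."""
    return per_person_utility

world_a = (1_000, 100.0)           # small population, very good lives
world_z = (100_000_000, 0.01)      # vast population, lives barely worth living

# Total utilitarianism prefers world Z -- the repugnant conclusion...
assert total_utility(*world_z) > total_utility(*world_a)
# ...while average utilitarianism prefers world A, rejecting it.
assert average_utility(*world_a) > average_utility(*world_z)
```

Note that the average view is still sensitive to scope in the sense that it aggregates over everyone; it simply normalizes by population size.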
You sound a bit like Self-PA here. You do realize that it is possible to misjudge your preferences due to factual mistakes? That's what the people in Eliezer's examples of scope insensitivity were doing. I don't see how you could determine the utility of one billion happy lives just by asking a human brain how it feels about the matter (i.e. without more complex introspection, preferably involving math).
Average utilitarianism leads to the conclusion that if someone of below-average personal experiential utility (meaning the utility that they experience, rather than the utility function that describes their preferences) can be removed from the world without affecting anyone else's personal experiential utility, then this should be done. My mind can understand one person's experiences, and I think that, as long as their personal experiential utility is positive*, doing so is wrong.
* Since personal experiential utility must be integrated over time, it must have a zero, unlike the utility functions that describe preferences.
I suspect you've allowed yourself to be confused by the semantics of the scenario. If you rule out externalities, removing someone from the world of the thought experiment can't be consequentially equivalent to killing them (which leaves a mess of dangling emotional pointers, has a variety of knock-on effects, and introduces additional complications if you're using a term for preference satisfaction, to say nothing of timeless approaches); it's more accurately modeled with a comparison between worlds where the person in question does and doesn't exist, Wonderful Life-style.
With that in mind, it's not at all self-evident to me that the world where the less-satisfied-than-average individual exists is more pleasant or morally perfect than the one in which they don't. Why not bite that bullet?
No, I was not making that confusion. I based my decision on a consideration of just that person's mental state. I find a 'good' life valuable, though I don't know the specifics of what a good life is, and ceteris paribus, I prefer its existence to its nonexistence.
As evidence that I clearly differentiate killing and 'deleting' someone: I am surprised by how much emphasis Eliezer puts on preserving life, rather than on making sure that good lives exist. Actually, thinking about that article, I am becoming less surprised that he takes this position, because he focuses on the rights of conscious beings rather than on some additional value possessed by already-existing life relative to nonexistent life.
Hmm. Yes, it does appear that a less-happy-than-average person presented with a device that would remove them from existence without externalities would be compelled to use it if they were an average utilitarian with a utility function defined in terms of subjective quality of life, regardless of the value of their experiential utility.
The problem is diminished, though not eliminated, if we use a utility function defined in terms of expected preference satisfaction (people generally prefer to continue existing), and I'm really more of a preference than a pleasure/pain utilitarian. But you can overcome that by making the gap between your own and the average preference satisfaction large enough that it outweighs your preference for existing in the future. Unlikely, perhaps, but there's nothing in the definition of the scenario that appears to forbid it.
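The trade-off described above can be made concrete with a toy model: an average-utilitarian agent whose own preference satisfaction includes a bonus term for continued existence. All numbers, and the `existence_bonus` parameter, are illustrative assumptions, not anything from the thread.

```python
# Toy sketch: when does an average utilitarian with a preference for
# continued existence conclude that self-removal raises the average?

def average(values):
    return sum(values) / len(values)

others = [10.0, 12.0, 11.0]   # everyone else's preference satisfaction (assumed)
existence_bonus = 5.0         # agent's preference for existing in the future (assumed)

def should_self_remove(own_satisfaction):
    # Compare the world-average with and without the agent, counting the
    # preference for future existence as part of the agent's own score.
    with_agent = average(others + [own_satisfaction + existence_bonus])
    without_agent = average(others)
    return without_agent > with_agent

# A mildly below-average agent is protected by the existence preference...
assert not should_self_remove(8.0)
# ...but a large enough gap overwhelms it, as the comment notes.
assert should_self_remove(-20.0)
```

The existence preference raises the threshold but does not remove it; any finite bonus is exceeded by a sufficiently large satisfaction gap.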
That's the trouble, though; for any given utility function except one dominated by an existence term, it seems possible to construct a scenario where nonexistence is preferable to existence: Utility Monsters for pleasure/pain utilitarians, et cetera. A world populated by average-type preference utilitarians with a dominant preference for existing in the future does seem immune to this problem, but I probably just haven't thought of a sufficiently weird dilemma yet. The only saving grace is that most of the possibilities are pretty far-fetched. Have you actually found a knockdown argument, or just an area where our ethical intuitions go out of scope and stop returning good values?
I may misjudge my preferences, but unless someone else has convincing reasons to claim they know my preferences better than me, I'm sticking with them :-)
Btw, total utilitarianism has a problem with death as well. Most total utilitarians do not consider "kill this person, and replace them with a completely different person who is happier/has easier to satisfy preferences" as an improvement. But if it's not an improvement, then something is happening that is not captured by the standard total utility. And if total utilitarianism has to have an extra module that deals with death, I see no problem for other utility functions to have a similar module.
Do you think that Eliezer's arguments about scope insensitivity here should have convinced the Israelis donating to sick children to reevaluate their preferences? Isn't your average utilitarianism based on the same intuition?
I am neither a classical nor a preference utilitarian, but I am reasonably confident that my utility function is a sum over individuals, so I consider myself a total utilitarian. Ceteris paribus, I would consider the situation that you describe an improvement.
Only if they value saving more children in the first place. If the flaw is pointed out, if they fully understand the problem, and then say "actually, I care about warm fuzzies to do with saving children, not saving children per se", then they are monstrous people, but consistent.
You can't say that people have the wrong utility by pointing out scope insensitivity, unless you can convince them that scope insensitivity is morally wrong. I think that scope insensitivity for existent humans is wrong, but fine over non-existent humans, which I don't count as moral agents - just as normal humans aren't worried about the scope insensitivity over the feelings of sand.
I find the repugnant conclusion repugnant. Rejecting it is, however, non-trivial, so I'm working towards an improved utility function that has more of my moral values and fewer problems.