AlexMennen comments on Population Ethics Shouldn't Be About Maximizing Utility - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (46)
This seems obviously absurd. The examples you listed are all valuable things, but only because I (and others) value them.
I certainly don't think that the things I listed are valuable independent of the existence of creatures that value them. For instance, I don't think that a lifeless world full of beautiful statues or great books would be any more valuable than any other kind of lifeless world. And as I say later, trying to force a person who doesn't value those things to pursue them is not valuable at all.
What I am saying is that creating creatures that value those things is morally good, and attempting to replace creatures that value those things is morally bad. A village full of humans living worthwhile lives is morally better than a galaxy full of paperclip maximizers.
I agree, but only because it would likely increase the utility of currently existing creatures, such as myself.
I disagree with your reasoning in this respect. It seems to me like I have some sort of moral duty to make sure creatures with certain values exist, and this is not only because of their utility, or the utility they bring to existing creatures.
For instance, suppose I had a choice between:

A) Everyone on Earth dies in a disaster; later, a species with humanlike values evolves.
B) Everyone on Earth dies in a disaster; later, a race of creatures with extremely unhumanlike values evolves.

I would pick A, even though neither choice would increase the utility of anyone currently existing. I would even pick A if the total utility of the creatures in A were lower than the total utility of the creatures in B, as long as the total utility was positive.
I agree that ultimately, the only reason I am motivated to act in such a fashion is because of my desires and values. But that does not mean that morality reduces only to satisfying desires and values. It means that it is morally good to create such creatures, and I desire and value acting morally.
A is better than B for currently existing people.
As I pointed out in my other response to your post, that doesn't actually mean anything.
I would choose A even if it meant existing people would have to make some sort of small sacrifice before the disaster killed them.
And in the case of the Repugnant Conclusion, I might choose A or Q over Z even if every single person is created at the same time, instead of one population existing prior to the other.
I'm not referring to VNM utility; what I'm talking about is closer to what the Wikipedia entry calls "E-utility." To summarize the difference: if I do something that makes me miserable because it greatly benefits someone else, and doing so is morally good, I will have higher VNM utility than if I didn't do it, because VNM utility includes doing what is morally good as part of my utility function. My E-utility, however, will be lower than if I didn't do it.
Interpersonal E-utility comparisons aren't that hard. I do them all the time in my day-to-day life; that's basically what the human capacity for empathy is. I use my capacity for empathy to model other people's minds. In fact, if it weren't possible to make interpersonal E-utility comparisons between agents, it's hard to see how humans would ever have evolved the capacity for empathy.
So would I. As evidenced by our choices, we care enough about A vs B that A is still better than B in terms of our (VNM) utility even if A requires us to make a small local sacrifice.
If you were talking about E-utility this entire time, then that changes everything. Our preferences over people's E-utilities are tremendously complicated, so any ideal population ethics described in terms of E-utilities will also be tremendously complicated. It doesn't help that E-utility is a pretty fuzzy concept.
Also, saying that you're talking about E-utility doesn't solve the problem of interpersonal utility comparison. Humans tend to have similar desires as other humans, so comparing their utility isn't too hard in practice. But how would you compare the E-utilities experienced by a human and a paperclip maximizer?
I was. Do you think I should make a note of that somewhere in the OP? I should have realized that on a site that discusses decision theory so often, I might give the impression I was talking about VNM utility instead of E-utility.
That is difficult, to be sure. Some kind of normalizing assumption is probably necessary. One avenue of attack would be the concept of a "life worth living": for a human, a life where positive experiences outweigh the negative; for a paperclipper, one where its existence results in more paperclips than not.
It may be that the further we get from human psyches, the fuzzier the comparisons become. I can tell that a paperclipper whose existence has resulted in the destruction of a thousand paperclips has lower E-utility than a human living a life very much worth living. But I have trouble seeing how to determine how much E-utility a paperclipper that has made a positive number of paperclips has compared to that person.
That might be a good idea.
That tells you what zero utility is, but it doesn't give you scale.
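The zero-vs-scale point can be illustrated with a small sketch (all numbers and populations here are hypothetical, chosen only to make the arithmetic visible). Normalizing every agent so that 0 marks "a life barely worth living" pins down the *shift* of its utility function, but VNM-style utilities are only defined up to a positive scale factor, so the ranking of whole populations by summed utility can flip depending on which per-agent scales we pick:

```python
def total(pop, scales):
    """Sum of utilities in a population under a particular choice of per-kind scales."""
    return sum(scales[kind] * u for kind, u in pop)

# Each agent's utility is already normalized so 0 = "a life barely worth living".
pop_a = [("human", 3.0)] * 10      # ten humans with lives well worth living
pop_b = [("clipper", 1.0)] * 100   # a hundred mildly net-positive paperclippers

print(total(pop_a, {"human": 1.0, "clipper": 1.0}))   # 30.0
print(total(pop_b, {"human": 1.0, "clipper": 1.0}))   # 100.0 -> B ranks higher
print(total(pop_b, {"human": 1.0, "clipper": 0.1}))   # roughly 10.0 -> now A ranks higher
```

Every positive scale represents the same individual preferences for the paperclippers, so nothing inside the agents themselves singles out one interpersonal comparison as the right one.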
It sounds like you're equating E-utility with VNM utility for paperclippers. It seems more intuitive to me to say that paperclippers don't have E-utilities, because it isn't their experiences that they care about.
I added a footnote.
That's probably right. That also brings up what I consider an issue in describing the utility of humans. Right now we are basically dividing humans' VNM utility into E-utility and what might be termed "moral utility," or "M-utility." I'm wondering if there is anything else. That is, I wonder if human beings have any desires that are not either desires to have certain experiences or desires to do something they believe is morally right. Maybe you could call it "nonpersonal nonmoral utility," or "NN-utility" for short.
I racked my brain and can't think of any desires I have that don't fit the categories of M-utility or E-utility. But maybe I'm just not thinking hard enough. Paperclippers obviously have nothing but NN-utility, but I wonder if it's present in humans at all.