
Non-personal preferences of never-existed people

Post author: Stuart_Armstrong 10 March 2011 07:54PM 12 points

Some people see never-existed people as moral agents, and claim that we can talk about their preferences. Generally this means their personal preference for existing versus not existing. Formulations such as "it is better for someone to have existed than not" reflect this way of thinking.

But if the preferences of the never-existed are relevant, then their non-personal preferences are also relevant. Do they prefer a blue world or a pink one? Would they want us to change our political systems? Would they want us to not bring into existence some never-existent people they don't like?

It seems that those who are advocating bringing never-existent people into being in order to satisfy those people's preferences should be focusing their attention on their non-personal preferences instead. After all, we can only bring into being so many trillions of trillions of trillions; but there is no theoretical limit to the number of never-existent people whose non-personal preferences we can satisfy. Just get some reasonable measure across the preferences of never-existent people, and see if there's anything that sticks out from the mass.

Comments (69)

Comment author: JenniferRM 10 March 2011 10:54:56PM *  7 points [-]

So I haven't read this because there's all this other stuff higher in my queue that's based on the assumption that he's wrong and life is good, but David Benatar wrote "Better Never to Have Been: The Harm of Coming into Existence" claiming, roughly, that because humans seem to innately experience a loss of X as three times as bad as a gain of X, it would be better on average for a person not to be brought into existence.

From the reviews it seems Benatar's practical upshot is that people shouldn't kill themselves right away, nor kill their existing children, but neither should any more children be brought into existence.

If anyone is really into this topic it would be awesome for someone to read through the book and write a review that assumes the audience understands cognitive biases, astronomical waste, and the coming possibility of revising human nature so that "human nature" arguments aren't acceptable unless they cover "all possible human natures we could eventually edit ourselves into".

Comment author: lukeprog 14 May 2011 12:48:44AM 2 points [-]

Still haven't had time to read the book, and probably never will, but John Danaher has been covering some arguments over the book on his (excellent) blog: Part One, Part Two, Part Three.

Comment author: lukeprog 11 March 2011 02:02:39AM 2 points [-]

I have the book. Maybe I'll publish a reply from the transhumanist perspective if I ever find the time.

Comment author: DanielLC 11 March 2011 06:30:50AM 3 points [-]

I'm a classical utilitarian, so I don't have this problem.

If I were to accept preference utilitarianism, I'd say that fulfilled preferences are worth utility, and by bringing them into being I'd allow them to have fulfilled preferences.

Of course, I'd also say that you should lock people in small, brightly lit spaces to make them prefer big, empty, dark spaces, like most of the universe. Then they'd have really fulfilled preferences. Perhaps I just don't understand preference utilitarianism.

Comment author: atucker 12 March 2011 01:16:40AM 1 point [-]

In general, I think that most desires aren't fulfilled on a viscerally emotional level by the mere existence of something so much as by actually receiving it. I'm not nearly as fulfilled by ice cream's existence as I am when I'm eating it.

I don't think those people would prefer having their preferences changed in that way.

Comment author: DanielLC 17 April 2011 06:30:39AM 0 points [-]

If you mean they have to get the emotion of a preference being fulfilled, isn't that happiness?

Comment author: Stuart_Armstrong 11 March 2011 10:45:30AM 1 point [-]

Care to specify that utility function that you claim to follow? :-)

Comment author: DanielLC 12 March 2011 05:50:44PM 0 points [-]

Maximize pleasure minus pain.

Comment author: Stuart_Armstrong 16 March 2011 03:52:55PM *  3 points [-]

Now I have two undefined terms, rather than one.

I'm not trying to be a sophist here; I'm just pointing out that "classical utilitarians" are following a complicated, mostly unspecified utility function. This is OK! There is nothing wrong with it.

But there's also nothing wrong with having a different, complicated utility function that captures more of your values. Classical utilitarians do not have some special utility function, selected on some abstract simplicity criterion; they're in there with the rest of us (as long as we are utilitarians of some type).

Comment author: endoself 16 March 2011 07:41:40PM *  1 point [-]

Thank you for showing me this!

Comment author: Stuart_Armstrong 16 March 2011 08:26:09PM 0 points [-]

Cheers :-)

Comment author: DanielLC 17 April 2011 06:33:06AM 0 points [-]

Most people's ethics are based on their desires. People's desires are based on what makes them happy. That's as far down as it goes.

A somewhat simplistic definition of happiness is positive reinforcement. If you alter your preferences towards what's happening now, you're happy. If you alter them away, you're sad.

Comment author: Stuart_Armstrong 19 April 2011 03:39:12PM *  0 points [-]

A utility function is quantitative, not qualitative.

How would you go about transforming these vague statements into precise mathematical definitions?

(I'll grant you "black box rights"; you can use terms - anger, doubt, etc. - that humans can understand, without having to define them mathematically. So if you come up with a scale of anger with generally understandable anecdotes attached to each level, that will be enough to quantify the "anger" term in your overall utility function. Which we will need when we start talking quantitatively about trading anger off against pain, love, pleasure, embarrassment...). Indirect ways of measuring utility - "utility is money" being the most trivial - are also valid if you don't want to wade into the mess of human psychology, but they come with their own drawbacks (e.g. instrumental versus terminal goals).
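For concreteness, here is a minimal sketch of what a "black box rights" utility function of that shape might look like. This is entirely illustrative: the emotional terms, the 0-10 scales, and the weights are all made up, and choosing those weights is exactly the hard quantitative question being pointed at here.

    # A toy utility function built from "black box" emotional terms, each
    # scored on an anecdote-anchored 0-10 scale. The weights encode the
    # trade-offs (anger vs. pain vs. pleasure, etc.) and are invented purely
    # for illustration.
    WEIGHTS = {
        "pleasure": 1.0,
        "pain": -1.5,          # losses weighted more heavily than gains
        "anger": -0.5,
        "embarrassment": -0.3,
    }

    def utility(emotion_scores):
        """emotion_scores: dict mapping each black-box term to its 0-10 level."""
        return sum(WEIGHTS[name] * level for name, level in emotion_scores.items())

    # Example: a day with a lot of pleasure, some pain, mild embarrassment.
    print(utility({"pleasure": 7, "pain": 2, "anger": 0, "embarrassment": 1}))  # 3.7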

Comment author: DanielLC 19 April 2011 07:53:09PM 0 points [-]

Utility is the dot product of the derivative of desires and the observations. Desires are what you attempt to make happen.

If you start trying to make what's currently happening happen more often, then you're happy.
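One possible formalisation of that claim (my reading, not necessarily the intended one): let d(t) be a vector of desire-strengths over world-features and o(t) the vector of currently observed features. Then

    h(t) = \dot{d}(t) \cdot o(t) = \sum_i \dot{d}_i(t)\, o_i(t)

so on this reading you are happy exactly when your desires are shifting towards the features you are currently observing, and sad when they are shifting away.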

Comment author: atucker 12 March 2011 01:19:21AM 0 points [-]

I don't think most utilitarians claim to follow (or even know) their utility function so much as assert that utility maximization is the proper way to resolve moral conflicts.

Kind of like how physicists claim that there must be a theory of everything without actually knowing what it is.

Comment author: Stuart_Armstrong 16 March 2011 03:59:00PM 1 point [-]

I perfectly agree that utility maximisation is indeed the proper way to resolve common moral conflicts.

But utility functions can be as complex as you need them to be! Saying you have a utility function hardly constrains you at all. But sometimes total utilitarians like to claim that their version is better because it is "simpler" or "more intuitive".

First of all, simplicity is not a virtue comparable with, say, human lives or happiness; secondly, I have different intuitions from them; and thirdly, their actual utility function, if it were specified, would be unbelievably complex anyway.

I don't want to pour important moral insights down the drain, based on specious simplicity arguments....

Comment author: Matt_Simpson 10 March 2011 09:10:15PM 3 points [-]

Some people see never-existed people as moral agents, and claim that we can talk about their preferences. Generally this means their personal preference for existing versus not existing. Formulations such as "it is better for someone to have existed than not" reflect this way of thinking.

It might just reflect the speaker's preference for people to exist rather than not exist, rather than referencing the preferences of the potentially hypothetical person.

Comment author: Stuart_Armstrong 11 March 2011 10:47:38AM 0 points [-]

Well yes, it does, in my opinion. But it's not often phrased honestly.

Comment author: Nisan 11 March 2011 03:44:09PM 0 points [-]

What might be going on is that people are tempted to use a person's preference for existing as a proxy for the value of their life, in the same way that a person's preferences for birthday presents can inform us about what kinds of birthday presents will make them happier.

I would certainly think twice about having a child if I knew the child would grow up to express a wish to never have been born. And I'm not even a preference utilitarian. But this approach seems problematic, and it's probably better to just ask ourselves what kind of people we want to bring into existence.

Comment author: atucker 11 March 2011 06:17:54AM *  2 points [-]

I personally don't feel compelled to help non-existing people accomplish their goals.

I suspect that this has something to do with the fact that I seem to mostly care about things which activate my empathy, all of which have physically instantiated qualia (happiness, pain, etc.) that I care about, or are highly anthropomorphic and fake that effectively (like Wall-E).

Since non-existing people don't physically exist, I have yet to feel bad for them. This could just be a failure of my moral circuitry though, sort of how if I never found out about starving people in Africa, they wouldn't interact with me in a way that would make me feel bad about them. However, I am confused about how mathematically specified but non-existing people work.

Comment author: Dorikka 11 March 2011 01:00:27AM 2 points [-]

This is how I imagine the general form of utilitarianism's utility function: assign all states of being a quantity of fun, which can be positive or negative. Multiply all fun-values times the amount of time over which they are being experienced times the number of beings experiencing them.

This utility function could be optimized by bringing people into the universe when the total fun-impact that they would have on the universe (including themselves) is higher than would be the total fun-impact of not bringing them into the universe.

However, this utility function only cares about people's preferences if they either already exist or may be brought into existence. It could act on data which suggested that sentiences with certain preferences were more likely to exist than sentiences with other preferences, but I don't know if we have any strong data in that area. (I would suppose not, but I don't think I've thought about it long enough to make a positive claim.)

As to why I'm discussing utilitarianism: any utility function which assigns utility to the fulfillment of others' preferences is a form of utilitarianism -- if you object to valuing all sentiences equally, then add a multiplier in front of each sentience indicating how much you value its fun relative to that of other sentiences. Either way, I think that the above conclusions still apply.
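A minimal way to write that down (my notation, a sketch rather than anything Dorikka specifies): index the sentient beings by i, let f_i(t) be being i's fun-level at time t, and let w_i be the weight placed on that being (w_i = 1 for everyone if all sentiences are valued equally). Then

    U = \sum_i w_i \int f_i(t)\, dt

and the decision rule for creating a new person is: do it exactly when the resulting change in U - their own integral plus whatever change they cause in everyone else's - is positive.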

Comment author: Perplexed 11 March 2011 12:22:24AM 1 point [-]

I would advocate the ethical principle that we should effectively take into account the values of non-existent people only to the extent that we can expect them to effectively take our own interests into account.

So, for example, as a recent retiree, I should take into account the preferences of people to be born next year to the extent that I expect them to keep Social Security solvent. On the other hand, I have a much lower obligation to people to be born next century, since I don't expect them to contribute much to me.

As for my obligations to counter-factual people - those flow through counterparts. I have obligations to the Downs-syndrome child next-door because his healthy, but counterfactual, counterpart would have had some obligations to me and to my counterfactual Downs-syndrome counterpart. But neither of us has particularly strong obligations to (nor claims on) some poor peasant in Kerala, because of the length of the chain of counterfactual assumptions and common acquaintances that connect us.

Comment author: Nisan 11 March 2011 03:24:24PM 0 points [-]

If I understand correctly, you're saying that if you had Down Syndrome and your neighbor were healthy then you would want your neighbor to help you; so therefore in reality you are healthy and help your neighbor who has Down Syndrome; and this constitutes your obligation to them.

Is this correct?

Comment author: Perplexed 12 March 2011 12:43:05AM 1 point [-]

Yes. Roughly speaking, a Nash bargain could-have/should-have been made to that effect in the "Original Position" when we were both operating under Rawls's "Veil of Ignorance". I don't completely buy Rawls's "Theory of Justice", but it makes a lot more sense to me than straight utilitarianism.

Comment author: Wei_Dai 12 March 2011 04:23:22AM 1 point [-]

Do you have any candidates for a "reasonable measure"? It occurs to me that if you use something like the universal distribution, you introduce a circularity in your decision algorithm, because your decisions influence which persons have more measure, which in turn influences the preferences that go into making those decisions.

Comment author: Vlodermolt 11 March 2011 09:10:06PM 1 point [-]

Are non-existent people simply extensions of the existent people who created them? Is not the reason for their creation to forward a point supported or opposed by their creator?

Comment author: Ghatanathoah 30 November 2012 07:34:16PM 0 points [-]

If I understand the orthogonality thesis properly, then it is possible to have any utility function. So if we were to try to satisfy the non-personal preferences of nonexistent people, we would be paralyzed with indecision, because for every nonexistent creature with a utility function saying "Do X" there would be another nonexistent creature with a utility function saying "Don't do X."

This also means that your suggestion to "get some reasonable measure across the preferences of never-existent people, and see if there's anything that sticks out from the mass" probably wouldn't work. For everything that stuck out there would be another thing that stuck out saying to do the opposite.

Now Robin Hanson, who I think was responsible for starting this whole line of thought, replied to similar objections by suggesting that maybe not all nonexistent people's preferences should count morally. But if I recall, one of his main arguments for taking nonexistent people's preferences into account in the first place was that if you started ignoring people's preferences, where would you stop?

So overall, I don't think it's a good idea to regard the preferences of people who don't exist, and never will exist, as morally relevant.

Comment author: endoself 11 March 2011 12:06:25AM 0 points [-]

I want to bring people into existence to satisfy my own preferences. Of course, everything I want tautologically satisfies my own preferences, but I decide that bringing people into existence is good because of the value of their lives, not because they would have chosen to exist.

Comment author: Stuart_Armstrong 11 March 2011 11:08:15AM *  0 points [-]

Good

I'd somewhat disagree with you (at least in the strong, repugnant conclusion form of your argument), but this is a much more defensible argument than ones that implicitly rely on the preferences of non-existent people.

Comment author: endoself 11 March 2011 02:40:37PM 1 point [-]

at least in the strong, repugnant conclusion form of your argument

What do you mean by this? When I talked about the value of people's lives, I was referring to people's lives insofar as they have value, not implying that all lives inherently have value just by existing.

Comment author: Stuart_Armstrong 11 March 2011 07:31:09PM 1 point [-]

I was referring to this type of argument: http://en.wikipedia.org/wiki/Repugnant_conclusion and making unwarranted assumptions about how you would handle these cases.

Comment author: endoself 11 March 2011 08:00:57PM *  1 point [-]

Oh, the original repugnant conclusion. I thought you were just drawing an analogy to it. Anyway, I think that people only find this conclusion repugnant because of scope insensitivity.

Comment author: Stuart_Armstrong 11 March 2011 08:49:40PM 1 point [-]

I find it repugnant because I find it repugnant. Any population ethic that is utilitarian is as good as any other; mine is of a type that rejects the repugnant conclusion. Average utilitarianism, to pick one example, is not scope insensitive, but rejects the RC (I personally think you need to be a bit more sophisticated).

Comment author: endoself 11 March 2011 11:37:45PM 2 points [-]

I find it repugnant because I find it repugnant.

You sound a bit like Self-PA here. You do realize that it is possible to misjudge your preferences due to factual mistakes? That's what the people in Eliezer's examples of scope insensitivity were doing. I don't see how you could determine the utility of one billion happy lives just by asking a human brain how it feels about the matter (i.e. without more complex introspection, preferably involving math).

Average utilitarianism leads to the conclusion that if someone of below average personal experiential utility, meaning the utility that they experience rather than the utility function that describes their preferences, can be removed from the world without affecting anyone else's personal experiential utility, then this should be done. My mind can understand one person's experiences, and I think that, as long as their personal experiential utility is positive*, doing so is wrong.

* Since personal experiential utility must be integrated over time, it must have a zero, unlike the utility functions that describe preferences.
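A toy illustration of that claim, with made-up numbers: suppose only two people exist, with personal experiential utilities 10 and 2 (both positive, so both lives are worth living on their own terms), and the second can be removed without affecting the first. Then

    \frac{10 + 2}{2} = 6 \quad\longrightarrow\quad \frac{10}{1} = 10 > 6

so average utilitarianism counts the removal as an improvement, which is exactly the conclusion being objected to.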

Comment author: Nornagest 12 March 2011 12:03:30AM *  1 point [-]

Average utilitarianism leads to the conclusion that if someone of below average personal experiential utility, meaning the utility that they experience rather than the utility function that describes their preferences, can be removed from the world without affecting anyone else's personal experiential utility, then this should be done.

I suspect you've allowed yourself to be confused by the semantics of the scenario. If you rule out externalities, removing someone from the world of the thought experiment can't be consequentially equivalent to killing them (which leaves a mess of dangling emotional pointers, has a variety of knock-on effects, and introduces additional complications if you're using a term for preference satisfaction, to say nothing of timeless approaches); it's more accurately modeled with a comparison between worlds where the person in question does and doesn't exist, Wonderful Life-style.

With that in mind, it's not at all self-evident to me that the world where the less-satisfied-than-average individual exists is more pleasant or morally perfect than the one in which they don't. Why not bite that bullet?

Comment author: endoself 12 March 2011 12:56:59AM *  1 point [-]

No, I was not making that confusion. I based my decision on a consideration of just that person's mental state. I find a 'good' life valuable, though I don't know the specifics of what a good life is, and ceteris paribus, I prefer its existence to its nonexistence.

As evidence that I clearly differentiate between killing and 'deleting' someone: I am surprised by how much emphasis Eliezer puts on preserving life, rather than making sure that good lives exist. Actually, thinking about that article, I am becoming less surprised that he takes this position, because he focuses on the rights of conscious beings rather than on some additional value possessed by already-existing life relative to nonexistent life.

Comment author: Nornagest 12 March 2011 01:20:30AM *  1 point [-]

Hmm. Yes, it does appear that a less-happy-than-average person presented with a device that would remove them from existence without externalities would be compelled to use it if they are an average utilitarian with a utility function defined in terms of subjective quality of life, regardless of the value of their experiential utility.

The problem is diminished, though not eliminated, if we use a utility function defined in terms of expected preference satisfaction (people generally prefer to continue existing), and I'm really more of a preference than a pleasure/pain utilitarian, but you can overcome that by making the gap between your and the average preference satisfaction large enough that it overcomes your preference for existing in the future. Unlikely, perhaps, but there's nothing in the definition of the scenario that appears to forbid it.

That's the trouble, though; for any given utility function except one dominated by an existence term, it seems possible to construct a scenario where nonexistence is preferable to existence: Utility Monsters for pleasure/pain utilitarians, et cetera. A world populated by average-type preference utilitarians with a dominant preference for existing in the future does seem immune to this problem, but I probably just haven't thought of a sufficiently weird dilemma yet. The only saving grace is that most of the possibilities are pretty far-fetched. Have you actually found a knockdown argument, or just an area where our ethical intuitions go out of scope and stop returning good values?

Comment author: Stuart_Armstrong 16 March 2011 03:48:57PM 0 points [-]

I may misjudge my preferences, but unless someone else has convincing reasons to claim they know my preferences better than me, I'm sticking with them :-)

Btw, total utilitarianism has a problem with death as well. Most total utilitarians do not consider "kill this person, and replace them with a completely different person who is happier/has easier to satisfy preferences" as an improvement. But if it's not an improvement, then something is happening that is not captured by the standard total utility. And if total utilitarianism has to have an extra module that deals with death, I see no problem for other utility functions to have a similar module.

Comment author: endoself 16 March 2011 07:33:06PM 0 points [-]

I may misjudge my preferences, but unless someone else has convincing reasons to claim they know my preferences better than me, I'm sticking with them :-)

Do you think that Eliezer's arguments about scope insensitivity here should have convinced the Israelis donating to sick children to reevaluate their preferences? Isn't your average utilitarianism based on the same intuition?

I am neither a classical nor a preference utilitarian, but I am reasonably confident that my utility function is a sum over individuals, so I consider myself a total utilitarian. Ceteris paribus, I would consider the situation that you describe an improvement.

Comment author: Stuart_Armstrong 16 March 2011 08:37:40PM 0 points [-]

Do you think that Eliezer's arguments about scope insensitivity here should have convinced the Israelis donating to sick children to reevaluate their preferences? Isn't your average utilitarianism based on the same intuition?

Only if they value saving more children in the first place. If the flaw is pointed out, if they fully understand the problem, and they then say "actually, I care about warm fuzzies to do with saving children, not saving children per se", then they are monstrous people, but consistent.

You can't say that people have the wrong utility by pointing out scope insensitivity, unless you can convince them that scope insensitivity is morally wrong. I think that scope insensitivity over existent humans is wrong, but fine over non-existent humans, whom I don't count as moral agents - just as normal humans aren't worried about scope insensitivity over the feelings of sand.

I find the repugnant conclusion repugnant. Rejecting it is, however, non-trivial, so I'm working towards an improved utility function that has more of my moral values and fewer problems.

Comment author: MartinB 10 March 2011 08:37:02PM 0 points [-]

What is your point?

Comment author: Manfred 10 March 2011 08:58:26PM 2 points [-]
  • It is unknown whether or not we should treat nonexistent people as moral agents (like people rather than like trees), but it's an interesting idea to consider.

  • If we do this, we should focus on non-personal preferences rather than personal ones, because we can satisfy infinitely more preferences that way.

  • This contradicts the way most people reason when they treat nonexistent people as moral agents.

  • However, there is a problem: we need to try and figure out the preferences of nonexistent people to see what treating them as moral agents implies.

Comment author: jsalvatier 10 March 2011 08:53:26PM 0 points [-]

presumably that it is an error to take non-existing persons' preferences into account.

Comment author: MartinB 10 March 2011 09:07:44PM 0 points [-]

I was not aware that anyone actually does that.

Comment author: Clippy 10 March 2011 09:28:35PM 2 points [-]

Counterexamples:

1) All beings that act as if they were pursuing a goal of (pseudo)-self-replication are also acting as if they were taking non-existing beings' preferences into account (specifically, the preference of their future pseudo-copies to exist once they exist).

2) Beings that attempt to withhold resources from entropisation ("consumption") in anticipation of exchanging them later on terms causally influenced by the preferences of not-yet-existing beings ("speculators").

Comment author: Perplexed 11 March 2011 12:05:58AM 2 points [-]

All beings that act as if they were pursuing a goal of (pseudo)-self-replication are also acting as if they were taking non-existing beings' preferences into account (specifically, the preference of their future pseudo-copies to exist once they exist).

I was under the impression that you were arguing here that the goal of self-replication is adequately justified by the "clippiness" of the prospective replica - with the most important component of the property 'clippiness' being a propensity to advance Clippy's values. That is, you weren't concerned with providing utility to the replicas - you were concerned with providing utility to yourself.

Comment author: Clippy 14 March 2011 07:59:43PM 3 points [-]

My point was that the distinction between "selves" is spurious. Clippys support all processes that instantiate paperclip-maximizing, differentiating between them only by their clippy-effectiveness and the certainty of this assessment of them.

My point here is that different utility functions can explain a certain class of beings' behavior, and one such utility function is one that places value on not-yet-existing beings -- even though the replicator may not, on self-reflection, regard this as the value it is pursuing.

Comment author: David_Gerard 10 March 2011 09:33:03PM *  0 points [-]

People not only argue from fictional evidence, they think from it.

Edit: Could the downvoter please explain their dispute?

Comment author: MartinB 10 March 2011 09:52:41PM 0 points [-]

As long as there is thinking involved....

I fail to see the cases the OP is working from.

Comment author: David_Gerard 10 March 2011 10:44:54PM -1 points [-]

For some value of "thinking". I see what the OP is talking about and it isn't pretty.

Comment author: Stuart_Armstrong 11 March 2011 10:41:16AM 1 point [-]

Robin Hanson often does when arguing we should have more people in the world.