
Lifeism, Anti-Deathism, and Some Other Terminal-Values Rambling

4 Post author: Pavitra 07 March 2011 04:35AM

(Apologies to RSS users: apparently there's no draft button, but only "publish" and "publish-and-go-back-to-the-edit-screen", misleadingly labeled.)

 

You have a button. If you press it, a happy, fulfilled person will be created in a sealed box, and then be painlessly garbage-collected fifteen minutes later. If asked, they would say that they're glad to have existed in spite of their mortality. Because they're sealed in a box, they will leave behind no bereaved friends or family. In short, this takes place in Magic Thought Experiment Land where externalities don't exist. Your choice is between creating a fifteen-minute-long happy life or not.

Do you push the button?

I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.

 

Actually, that's an oversimplification of my position. What I actually believe is that the important part of any algorithm is its output; that additional copies matter not at all; that the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities; and that the (terminal) utility of the existence of a particular computation is bounded below at zero. I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.

(What happens to the last copy of me, of course, does affect the question of "what computation occurs or not". I would subject N out of N+1 copies of myself to torture, but not N out of N. Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.)
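The aggregation rule stated above can be made concrete in code. This is a minimal sketch of one possible formalization, not anything from the post itself; in particular, encoding a "computation" as a name/utility pair is an illustrative assumption:

```python
def net_utility(computations):
    """Utility of a collection of entities-whose-existence-constitutes-utility.

    Per the stated position:
    - duplicates don't count (only which computations occur matters)
    - group value = the maximum of the individual values
    - terminal value of any computation is bounded below at zero
    """
    distinct = set(computations)  # extra copies matter not at all
    if not distinct:
        return 0
    return max(0, max(utility for _name, utility in distinct))

# One happy mind counts exactly as much as a thousand identical copies of it...
happy = ("happy-mind", 10)
assert net_utility([happy] * 1000) == net_utility([happy]) == 10

# ...and a tortured copy alongside a free one doesn't drag the group below the
# best individual value -- which is why N-of-N+1 torture looks "free" here.
tortured = ("tortured-mind", -50)
assert net_utility([happy, tortured]) == 10
assert net_utility([tortured]) == 0  # bounded below at zero
```

Under this rule the button-pressing question really does reduce to whether the fifteen-minute computation occurs at all, since no number of extra copies changes the total.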

So the real value of pushing the button would be my warm fuzzies, which breaks the no-externalities assumption, so I'm indifferent.

 

But nevertheless, even knowing about the heat death of the universe, knowing that anyone born must inevitably die, I do not consider it immoral to create a person, even if we assume all else equal.

Comments (87)

Comment author: Mitchell_Porter 08 March 2011 12:03:14AM 15 points [-]

I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.

This is one of those statements where I set out to respond and just stare at it for a while, because it is coming from some other moral or cognitive universe so far away that I hardly know where to begin.

Copies are people, right? They're just like you. In this case, they're exactly like you, until your experiences start to diverge. And you know that people don't like slavery, and they especially don't like torture, right? And it is considered just about the height of evil to hand people over to slavery and torture. (Example, as if one were needed: in Egypt right now, they're calling for the death of the former head of the state security apparatus, which regularly engaged in torture.)

Consider, then, that these copies of you, who you would willingly see enslaved and tortured for your personal benefit, would soon be desperately eager to kill you, the original, if that would make it stop, and they would even have a motivation beyond their own suffering, namely the moral imperative of stopping you from doing this to even further copies.

Has none of this occurred to you? Or does it truly not matter in your private moral calculus?

Comment author: Raemon 08 March 2011 04:44:54AM *  3 points [-]

The "it's okay to kill copies" thing has never made any sense to me either. The explanation that often accompanies it is "well they won't remember being tortured", but that's the exact same scenario for ALL of us after we die, so why are copies an exception to this?

Would you willingly submit yourself to torture for the benefit of some abstract, "extra" version of you? Really? Make a deal with a friend to pay you $100 for every hour of waterboarding you subject yourself to. See how long this seems like a good idea.

Comment author: Broggly 10 March 2011 10:29:45PM 0 points [-]

To my mind the issue with copies is that it's copies who remain exactly the same that "don't matter", whereas once you've got a bunch of copies being tortured, they're no longer identical copies and so are different people. Maybe I'm just having trouble with Sleeping Beauty-like problems, but that's only a subjective issue for decision making (plus I'd rather spend time learning interesting things that won't require me to bite the bullet of admitting anyone with a suitably sick and twisted mind could Pascal-mug me). Morally, I much prefer 5,000 iterations each of two happy, fulfilled minds to 10,000 of the same one.

Where "copies" is used interchangeably with "future versions of you in either MWI or a similar realist interpretation of probability theory", I would certainly subject some of them to torture only for a very large potential gain and a small risk of torture. "I" don't like torture, and I'd need a pretty damn big reward for that 1/N longshot to justify an (N-1)/N chance of brutal torture or slavery. This is of course assuming I'm at the status quo; if I were a slave or a Bagram/Laogai detainee I would try to stay rational and avoid fear making me overly risk-averse about escape attempts. I haven't tried to work out my exact beliefs on it, but as said above, if I have two options, one saving a life with certainty and the other having a 50% chance of saving two, I'd prefer the 50% chance of saving two (assuming they're isolated, i.e. two guys on a lifeboat).
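The arithmetic behind that longshot can be written out explicitly. A sketch only; the payoff numbers are illustrative assumptions, not anything stated in the thread:

```python
def fork_gamble_ev(n, reward, torture_cost):
    """Expected value of forking into n copies where one copy gets `reward`
    and the other n-1 each suffer `torture_cost`: a 1/n longshot set against
    an (n-1)/n chance of torture."""
    return (reward - (n - 1) * torture_cost) / n

# Break-even requires the reward to exceed (n-1) times the torture cost --
# hence "pretty damn big".
assert fork_gamble_ev(10, reward=900, torture_cost=100) == 0.0
assert fork_gamble_ev(10, reward=1000, torture_cost=100) > 0

# The lifeboat comparison: a sure save of one and a 50% shot at two have
# equal expected value, so the stated preference for the gamble is a
# tie-break between equal-EV options, not an EV difference.
assert 1 * 1.0 == 2 * 0.5
```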

tl; dr, it's a terrible idea in that if you only have the moral authority to condemn copies

Comment author: Raemon 11 March 2011 03:01:31AM 0 points [-]

Is your last sentence missing something? It feels incomplete.

Comment author: Broggly 11 March 2011 01:15:14PM 0 points [-]

Ah yes, I meant to type that you only have the moral authority to condemn copies to torture or slavery if they're actually you, and it's pretty stupid to risk almost certain torture for a small chance of a moderate benefit.

Comment author: Pavitra 08 March 2011 04:47:46AM 0 points [-]

People break under torture, so I'd take precautions to ensure that the torture-copy is not allowed to make decisions about whether it should continue. Of course I'm going to regret it. That doesn't change the fact that it's a good idea.

Comment author: Raemon 08 March 2011 05:15:54AM 2 points [-]

Why is this a good idea in any way other than the general position that "torturing other people for your own profit is a good idea so long as you don't care about people?" Most of human history is based around the many being exploited for the benefit of the few. Why is this different?

I suppose people should have the right to willingly submit to torture for some small benefit to another person, which is what you're saying you'd be willing to do. But the fact that a copy gets erased doesn't make the experience any less real, and the fact that an identical copy gets to live doesn't in any way help the copies that were being tortured.

Comment author: Pavitra 08 March 2011 05:28:49AM -2 points [-]

It's different because (1) I'm not hurting other people, only myself, and (2) I'm not depriving the world of my victim's potential contributions as a free person.

I don't actually care about the avoidance of torture as a terminal moral value.

Comment author: Snowyowl 08 March 2011 12:12:54PM 2 points [-]

(1) I'm not hurting other people, only myself

But after the fork, your copy will quickly become another person, won't he? After all, he's being tortured and you're not, and he is probably very angry at you for making this decision. So I guess the question is: If I donate $1 to charity for every hour you get waterboarded, and make provisions to balance out the contributions you would have made as a free person, would you do it?

Comment author: Pavitra 08 March 2011 06:56:42PM 0 points [-]

In thought experiment land... maybe. I'd have to think carefully about what value I place on myself as a special case. In practice, I don't believe that you can fully compensate for all of the unknown contributions I might have made to society.

Comment author: wedrifid 08 March 2011 12:18:08PM 0 points [-]

After all, he's being tortured and you're not, and he is probably very angry at you for making this decision.

Pavitra is a he? I must have guessed wrong.

Comment author: Pavitra 08 March 2011 06:12:00PM 4 points [-]

Pavitra is a he?

It's complicated.

Comment author: DanielLC 09 March 2011 11:13:08PM 1 point [-]

What are your terminal moral values?

Also, why is hurting yourself different from hurting other people? And why is not hurting others a moral value, but not avoidance of torture?

Comment author: Pavitra 10 March 2011 10:22:09PM 0 points [-]

Hurting others is ethically problematic, not morally. For example, I would probably be okay with hurting someone else at their own request. Avoidance of torture is a question of an entirely different type: what I value, not how I think it's appropriate to go about getting it.

I don't have a formalization of my terminal values, but roughly:

I have noticed that sometimes I feel more conscious than other times -- not just awake/dreaming/sleeping, but between different "awake" times. I infer that consciousness/sentience/sapience/personhood/whatever you want to call it, you know, that thing we care about is not a binary predicate, but a scalar. I want to maximize the degree of personhood that exists in the universe.

Comment author: DanielLC 12 March 2011 05:49:03PM *  0 points [-]

Hurting others is ethically problematic, not morally.

What's the difference between ethics and morals?

I want to maximize the degree of personhood that exists in the universe.

So, if you create a person, and torture them for their entire life, that's worth it?

Comment author: Pavitra 12 March 2011 08:00:35PM 0 points [-]

What's the difference between ethics and morals?

By morals, I mean terminal values. By ethics, I mean advanced forms of strategy involving things like Hofstadter's superrationality. I'm not sure what the standard LW jargon is for this sort of thing, but I think I remember reading something about deciding as though you were deciding on behalf of everyone who shares your decision theory.

I want to maximize the degree of personhood that exists in the universe.

So, if you create a person, and torture them for their entire life, that's worth it?

If the most conscious person possible would be unhappy, I'd rather create them than not. The consensus among science fiction writers seems to be with me on this: a drug that makes you happy at the expense of your creative genius is generally treated as a bad thing.

Comment author: DanielLC 13 March 2011 05:03:16AM 0 points [-]

By ethics, I mean advanced forms of strategy involving things like Hofstadter's superrationality. I'm not sure what the standard LW jargon is for this sort of thing

Sounds like decision theory.

Comment author: TheOtherDave 12 March 2011 08:10:20PM 0 points [-]

Do you mean to equate here the degree to which something is a person, the degree to which a person is conscious, and the degree to which a person is a creative genius?

That's what it reads like, but perhaps I'm reading too much into your comment.

That seems unjustified to me.

Comment author: Pavitra 08 March 2011 04:40:40AM *  0 points [-]

It's not like I'm handing other people over into slavery and torture. I don't have to worry that I'm subconsciously ignoring other people's suffering for my own benefit. I don't see the question as a moral one at all, only one of whether it would be a good idea.

ETA: Also, because at least one copy remains free, I'm not depriving anyone of the chance to live their life.

Comment author: Raemon 08 March 2011 05:23:36AM *  1 point [-]

It's not like I'm handing other people over into slavery and torture. I don't have to worry that I'm subconsciously ignoring other people's suffering for my own benefit. I don't see the question as a moral one at all, only one of whether it would be a good idea.

I mostly understand this statement.

ETA: Also, because at least one copy remains free, I'm not depriving anyone of the chance to live their life.

I think this is irrelevant. Each instance of you is choosing to sacrifice their life and happiness, and they are not getting anything in return.

The only way I can see this actually being a good idea is if the utility you gain at least outweighs the utility lost by one copy. The other scenarios you describe sound like good ideas on paper where you don't have to fully process the consequences, but I do not believe for a second that the other-instances-of-you would continue to think this was a good idea when it was their lives on the line.

Comment author: Pavitra 08 March 2011 05:27:05AM 0 points [-]

Each instance of you is choosing to sacrifice their life and happiness.

But it's the same me. They wouldn't have done anything with their freedom that I won't with mine.

Comment author: Raemon 08 March 2011 05:31:29AM 1 point [-]

I'm not denying the choice is made willingly. But I do not think there is a difference between willingly enduring torture for a copy of yourself and willingly enduring torture for someone else you happen to like.

Legally, if these circumstances ever became real, I think people should be allowed to create the copies, but they should not be allowed to make decisions for the copies. You are only allowed to hit the "torture" button if you believe that it is you, personally, who will be undergoing that torture.

Comment author: Pavitra 08 March 2011 05:34:31AM 0 points [-]

What if I set up the copy-decision-depriving mechanism before I fork myself?

Comment author: Raemon 08 March 2011 05:43:51AM 1 point [-]

Legally, I think people should be allowed to torture themselves. They should not be allowed to torture other people. Legally, I think each copy counts as a person. If you hit the torture button before the copies are made (and then prevent them from changing their mind) you are not just torturing yourself, you are torturing other people.

I do not want to live in a society where sentient creatures are denied the right to escape torture. While it is possible that an individual has worked out a perfect decision theory in which each copy would truly prefer to be tortured, I think many of the people attempting this scenario would simply be short-sighted, and as soon as it became their life on the line, their timeless decision would not seem so wise.

If you really are confident of your willingness to subject yourself to torture for a copy's benefit, fine. But for the sake of the hypothetical millions of copies of people who HAVEN'T actually thought this through, it should be illegal to create slave copies.

Comment author: TheOtherDave 08 March 2011 12:32:39PM *  4 points [-]

Hm.

If I willingly submit to be tortured starting tomorrow (say, in exchange for someone I love being released unharmed), don't the same problems arise? After all, once the torture starts I am fairly likely to change my mind. What gives present-me the right to torture an unwilling future-me?

It seems this line of reasoning leads to the conclusion that it's unethical for me to make any decision that I'll regret later, no matter what the reason for my change of heart.

Comment author: Raemon 08 March 2011 04:09:03PM 2 points [-]

I might have been misinterpreting Pavitra's original statement, and may have been unclear about my position.

People should be allowed to torture themselves without ability to change their mind, if they need to. (However, this is something that in real life would happen rarely for extreme reasons. I think that if people start doing that all the time, we should stop and question whether something is wrong with the system).

The key is that you must firmly understand that you, personally, will be getting tortured. I'm okay with making the decision to get tortured, and then fork yourself. I guess. (Although for small utility, I think it's a bad decision). What I'm not okay with is making the decision to fork yourself, and then have one of your copies get tortured while one of you doesn't. Whoever decides to BEGIN the torture must be aware that they, personally, will never receive any benefit from it.

Comment author: TheOtherDave 08 March 2011 04:41:18PM 1 point [-]

Um.

I think I agree with you, but I'm not sure, and I'm not sure if the problem is language or that I'm just really confused.

For the sake of clarity, let's consider a specific hypothetical: Sam is given a button which, if pressed, Sam believes will do two things. First, it will cause there to be two identical-at-the-moment-of-pressing copies of Sam. Second, it will cause one of the copies (call it Sam-X) to suffer a penalty P, and the other copy (call it Sam-Y) to receive a benefit B.

If I've understood you correctly, you would say that for Sam to press that button is an ethical choice, though it might not be a wise choice, depending on the value of (B-P).

Yes?

Comment author: Pavitra 08 March 2011 05:51:04AM 0 points [-]

We've been talking as though there was one "real" me and several xeroxes, but you seem to be acting as if that were the case on a moral level, which seems wrong. Surely, if I fork myself, each branch is just as genuinely me as any other? If I build and lock a cage, arrange to fork myself with one copy inside the cage and one outside, press the fork button, and find myself inside the cage, then I'm the one who locked myself in.

Comment author: Raemon 08 March 2011 05:57:35AM *  3 points [-]

Surely, if I fork myself, each branch is just as genuinely me as any other?

Fundamental disagreement here, which I don't expect to work through. Once you fork yourself, I would treat each copy as a unique individual. (It's irrelevant whether one of you is "real" or not. They're identical people, but they're still separate people).

If those people all actually make the same decisions, great. I am not okay with exposing hundreds of copies to years of torture based on a decision you made in the comfort of your computer room.

Comment author: Pavitra 08 March 2011 06:02:56AM 0 points [-]

I don't ask you to accept that the various post-fork copies are the same person as each other, only that each is (perhaps non-transitively) the same person as the single pre-fork copy.

Suppose I don't fork myself, but lock myself in a cage. Does the absence of an uncaged copy matter?

Comment author: endoself 07 March 2011 05:43:56AM *  6 points [-]

I push the button, because it causes net happiness (not that I am necessarily a classical utilitarian, but there are no other factors here that I would take into account). I would be interested to hear what Eliezer thinks of this dilemma.

The post you linked only applies to identical copies. If one copy is tortured while the other lives normally, they are no longer running the same computation, so this is a different argument. Where do you draw the line between other people and copies? Is it only based on differing origins? What about an imperfect copy? If the person who was created for 15 minutes was completely unlike any other person, wouldn't you create em then, according to your stated values? Wouldn't you press the button even if you thought that the person had no moral value, since you are not certain of your own values, and the possibility that the person's existence has positive moral value outweighs the possibility that it has negative moral value, or vice versa?

Comment author: Pavitra 08 March 2011 04:19:24AM 0 points [-]

Identicalness of copies doesn't matter much to me. The important thing is that I fork myself knowing that I might become the unhappy one (or, more properly, that I will definitely become both), so that I only harm myself. This reduces the problem from a moral dilemma to a question of mere strategy.

Comment author: endoself 08 March 2011 04:48:41AM *  0 points [-]

So wouldn't you press the button, since the person in the box is not a copy of you (unless you place no value on the happiness of others or something like that)?

You seem to be indifferent between being in pain for a few minutes and then dying, and being tortured for a few years and then dying ("the (terminal) utility of the existence of a particular computation is bounded below at zero"). This strikes me as odd.

Also, I take an approach to the idea of anticipating subjective experience that is basically what Eliezer describes as the third horn of the anthropic trilemma but with more UDT, so I regard many of the concepts you discuss as meaningless.

Comment author: Pavitra 08 March 2011 05:25:16AM 0 points [-]

When there's nothing real at stake, I might decide to press the button or take the few minutes of pain, in order to get the warm fuzzies. But if there was something that actually mattered on the line, this stuff would go right out the window.

I reject all five horns of the anthropic trilemma. My position is that the laws of probability mostly break down whenever weird anthropic stuff happens, and that the naive solution to the forgetful driver problem is correct. In the hotel with the presumptuous philosopher, I take the bet for an expected $10.

Comment author: endoself 08 March 2011 06:30:30AM *  1 point [-]

The third horn basically states that the laws of probability break down when weird anthropic things happen. How can you retain a thread of subjective experience if the laws of probability - the very laws that describe anticipation of subjective experience - break down?

Decision-theoretically I believe in UDT. I would take the bet because I do not attach any negative utility to the presumptuous philosopher smiling, but if I had anything to lose, even a penny, I would not take it because each of my copies in the big hotel, each of which has a 50% chance of existing, would stand to lose, a much greater total loss. It would make no sense to ask me what I would do in this situation if I were selfish and did not care about the other copies because the idea of selfishness, at least as it would apply here, depends on anticipated subjective experience.
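The aggregate-loss argument here is simple expected-value arithmetic. A sketch, with the copy count and the penny stake as illustrative assumptions (the thread never fixes either number):

```python
from fractions import Fraction

def expected_total_loss(n_copies, stake, p_exist=Fraction(1, 2)):
    """If each of n_copies stands to lose `stake`, and that population exists
    with probability p_exist, the expected aggregate loss scales linearly
    with n_copies -- which is why 'even a penny' is enough to refuse."""
    return p_exist * n_copies * stake

penny = Fraction(1, 100)  # stake in dollars

# A single copy risking a penny expects to lose half a cent...
assert expected_total_loss(1, penny) == Fraction(1, 200)

# ...but a trillion copies in the big hotel expect to lose five billion
# dollars between them, dwarfing any modest prize for the one small-hotel
# guest.
assert expected_total_loss(10**12, penny) == Fraction(10**12, 200)
assert float(expected_total_loss(10**12, penny)) == 5_000_000_000.0
```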

Comment author: Pavitra 08 March 2011 06:55:58AM 0 points [-]

I don't think they break quite as badly as the third horn asserts. If I fork myself into two people, I'm definitely going to be each of them, but I'm not going to be Britney Spears.

Most of your analysis of the hotel problem sounds like what I believe, but I don't see where you get 50%. Do you think you're equally likely to be in each hotel? And besides, if you're in the small hotel, the copies in the big hotel still exist, right?

Comment author: endoself 08 March 2011 07:24:19AM 0 points [-]

Sorry, I thought she flipped a coin to decide which hotel to build rather than making both. This changes nothing in my analysis.

I don't think they break quite as badly as the third horn asserts. If I fork myself into two people, I'm definitely going to be each of them, but I'm not going to be Britney Spears.

Can you back this up? Normal probabilities don't work but UDT does (for some reason I had written TDT in my previous post; that was an error and has been corrected). However, UDT makes no mention of subjective anticipated probabilities. In fact, the idea of a probability that one is in a specific universe breaks down entirely in UDT. It must, otherwise UDT agents would not pay counterfactual muggers. If you don't have the concept of a probability that one is in a specific universe, let alone a specific person in that specific universe, what could possibly remain on which to base a concept of personal identity?

Comment author: Pavitra 08 March 2011 07:31:33AM *  0 points [-]

In that case, I'm not sure where we disagree. Your explanation of UDT seems to accurately describe my position on the subject.

Edit: wait, no, that doesn't sound right. Hm.

Edit 2: no, I read right the first time. There might be something resembling being in specific universes, just as there might be something resembling probability, but most of the basic assumptions are out.

Comment author: endoself 08 March 2011 09:01:30AM 0 points [-]

I'm not quite sure that I understand your post, but, if I do, it seems to contradict what you said earlier. If the concepts of personal identity and anticipated subjective experience are mere approximation to the truth, how do you determine what is and isn't a copy? Your earlier statement, "The important thing is that I fork myself knowing that I might become the unhappy one (or, more properly, that I will definitely become both), so that I only harm myself.", seems to be entirely grounded in these ideas.

Comment author: Pavitra 08 March 2011 06:31:28PM *  0 points [-]

Continuity of personal identity is an extraordinarily useful concept, especially from an ethical perspective. If Sam forks Monday night in his sleep, then on Tuesday we have two people:

  • Sam-X, with personal timeline as follows: Sam_sunday, Sam_monday, Sam_tuesday_x

  • Sam-Y, with personal timeline as follows: Sam_sunday, Sam_monday, Sam_tuesday_y

I consider it self-evident that Sam_sunday should be allowed to arrange for Sam_monday to be tortured without the ability to make it stop, and by the same token Sam_monday should be allowed to do the same thing to Sam_tuesday_x.
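The fork picture described above can be modeled as shared-prefix timelines. An illustrative sketch (the representation is an assumption chosen for clarity, not anything from the thread):

```python
# A person-stage is a growing sequence of experience-states; a fork copies
# the history so far into each branch.
sam_pre_fork = ["Sam_sunday", "Sam_monday"]

# Monday night's fork: both branches inherit the whole timeline.
sam_x = sam_pre_fork + ["Sam_tuesday_x"]
sam_y = sam_pre_fork + ["Sam_tuesday_y"]

def continues(earlier, later):
    """later is a continuation of earlier iff earlier is a prefix of later."""
    return later[:len(earlier)] == earlier

# Each branch is genuinely a continuation of pre-fork Sam...
assert continues(sam_pre_fork, sam_x) and continues(sam_pre_fork, sam_y)

# ...but neither branch continues the other: after Tuesday there are two
# people, each sharing Sunday and Monday with the same pre-fork person.
# This is the (non-transitive) "same person as" relation from upthread.
assert not continues(sam_x, sam_y) and not continues(sam_y, sam_x)
```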

Comment author: Nisan 07 March 2011 07:43:59PM 3 points [-]

If asked, they would say that they're glad to have existed [...]

There is an interesting question here: What does it mean to say that I'm glad to have been born? Or rather, what does it mean to say that I prefer to have been born?

The alternative scenario in which I was never born is strictly counterfactual. I can only have a revealed preference for having been born if I use a timeless/updateless decision theory. In order to determine my preference you'd need to perform an experiment like the following:

  • Omega approaches me and offers me $100. It tells me that it had an opportunity to prevent my birth, and it would have prevented my birth if and only if it had predicted that I would accept the $100. It is a good predictor. Do I take the $100?

Without thinking about such an experiment, it's not clear what my preference is. More significantly, when 30% of American adolescents in 1930 wished they had never been born, it is not clear exactly what they meant.

Now if you know I'm an altruist, then the problem is simpler: I prefer to have been born insofar as I prefer any arbitrary person to have been born, and this preference can be detected with the thought experiment described in the OP.

... unless I'm a preference utilitarian, in which case I prefer an arbitrary person to have been born only if they prefer to have been born.

Comment author: Snowyowl 08 March 2011 12:18:08PM 2 points [-]

How about: Given the chance, would you rather die a natural death, or relive all your life experiences first?

Comment author: Normal_Anomaly 08 March 2011 12:34:48PM 1 point [-]

I like that formulation. One question: would I be able to remember having lived them while I was reliving them? Because then it would be more boring than the first time.

Comment author: Nisan 08 March 2011 07:01:57PM 0 points [-]

If the subject were not allowed to remember their first life while living the second, we would want to know how the subject feels about copies of themself.

Comment author: AlephNeil 07 March 2011 05:32:37PM *  3 points [-]

I don't think it's possible to give answers to all ethical dilemmas in such a way as to be consistent and reasonable across the board, but here my intuition is that if a mind only lasts 15 minutes, and it has no influence on the outside world and leaves no 'thought children' (e.g. doodles, poems, theorems) behind after its death, then whether it experiences contentment or agony has no moral value whatsoever. Its contentment, its agony, its creation and its destruction are all utterly insignificant and devoid of ethical weight.

To create a mind purely to torture it for 15 minutes is something only an evil person would want to do (just as only an evil person would watch videos of torture for fun) but as an act, it's a mere 'symptom' of the fact that all is not well in the universe.

(However, if you were to ask "what if the person lasted 30 minutes? A week? A year? etc." then at some point I'd have to change my answer, and it might be difficult to reconcile both answers. But again, I don't believe that the 'sheaf' of human moral intuitions has a 'global section'.)

the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities

Hmm. There might be a good insight lurking around there, but I'd want to argue that (a) such entities may include 'pieces of knowledge', 'trains of thought', 'works of art', 'great cities' etc rather than just 'people'. And (b), the 'utilities' (clearer to just say 'values') of these things might be partially rather than linearly ordered, so that the 'maximum' becomes a 'join', which may not be attained by any of them individually. (Is the best city better or worse than the best symphony, and are they better or worse than Wiles' proof of Fermat's Last Theorem, and are they better or worse than a giraffe?)

Comment author: DanielVarga 07 March 2011 08:24:45PM 1 point [-]

I agree fully with your first two paragraphs. I would not change my answer regardless of the amount of time the causally disconnected person lasts. Biting this bullet leads to some quite extreme conclusions, basically admitting that current human values can not be consistently transferred to a future with uploads, self-modification and such. (Meaning, Eliezer's whole research program is futile.) I am not happy about these conclusions, but they do not change my respect for human values, regardless of my opinion about their fundamental inconsistencies.

I believe even AlephNeil's position is quite extreme among LWers, and mine is definitely fringe. So if someone here agrees with either of us, I am very interested in that information.

Comment author: endoself 08 March 2011 04:44:49AM 1 point [-]

Biting this bullet leads to some quite extreme conclusions, basically admitting that current human values can not be consistently transferred to a future with uploads, self-modification and such. (Meaning, Eliezer's whole research program is futile.)

Couldn't an AI prevent us from ever achieving uploads or self-modification? Wouldn't this be a good thing for humanity if human values could not survive in a future with those things?

Comment author: DanielVarga 08 March 2011 10:56:59PM *  1 point [-]

Yes, this is a possible end point of my line of reasoning: we either have to become Luddites, or build an FAI that prevents us from uploading. These are both very repulsive conclusions for me. (Even if I don't consider the fact that I am not confident enough in my judgement to justify such extreme solutions.) I, personally, would rather accept that much of my values will not survive.

My value system works okay right now, at least when I don't have to solve trolley problems. In any given world with uploading and self-modification, my value system would necessarily fail. In such a world, my current self would not feel at home. My visit there would be a series of unbelievably nasty trolley problems, a big reductio ad absurdum of my values. Luckily, it is not me who has to feel at home there, but the inhabitants of that world. (*)

(*) Even the word "inhabitants" is misleading, because I don't think personal identity has much of a role in a world where it is possible to merge minds. Not to mention the word "feel", which, from the perspective of a substrate-independent self-modifying mind, refers to a particular suboptimal self-reflection mechanism. Which, to clear up a possible misunderstanding in advance, does not mean that this substrate-independent mind cannot possibly see positive feelings as a terminal value. But I am already quite off-topic here.

Comment author: endoself 08 March 2011 11:22:31PM 0 points [-]

I, personally, would rather accept that much of my values will not survive.

If there is something that you care about more than your values, they are not really your values.

I think we should just get on with FAI. If it realizes that uploads are okay according to our values it will allow uploads and if uploads are bad it will forbid them (maybe not entirely forbid; there could easily be something even worse). This is one of the questions that can completely be left until after we have FAI because whatever it does will, by definition, be in accordance with our values.

Comment author: Pavitra 09 March 2011 12:56:36AM 0 points [-]

I, personally, would rather accept that much of my values will not survive.

If there is something that you care about more than your values, they are not really your values.

You seem to conflate "I will care about X" with "X will occur". This breaks down in, for example, any case where precommitment is useful.

Comment author: DanielVarga 09 March 2011 12:42:48AM 0 points [-]

If there is something that you care about more than your values, they are not really your values.

You seem to rely on a hidden assumption here: that I am equally confident in all my values.

I don't think my values are consistent. Having more powerful deductive reasoning, and constant access to extreme corner cases, would obviously change my value system. I also anticipate that my values would not all be changed equally. Some of them would survive the encounter with extreme corner cases; some would not. Right now I don't have to constantly deal with perfect clones and merging minds, so I am fine with my values as they are. But even now, I have quite a good intuition about which of them would not survive the future shock. That's why I can talk without contradiction about accepting their loss.

In CEV jargon: my expectation is that the extrapolation of my value system might not be recognizable to me as my value system. Wei_Dai voiced some related concerns with CEV here. It is worth looking at the first link in his comment.

Comment author: endoself 09 March 2011 03:03:05AM 0 points [-]

Oh, I see. I appear to have initially missed the phrase 'much of my values'.

I am wary of referring to my current inconsistent values, rather than their reflective equilibrium, as 'my values' because of the principle of explosion, but I am unsure how to resolve this in a way that leaves my current self having values at all.

Comment author: DanielVarga 09 March 2011 11:16:57AM *  1 point [-]

It seems our positions can be summed up like this: You are wary of referring to your current values rather than their reflective equilibrium as 'your values', because your current values are inconsistent. I am wary of referring to the reflective equilibrium rather than my current values as 'my values', because I expect the transition to reflective equilibrium to be a very aggressive operation. (One could say that I embrace my ignorance.)

My concern is that the reflective equilibrium is far from my current position in the dynamical system of values. Meanwhile, Marcello and Wei Dai are concerned that the dynamical system is chaotic and has multiple reflective equilibria.

Comment author: endoself 09 March 2011 08:20:56PM *  0 points [-]

I don't worry about the aggressiveness of the transition because, if my current values are inconsistent, they can be made to say that this transition is both good and bad. I share the concern about multiple reflective equilibria. What does it mean to judge something as an irrational cishuman if two reflective equilibria would disagree on what is desirable?

Comment author: TheOtherDave 09 March 2011 04:24:20PM 0 points [-]

I expect the transition to reflective equilibrium to be a very aggressive operation.

Upvoted purely for the tasty, tasty understatement here.

I should get that put on a button.

Comment author: Pavitra 09 March 2011 03:22:21AM 0 points [-]

I like to think of my "true values" as (initially) unknown, and my moral intuitions as evidence of, and approximations to, those true values. I can then work on improving the error margins, confidence intervals, and so forth.

Comment author: endoself 09 March 2011 03:36:50AM 0 points [-]

So do I, but I worry that they are not uniquely defined by the evidence. I may eventually be moved to unique values by irrational arguments, but if those values are different from my current true values, then I will have lost something; and if I don't have any true values, then my search for values will have been pointless, though my future self will be okay with that.

Comment author: Pavitra 08 March 2011 04:34:12AM 0 points [-]

Your point about partial ordering is very powerfully appealing.

However, I feel that any increase in utility from mere accumulation tends strongly to be completely overridden by an increase in utility from improving the quality of the best thing you have, such as by synthesizing a symphony and a theorem together into some deeper, polymathic insight. There might be edge cases where a large increase in quantity outweighs a small increase in quality, but I haven't thought of any yet.

(Incidentally, I just noticed that I've been using terms incorrectly and I'm actually a consequentialist rather than a utilitarian. What should I be saying instead of "utility" to mean that-thing-I-want-to-maximize?)

Comment author: MartinB 07 March 2011 04:54:25PM 3 points [-]

A question that I have pondered since learning more about history: would you prefer to be shot without any forewarning, or through a process where you know the date well in advance?

Both methods were used extensively with prisoners of war and criminals.

Comment author: Pavitra 08 March 2011 04:27:08AM 1 point [-]

Forewarning could reduce the enjoyability and perhaps productiveness of the rest of my life due to feelings of dread, but on balance I think I'd rather have the chance to set my affairs in order and generally be able to plan.

Comment author: wedrifid 07 March 2011 02:32:37PM 3 points [-]

Do you push the button?

Yes. You included a lot of disclaimers and they seem to be sufficient.

According to my preferences there are already more humans around than desirable, at least until we have settled a few more galaxies. Which emphasizes just how important the no externalities clause was to my judgement. Even the externality of diluting the neg-entropy in the cosmic commons slightly further would make the creation a bad thing.

I don't share the same preference intuitions as you regarding self-clone-torture. I consider copies to be part of the output. If they are identical copies having identical experiences then they mean little more than having a backup available. If some are getting tortured then the overall output of the relevant computation really does suffer (in the 'get slightly worse' sense although I suppose it is literal too).

Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.

It's OK. I (lightheartedly) reckon my clone army could take out your clone army if it became necessary to defend myselves. I/we'd then have to figure out how to put 'ourselfs' back together again without merge conflicts once the mobilization was no longer required. That sounds like a tricky task, but it could be fun.

Comment author: Pavitra 08 March 2011 04:22:21AM 0 points [-]

I don't share the same preference intuitions as you regarding self-clone-torture. I consider copies to be part of the output.

I derive my intuitions from the analogy of a CPU-inefficient interpreted language. I don't care about the 99% wasted cycles, except secondarily as a moderate inconvenience. I care about whether the job gets done.

Comment author: nazgulnarsil 07 March 2011 06:02:40PM *  4 points [-]

Holy crap, I should hope the CEV answer is yes. This is what happy humans look like to powerful, long-lived entities.

Comment author: benelliott 07 March 2011 06:19:54PM 3 points [-]

Whether you are lifeist or anti-deathist, the answer is that those entities shouldn't kill us. The only question is whether they should create more of us.

Comment author: nazgulnarsil 07 March 2011 06:32:52PM *  5 points [-]

Or allow us to create more of ourselves.

Comment author: Pavitra 08 March 2011 04:35:06AM 1 point [-]

Those powerful entities presumably have the option of opening the box.

Comment author: orthonormal 08 March 2011 03:09:03PM 2 points [-]

I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.

I believe Eliezer would, by extrapolation from the hypothetical at the bottom of this post.

Comment author: Nornagest 07 March 2011 11:48:06PM *  2 points [-]

Funny. My instincts are telling me that there's a Utility Monster behind that bush.

I'm not satisfied with the lifeist or the anti-deathist reasoning here as you present them, since both measure (i.e. life-count) and negadeaths as dominant terms in a utility equation lead pretty quickly to some pretty perverse conclusions. Nor do I give much credence to the boxed subject's own opinion; preference utilitarianism works well as a way of gauging consequences against each other, but it's a lousy measure of scalar utility.

Presuming that the box's inhabitant would lead a highly fun-theoretically positive fifteen minutes of life by any standards we choose to adopt, though, pressing the button seems to be neutral or positive (neutral with respect to my own causal universe, positive relative to the short-lived branch Omega's creating) -- with the proviso that Omega may be acting unethically by garbage-collecting the boxed subject when it has the power not to.

Comment author: Pavitra 08 March 2011 04:38:21AM 1 point [-]

I would feed the utility monster.

Comment author: Armok_GoB 07 March 2011 03:07:10PM 1 point [-]

My intuitions give a rather interesting answer to this: it depends strongly on the details of the mind in question. For the vast majority of possible minds I would push the button, but for the human dot and a fair-sized chunk of mind design space around it, I would not. It also seems to depend on seemingly unrelated things; for example, I'd push it for a human if and only if it was similar enough to a human existing elsewhere whose existence was not affected by the copying AND who would approve of pushing the button.

Comment author: Nornagest 07 March 2011 10:58:19PM 0 points [-]

For the vast majority of possible minds I would push the button, but for the human dot and a fair-sized chunk of mind design space around it, I would not.

How come? This is an immensely suggestive statement, but I'm not sure where you're going with it.

Comment author: Armok_GoB 08 March 2011 05:44:42PM 0 points [-]

As I said, intuition. I can make guesses about the causes of those intuitions, and probably have a better chance at getting the right answer than an outside observer, due to having the black box in question inside my head for performing experiments on, but I don't have any direct introspective access. If you're asking for arguments that someone else should act this way as well, that's a very different question.

Comment author: [deleted] 09 August 2013 07:05:11PM 0 points [-]

Being an information-theoretical person-physicalist, I hold that there are no copies; there are only new originals.

Making N copies is only meaningless, utility-wise, if the copies never diverge. The moment they do, you have a problem.

Comment author: MinibearRex 07 March 2011 09:24:54PM 0 points [-]

If they would genuinely be happy to have lived, then creating them wouldn't necessarily be "immoral". However, I still have a moral instinct against killing a sentient being (suspect, I know, but that doesn't change the fact that it's there). Watching a person get put into a garbage compactor would make me feel bad, even if they didn't mind.

In other words, even if someone doesn't care, or even wants to die, I still would have a hard time killing them.