Pentashagon comments on Expected utility and utility after time - Less Wrong

Post author: Metus 29 August 2012 01:10PM


Comment author: Pentashagon 29 August 2012 08:42:33PM 5 points

Isn't the second option the same as Omega offering to clone you, put the clone in hell for a finite amount of time and then destroy it, and give you the money immediately (assuming the money is adjusted to compensate for any lost time in hell in the original example)? So the option is actually to be paid a lot of money in exchange for allowing Omega to torture a person (nominally "you") who will never experience any further positive utility. I would take two slaps in the face even without compensation instead of that option. I don't consider my similarity to a person as a reason to treat them as a redundant copy.

Comment author: Dolores1984 30 August 2012 12:59:20AM 0 points

Much clearer way to think about it.

Comment author: evand 30 August 2012 02:17:07AM 0 points

That option runs into the problem that you've just let Omega extort money by threatening to create a person, torture it, and then destroy it. That seems problematic in other ways.

Comment author: Pentashagon 30 August 2012 06:43:16AM -1 points

Everything Omega does is horribly problematic because Omega is at best a UFAI. I've never seen "preemptively neutralize Omega completely" offered as an option in any of these hypothetical scenarios, even though that's obviously the very best choice.

Is it really in anyone's best interest to ever cooperate with Omega, given that Omega seems intent on continuing a belligerent campaign of threats against humanity? "I'll give you a choice between $1,000,000 and $1,000 today, but tomorrow I might threaten to slap you or throw you in hell. Oh, btw, I just simulated you against your will 2^100 times to maintain my perfect record on one/two-boxing."

I may be overly tired and that may sound like hyperbole, but I do think that any rational agent encountering a far more powerful agent known to be at least not-friendly should think long and hard about whether the powerful agent can be trusted in even seemingly innocuous situations. There may be no way to Win. Some form of defection or defiance of the powerful agent may yield more dignity utilons than playing along with any of the choices Omega offers. Survival machines may not value dignity and self-determination, but many humans value them quite highly.

Comment author: Pentashagon 31 August 2012 06:02:03PM 0 points

I'm using this comment to test the -5 karma rule. Just ignore it.

Comment author: shminux 30 August 2012 09:18:31PM -1 points

I'd totally go for the memory loss/clone destruction option. To me it's the final outcome that matters most, so if you start with one poor me and end with one rich me without the memory of anything unpleasant, it's clearly a better option than ending up with one still-pretty-poor me with smarting cheeks. This is, of course, my subjective utility; I have no claim that it is better than anyone else's for them.

Comment author: Vladimir_Nesov 30 August 2012 09:43:01PM 1 point

To me it's the final outcome that matters most ... it's clearly a better option than ending up with one still-pretty-poor me ... This is, of course, my subjective utility; I have no claim that it is better than anyone else's for them.

How could one know with any certainty what's better for them (in the murkier cases)? Alternatively, if you do have a process that lets you learn what's better for you, you should claim that you can also help others apply that process in order to figure out what's better for them (which may be a different answer than the one the process gives for you).

You can of course decide what to do, but having the ability to implement your own decisions is separate from having the ability to find decisions that are reliably correct, from knowing that the decisions you make are clearly right, or from pursuing what in fact matters the most.

Comment author: Pentashagon 30 August 2012 10:19:29PM 0 points

Does that apply only to copies of you or to people in general? Would you choose to torture all of humanity for a finite time, make them forget it, and then you receive 1 utilon?

Comment author: shminux 30 August 2012 10:40:20PM -1 points

Does that apply only to copies of you or to people in general?

As I explained, I do not presume to make decisions for others.

Would you choose to torture all of humanity for a finite time, make them forget it, and then you receive 1 utilon?

I would not; see above. A better question would have been "Would you choose to slightly inconvenience a person you dislike for a short time, make them forget it, and then you receive 3^^^3 utilons?" If I answered "yes" (and I probably would), then you could probe further to see where exactly my self-professed non-interference breaks down. This is the standard way of probing the dust-specks-vs-torture boundary and showing the resulting inconsistency.

Similar strategies apply to clarifying other seemingly absolute positions, including yours ("I don't consider my similarity to a person as a reason to treat them as a redundant copy."). Presumably at some point the answers become "I don't know" rather than Yes/No.

Comment author: Pentashagon 31 August 2012 12:04:07AM 0 points

I am fairly certain the only way that I would treat a clone of myself differently than another independent person is if we continued to share internal mental experiences. Then again, I would probably stop thinking of myself and a random person off the street as different people if I started sharing mental experiences with them, too.

In other words, while I would reject sending my fully independent clone to hell in order to gain utility, I might agree to fully share the mental experience with the clone in hell so long as the clone also got to experience the extra utility Omega paid me to balance out hell. That brings up a rather interesting question: if two people share mental experiences, do they achieve double the utility of each person individually, or merely the set union of their individual utilities? Or something else?

Comment author: shminux 31 August 2012 12:13:26AM 0 points

while I would reject sending my fully independent clone to hell in order to gain utility, I might agree to fully share the mental experience with the clone in hell so long as the clone also got to experience the extra utility Omega paid me to balance out hell.

This seems to contradict your earlier assertion that

the second option the same as Omega offering to clone you, put the clone in hell for a finite amount of time and then destroy it, and give you the money immediately

because if you and the clone are one and the same (no cloning happened, you were tortured and then memory-wiped), "both" of you reap the rewards.

Comment author: Pentashagon 31 August 2012 05:02:07PM 0 points

because if you and the clone are one and the same (no cloning happened, you were tortured and then memory-wiped), "both" of you reap the rewards.

We are not the same person after the point of the decision; there's no continuity of experience. The tortured me experiences none of the utility, and the enriched me experiences none of the torture. That is why I thought of the cloning interpretation to begin with.