
ike comments on Selfish preferences and self-modification - Less Wrong Discussion

4 Post author: Manfred 14 January 2015 08:42AM


Comment author: ike 14 January 2015 03:48:32PM *  0 points [-]

How much do your other selves need to diverge from you for you to stop caring about them?

Obviously you still view the other one as "you" even though your brain contains a pattern that says "I am B" and the other has a pattern that says "I am A".

Can you rigorously define at what point you no longer consider the "other" one as part of you?

What if your alter ego has a long conversation with a philosopher and comes out no longer selfish, now wanting to help the world, leaving you with a severe distaste for them and making you unwilling to help them any more? (You would rather use your resources yourself than let them use them, even if they could produce more with them.)

Comment author: shminux 14 January 2015 10:32:23PM 0 points [-]

Can you rigorously define at what point you no longer consider the "other" one as part of you?

Presumably this is like trying to solve the Sorites paradox. The best you can do is to find a mutually acceptable Schelling point, e.g. 100 grains of sand make a heap, or disagreeing on 10% or more of all decisions means you are different enough.
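The "10% of all decisions" rule can be made concrete. A minimal sketch, assuming a finite, paired log of decisions and the 0.10 cutoff from the comment (the decision lists themselves are invented for illustration):

```python
def divergence(decisions_a, decisions_b):
    """Fraction of paired decisions on which two copies disagree."""
    assert len(decisions_a) == len(decisions_b)
    disagreements = sum(a != b for a, b in zip(decisions_a, decisions_b))
    return disagreements / len(decisions_a)

def same_person(decisions_a, decisions_b, threshold=0.10):
    """Schelling-point rule: copies count as 'the same' below the threshold."""
    return divergence(decisions_a, decisions_b) < threshold

# Hypothetical decision logs for two copies:
a = ["left", "tea", "walk", "save", "read"]
b = ["left", "tea", "walk", "spend", "read"]
print(divergence(a, b))   # 0.2
print(same_person(a, b))  # False: one disagreement in five crosses 10%
```

Of course, this just relocates the vagueness into what counts as one "decision", which is exactly the objection raised below.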

Comment author: torekp 14 January 2015 11:09:39PM 2 points [-]

A gradual falling-off of concern with distance seems more graceful than suddenly going from all to nothing. It's not like the legal driving age, where there's strong practical reason for a small number of sharp cut-offs.
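The gradual falloff could be pictured as a smooth discount on concern rather than a cutoff. A purely illustrative sketch (the exponential form and the decay constant are my assumptions, not torekp's):

```python
import math

def concern(divergence, k=5.0):
    """Concern for a copy decays smoothly with divergence, rather than
    dropping from all to nothing at a cutoff; k sets how fast it falls."""
    return math.exp(-k * divergence)

for d in (0.0, 0.1, 0.5, 1.0):
    print(f"divergence={d:.1f} -> concern={concern(d):.3f}")
```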

Comment author: ike 15 January 2015 04:06:50PM 0 points [-]

10% or more of all decisions

Then we have the problem of deciding what counts as a decision. Even very minor changes will invalidate a broad definition like "body movements", since most body movements will differ after the two diverge.

My preferred divergence point is the moment the cloning happens. I'm open to accepting that as long as they are identical, they can cooperate, but that can be justified by pure TDT without invoking "caring for the other". But any diverging stops this; that's my Schelling point.

Comment author: Leonhart 15 January 2015 08:51:48PM 1 point [-]

Do you really think your own nature is that fragile?

(Please don't read that line in a judgemental tone. I'm simply curious.)

I would automatically cooperate with a me-fork for quite a while if the only "divergence" that took place was on the order of raising a different hand, or seeing the same room from a different angle. It doesn't seem like value divergence would come of that.

I'd probably start getting suspicious in the event that "he" read an emotionally compelling novel or work of moral philosophy I hadn't read.

Comment author: ike 15 January 2015 08:57:49PM 0 points [-]

If we raised different hands, I do think it would quickly cause us to diverge completely in terms of which body movements match. That doesn't mean we would be very different, or that I'm fragile. I'm pretty much the same as I was a week ago, but my movements now are different. I was just pointing out that "decisions" isn't much better defined than the thing it was meant to define (divergence).

I would automatically cooperate

In a True Prisoner's Dilemma, or even in situations like the OP? The divergence there is that one person knows they are "A" and the other "B", in ways relevant to their actions.

Comment author: Leonhart 15 January 2015 09:44:08PM *  1 point [-]

Ah, I see. We may not disagree, then. My angle was simply that "continuing to agree on all decisions" might be quite robust against environmental noise, assuming the decision is one I feel is impacted by my values (i.e. not chocolate versus vanilla, which I might settle with a coin flip anyway!)

In the OP's scenario, yes, I cooperate without bothering to reflect. It's clearly, obviously, the thing to do, says my brain.

I don't understand the relevance of the TPD. How can I possibly be in a True Prisoner's Dilemma against myself, when I can't even be in a TPD against a randomly chosen human?

Comment author: ike 15 January 2015 09:53:04PM 0 points [-]

The OP is assuming selfishness, which makes this a True Prisoner's Dilemma; any PD is a TPD for a selfish agent. Is cooperating still the obvious thing to do if you're selfish?

Comment author: Leonhart 15 January 2015 10:04:59PM 0 points [-]

Yes, for a copy close enough that he will do everything that I will do and nothing that I won't. In simple resource-gain scenarios like the OP's, I'm selfish relative to my value system, not relative to my locus of consciousness.

Comment author: ike 16 January 2015 02:05:43PM 0 points [-]

So we have different models of selfishness, then. My model doesn't care about anything but "me", which doesn't include clones.

Comment author: Manfred 16 January 2015 07:18:41PM *  0 points [-]

any diverging stops this

The trouble is, of course, that if you both predictably (say, with 98% probability) switch to defecting after one sees 'A' and the other sees 'B', you could just as easily (following some flavor of TDT) predictably cooperate.
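The arithmetic behind this is easy to check: if the copy mirrors your policy with 98% probability, predictable cooperation beats predictable defection. A sketch using standard PD payoffs (the 3/1/5/0 numbers are mine, not from the post):

```python
# Standard one-shot PD payoffs for the row player; values are illustrative.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_payoff(my_policy, correlation=0.98):
    """Expected payoff when the copy mirrors my policy with probability
    `correlation` (the 98% predictability mentioned above)."""
    other = {"C": "D", "D": "C"}[my_policy]
    return (correlation * PAYOFF[(my_policy, my_policy)]
            + (1 - correlation) * PAYOFF[(my_policy, other)])

print(expected_payoff("C"))  # 0.98*3 + 0.02*0 = 2.94
print(expected_payoff("D"))  # 0.98*1 + 0.02*5 = 1.08
```

So whichever policy is predictable, the predictably-cooperating pair does better, which is why "both predictably defect" is not forced.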

This issue is basically the oversimplification within TDT where it treats algorithms as atomic causes of actions, rather than as lossy abstractions over complex physical states. This is a very difficult AI problem that I'm pretending is solved for the purposes of my posts.

Comment author: shminux 15 January 2015 05:21:04PM 0 points [-]

I agree, "as soon as the cloning happens" is an obvious Schelling point with regards to caring. However, if you base your decision to cooperate or defect on how similar the other clone is to you in following the same decision theory, then this leads to "not at all similar", resulting in defection as the dominant strategy. If instead you trust the other clone to apply TDT the way you do, then you behave in a way that is equivalent to caring even after you profess that you do not.

Comment author: ike 15 January 2015 07:44:34PM *  1 point [-]

I don't think so. When I say I would cooperate, I mean standard Prisoner's Dilemma stuff. I don't have to care about them to do that.

The things I wouldn't care about are the kinds of situations mentioned in the OP. In a one-sided Dilemma, where the other person has no choice, TDT does not say you should cooperate. If you cared about them, you should cooperate as long as you lose less than they gain. In that case I would not cooperate, even though I might self-modify into cooperating now if given the choice.
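The caring criterion in the one-sided case reduces to a simple inequality: cooperate iff the weight you place on the clone times their gain exceeds your own loss. A hedged sketch (the weights and numbers are invented for illustration):

```python
def should_cooperate(my_loss, their_gain, care_weight):
    """In a one-sided dilemma, a caring agent cooperates when the
    weighted gain to the other outweighs the agent's own loss."""
    return care_weight * their_gain > my_loss

# A fully caring agent (weight 1.0) cooperates whenever the clone gains
# more than it costs; a purely selfish agent (weight 0.0) never does.
print(should_cooperate(my_loss=1, their_gain=2, care_weight=1.0))  # True
print(should_cooperate(my_loss=1, their_gain=2, care_weight=0.0))  # False
```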

Comment author: shminux 15 January 2015 10:01:49PM 0 points [-]

I see. I understand what you mean now.

Comment author: Manfred 14 January 2015 08:02:54PM 0 points [-]

If one cares about their copies because their past self self-modified to a stable point, then what matters are the preferences of this causal ancestor. If I don't want my preferences to be satisfied if I am given a pill that makes me evil, then I will self-modify so that if one of my future copies takes the evil pill, my other future copies will not help them.

In other words, there is absolutely not one true definition here.

However, at a minimum, agents will self-modify so that copies of them with the same values and world-model, but who locate themselves at different places within that model, will sacrifice for each other.

Comment author: ike 14 January 2015 08:21:53PM *  0 points [-]

You are just giving yourself a large incentive to lie to your alter ego if you suspect that you are diverging. That doesn't sound good.

On the original post: I don't think it's practical to commit to something like that right now as a human. I have the same problem with TDT: I can agree that self-modifying is best, but still not act as I would wish I had precommitted. But since we're talking about cloning here anyway, we can assume that self-modification is possible, in which case the question arises whether this modification has positive expected utility. I think it does, but you seem to be saying that you wouldn't need to modify, as each side would stay selfish but still do what they would have preferred in the past. Why would you continue doing something you committed to if it no longer has positive utility?

Would you pay the driver in Parfit's hitchhiker as a selfish agent? If not, why cooperate with your alter ego after you find out that you are B? (Yes, I'm comparing this to Parfit's hitchhiker, with your commitment to press the button if you are B analogous to a commitment to give money later. It's a little different in that it's symmetrical, but the question of whether you should pay up seems isomorphic. Assuming the driver isn't reading your mind, in which case TDT enters the picture.)
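The Parfit's Hitchhiker comparison can be put in payoff terms. A sketch with invented numbers (being rescued is worth 100, payment costs 10, dying in the desert is 0; none of this is from the thread):

```python
# Illustrative payoffs; the specific values are assumptions.
RESCUED, PAYMENT, DIE = 100, 10, 0

def outcome(would_pay, driver_reads_minds):
    """If the driver can predict the decision, only would-be payers get
    the ride; if not, the ride happens either way and paying afterwards
    is a pure loss -- the analogy to defecting after learning you are B."""
    rescued = would_pay or not driver_reads_minds
    if not rescued:
        return DIE
    return RESCUED - (PAYMENT if would_pay else 0)

print(outcome(True, True))    # 90: committed payer, predicted, rescued
print(outcome(False, True))   # 0: non-payer, predicted, left behind
print(outcome(False, False))  # 100: non-payer, unpredicted -- the selfish best
```

With an accurate predictor, committing to pay dominates; without one, the selfish agent who reneges does best, which is the tension the comment is pointing at.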