shminux comments on Selfish preferences and self-modification - Less Wrong Discussion

4 Post author: Manfred 14 January 2015 08:42AM


Comment author: shminux 14 January 2015 10:32:23PM 0 points

Can you rigorously define at what point you no longer consider the "other" one as part of you?

Presumably this is like trying to solve the Sorites paradox. The best you can do is to find a mutually acceptable Schelling point, e.g. 100 grains of sand make a heap, or disagreeing on 10% or more of all decisions means you are different enough.
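The "10% of all decisions" rule is at least easy to state mechanically, whatever one ends up counting as a decision. A toy sketch, assuming decisions can be paired up as comparable records (the pairing itself is exactly the hard part debated below):

```python
def still_me(my_decisions, their_decisions, threshold=0.10):
    """Treat the copy as 'me' while we disagree on under 10% of decisions.

    Assumes the two decision logs are paired, comparable records; what
    counts as a single 'decision' is left unresolved, as in the thread.
    """
    disagreements = sum(a != b for a, b in zip(my_decisions, their_decisions))
    return disagreements / len(my_decisions) < threshold

assert still_me(list("CCCCCCCCCC"), list("CCCCCCCCCC"))       # identical logs
assert not still_me(list("CCCCCCCCCC"), list("CDCCCCCCCD"))   # 20% differ
```

The threshold is an arbitrary Schelling point, as the comment says; nothing in the rule privileges 10% over 5% or 50%.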

Comment author: torekp 14 January 2015 11:09:39PM 2 points

A gradual falling-off of concern with distance seems more graceful than suddenly going from all to nothing. It's not like the legal driving age, where there's strong practical reason for a small number of sharp cut-offs.

Comment author: ike 15 January 2015 04:06:50PM 0 points

10% or more of all decisions

Then we have the problem of deciding what counts as a decision. Even very minor changes will invalidate a broad definition like "body movements", since most body movements will differ after the two diverge.

My preferred diverging point is as soon as the cloning happens. I'm open to accepting that, as long as they are identical, they can cooperate, but that can be justified by pure TDT without invoking "caring for the other". But any diverging stops this; that's my Schelling point.
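The claim that still-identical copies can cooperate on purely selfish TDT grounds can be made concrete: if both copies run the same deterministic procedure on the same inputs, only the diagonal outcomes of the payoff matrix are reachable, so a selfish agent simply picks the better diagonal. A minimal sketch with illustrative payoff numbers:

```python
# Standard Prisoner's Dilemma payoffs for the row player (illustrative values).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def identical_copy_choice():
    """Choice of a selfish agent whose copy runs this exact function.

    Both copies execute the same code on the same inputs, so only the
    matching outcomes (C, C) and (D, D) are reachable; the agent just
    picks the better diagonal.  No caring about the copy is invoked.
    """
    reachable = {a: PAYOFF[(a, a)] for a in ("C", "D")}
    return max(reachable, key=reachable.get)

assert identical_copy_choice() == "C"  # 3 > 1 on the diagonal
```

The moment the copies' inputs differ, the off-diagonal cells become reachable and this argument no longer goes through, which is the Schelling point described above.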

Comment author: Leonhart 15 January 2015 08:51:48PM 1 point

Do you really think your own nature is that fragile?

(Please don't read that line in a judgemental tone. I'm simply curious.)

I would automatically cooperate with a me-fork for quite a while if the only "divergence" that took place was on the order of raising a different hand, or seeing the same room from a different angle. It doesn't seem like value divergence would come of that.

I'd probably start getting suspicious in the event that "he" read an emotionally compelling novel or work of moral philosophy I hadn't read.

Comment author: ike 15 January 2015 08:57:49PM 0 points

If we raised different hands, I do think it would quickly cause us to diverge completely in terms of how many body movements match. That doesn't mean we would be very different, or that I'm fragile. I'm pretty much the same as I was a week ago, but my movements now are different. I was just pointing out that "decisions" isn't much better defined than the thing it was meant to define (divergence).

I would automatically cooperate

In a True Prisoner's Dilemma, or even in situations like the OP? The divergence there is that one person knows they are "A" and the other "B", in ways relevant to their actions.

Comment author: Leonhart 15 January 2015 09:44:08PM * 1 point

Ah, I see. We may not disagree, then. My angle was simply that "continuing to agree on all decisions" might be quite robust versus environmental noise, assuming the decision is felt to be impacted by my values (i.e. not chocolate versus vanilla, which I might settle with a coinflip anyway!)

In the OP's scenario, yes, I cooperate without bothering to reflect. It's clearly, obviously, the thing to do, says my brain.

I don't understand the relevance of the TPD. How can I possibly be in a True Prisoner's Dilemma against myself, when I can't even be in a TPD against a randomly chosen human?

Comment author: ike 15 January 2015 09:53:04PM 0 points

The OP is assuming selfishness, which makes this a True Prisoner's Dilemma: any PD is a TPD for a selfish person. Is it still the obvious thing to do if you're selfish?

Comment author: Leonhart 15 January 2015 10:04:59PM 0 points

Yes, for a copy close enough that he will do everything that I will do and nothing that I won't. In simple resource-gain scenarios like the OP's, I'm selfish relative to my value system, not relative to my locus of consciousness.

Comment author: ike 16 January 2015 02:05:43PM 0 points

So we have different models of selfishness, then. My model doesn't care about anything but "me", which doesn't include clones.

Comment author: Manfred 16 January 2015 07:18:41PM * 0 points

any diverging stops this

The trouble is, of course, that if you both predictably (say, with 98% probability) switch to defecting after one sees 'A' and the other sees 'B', you could just as easily (following some flavor of TDT) predictably cooperate.

This issue is basically the oversimplification within TDT where it treats algorithms as atomic causes of actions, rather than as a lossy abstraction from complex physical states. This is a very difficult AI problem that I'm pretending is solved for the purposes of my posts.
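The multiplicity of self-consistent predictions is easy to exhibit. Consider two hypothetical policies: one that ignores the A/B label and one that treats receiving any label as divergence. Both are symmetric, so either satisfies the prediction "my copy does whatever I do":

```python
def cooperate_regardless(label):
    # Treats the A/B label as irrelevant noise; both copies play C.
    return "C"

def defect_on_label(label):
    # Treats receiving any label at all as divergence; both copies play D.
    return "D"

# Either policy, run by both copies, produces matching actions, so the
# TDT-style prediction "he does whatever I do" holds in both cases --
# symmetry alone does not pin down which equilibrium you land in.
for policy in (cooperate_regardless, defect_on_label):
    assert policy("A") == policy("B")
```

Which of the two fixed points the copies actually occupy depends on the underlying physical algorithm, not on the symmetry argument, which is the abstraction problem described above.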

Comment author: shminux 15 January 2015 05:21:04PM 0 points

I agree that "as soon as the cloning happens" is an obvious Schelling point with regard to caring. However, if you base your decision to cooperate or defect on how similar the other clone is to you in following the same decision theory, then this quickly leads to "not at all similar", making defection the dominant strategy. If instead you trust the other clone to apply TDT the way you do, then you behave in a way that is equivalent to caring, even after you profess that you don't.

Comment author: ike 15 January 2015 07:44:34PM * 1 point

I don't think so. When I say I would cooperate, I mean standard Prisoner's Dilemma stuff. I don't have to care about them to do that.

The things I wouldn't care about are the kinds of situations mentioned in the OP. In a one-sided Dilemma, where the other person has no choice, TDT does not say you should cooperate. If you cared about them, then you should cooperate as long as you would lose less than they gain. In that case I would not cooperate, even though I might self-modify now into someone who cooperates, if given the choice.
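The "lose less than they gain" condition can be written down directly: an agent who weights the clone's payoff by some factor between 0 and 1 cooperates in the one-sided case exactly when the weighted gain exceeds the cost. A toy calculation (the numbers and the `care_weight` parameter are illustrative assumptions, not anything from the OP):

```python
def should_cooperate(my_loss, their_gain, care_weight):
    """Cooperate in a one-sided dilemma iff weighted total utility rises.

    care_weight = 0.0 models the purely selfish agent (never pay a cost);
    care_weight = 1.0 models full identification with the clone.
    """
    return care_weight * their_gain > my_loss

assert not should_cooperate(my_loss=1, their_gain=10, care_weight=0.0)  # selfish: refuse
assert should_cooperate(my_loss=1, their_gain=10, care_weight=0.5)      # caring: accept
```

Self-modifying to cooperate, as mentioned above, amounts to raising your future self's effective `care_weight` even though your current one is zero.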

Comment author: shminux 15 January 2015 10:01:49PM 0 points

I see. I understand what you mean now.