Manfred comments on Superintelligence 19: Post-transition formation of a singleton - Less Wrong

Post author: KatjaGrace 20 January 2015 02:00AM


Comment author: KatjaGrace 20 January 2015 02:15:50AM 5 points

If you had copies, how altruistic do you think you would be toward them?

Comment author: Manfred 20 January 2015 05:55:39AM 7 points

Funny timing! Or, good Baader-Meinhof-ing :P

Although selfishness with respect to copies is a perfectly coherent preference structure, rational agents (with a world-model like ours, and no preferences explicitly favoring conflict between copies) will want to precommit or self-modify so that their causal descendants cooperate non-selfishly.

In fact, if there is a period during which the copies lack distinguishing indexical information that would greatly decorrelate their decision algorithms, the copies will even do the precommitting themselves.

Therefore, upon waking up and learning that I am a copy, but before learning much more, I will attempt to sign a contract with a bystander stating that if I do not act altruistically towards my other copies who have signed similar contracts, I have to pay them my life savings.
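The contract trick above amounts to a payoff modification: a large enough forfeit makes defecting against a fellow signatory a dominated strategy. A minimal sketch (all payoff numbers hypothetical, using a standard one-shot prisoner's dilemma between two copies):

```python
# Toy model: one-shot prisoner's dilemma between two copies.
# Entries give my payoff for each (my action, copy's action) pair.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(other_action, penalty=0):
    """My best action given the copy's action, where defecting
    additionally costs `penalty` (the contract's forfeit)."""
    def utility(me):
        u = payoffs[(me, other_action)]
        if me == "defect":
            u -= penalty
        return u
    return max(("cooperate", "defect"), key=utility)

# Without the contract, defection dominates:
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# With a forfeit exceeding the temptation gain (here 5 - 3 = 2),
# cooperation dominates instead:
assert best_response("cooperate", penalty=10) == "cooperate"
assert best_response("defect", penalty=10) == "cooperate"
```

The design point is that the contract need not change anyone's values; it only has to make the penalty larger than the maximum gain from defection.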

Comment author: torekp 25 January 2015 12:22:25AM 3 points

Moreover, once copying becomes the primary means of reproduction, caring for one's copies becomes the ultimate form of kin selection. That puts strong evolutionary pressure toward copy-cooperation. Imagine how siblings would care for each other if identical twins (triplets, N-tuples) were the norm.

Comment author: RobinHanson 20 January 2015 04:06:57PM 4 points

If signing a contract were all we needed to coordinate well, we would already be coordinating as much as is useful. We already have good, strong reasons to want to coordinate for mutual benefit.

Comment author: Manfred 20 January 2015 04:58:58PM 4 points

This is a good point - one could already sign a future-altruism contract with someone, and it would already be an expected gain if it worked. But we only see approximations to this, like insurance or marriage. So unless my copies are sufficiently more trustworthy and considerate of me to make this work, maybe something at the more efficacious self-modification end of the spectrum is actually necessary.

Comment author: Luke_A_Somers 20 January 2015 04:27:18PM 4 points

We don't have the identity and value overlap with other people that we'd have with copies; the contract would just be formalizing that overlap, and I think it's a silly way of formalizing it. I respect my copies' right to drift away from me, and in that case I would stop cooperating so absolutely. I certainly don't want to lose all of my assets when that happens!