AlexMennen comments on Superintelligence 19: Post-transition formation of a singleton - Less Wrong

Post author: KatjaGrace 20 January 2015 02:00AM



Comment author: AlexMennen 20 January 2015 06:39:17PM

Philosophically, I would want to value each of my copies equally, and I suspect that initially my copies would be fairly altruistic towards each other. Using some mechanism to keep it that way, as Manfred suggests, seems appealing to me, but it isn't clear how feasible that would be. Absent such a mechanism, I would expect to gradually become less altruistic towards my copies for psychological reasons: if I benefited another copy at my own expense, I would remember the expense but not the benefit, so even though I would endorse the trade as good for me in aggregate (when the benefit outweighed the expense), reinforcement would train me not to make it. I expect I would remain able to cooperate with copies pretty well for quite a long time, in the sense of coordinating for mutual benefit, since I would trust myself and contract-enforcement costs would therefore be low. But I might fairly quickly stop being any more altruistic towards copies, in the sense of willingness to help them at my own expense without expectation of return, than I am towards close friends.