magfrump comments on On the Anthropic Trilemma - Less Wrong

33 points | Post author: KatjaGrace 19 May 2011 07:30PM




Comment author: FAWS 19 May 2011 11:40:59PM * 3 points

> I agree, but as you allow, your (future) specific identity amongst identical copies matters very much when symmetry is broken, e.g. one copy is to be tortured and the rest pleasured.

I'm not sure I understand you. Obviously it matters to your future self A whether A is tortured or pleasured. And also to your current self whether there is a future self A that will be tortured. Do you think that, given that your future self A is tortured and your future self B pleasured, there is an additional fact as to whether you will be tortured or pleasured? I don't. And I don't see the relevance of the rest of your post to my point either.

Comment author: magfrump 20 May 2011 01:22:21AM 1 point

If I see myselves at different points of time as being in collusion as to how to make all of us better off, which has been a viewpoint I've seen taken recently, then there is some agreement between a set of sufficiently-similar agents.

I could view the terms of that agreement as "me" and then the question becomes "what do the terms of the agreement that different sufficiently-similar instances of me serve under say about this situation."

In which case "I" want to come up with a way of deciding, for example, how much pleasure I require per unit of torture, etc. But certainly the question "Am I being tortured or pleasured" doesn't exactly carry over.

I thought I disagreed with you, but then I showed my work and it turns out I agree with you.

Comment author: Will_Newsome 20 May 2011 11:04:38AM -1 points

> If I see myselves at different points of time as being in collusion as to how to make all of us better off, which has been a viewpoint I've seen taken recently, then there is some agreement between a set of sufficiently-similar agents.

If this is too easy, a way to make it more fun is to do the same thing with parts of you and coalitions of parts of you, in the style of the gene's-eye or meme's-eye view of evolution. Thinking about whether there is an important metaphysical or decision-theoretic sense in which an algorithm is 'yours' or 'mine' from this perspective, while checking that it continues to add up to normality, can lead to more fun still. And if that's still not fun enough, you can get really good at the kinds of meditation that supposedly let you intuitively grok all of this nonsense and notice the subtleties from the inside! Maybe. :D