
turchin comments on Identity map - Less Wrong Discussion

7 Post author: turchin 15 August 2016 11:29AM


Comment author: turchin 15 August 2016 09:18:01PM *  1 point [-]

In fact, identity is a technical term that should help us solve several new problems that will appear once older, intuitive ideas of identity stop working: the problems of uploading, human modification, and the creation of copies.

So the problems are:

1) Should I agree to be uploaded into a digital computer? Should we use gradual uploading tech? Should I sign up for cryonics?

2) Should I collect data for digital immortality, hoping that a future AI will reconstruct me? Which data are most important? How much? What if my future copy is not exact?

3) Should I agree to the creation of several copies of myself?

4) What about my copies in the multiverse? Should I count them at all? Should I include causally disconnected copies in other universes in my expectation of quantum immortality?

5) How do quantum immortality and destructive uploading work together?

6) Will I die in the case of a deep coma, since my stream of consciousness will be interrupted? Should I therefore prefer only local anesthesia in case of surgery?

7) Am I responsible for the things I did 20 years ago?

8) Should I act now for things I will only get in 20 years, like life extension?

So many things in my decision making depend on my ideas of personal identity, and some of them, like digital immortality and taking life-extension drugs, need to be acted on now. Some people refuse to record everything about themselves, or to sign up for cryonics, because of their ideas about identity.


The problem with the hope that AI will solve all our problems is that it has a little bit of circularity. In child language: to create a good AI, we need to know exactly what "good" is. I mean that if we can't verbalise our concept of identity, we will also be poor at verbalising any other complex idea, including friendliness and CEV.

So I suggest we try our best to create really good definitions of what is really important to us, hoping that a future AI will be able to get the idea much better from these attempts.

Comment author: Manfred 16 August 2016 03:10:09AM 0 points [-]

Right. I think that one can use one's own concept of identity to solve these problems, but that the concept one uses is very difficult to put into words. Much like a functional definition of "hand," or "heap." I expect that no person is going to write a verbal definition of "hand" that satisfies me, and yet I am willing to accept people's judgments on handiness as evidence.

On the other hand, we can use good philosophy about identity-as-concept to avoid making mistakes, much like how we can avoid certain mistaken arguments about morality merely by knowing that morality is something we have, not something imposed upon us, without using any particular facts about our morality.