Dmytry comments on Cryonics without freezers: resurrection possibilities in a Big World - Less Wrong

Post author: Yvain 04 April 2012 10:48PM

Comment author: Kaj_Sotala 05 April 2012 06:31:08AM 18 points

Isn't dissolving the concept of personal identity relatively straightforward?

We know that we've evolved to protect ourselves and further our own interests, because organisms that didn't have that as a goal didn't fare very well. So on this account, at least, personal identity is merely a desire to make sure that "this" organism survives.

Naturally, the problem lies in defining "this organism". One says, "this" organism is something defined by physical continuity. Another says, "this" organism is something defined by the degree of similarity to some prototype of this organism.

One says, sound is acoustic vibrations. Another says, sound is the sensation of hearing...

There's no "real" answer to the question "what is personal identity", any more than there is a "real" answer to the question "what is sound". You may pick any definition you prefer. Of course, truly dissolving "personal identity" isn't as easy as dissolving "sound", because we are essentially hard-wired to anticipate that there is such a thing as personal identity, and to have urges for protecting it. We may realize on an intellectual level that "personal identity" is just a choice of words, but still feel that there should be something more to it, some "true" fact of the matter.

But there isn't. There are just various information-processing systems with different degrees of similarity to each other. One may draw more-or-less arbitrary borders between the systems, designating some as "me" and some as "not-me", but that's a distinction in the map, not in the territory.

Of course, if you have goals about the world, then it makes sense to care about the information-processing systems that are similar to you and share those goals. So if I want to improve the world, it makes sense for me to care about "my own" (in the commonsense meaning of the word) well-being - even though future instances of "me" are actually distinct systems from the information-processing system that is typing these words, I should still care about their well-being because A) I care about the well-being of minds in general, and B) they share at least part of my goals, and are thus more likely to carry them out. But that doesn't mean that I should necessarily consider them "me", or that the word would have any particular meaning.

And naturally, on some anticipation/urge level I still consider those entities "me", and have strong emotions regarding their well-being and survival, emotions that go above and beyond what is justified merely in the light of my goals. But I don't consider that something I should necessarily endorse, except to the extent that such anticipations are useful instrumentally. (E.g. status-seeking thoughts and fantasies may make "me" achieve things which I would not otherwise achieve, even though they make assumptions about such a thing as personal identity.)

Comment author: Dmytry 05 April 2012 05:20:23PM 0 points

Interesting. I was thinking about the same thing regarding the early stages of AGI - it is difficult to define 'me' precisely, and it's unclear why one would need a really precise definition of 'me' in an early AGI. It's good enough if life counts as 'me' to the AI but Jupiter doesn't; the only cost is a negligible loss in utility from life not being kosher food for the AI.
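
To make that intuition concrete, here is a minimal toy sketch (every name and feature set below is a hypothetical illustration, not anything from the comment): a loose "me" boundary defined as similarity to a prototype above a threshold. The boundary is imprecise, yet it still cleanly separates living systems from raw matter like Jupiter.

```python
# Toy sketch of a loose "me" boundary for an early AGI (hypothetical names
# and feature sets, purely for illustration). The point: the boundary need
# not be precise, as long as living systems land inside it and raw matter
# like Jupiter lands outside.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two feature sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_me(entity: set, prototype: set, threshold: float = 0.3) -> bool:
    """Classify an entity as 'me' if it is similar enough to a prototype."""
    return jaccard(entity, prototype) >= threshold

# Hypothetical feature sets.
AI_SELF = {"information-processing", "goal-directed", "self-model", "metabolizing"}
BACTERIUM = {"goal-directed", "metabolizing", "self-replicating"}
JUPITER = {"massive", "hydrogen", "inert"}

print(is_me(BACTERIUM, AI_SELF))  # True: life falls inside the loose boundary
print(is_me(JUPITER, AI_SELF))    # False: raw matter falls outside it
```

On this sketch, the utility loss Dmytry mentions is just the forgone value of consuming whatever the deliberately coarse boundary conservatively includes - a small price for not needing a precise definition of 'me'.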