kodos96 comments on Abnormal Cryonics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I haven't yet read and thought enough about this topic to form a very solid opinion, but I have two remarks nevertheless.
First, as some previous commenters have pointed out, most of the discussions of cryonics fail to fully appreciate the problem of weirdness signals. For people whose lives don't revolve around communities that are supportive of such undertakings, the cost of signaled weirdness can easily be far larger than the monetary price. Of course, you can argue that this is because the public opinion on the topic is irrational and deluded, but the point is that given the present state of public opinion, which is impossible to change by individual action, it is individually rational to take this cost into account. (Whether the benefits ultimately overshadow this cost is a different question.)
Second, it is my impression that many cryonics advocates -- and in particular, many of those whose comments I've read on Overcoming Bias and here -- make unjustified assertions about supposedly rational ways to decide the question of what entities one should identify oneself with. According to them, signing up for cryonics increases the chances that at some distant time in the future, in which you'll otherwise probably be dead and gone, some entity will exist with which it is rational to identify to the point where you consider it, for the purposes of your present decisions, to be the same as your "normal" self that you expect to be alive tomorrow.
This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much. What I find to be the logical conclusion of these arguments is that the notion of personal identity is fundamentally a mere subjective feeling, where no objective or rational procedure can be used to determine the right answer. Therefore, if we accept these arguments, there is no reason at all to berate as irrational people who don't feel any identification with these entities that cryonics would (hopefully) make it possible to summon into existence in the future.
In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time. And believe me, I have studied all the arguments for the contrary position I could find here and elsewhere very carefully, doing my utmost to eliminate any prejudice. (I am more ambivalent about my hypothetical thawed and nanotechnologically revived corpse.) Therefore, in at least some cases, I'm sure that people reject cryonics not because they're too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce -- and I don't see how this different subjective preference can be considered "irrational" in any way.
That said, I am fully aware that these and other anti-cryonics arguments are often used as mere rationalizations for people's strong instinctive reactions triggered by the weirdness/yuckiness heuristics. Still, they seem valid to me.
Would it change your mind if that computer program [claimed to] strongly identify with you?
I'm not sure I understand your question correctly. The mere fact that a program outputs sentences that express strong claims about identifying with me would not be relevant in any way I can think of. Or am I missing something in your question?
Well, right: obviously a program consisting of "printf("I am Vladimir_M")" wouldn't qualify... but what about a program which convincingly claimed to be you, i.e. one that had access to all your memories, intellect, inner thoughts, etc., and claimed to be the same person as you?
No, as I wrote above, I am honestly unable to feel any identification at all with such a program. It might as well be just a while(1) loop printing a sentence claiming it's me.
I know of some good arguments that seem to provide a convincing reductio ad absurdum of such a strong position, most notably the "fading qualia" argument by David Chalmers, but on the other hand, I also see ways in which the opposite view entails absurdity (e.g. the duplication arguments). Thus, I don't see any basis for forming an opinion here except sheer intuition, which in my case strongly rebels against identification with an upload or anything similar.
If you woke up tomorrow to find yourself situated in a robot body, and were informed that you had been killed in an accident and your mind had been uploaded and was now running on a computer, but you still felt, subjectively, entirely like "yourself", how would you react? Or do you not think that could ever happen? (That would be a perfectly valid answer; I'm just curious what you think, since I've never had the opportunity to discuss these issues with someone who was familiar with the standard arguments, yet denied the possibility.)
For the robotic "me" -- though not for anyone else -- this would provide a conclusive answer to the question of whether uploads and other computer programs can have subjective experiences. However, although fascinating, this finding would provide only a necessary, not a sufficient condition for a positive answer to the question we're pursuing, namely whether there is any rational reason (as opposed to freely variable subjective intuitions and preferences) to identify this entity with my present self.
Therefore, my answer would be that I don't know how exactly the subjective intuitions and convictions of the robotic "me" would develop from this point on. It may well be that he would end up feeling strongly that he is the true continuation of my person, rejecting what he would remember as my present intuitions on the matter (though this would be complicated by the presumable ease of making other copies). However, I don't think he would have any rational reason to conclude that it is somehow factually true that he is the continuation of my person, rather than some entirely different entity that has been implanted with false memories identical to my present ones.
Of course, I am aware that a similar argument can be applied to the "normal me" who will presumably wake up in my bed tomorrow morning. Trouble is, I would honestly find it much easier to stop caring about what happens to me tomorrow than to start caring about computer simulations of myself. Ultimately, it seems to me that the standard arguments that are supposed to convince people to broaden their parochial concepts of personal identity should in fact lead one to dissolve the entire concept as an irrational reification that is of no concern except that it's a matter of strong subjective preferences.
Getting copied from a frozen brain into a computer is a pretty drastic change, but suppose instead it were done gradually, one neuron at a time. If one of your neurons were replaced with an implant that behaved the same way, would it still be you? A cluster of N neurons? What if you replaced your entire brain with electronics, a little at a time?
Obviously there is a difference, and that difference is significant to identity; but I think that difference is more like the difference between me and my younger self than the difference between me and someone else.