Vladimir_M comments on Abnormal Cryonics - Less Wrong
I haven't yet read and thought enough about this topic to form a very solid opinion, but I have two remarks nevertheless.
First, as some previous commenters have pointed out, most of the discussions of cryonics fail to fully appreciate the problem of weirdness signals. For people whose lives don't revolve around communities that are supportive of such undertakings, the cost of signaled weirdness can easily be far larger than the monetary price. Of course, you can argue that this is because the public opinion on the topic is irrational and deluded, but the point is that given the present state of public opinion, which is impossible to change by individual action, it is individually rational to take this cost into account. (Whether the benefits ultimately overshadow this cost is a different question.)
Second, it is my impression that many cryonics advocates -- and in particular, many of those whose comments I've read on Overcoming Bias and here -- make unjustified assertions about supposedly rational ways to decide the question of what entities one should identify oneself with. According to them, signing up for cryonics increases the chances that at some distant time in the future, in which you'll otherwise probably be dead and gone, some entity will exist with which it is rational to identify to the point where you consider it, for the purposes of your present decisions, to be the same as your "normal" self that you expect to be alive tomorrow.
This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much. What I find to be the logical conclusion of these arguments is that the notion of personal identity is fundamentally a mere subjective feeling, where no objective or rational procedure can be used to determine the right answer. Therefore, if we accept these arguments, there is no reason at all to berate as irrational people who don't feel any identification with these entities that cryonics would (hopefully) make it possible to summon into existence in the future.
In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time. And believe me, I have studied all the arguments for the contrary position I could find here and elsewhere very carefully, and giving my utmost to eliminate any prejudice. (I am more ambivalent about my hypothetical thawed and nanotechnologically revived corpse.) Therefore, in at least some cases, I'm sure that people reject cryonics not because they're too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce -- and I don't see how this different subjective preference can be considered "irrational" in any way.
That said, I am fully aware that these and other anti-cryonics arguments are often used as mere rationalizations for people's strong instinctive reactions triggered by the weirdness/yuckiness heuristics. Still, they seem valid to me.
Roko:
It would probably depend on the exact nature of the evidence that would support this discovery. I allow for the possibility that some sorts of hypothetical experiences and insights that would have the result of convincing me that we live in a simulation would also have the effect of dramatically changing my intuitions about the question of personal identity. However, mere thought-experiment considerations of those I can imagine presently fail to produce any such change.
I also allow for the possibility that this is due to the limitations of my imagination and reasoning, perhaps caused by unidentified biases, and that actual exposure to some hypothetical (and presently counterfactual) evidence that I've already thought about could perhaps have a different effect on me than I presently expect it would.
For full disclosure, I should add that I see some deeper problems with the simulation argument that I don't think are addressed in a satisfactory manner in the treatments of the subject I've seen so far, but that's a whole different can of worms.
That would fall under the "evidence that I've already thought about" mentioned above. My intuitions would undoubtedly be shaken and moved, perhaps in directions that I presently can't even imagine. However, ultimately, I think I would be led to conclude that the whole concept of "oneself" is fundamentally incoherent, and that the inclination to hold any future entity or entities in special regard as "one's future self" is just a subjective whim. (See also my replies to kodos96 in this thread.)
While I understand why someone would see the upload as possibly not themselves (and I have strong sympathy with that position), I do find it genuinely puzzling that someone wouldn't identify their revived body as themselves. While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don't buy into that argument. If they don't, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped briefly during a heart transplant, or someone who stops breathing in a very cold environment for a few minutes, or someone who goes under anesthesia, or even someone who goes to sleep normally and wakes up in the morning.
Your point about weirdness signaling is a good one, and I'd expand on it slightly: For much of society, even thinking about weird things at a minimal level is a severe weirdness signal. So for many people, the possible utility of any given weird idea is likely to be so low that the cost of even putting in effort to think about it will almost certainly outweigh any benefit. And when one considers how many weird ideas are out there, the chance that any given one of them will turn out to be useful is very low. To use just a few examples: how many religions are there? How many conspiracy theories? How many miracle cures? Indeed, almost all LW readers will never investigate the vast majority of these, essentially because of this sort of utility heuristic.
JoshuaZ:
The problem here is one of continuum. We can easily imagine a continuum of procedures where on one end we have relatively small ones that intuitively appear to preserve the subject's identity (like sleep or anesthesia), and on the other end more radical ones that intuitively appear to end up destroying the original and creating a different person. By Buridan's principle, this situation implies that for anyone whose intuitions give different answers for the procedures at the opposite ends of the continuum, at least some procedures that lie in between will result in confused and indecisive intuitions. For me, cryonic revival seems to be such a point.
In any case, I honestly don't see any way to establish, as a matter of more than just subjective opinion, at which exact point in that continuum personal identity is no longer preserved.
This seems similar to something that I'll arbitrarily decide to call the 'argument from arbitrariness': every valid argument should be pretty and neat and follow the zero, one, infinity rule. One example of this was during the torture versus dust specks debate, when the torturers chided the dust speckers for having an arbitrary point at which stimuli that were not painful enough to be considered true pain became just painful enough to be considered as being in the same reference class as torture. I'd be really interested to find out how often something like the argument from arbitrariness turns out to have been made by those on the ultimately correct side of the argument, and use this information as a sort of outside view.
I share the position that Kaj_Sotala outlined here: http://lesswrong.com/lw/1mc/normal_cryonics/1hah
In the relevant sense there is no difference between the Richard that wakes up in my bed tomorrow and the Richard that might be revived after cryonic preservation. Neither of them is a continuation of my self in the relevant sense because no such entity exists. However, evolution has given me the illusion that tomorrow-Richard is a continuation of my self, and no matter how much I might want to shake off that illusion I can't. On the other hand, I have no equivalent illusion that cryonics-Richard is a continuation of my self. If you have that illusion you will probably be motivated to have yourself preserved.
Ultimately this is not a matter of fact but a matter of personal preference. Our preferences cannot be reduced to mere matters of rational fact. As David Hume famously wrote: "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." I prefer the well-being of tomorrow-Richard to his suffering. I have little or no preference regarding the fate of cryonics-Richard.
I don't mean to insult you (I'm trying to respect your intelligence enough to speak directly rather than delicately) but this kind of talk is why cryonics seems like a pretty useful indicator of whether or not a person is rational. You're admitting to false beliefs that you hold "because you evolved that way" rather than using reason to reconcile two intuitions that you "sort of follow" but which contradict each other.
Then you completely discounted the suffering or happiness of a human being who is not able to be helped by anyone other than your present self in this matter. You certainly can't be forced to seek medical treatment against your will for this, so other people are pretty much barred by law from forcing you to not be dumb with respect to the fate of future-Richard. He is in no one's hands but your own.
Hume was right about a huge amount of stuff in the context of initial epistemic conditions of the sort that Descartes proposed when he extracted "I think therefore I am" as one basis for a stable starting point.
But starting from that idea and a handful of others like "trust of our own memories as a sound basis for induction" we have countless terabytes of sense data from which we can develop a model of the universe that includes physical objects with continuity over time - one class of which are human brains that appear to be capable of physically computing the same thoughts with which we started out in our "initial epistemic conditions". The circle closes here. There might be some new evidence somewhere if some kind of Cartesian pineal gland is discovered someday which functions as the joystick by which souls manipulate bodies, but barring some pretty spectacular evidence, materialist views of the soul are the best theory standing.
Your brain has physical continuity in exactly the same way that chairs have physical continuity, and your brain tomorrow (after sleeping tonight while engaging in physical self repair and re-indexing of data structures) will be very similar to your brain today in most but not all respects. To the degree that you make good use of your time now, your brain then is actually likely to implement someone more like your ideal self than even you yourself are right now... unless you have no actualized desire for self improvement. The only deep change between now and then is that you will have momentarily lost "continuity of awareness" in the middle because your brain will go into a repair and update mode that's not capable of sensing your environment or continuing to compute "continuity of awareness".
If your formal theory of reality started with Hume and broke down before reaching these conclusions then you are, from the perspective of pragmatic philosophy, still learning to crawl. This is basically the same thing as babies learning about object permanence except in a more abstract context.
Barring legitimate pragmatic issues like discount rates, your future self should be more important to you than your present self, unless you're mostly focused on your "contextual value" (the quality of your relationships and interactions with the broader world) and feel that your contextual value is high now and inevitably declining (or perhaps will be necessarily harmed by making plans for cryonics).
The real thing to which you should be paying attention (other than to make sure they don't stop working) is not the mechanisms by which mental content is stored, modified, and transmitted into the future. The thing you should be paying attention to is the quality of that content and how it functionally relates to the rest of the physical universe.
For the record, I don't have a cryonics policy either, but I regard this as a matter of a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that led to this state of affairs I began working to fix it, which is something that, for me, is still a work in progress.
Part of my work is analyzing the issue enough to have a strongly defensible, coherent, and pragmatic argument about cryonics, which I'll consider fully resolved either (1) when I have an argument for not signing up that would be good enough for a person able to reason in a relatively universal manner, or (2) when I have a solid argument the other way that has led me and everyone I care about, including my family and close friends, to take the necessary steps and sign ourselves up.
When I set up a "drake equation for cryonics" and filled in the probabilities under optimistic (inside view) calculations I determined the value to be trillions of dollars. Under pessimistic assumptions (roughly, the outside view) I found that the expected value was epsilon and realized that my model was flawed because it didn't even have terms for negative value outcomes like "loss of value in 'some other context' because of cryonics/simulationist interactions".
So, pretty much, I regard the value of information here as being enormously large, and once I refine my models some more I expect to have a good idea as to what I really should do as a selfish matter of securing adequate health care for me and my family and friends. Then I will do it.
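The structure of a "drake equation for cryonics" like the one described can be sketched as a chain of probabilities multiplied by the value of a successful outcome. Every number and factor below is a purely illustrative assumption (the comment doesn't give its actual terms), and a fuller model would also include the negative-value outcomes mentioned above:

```python
# Hypothetical "drake equation for cryonics": expected value as a chain of
# (assumed independent) probabilities times the value of a successful revival.
# All inputs are illustrative assumptions, not figures from the discussion.

def cryonics_expected_value(
    p_preserved_well=0.5,         # preservation performed without fatal damage
    p_org_survives=0.4,           # provider stays solvent until revival is possible
    p_tech_developed=0.3,         # revival technology is ever developed
    p_revived_is_you=0.5,         # you identify with the revived entity
    value_of_revival=10_000_000,  # dollar value placed on a revived life
    cost=100_000,                 # lifetime cost of membership plus insurance
):
    p_success = (p_preserved_well * p_org_survives
                 * p_tech_developed * p_revived_is_you)
    return p_success * value_of_revival - cost

# Optimistic (inside-view) inputs give a large positive expected value...
print(cryonics_expected_value())
# ...while pessimistic (outside-view) inputs drive it negative.
print(cryonics_expected_value(p_tech_developed=0.01, p_revived_is_you=0.05))
```

The point of the exercise is less the bottom-line number than seeing which terms the conclusion is most sensitive to.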
Hi Jennifer. Perhaps I seem irrational because you haven't understood me. In fact I find it difficult to see much of your post as a response to anything I actually wrote.
No doubt I explained myself poorly on the subject of the continuity of the self. I won't dwell on that. The main question for me is whether I have a rational reason to be concerned about what tomorrow-Richard will experience. And I say there is no such rational reason. It is simply a matter of brute fact that I am concerned about what he will experience. (Vladimir and Byrnema are making similar points above.) If I have no rational reason to be concerned, then it cannot be irrational for me not to be concerned. If you think I have a rational reason to be concerned, please tell me what it is.
I don't understand why psychological continuity isn't enough of a rational reason. Your future self will have all your memories, thoughts, viewpoints, and values, and you will experience a continuous flow of perception from yourself now to your future self. (If you sleep or undergo general anesthesia in the interim, the flow may be interrupted slightly, but I don't see why that matters.)
Hi Blueberry. How is that a rational reason for me to care what I will experience tomorrow? If I don't care what I will experience tomorrow, then I have no reason to care that my future self will have my memories or that he will have experienced a continuous flow of perception up to that time.
We have to have some motivation (a goal, desire, care, etc) before we can have a rational reason to do anything. Our most basic motivations cannot themselves be rationally justified. They just are what they are.
Of course, they can be rationally explained. My care for my future welfare can be explained as an evolved adaptive trait. But that only tells me why I do care for my future welfare, not why I rationally should care for my future welfare.
Richard, you seem to have come to a quite logical conclusion about the difference between intrinsic values and instrumental values and what happens when an attempt is made to give a justification for intrinsic values at the level of values.
If a proposed intrinsic value is questioned and justified with another value statement, then the supposed "intrinsic value" is revealed to have really been instrumental. Alternatively, if no value is offered then the discussion will have necessarily moved out of the value domain into questions about the psychology or neurons or souls or evolutionary mechanisms or some other messy issue of "simple" fact. And you are quite right that these facts (by definition as "non value statements") will not be motivating.
We fundamentally like vanilla (if we do) "because we like vanilla" as a brute fact. De gustibus non est disputandum. Yay for the philosophy of values :-P
On the other hand... basically all humans, as a matter of fact, do share many preferences, not just for obvious things like foods that are sweet or salty or savory but also for really complicated high level things, like the respect of those with whom we regularly spend time, the ability to contribute to things larger than ourselves, listening to beautiful music, and enjoyment of situations that create "flow" where moderately challenging tasks with instantaneous feedback can be worked on without distraction, and so on.
As a matter of simple observation, you must have noticed that there exist some things which it gives you pleasure to experience. To say that "I don't care what I will experience tomorrow" can be interpreted as a prediction that "Tomorrow, despite being conscious, I will not experience anything which affects my emotions, preferences, feelings, or inclinations in either positive or negative directions". This statement is either bluntly false (my favored hypothesis), or else you are experiencing a shocking level of anhedonia for which you should seek professional help if you want to live very much longer (which of course you might not if you're really experiencing anhedonia), or else you are a non human intelligence and I have to start from scratch trying to figure you out.
Taking it as granted that you and I can both safely predict that you will continue to enjoy life tomorrow... then an inductive proof can be developed that "unless something important changes from one day to the next" you will continue to have a stake in the day after that, and the day after that, and so on. When people normally discuss cryonics and long term values it is the "something important changing" issue that they bring up.
For example, many people think that they only care about their children... until they start seeing their grandchildren as real human beings whose happiness they have a stake in, and in whose lives they might be productively involved.
Other people can't (yet) imagine not falling prey to senescence, and legitimately think that death might be preferable to a life filled with pain which imposes costs (and no real benefits) on their loved ones who would care for them. In this case the critical insight is that not just death but also physical decline can be thought of as a potentially treatable condition and so we can stipulate not just vastly extended life but vastly extended youth.
But you are not making any of these points so that they can even be objected to by myself or others... You're deploying the kind of arguments I would expect from an undergrad philosophy major engaged in motivated cognition because you have not yet "learned how to lose an argument gracefully and become smarter by doing so".
And it is for this reason that I stand by the conclusion that in some cases beliefs about cryonics say very much about the level of pragmatic philosophical sophistication (or "rationality") that a person has cultivated up to the point when they stake out one of the more "normal" anti-cryonics positions. In your case, you are failing in a way I find particularly tragic, because normal people raise much better objections than you are raising - issues that really address the meat of the matter. You, on the other hand, are raising little more than philosophical confusion in defense of your position :-(
Again, I intend these statements only in the hope that they help you and/or audiences who may be silently identifying with your position. Most people make bad arguments sometimes and that doesn't make them bad people - in fact, it helps them get stronger and learn more. You are a good and valuable person even if you have made comments here that reveal less depth of thinking than might be hypothetically possible.
That you are persisting in your position is a good sign, because you're clearly already pretty deep into the cultivation of rationality (your arguments clearly borrow a lot from previous study) to the point that you may harm yourself if you don't push through to the point where your rationality starts paying dividends. Continued discussion is good practice for this.
On the other hand, I have limited time and limited resources and I can't afford to spend any more on this line of conversation. I wish you good luck on your journey, perhaps one day in the very far future we will meet again for conversation, and memory of this interaction will provide a bit of amusement at how hopelessly naive we both were in our misspent "childhood" :-)
Why is psychological continuity important? (I can see that it's very important for an identity to have psychological continuity, but I don't see the intrinsic value of an identity existing if it is promised to have psychological continuity.)
In our lives, we are trained to worry about our future self because eventually our plans for our future self will affect our immediate self. We also might care about our future self altruistically: we want that person to be happy just as we would want any person to be happy whose happiness we are responsible for. However, I don't sense any responsibility to care about a future self that needn't exist. On the contrary, if this person has no effect on anything that matters to me, I'd rather be free of being responsible for this future self.
In the case of cryonics, you may or may not decide that your future self has an effect on things that matter to you. If your descendants matter to you, or propagating a certain set of goals matters to you, then cryonics makes sense. I don't have any goals that project further than the lifespan of my children. This might be somewhat unique, and it is the result of recent changes in philosophy. As a theist, I had broad-stroke hopes for the universe that are now gone.
Less unique, I think, though perhaps not generally realized, is the fact that I don't feel any special attachment to my memories, thoughts, viewpoints and values. What if a person woke up to discover that the last days were a dream and they actually had a different identity? I think they wouldn't be depressed about the loss of their previous identity. They might be depressed about the loss of certain attachments if the attachments remained (hopefully not too strongly, as that would be sad). The salient thing here is that all identities feel the same.
I've just read this article by Ben Best (President of CI): http://www.benbest.com/philo/doubles.html
He admits that the possibility of duplicating a person raises a serious question about the nature of personal identity, that continuity is no solution to this problem, and that he can find no other solution. But he doesn't seem to consider that the absence of any solution points to his concept of personal identity being fundamentally flawed.
I'm in the signing process right now, and I wanted to comment on the "work in progress" aspect of your statement. People think that signing up for cryonics is hard. That it takes work. I thought this myself up until a few weeks ago. This is stunningly NOT true.
The entire process is amazingly simple. You contact CI (or your preserver of choice) via their email address and express interest. They ask you for a few bits of info (name, address) and send you everything you need already printed and filled out. All you have to do is sign your name a few times and send it back. The process of getting life insurance was harder (and getting life insurance is trivially easy).
So yeah, the term "working on it" is not correctly applicable to this situation. Someone who's never climbed a flight of stairs may work out for months in preparation, but they really don't need to, and afterwards might be somewhat annoyed that no one who'd climbed stairs before had bothered to tell them so.
Literally the only hard part is the psychological effort of doing something considered so weird. The hardest part for me (and what had stopped me for two+ years previously) was telling my insurance agent when she asked "What's CI?" that it's a place that'll freeze me when I die. I failed to take into account that we have an incredibly tolerant society. People interact - on a daily basis - with other humans who believe in gods and energy crystals and alien visits and secret-muslim presidents without batting an eye. This was no different. It was like the first time you leap from the high diving board and don't die, and realize that you never would have.
The hard part (and why this is also a work in progress) involves the secondary optimizations, the right amount of effort to put into them, and understanding whether these issues generalize to other parts of my life.
SilasBartas identified some of the practical financial details involved in setting up whole life versus term plus savings versus some other option. This is even more complex for me because I don't currently have health insurance and ideally would like to have a personal physician, health insurance, and retirement savings plan that are consistent with whatever cryonics situation I set up.
Secondarily, there are similarly complex social issues that come up because I'm married, love my family, am able to have philosophical conversations with them, and don't want to "succeed" at cryonics but then wake up for 1000 years of guilt that I didn't help my family "win" too. If they don't also win, when I could have helped them, then what kind of a daughter or sister would I be?
Finally, I've worked on a personal version of a "drake equation for cryonics" and it honestly wasn't a slam dunk economic decision when I took a pessimistic outside view of my model. So it would seem that more analysis here would be prudent, which would logically require some time to perform. If I had something solid I imagine that would help convince my family - given that they are generally rational in their own personal ways :-)
Finally, as a meta issue, there are issues around cognitive inertia in both the financial and the social arenas so that whatever decisions I make now, may "stick" for the next forty years. Against this I weigh the issue of "best being the enemy of good" because (in point of fact) I'm not safe in any way at all right now... which is an obvious negative. In what places should I be willing to tolerate erroneous thinking and sloppy execution that fails to obtain the maximum lifetime benefit and to what degree should I carry that "sloppiness calibration" over to the rest of my life?
So, yeah, it's a work in progress.
I'm pretty much not afraid of the social issues that you brought up. If people who disagree with me about the state of the world want to judge me, that's their problem up until they start trying to sanction me or spread malicious gossip that blocks other avenues of self improvement or success. The judgment of strangers who I'll never see again is mostly a practical issue and not that relevant compared to relationships that really matter, like those with my husband, nuclear family, friends, personal physician, and so on.
Back in 1999 I examined these issues. In 2004 I got to the point of having all the paperwork to sign and turn in with Alcor and Insurance, with all costs pre-specified. In each case I backed off because I calculated the costs and looked at my income and looked at the things I'd need to cut out of my life (and none of it was coffee from starbucks or philanthropy or other fluffy BS like that - it was more like the simple quality of my food and whether I'd be able to afford one bedroom vs half a bedroom) and they honestly didn't seem to be worth it. As I've gotten older and richer and more influential (and partly due to influence from this community) I've decided I should review the decision again.
The hard part for me is dotting the i's and crossing the t's (and trying to figure out where it's safe to skip some of these steps) while seeking to minimize future regrets and maximize positive outcomes.
You can't hold yourself responsible for their decisions. That way lies madness, or tyranny. If you respect them as free agents then you can't view yourself as the primary source for their actions.
It might be rational to do so under extreme enough circumstances. For example, if a loved one had to take pills every day to stay alive and had a tendency to accidentally forget them (or to believe new-agers who told them that the pills were just a Big Pharma conspiracy), it would be neither madness nor tyranny to do nearly anything to prevent that from happening.
The question is: to what degree is failing to sign up for cryonics like suicide by negligence?
I'm not finding this. Can you refer me to your trivially easy agency?
I used State Farm, because I've had car insurance with them since I could drive, and renters/owner's insurance since I moved out on my own. I had discounts both for multi-line and loyalty.
Yes, there is some interaction with a person involved. And you have to sit through some amount of sales-pitching. But ultimately it boils down to answering a few questions (2-3 minutes), signing a few papers (1-2 minutes), sitting through some process & pitching (30-40 minutes), and then having someone come to your house a few days later to take some blood and measurements (10-15 minutes). Everything else was done via mail/email/fax.
Heck, my agent had to do much more work than I did; previous to this she didn't know that you could designate someone other than yourself as the owner of the policy, which required some training.
I tried a State Farm guy, and he was nice enough, but he wanted a saliva sample (not blood) and could not tell me what it was for. He gave me an explicitly partial list but couldn't complete it for me. That was spooky. I don't want to do that.
Disagree. What's this trivially easy part? You can't buy it like you can buy mutual fund shares, where you just go online, transfer the money, and have at it. They make it so you have to talk to an actual human insurance agent, just to get quotes. (I understand you'll have to get a medical exam, but still...)
Of course, in fairness, I'm trying to combine it with "infinite banking" by getting a whole life policy, which has tax advantages. (I would think whole life would make more sense than term anyway, since you don't want to limit the policy to a specific term, risking that you'll die afterward and not be able to afford the preservation, when the take-off hasn't happened.)
Nope. Whole life is a colossal waste of money. If you buy term and invest the difference in the premiums (what you would be paying the insurance company if you bought whole life) you'll end up way ahead.
Yes, I'm intimately familiar with the argument. And while I'm not committed to whole life, this particular point is extremely unpersuasive to me.
For one thing, the extra cost for whole life is mostly retained by you, nearly as if you had never spent it, which makes it questionable how much of that extra cost is really a cost.
That money goes into an account which you can withdraw from, or borrow from on much more favorable terms than any commercial loan. It also earns dividends and guaranteed interest tax-free.
If you "buy term and invest the difference", you either have to pay significant taxes on any gains (or even, in some cases, the principal) or lock the money up until you're ~60. The optimistic "long term" returns of the stock market have proven to be a bit too optimistic, and given the volatility, you are being undercompensated. (Mutual whole life plans typically earned over 6% in '08, when stocks tanked.) You are also unlikely to earn the 12%/year they always pitch for mutual funds -- and especially not after taxes.
Furthermore, if the tax advantages of IRAs are reneged on (which given developed countries' fiscal situations, is looking more likely every day), they'll most likely be hit before life insurance policies.
So yes, I'm aware of the argument, but there's a lot about the calculation that people miss.
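The "buy term and invest the difference" comparison above turns entirely on the assumed premiums, growth rates, and tax drag. A toy sketch of the arithmetic, where every number (premiums, the cash-value allocation, rates, tax rate) is a hypothetical assumption for illustration, not a real quote from any insurer:

```python
# Toy comparison: whole life cash value vs. "buy term and invest the
# difference". All figures below are made-up assumptions to show how
# the conclusion depends on the inputs -- not actual insurance numbers.

def future_value(annual_payment, annual_rate, years):
    """Future value of a level annual payment, compounded annually."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_payment) * (1 + annual_rate)
    return total

YEARS = 30
WHOLE_LIFE_PREMIUM = 5000.0   # assumed annual whole life premium
TERM_PREMIUM = 500.0          # assumed annual term premium, same face value
DIFFERENCE = WHOLE_LIFE_PREMIUM - TERM_PREMIUM

# Whole life: assume ~60% of each premium feeds the cash value, growing
# tax-free at a guaranteed-plus-dividend rate of 5%.
cash_value = future_value(WHOLE_LIFE_PREMIUM * 0.6, 0.05, YEARS)

# Term + invest: the difference goes into stocks at a nominal 8%,
# reduced by an assumed 20% tax drag on gains to an effective 6.4%.
invested = future_value(DIFFERENCE, 0.08 * (1 - 0.20), YEARS)

print(f"whole life cash value after {YEARS} years: ${cash_value:,.0f}")
print(f"term + invested difference after {YEARS} years: ${invested:,.0f}")
```

Shifting the assumed stock return, the tax drag, or the cash-value allocation by a couple of points can flip which side comes out ahead, which is exactly the point: the slogan hides a calculation that is very sensitive to its inputs.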
It occurs to me: are there legal issues with people contesting wills? I think that a life insurance policy with the cryonics provider listed as the beneficiary would be more difficult to fight.
Well said.
I think this is true. Cryonics being the "correct choice" doesn't just depend on correct calculations and estimates (probability of a singularity, probability of revival, etc) and a high enough sanity waterline (not dismissing opportunities out of hand because they seem strange). Whether cryonics is the correct choice also depends upon your preferences. This fact seems to be largely missing from the discussion about cryonics. Perhaps because advocates can't imagine people not valuing life extension in this way.
I wouldn't pay 5 cents for a duplicate of me to exist. (Not for the sole sake of her existence, that is. If this duplicate could interact with me, or interact with my family immediately after my death, that would be a different story as I could delegate personal responsibilities to her.)
Would it change your mind if that computer program [claimed to] strongly identify with you?
I'm not sure I understand your question correctly. The mere fact that a program outputs sentences that express strong claims about identifying with me would not be relevant in any way I can think of. Or am I missing something in your question?
Well right, obviously a program consisting of "printf("I am Vladimir_M")" wouldn't qualify... but a program which convincingly claimed to be you, i.e. had access to all your memories, intellect, inner thoughts etc., and claimed to be the same person as you.
No, as I wrote above, I am honestly unable to feel any identification at all with such a program. It might as well be just a while(1) loop printing a sentence claiming it's me.
I know of some good arguments that seem to provide a convincing reductio ad absurdum of such a strong position, most notably the "fading qualia" argument by David Chalmers, but on the other hand, I also see ways in which the opposite view entails absurdity (e.g. the duplication arguments). Thus, I don't see any basis for forming an opinion here except sheer intuition, which in my case strongly rebels against identification with an upload or anything similar.
If you woke up tomorrow to find yourself situated in a robot body, and were informed that you had been killed in an accident and your mind had been uploaded and was now running on a computer, but you still felt, subjectively, entirely like "yourself", how would you react? Or do you not think that that could ever happen? (that would be a perfectly valid answer, I'm just curious what you think, since I've never had the opportunity to discuss these issues with someone who was familiar with the standard arguments, yet denied the possibility)
For the robotic "me" -- though not for anyone else -- this would provide a conclusive answer to the question of whether uploads and other computer programs can have subjective experiences. However, although fascinating, this finding would provide only a necessary, not a sufficient condition for a positive answer to the question we're pursuing, namely whether there is any rational reason (as opposed to freely variable subjective intuitions and preferences) to identify this entity with my present self.
Therefore, my answer would be that I don't know how exactly the subjective intuitions and convictions of the robotic "me" would develop from this point on. It may well be that he would end up feeling strongly as the true continuation of my person and rejecting what he would remember as my present intuitions on the matter (though this would be complicated by the presumable easiness of making other copies). However, I don't think he would have any rational reason to conclude that it is somehow factually true that he is the continuation of my person, rather than some entirely different entity that has been implanted false memories identical to my present ones.
Of course, I am aware that a similar argument can be applied to the "normal me" who will presumably wake up in my bed tomorrow morning. Trouble is, I would honestly find it much easier to stop caring about what happens to me tomorrow than to start caring about computer simulations of myself. Ultimately, it seems to me that the standard arguments that are supposed to convince people to broaden their parochial concepts of personal identity should in fact lead one to dissolve the entire concept as an irrational reification that is of no concern except that it's a matter of strong subjective preferences.
Getting copied from a frozen brain into a computer is a pretty drastic change, but suppose instead it were done gradually, one neuron at a time. If one of your neurons were replaced with an implant that behaved the same way, would it still be you? A cluster of N neurons? What if you replaced your entire brain with electronics, a little at a time?
Obviously there is a difference, and that difference is significant to identity; but I think that difference is more like the difference between me and my younger self than the difference between me and someone else.
Well, they say that cryonics works whether you believe in it or not. Why not give it a try?