I don't understand why psychological continuity isn't enough of a rational reason. Your future self will have all your memories, thoughts, viewpoints, and values, and you will experience a continuous flow of perception from yourself now to your future self. (If you sleep or undergo general anesthesia in the interim, the flow may be interrupted slightly, but I don't see why that matters.)
Hi Blueberry. How is that a rational reason for me to care what I will experience tomorrow? If I don't care what I will experience tomorrow, then I have no reason to care that my future self will have my memories or that he will have experienced a continuous flow of perception up to that time.
We have to have some motivation (a goal, desire, care, etc) before we can have a rational reason to do anything. Our most basic motivations cannot themselves be rationally justified. They just are what they are.
Of course, they can be rationally explained. My care for my future welfare can be explained as an evolved adaptive trait. But that only tells me why I do care for my future welfare, not why I rationally should care for my future welfare.
I don't mean to insult you (I'm trying to respect your intelligence enough to speak directly rather than delicately) but this kind of talk is why cryonics seems like a pretty useful indicator of whether or not a person is rational. You're admitting to false beliefs that you hold "because you evolved that way" rather than using reason to reconcile two intuitions that you "sort of follow" but which contradict each other.
Then you completely discounted the suffering or happiness of a human being who is not able to be helped by anyone other than your present self in this matter. You certainly can't be forced to seek medical treatment against your will for this, so other people are pretty much barred by law from forcing you to not be dumb with respect to the fate of future-Richard. He is in no one's hands but your own.
Hume was right about a huge amount of stuff in the context of initial epistemic conditions of the sort that Descartes proposed when he extracted "I think therefore I am" as one basis for a stable starting point.
But starting from that idea and a handful of others like "trust of our own memories as a sound basis for induction" we have countless terabytes of sense data from which we can develop a model of the universe that includes physical objects with continuity over time - one class of which are human brains that appear to be capable of physically computing the same thoughts with which we started out in our "initial epistemic conditions". The circle closes here. There might be some new evidence somewhere if some kind of Cartesian pineal gland is discovered someday which functions as the joystick by which souls manipulate bodies, but barring some pretty spectacular evidence, materialist views of the soul are the best theory standing.
Your brain has physical continuity in exactly the same way that chairs have physical continuity, and your brain tomorrow (after sleeping tonight while engaging in physical self repair and re-indexing of data structures) will be very similar to your brain today in most but not all respects. To the degree that you make good use of your time now, your brain then is actually likely to implement someone more like your ideal self than even you yourself are right now... unless you have no actualized desire for self improvement. The only deep change between now and then is that you will have momentarily lost "continuity of awareness" in the middle because your brain will go into a repair and update mode that's not capable of sensing your environment or continuing to compute "continuity of awareness".
If your formal theory of reality started with Hume and broke down before reaching these conclusions then you are, from the perspective of pragmatic philosophy, still learning to crawl. This is basically the same thing as babies learning about object permanence except in a more abstract context.
Barring legitimate pragmatic issues like discount rates, your future self should be more important to you than your present self, unless you're mostly focused on your "contextual value" (the quality of your relationships and interactions with the broader world) and feel that your contextual value is high now and inevitably declining (or perhaps will be necessarily harmed by making plans for cryonics).
The real thing to which you should be paying attention (other than to make sure they don't stop working) is not the mechanisms by which mental content is stored, modified, and transmitted into the future. The thing you should be paying attention to is the quality of that content and how it functionally relates to the rest of the physical universe.
For the record, I don't have a cryonics policy either, but I regard this as a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that led to this state of affairs, I began working to fix it, which is, for me, still a work in progress.
Part of my work is analyzing the issue enough to have a strongly defensible, coherent, and pragmatic argument about cryonics, which I'll consider fully resolved either (1) when I have an argument for not signing up that would be good enough for a person able to reason in a relatively universal manner, or (2) when I have a solid argument the other way that has led me and everyone I care about, including my family and close friends, to take the necessary steps and sign ourselves up.
When I set up a "drake equation for cryonics" and filled in the probabilities under optimistic (inside view) calculations I determined the value to be trillions of dollars. Under pessimistic assumptions (roughly, the outside view) I found that the expected value was epsilon and realized that my model was flawed because it didn't even have terms for negative value outcomes like "loss of value in 'some other context' because of cryonics/simulationist interactions".
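The "drake equation for cryonics" described above is just a product of per-step success probabilities multiplied by the value of revival, minus costs. Here is a minimal sketch of that structure; every number and step name below is an illustrative placeholder, not the commenter's actual model, and note that, like the flawed model described, it has no terms for negative-value outcomes.

```python
import math

def cryonics_expected_value(p_steps, value_if_revived, cost):
    """Drake-style EV: multiply independent per-step success
    probabilities, scale by the value of revival, subtract cost.
    All inputs are hypothetical placeholders."""
    p_success = math.prod(p_steps)
    return p_success * value_if_revived - cost

# Placeholder "inside view" inputs: e.g. good preservation, org survival,
# revival tech developed, identity actually recovered.
optimistic = cryonics_expected_value(
    p_steps=[0.9, 0.8, 0.7, 0.9],
    value_if_revived=5e12,  # placeholder dollar-value of revival
    cost=2e5,               # placeholder lifetime cost of the policy
)

# Placeholder "outside view" inputs: each step much less likely.
pessimistic = cryonics_expected_value(
    p_steps=[0.05, 0.2, 0.05, 0.1],
    value_if_revived=5e12,
    cost=2e5,
)
print(optimistic, pessimistic)
```

A more honest model would add terms like `p_bad_outcome * negative_value` for the "loss of value in some other context" scenarios the comment mentions, rather than assuming all failure modes are merely worthless.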
So, pretty much, I regard the value of information here as being enormously large, and once I refine my models some more I expect to have a good idea as to what I really should do as a selfish matter of securing adequate health care for me and my family and friends. Then I will do it.
Hi Jennifer. Perhaps I seem irrational because you haven't understood me. In fact I find it difficult to see much of your post as a response to anything I actually wrote.
No doubt I explained myself poorly on the subject of the continuity of the self. I won't dwell on that. The main question for me is whether I have a rational reason to be concerned about what tomorrow-Richard will experience. And I say there is no such rational reason. It is simply a matter of brute fact that I am concerned about what he will experience. (Vladimir and Byrnema are making similar points above.) If I have no rational reason to be concerned, then it cannot be irrational for me not to be concerned. If you think I have a rational reason to be concerned, please tell me what it is.
This seems similar to something that I'll arbitrarily decide to call the 'argument from arbitrariness': every valid argument should be pretty and neat and follow the zero, one, infinity rule. One example of this was during the torture versus dust specks debate, when the torturers chided the dust speckers for having an arbitrary point at which stimuli that were not painful enough to be considered true pain became just painful enough to be considered as being in the same reference class as torture. I'd be really interested to find out how often something like the argument from arbitrariness turns out to have been made by those on the ultimately correct side of the argument, and use this information as a sort of outside view.
I share the position that Kaj_Sotala outlined here: http://lesswrong.com/lw/1mc/normal_cryonics/1hah
In the relevant sense there is no difference between the Richard that wakes up in my bed tomorrow and the Richard that might be revived after cryonic preservation. Neither of them is a continuation of my self in the relevant sense because no such entity exists. However, evolution has given me the illusion that tomorrow-Richard is a continuation of my self, and no matter how much I might want to shake off that illusion I can't. On the other hand, I have no equivalent illusion that cryonics-Richard is a continuation of my self. If you have that illusion you will probably be motivated to have yourself preserved.
Ultimately this is not a matter of fact but a matter of personal preference. Our preferences cannot be reduced to mere matters of rational fact. As David Hume famously wrote: "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." I prefer the well-being of tomorrow-Richard to his suffering. I have little or no preference regarding the fate of cryonics-Richard.
Why is psychological continuity important? (I can see that psychological continuity is very important to an identity, but I don't see the intrinsic value of an identity existing merely because it is promised psychological continuity.)
In our lives, we are trained to worry about our future self because eventually our plans for our future self will affect our immediate self. We also might care about our future self altruistically: we want that person to be happy just as we would want any person to be happy whose happiness we are responsible for. However, I don't sense any responsibility to care about a future self that needn't exist. On the contrary, if this person has no effect on anything that matters to me, I'd rather be free of being responsible for this future self.
In the case of cryonics, you may or may not decide that your future self has an effect on things that matter to you. If your descendants matter to you, or propagating a certain set of goals matters to you, then cryonics makes sense. I don't have any goals that project further than the lifespan of my children. This might be somewhat unusual, and it is the result of recent changes in philosophy. As a theist, I had broad-stroke hopes for the universe that are now gone.
Less unusual, I think, though perhaps not generally realized, is the fact that I don't feel any special attachment to my memories, thoughts, viewpoints, and values. What if a person woke up to discover that the last few days were a dream and they actually had a different identity? I think they wouldn't be depressed about the loss of their previous identity. They might be depressed about the loss of certain attachments if the attachments remained (hopefully not too strongly, as that would be sad). The salient thing here is that all identities feel the same.
I've just read this article by Ben Best (President of CI): http://www.benbest.com/philo/doubles.html
He admits that the possibility of duplicating a person raises a serious question about the nature of personal identity, that continuity is no solution to this problem, and that he can find no other solution. But he doesn't seem to consider that the absence of any solution points to his concept of personal identity being fundamentally flawed.