Lukas_Gloor comments on An attempt to dissolve subjective expectation and personal identity - LessWrong
Does the reductionist view of personal identity affect how we should ethically evaluate death? I mean, even if we obviously can't shake it off for making day-to-day decisions. For instance, if continuing to exist is like bringing new conscious selves into existence (by omission, i.e. by not killing oneself), and if we consider continued existence ethically valuable, wouldn't this imply classical total utilitarianism, the view that we should try to fill the universe with happy moments? To me it seems like it undermines "prior-existence" views. Also, the idea of "living as long as possible" appears odd under this view, like an arbitrary grouping of certain future conscious moments one just happens to care about (for evolutionary reasons having nothing to do with "making the world a better place"). Finally, in the comments someone remarked that he still has an aversion to creating repetitive conscious moments, but wouldn't the reductionist view of personal identity also undermine that? For *whom* would repetition be a problem? I'm not a classical utilitarian, by the way; I'm just playing devil's advocate.
I actually don't think any of those things are problematic. A reductionist view of personal identity mainly feels like an ontology shift - you need to redefine all the terms in your utility function (or other decision-making system), but most outcomes will actually be the same (with the advantage that some decisions that were previously confusing should now be clear). Specifically:
I don't think so! You can redefine death as a particular (optionally animal-shaped) optimization process ceasing operation, which does not rely on personal identity. (Throw in a more explicit reference to lack of continuity if you care about physical continuity.) The only side effect of the reductionist view, I feel, is that it makes our preferences feel more arbitrary, but I think that's something you have to accept either way in the end.
Not really. You can focus your utility function on one particular optimization process and its potential future execution, which may be appropriate given that the utility function defines the preference over outcomes of that optimization process.
This is true enough. If you have strong preferences for the world outside of yourself (general "you"), you can argue that continuing the operation of the optimization process with these preferences increases the probability of the world more closely matching these preferences. If you care mostly about yourself, you have to bite the bullet and admit that that's very arbitrary. But since preferences are generally arbitrary, I don't see this as a problem.
This basically comes down to the fact that just because you believe that there's no continuity of personal identity, you don't have to go catatonic (or epileptic). You can still have preferences over what to do, because why not? The optimization process that is your body and brain continues to obey the laws of physics and optimize, even though the concept of "personal identity" doesn't mean much. (I'm really having a lot of trouble writing the preceding sentence in a clear and persuasive way, although I don't think that means it's incorrect.)
And in case someone thinks that I over-rely on the term "optimization process" and the comment would collapse if it's tabooed, I'm pretty sure that's not the case! The notion should be emergent as a pattern that allows more efficient modelling of the world (e.g. it's easier to consider a human's actions than the interaction of all particles that make up a human), and the comment should be robust to a reformulation along these lines.
I strongly second this comment. I have been utterly horrified the few times in my life when I have come across arguments along the lines of "personal identity isn't a coherent concept, so there's no reason to care about individual people." You are absolutely right that it is easy to steel-man the concept of personal identity so that it is perfectly coherent, and that rejecting personal identity is not a valid argument for total utilitarianism (or any ethical system, really).
In my opinion the OP is a good piece of scientific analysis. But I don't believe it has any major moral implications, except maybe "don't angst about the Ship of Theseus problem." The concept of personal identity (after it has been sufficiently steel-manned) is one of the wonderful gifts we give to tomorrow, and any ethical system that rejects it has lost its way.
Well, you could focus your utility function on anything you like anyway; the question is why, under utilitarianism, it would be justified to value this particular optimization process. If personal identity were fundamental, then you'd have no choice: conscious existence would be tied to some particular identity. But if it's not fundamental, then why prefer this particular grouping of conscious-experience-moments rather than any other? If I have the choice, I might as well choose some other set of these moments, because, as you said, "why not"?
I wrote an answer, but upon rereading, I'm not sure it's answering your particular doubts. It might though, so here:
Well, if we're talking about utilitarianism specifically, there are two sides to the answer. First, you favour the optimization-that-is-you more than others because you know for sure that it implements utilitarianism and others don't (thus having it around longer makes utilitarianism more likely to come to fruition). Basically the reason why Harry decides not to sacrifice himself in HPMoR. And second, you're right, there may well be a point where you should just sacrifice yourself for the greater good if you're a utilitarian, although that doesn't really have much to do with dissolution of personal identity.
But I think a better answer might be that:
You do not, in fact, have the choice. Or maybe you do, but it's not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity, and there is no additional motivation for doing so. If you mean something similar to Eliezer writing "how do I know I won't be Britney +5 five seconds from now" in the original post, that question actually relies on a concept of personal identity and is undefined without it. There's not really a classical "you" that's "you" right now, and five seconds from now there will still be no "you" (although obviously there's still a bunch of molecules following some patterns, and we can assume they'll keep following similar patterns in five seconds, there's just no sense in which they could become Britney).
I think the point is actually similar to this discussion, which also somewhat confuses me.
Well, for what it is worth I'm not extremely concerned about dying, and I was much more afraid of dying before I figured out that subjective expectation doesn't make sense.
My present decisions are made by consulting my utility function about what sort of future I would wish to see occur. That optimal future need not necessarily contain a being like myself, even after taking into account the particularly deep and special affection I have for future me.
Don't get me wrong here - death as arbitrarily set by our biology is bad and I wish it wouldn't happen to me. But that doesn't mean that preserving my consciousness for an arbitrarily long time is the optimum good. There may well come a time when my consciousness is outdated, or perhaps just made redundant. Following the same thought process that keeps me from making 100 copies of myself for no good reason, I wouldn't want to live forever for no good reason.
I'm the one who mentioned having an aversion to creating redundant consciousnesses, by the way. An interesting universe is one of my many terminal values, and diversity keeps things interesting. Repetition is a problem for me because it saps resources away from uniqueness and is therefore a sub-optimal state. The first hundred or so duplicates would be pretty fascinating (think of the science! Best control group ever) but if you get too many copies running around things get too homogeneous and my terminal value for an interesting universe will start to complain. There is a diminishing return on duplicates - the extent to which they can make unique contributions declines as a function of the number of copies.
Got infinite resources? Sure, go crazy - create infinite copies of yourself that live forever if you want. As a matter of fact, why not just go ahead and create every possible non-morally aberrant thing you can imagine! But I'm not sure that infinite resources can happen in our universe. Or at least, I was assuming significant resource constraints when I said that I have an aversion to unnecessary duplication.
The same thought process applies to not necessarily living forever. It's not interesting to have the same individuals continue indefinitely - it's more diverse and interesting to have many varied individuals rising and falling. There are better things to do with resources than continually maintain everyone who is ever born. Of course, some of the more emotional parts of me don't give two shits about resource constraints and say "fuck no, I don't want myself or anyone else to die!" but until you get infinite resources, I don't see how that's feasible.
This does an awesome job of putting into words a thought I've had for a long time, and one of the big reasons I have trouble getting emotionally worked up about the idea of dying. Although it's not necessarily true that an individual living forever would be less interesting–the more time you have to learn and integrate skills, the more you can do and imagine, especially because assuming we've solved aging also kinda suggests we've solved things like Alzheimer's and brain plasticity and stuff. Then again, when I imagine "immortal human", I think my brain comes up with someone like Eliezer being brilliant and original and getting more so with practice, as opposed to Average Joe bored in the same career for 1000 years. The latter might be closer to the truth.
From my perspective, it's not intelligence that's the problem so much as morality, culture, and implicit attitudes.
Even if we could freeze a human at peak cognitive capacity (20-30 years?) we wouldn't get the plasticity of a newborn child. I don't think that sexism, racism, homophobia, etc. just melt away with the accumulation of skills and experience. It's true that people get more socially liberal as they get older, but it's also true that they don't get more socially liberal as quickly as the rest of society. And the "isms" I named are only the most salient examples; there are many subtler implicit attitudes which will be much harder to name and shed. Remember that most of the current world population has the cultural attitudes of 1950s America or worse.
Of course, I might be thinking too small. We might be able to upgrade ourselves to retain both the flexibility of a new mind and the efficiency of an adult one.
I don't know how much hope I have for my own, individual life though. It will probably cost a lot to maintain it, and I doubt the entire planet will achieve an acceptable enough standard of living that I'd be comfortable spending vast amounts on myself (assuming I can even afford it). It's something I've still got to think about.
Of course, societal attitudes can become more conservative as well as more liberal. You seem to be assuming that the overall direction is towards greater liberality, but it's not obvious to me that that's the case (e.g. the Arab world going from the center of learning during the Islamic Golden Age to the fundamentalist states that many of them are today, various claims that I've heard about different fundamentalist and conservative movements only getting really powerful as a backlash to the liberal atmosphere of the sixties, some of my friends' observations about today's children's programming having more conservative gender roles than the equivalent programs in the seventies-eighties IIRC, the rise of nationalistic and racist movements in many European countries during the last decade or two, etc.). My null hypothesis would be that liberal and conservative periods go back and forth, with only a weak trend towards liberality which may yet reverse.