shminux comments on Ritual 2012: A Moment of Darkness - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Having lost parents and grandparents in the last several years, I appreciate your sentiment. But, as much as I would want to live forever, I am not sure that eternal individual life is good for humanity as a whole, at least without some serious mind hacking first. Many other species, like, say, salmon, have a fixed lifespan, so intelligent salmon would probably not worry about individual immortality. It seems to me that associating the natural death of an individual with evil is one of those side effects of evolution humans could do without. That said, I agree that suffering and premature death probably have no advantage for the species as a whole and ought to be eliminated, but I cannot decide for sure whether a fixed lifespan is such a bad idea.
I actually mostly agree with you. Or at least, I agree that the answer is not terribly obvious. I didn't expound upon it during the ceremony (partly due to time, and partly because one of the most important aspects of the moment was to give anti-deathists a time to grieve for people they lost, whose deaths they were unable to process among peers who shared their beliefs).
But in the written-up version here, I thought it was important to make my views clear, so I included the bit about not actually being that much of an anti-deathist. I think the current way people die is clearly suboptimal, and once you remove it as an anchor I'm not sure whether people should die after 100 years, a thousand years, longer, or at all. But I don't think it's as simple an idea as "everybody gets to live forever."
The obvious answer is "Everyone dies if and when they feel like it. If you want to die after 100 years, by all means; if you feel like living for a thousand years, that's fine too; totally up to you."
In any case that seems to me to be much more obvious than "we (for some value of 'we') decide, for all of humanity, how long everyone gets to live".
In other words, I don't think there's a fact of the matter about "if people should die after 100 years, a thousand years, or longer or at all". The question assumes that there's some single answer that works for everyone. That seems unlikely. And the idea that it's OK to impose a fixed lifespan on someone who doesn't want it is abhorrent.
Additionally — this is re: shminux's comment, but is related to the overall point — "Good for humanity as a whole" and "advantage for the species as a whole" seem like nonsensical concepts in this context. Humanity is just the set of all humans. There's no such thing as a nebulous "good for humanity" that's somehow divorced from what's good for any or every individual human.
If resources are limited and population has reached carrying capacity — even if those numbers are many orders of magnitude larger than today — then each living entity would get one full measure of participating in the creation of a new living entity, and then enough time after that so that the average age of participating in life-creation falls at the midpoint between birth and death. So with sexual reproduction, you'd get to have two kids, and then, when your second kid is as old as you were when your first kid was born, it would be your turn to die. I suspect that in that world I would decide to have my second kid eventually, and thus I'd end up dying when my age was somewhere in the three digits.
Obviously, that solution is "fair and stable", not "optimal". I'm not arguing that that's how things should work — and I can easily imagine ways to change it that I'd view as improvements — but it's a nice simple model of how things could be stable.
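The arithmetic of the rule above can be made concrete in a few lines. This is just an illustrative sketch of the model as described (the function name and sample ages are my own, not anything from the comment): under the rule, your death age is simply the sum of your age at each child's birth, so delaying the second kid extends your life.

```python
def death_age(age_at_first_child, age_at_second_child):
    """Age of death under the proposed rule: you die when your second
    child reaches the age you were when your first child was born,
    i.e. at (age at second birth) + (age at first birth)."""
    return age_at_second_child + age_at_first_child

# Kids at 30 and 90 -> you die at 120; postponing the second
# child to 150 would stretch that to 180.
print(death_age(30, 90))  # 120
```

Since each person contributes exactly two children and then dies, the population replaces itself one-for-one, which is where the "stable" claim comes from.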
Well, that model may be stable (I haven't actually thought it through sufficiently to judge, but let's grant that it is) — but how exactly is it "fair"? I mean, you're assuming a set of values which is nowhere near universal in humanity, even. I'm really not even sure what your criteria here are for fairness (or, for that matter, optimality).
My problem with what you describe is the same as my problem with what shminux says in some of his comments, and with a sort of comment that people often make in similar discussions about immortality and human lifespan. Someone will describe a set of rules, which, if they were descriptive of how the universe worked, would satisfy some criteria under discussion (e.g. stability), or lack some problem under discussion (e.g. overpopulation).
Ok. But:
For instance, you say that "each living entity would get to have" so-and-so in terms of lifespan. What does that mean? Are you suggesting that the DNA of every human be modified to cause spontaneous death at some predetermined age? Aside from the scientific challenge, there are... a few... moral issues here. Perhaps we'll just kill people at some age?
What I am getting at is that you can't just specify a set of rules that would describe the ideal system when in reality, getting from our current situation to one where those rules are in place would require a) massive amounts of improbable scientific work and social engineering, and b) rewriting human terminal values. We might not be able to do the former, and I (and, I suspect, most people, at least in this community) would strongly object to the latter.
Not necessarily true. The question posits the existence of an optimal outcome. It just neglects to mention what, exactly, said outcome would be optimal to. It would probably be necessary to determine the criteria a system that accounts for immortality has to meet to satisfy us before we start coming up with solutions.
A limited distribution of resources somewhat complicates the issue, and even with nanotechnology and fusion power there would still be the problem of organizing a system that isn't inherently self-destructive.
I think I agree with the spirit of your answer: "We can't possibly figure out how to do that, and in any case doing so wouldn't feel right, so we'll let the people involved sort it out amongst themselves." But there are a lot of problems that can arise from that. There would probably need to be some sort of system of checks and balances, but that would probably deteriorate over time and could itself turn the whole thing upside down. I doubt you'll ever be able to really design a system for all humanity.
To you, perhaps. Well, and me. Your intuitions on the matter are not universal, however. Far from it, as our friends' comments show.
My main problems (read: ones that don't rest entirely on feelings of moral sacredness) with such an idea would be the dangerous vulnerability of the system it describes to power grabs, its capacity to threaten my ambitions, and the fact that, if implemented, it would lead to a world that's all around boring (I mean, if you can fix the life spans then you already know the ending. The person dies. Why not just save yourself the trouble and leave them dead to begin with?)
Note: Not trying to attack your position, just curious.
Fixed by whom, might I ask?
You seem to be implying that designed death is worse. How do you figure?
Superhappy aliens, FAI, United Nations... There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (salmon analogy). Humans' terminal values are adjusted in a way that they don't strive for infinite individual lifespan.
I don't. Suffering is bad, finite individual existence is not necessarily so.
No proposal that includes these words is worth considering. There's no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone's souls. That's literally what terminal values are all about; you can only trade off between them, not optimize them away whenever it would seem expedient to!
If it's a terminal value for most people to suffer and grieve over the loss of individual life - and they want to suffer and grieve, and want to want to - a sensible utilitarian would attempt to change the universe so that the conditions for their suffering no longer occur, instead of messing with this oh-so-inconvenient, silly, evolution-spawned value. Because if we were to mess with it, we'd be messing with the very complexity of human values, period.
A statement like that needs a mathematical proof.
"If" indeed. There is little "evolution-spawned" about it (not that it's a good argument to begin with, trusting the "blind idiot god"), a large chunk of this is cultural. If you dig a bit deeper into the reasons why people mourn and grieve, you can usually find more sensible terminal values. Why don't you give it a go.
I agree with what you're saying, but just to complicate things a bit: what if humans have two terminal values that directly conflict? Would it be justifiable to modify one to satisfy the other, or would we just have to learn to live with the contradiction? (I honestly don't know what I think.)
Ah... If you or I knew what to think, we'd be working on CEV right now, and we'd all be much less fucked than we currently are.
If human terminal values need to be adjusted for this to be acceptable to them, then it is immoral by definition.
Looks like you and I have different terminal meta-values.
Unless you own a time machine and come from a future where salmon-people rule the earth, I seriously doubt that. If you're a neurotypical human, then you terminally value not killing people. Mindraping them into doing it themselves continues to violate this preference, unless all you actually care about is people's distress when you kill them, in which case remind me never to drink anything you give me.
Typical mind fallacy?
... are you saying I'm foolish to assume that you value human life? Would you, in fact, object to killing someone if they wouldn't realize? Yes? Congratulations, you're not a psychopath.
Everyone who voluntarily joins the military is a psychopath?
"Neurotypical"... almost as powerful as True!
Seems like a perfectly functional Schelling point to me. Besides, I needed a disclaimer for the possibility that he's actually a psychopath or, indeed, an actual salmon-person (those are still technically "human", I assume.)
I'm really curious to know what you mean by 'terminal meta-values'. Would you mind expanding a bit, or pointing me in the direction of a post which deals with these things?
Say, whether it is ever acceptable to adjust someone's terminal values.
No, I'm perfectly OK with adjusting terminal values in certain circumstances. For example, turning a Paperclipper into an FAI is obviously a good thing.
EDIT: Of course, turning an FAI into a Paperclipper is obviously a bad thing, because instead of having another agent working towards the greater good, we have an agent working towards paperclips, which is likely to get in the way at some point. Also, it's likely to feel sad when we have to stop it turning people into paperclips, which is a shame.
Possible outcome; better than most; boring. I don't think that's really something to strive for, but my values are not yours, I guess. Also, I'm assuming we're just taking whether an outcome is desirable into account, not its probability of actually coming about.
Did you arrive at this from logical extrapolation of your moral intuitions, or is this the root intuition? At this point I'm just curious to see how your moral values differ from mine.
Good question. I'm just looking at some possible worlds where eternal individual life is less optimal than finite life for the purposes of species survival, yet where personal death is not a cause of individual anguish and suffering.