pleeppleep comments on Ritual 2012: A Moment of Darkness - Less Wrong
Note: Not trying to attack your position, just curious.
Fixed by whom, might I ask?
You seem to be implying that designed death is worse. How do you figure?
Superhappy aliens, FAI, the United Nations... There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (the salmon analogy). Humans' terminal values are adjusted so that they don't strive for infinite individual lifespan.
I don't. Suffering is bad, finite individual existence is not necessarily so.
No proposal that includes these words is worth considering. There's no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone's souls. That's literally what terminal values are all about; you can only trade off between them, not optimize them away whenever it seems expedient!
If it's a terminal value for most people to suffer and grieve over the loss of individual life - and they want to suffer and grieve, and want to want to - a sensible utilitarian would attempt to change the universe so that the conditions for their suffering no longer occur, instead of messing with this oh-so-inconvenient, silly, evolution-spawned value. Because if we were to mess with it, we'd be messing with the very complexity of human values, period.
A statement like that needs a mathematical proof.
"If" indeed. There is little "evolution-spawned" about it (not that it's a good argument to begin with, trusting the "blind idiot god"); a large chunk of this is cultural. If you dig a bit deeper into the reasons why people mourn and grieve, you can usually find more sensible terminal values. Why don't you give it a go?
I agree with what you're saying, but just to complicate things a bit: what if humans have two terminal values that directly conflict? Would it be justifiable to modify one to satisfy the other, or would we just have to learn to live with the contradiction? (I honestly don't know what I think.)
Ah... If you or I knew what to think, we'd be working on CEV right now, and we'd all be much less fucked than we currently are.
If human terminal values need to be adjusted for this to be acceptable to them, then it is immoral by definition.
Looks like you and I have different terminal meta-values.
Unless you own a time machine and come from a future where salmon-people rule the earth, I seriously doubt that. If you're a neurotypical human, then you terminally value not killing people. Mindraping them into doing it themselves continues to violate this preference, unless all you actually care about is people's distress when you kill them, in which case remind me never to drink anything you give me.
Typical mind fallacy?
... are you saying I'm foolish to assume that you value human life? Would you, in fact, object to killing someone if they wouldn't realize? Yes? Congratulations, you're not a psychopath.
Everyone who voluntarily joins the military is a psychopath?
"Neurotypical"... almost as powerful as True!
Seems like a perfectly functional Schelling point to me. Besides, I needed a disclaimer for the possibility that he's actually a psychopath or, indeed, an actual salmon-person (those are still technically "human", I assume.)
I'm really curious to know what you mean by 'terminal meta-values'. Would you mind expanding a bit, or pointing me in the direction of a post which deals with these things?
Say, whether it is ever acceptable to adjust someone's terminal values.
No, I'm perfectly OK with adjusting terminal values in certain circumstances. For example, turning a Paperclipper into an FAI is obviously a good thing.
EDIT: Of course, turning an FAI into a Paperclipper is obviously a bad thing, because instead of having another agent working towards the greater good, we have an agent working towards paperclips, which is likely to get in the way at some point. Also, it's likely to feel sad when we have to stop it turning people into paperclips, which is a shame.
Possible outcome; better than most; boring. I don't think that's really something to strive for, but my values are not yours, I guess. Also, I'm assuming we're just taking whether an outcome is desirable into account, not its probability of actually coming about.
Did you arrive at this from logical extrapolation of your moral intuitions, or is this the root intuition? At this point I'm just curious to see how your moral values differ from mine.
Good question. Just looking at some possible worlds where individual eternal life is less optimal than finite life for the purposes of species survival, yet where personal death is not a cause of individual anguish and suffering.