denisbider comments on Welcome to Heaven - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (242)
Is it friendly to rescue a drunk friend who is about to commit suicide, knowing that they'll come to their senses? Or is it friendly to let them die, because their current preference is to die?
That depends on whether they decided to commit suicide while in a normal-for-them frame of mind, not on their current preference. The first part of the question implies that they didn't, in which case the correct response is to rescue them, wait for them to get sober, and talk it out - and then they can commit suicide, if they still feel the need.
Very well, then. Next example. Your friend is depressed, and they want to commit suicide. You know that their real problem is a neurotransmitter imbalance that can be easily fixed. However, that same neurotransmitter imbalance is depriving them of any will to fix it, and in fact they refuse to cooperate. You know that if you fix their imbalance regardless, they will be happy, they will live a fulfilled life, and they will be grateful to you for it. Is it friendly to intervene and fix the imbalance, or is it friendly to let them die, seeing as depression and thoughts of suicide are a normal-for-them frame of mind?
It doesn't follow that they prefer to commit suicide.
This is an excellent answer, and squares well with mine: If they merely want to commit suicide, they may not have considered all the alternatives. If they have considered all the achievable alternatives, and their preference is to commit suicide, I'd support them doing so.
If this is leading in a direction where "wireheading" is identified with "being happy and living a fulfilled life", then we might as well head it off at the pass.
Being happy - being in a pleasurable state - isn't enough, we would insist that our future lives should also be meaningful (which I would argue is part of "fulfilled").
This isn't merely a subjective attribute, as is "happy" which could be satisfied by permanently blissing out. It has objective consequences; you can tell "meaningful" from the outside. Meaningful arrangements of matter are improbable but lawful, structured but hard to predict, and so on.
"Being totally happy all the time" is a state of mind, the full description of which would compress very well, just as the description of zillions of molecules of gas can be compressed to a handful of parameters. "Meaningful" corresponds to states of mind with more structure and order.
If we are to be somehow "fixed" we would want the "fix" to preserve or restore the property we have now, of being the type of creature who can (and in fact do) choose for themselves.
The preference for "objective meaningfulness" - for states which do not compress very well - seems to me a fairly arbitrary (meaningless) preference. I don't think it's much different from paperclip maximization.
Who is to observe the "meaningful" states, if everyone is in a state where they are happy?
I am not even convinced that "happy and fulfilled" compresses easily. But if it did, what is the issue? Everyone will be so happy as to not mind the absence of complicated states.
I would go so far as to say that seeking complicated states is something we do right now because it is the most engaging substitute we have for being happy.
And not everyone does this. Most people prefer to empty their minds instead. It may even be that seeking complexity is a type of neurotic tendency.
Should the FAI be designed with a neurotic tendency?
I'm not so sure.
I can give you my general class of answers to this kind of problem: I will always attempt to the best of my ability to talk someone I care about out of doing something that will cause them to irretrievably cease to function as a person - a category which includes both suicide and wireheading. However, if in spite of my best persuasive efforts - which are likely to have a significant effect, if I'm actually friends with the person - they still want to go through with such a thing, I will support them in doing so.
The specific implementation of the first part in this case would be to try to talk them into trying the meds, with the (accurate) promise that I would be willing to help them suicide if they still wanted to do that after a certain number of months (dependent on how long the meds take to work).
There are so many different anti-depressants, and choosing among them basically comes down to the psychiatrist's intuition. It can take years to iterate through all the possible combinations of psychiatric medication if they keep failing to fix the neurotransmitter imbalance. I don't think anything short of two years is long enough to conclude that a person's brain is irreparably broken. It's also a field with a good chance of rapid development, so a brain that seems irreparably broken today won't necessarily stay unfixable.
--
I explored a business in psychiatric genetic testing and identified about 20 different mutations that could help psychiatrists make treatment decisions, but it's infeasible to bring to market right now without millions of dollars for research, and the business case isn't strong enough for me to raise that money. It'll hit the market within 10 years, sooner if the business case becomes stronger for me doing it or if I have the spare $20k to go out and get the relevant patent to see what doors that opens.
I expect the first consequence of widespread genetic testing for mental health will be that NRIs become much more widely prescribed as the first-line treatment for depression.
Probably the "friendly" action would be to create an un-drunk copy of them, and ask the copy to decide.
And what do you do with the copy? Kill it?
I'm OK with the deletion of very-short-lived copies of myself if there are good reasons to do it. For example, suppose that after cryonic suspension I'm revived via scanning and whole-brain emulation (WBE). Unfortunately, unbeknownst to those reviving me, I have a phobia of the Michelin Man, and the picture of him on the wall means I deal with the shock of my revival very badly. I'd want the revival team to just shut down, change the picture on the wall, and try again.
I can also of course imagine lots of circumstances where deletion of copies would be much less morally justifiable.
There's a very nice thought experiment that helps demonstrate this (I think it's from Nozick). Imagine a sleeping pill that makes you fall asleep in thirty minutes, but you won't remember the last fifteen minutes of being awake. From the point of view of your future self, the fifteen minutes you don't remember is exactly like a short-lived copy that got deleted after fifteen minutes. It's unlikely that anyone would claim taking the pill is unethical, or that you're killing a version of yourself by doing so.
Armchair reasoning: I can imagine the mental clone and the original existing at the same time, side-by-side. I cannot imagine myself with the memory loss and myself without the memory loss as existing at the same time. Also, whatever actions my past self does actually affects my future self regardless of what I remember. As such, my instinct is to think of the copy as a separate identity and my past self as the same identity.
Your copy would also take actions that affect your future self. What is the difference here?
Imagine a scenario where I cut off my arm. I am responsible. If my copy cuts off my arm, he would be responsible, not "me."
This is all playing semantics with personal identity. I am not trying to espouse any particular belief; I am only offering one possible difference between the idea of forgetting your past and copying yourself.
That doesn't make any sense. Your copy is you.
Yeah, okay. You are illustrating my point exactly. Not everyone thinks the way you do about identity and not everyone thinks the way I mentioned about identity. I don't hold hard and fast about it one way or the other.
But the original example of someone who loses 15 minutes being similar to killing off a copy who only lived for 15 minutes implies a whole ton of things about identity. The word "copy" is too ambiguous to say, "Your copy is you."
If I switched in, "X's copy is X" and then started talking about various cultural examples of copying we quickly run into trouble. Why does "X's copy is X" work for people? Unless I missed a definition of terms comment or post somewhere, I don't see how we can just assume that is true.
The first use of "copy" I found in this thread is:
It was followed by:
As best as I can tell, you take the sentence "Your copy is you" to be a tautology or definition or something along those lines. (I could obviously be wrong; please correct me if I am.) What would you call a functionally identical version of X with a separate, distinct identity? Is it even possible? If it is, use that instead of "copy" when reading my comment:
When I read the original comment I responded to:
I was not assuming your definition of copy. Which could entirely be my fault, but I find it hard to believe that you didn't understand my point enough to predict this response. If you did, it would have been much faster to simply say, "When people at LessWrong talk about copies they mean blah." In which case I would have responded, "Oh, okay, that makes sense. Ignore my comment."
I'd actually be kinda hesitant of such pills and would need to think it out. The version of me that is in those 15 minutes might be a bit unhappy about the situation, for one thing.
Such pills do exist in the real world: a lot of sleeping pills have similar effects, as does consuming significant amounts of alcohol.
For that matter, so does falling asleep in the normal way.
And it basically results in 15 minutes of experience that simply "go away"? No gradual transition/merging into the mainline experience, just 15 minutes that get completely wiped?
eeew.
Certainly - this is the restore-from-backup scenario, for which Blueberry's sleeping-pill comparison was apt. (I would definitely like to make a secure backup before taking a risk, personally.) What I wanted to suggest was that duplicate-for-analysis was less clear-cut.
What's the difference? Supposing that as a matter of course the revival team try a whole bunch of different virtual environments looking for the best results, is that restore-from-backup or duplicate-for-analysis?
Suppose, ironically, that limitations on compute hardware mean that no matter how much we spend, we hit an exact 1:1 ratio between subjective and real time, but that the hardware is super-cheap. Also, there's no brain "merge" function. I might fork off a copy to watch a movie to review it for myself, to decide whether the "real me" should watch it.
As MrHen pointed out, you can imagine the 'duplicate' and 'original' existing side-by-side - this affects intuitions in a number of ways. To pump intuition for a moment, we consider identical twins to be different people due to the differences in their experiences, despite their being nearly identical on a macro level. I haven't done the calculations to decide where the border of acceptable use of duplication lies, but deleting a copy which diverged from the original twenty years before clearly appears to be over the line.
Absolutely, which is why I specified short-lived above.
Though it's very hard to know how I would face the prospect of being deleted and replaced with a twenty-minute-old backup in real life!
I may be answering an un-asked question, since I haven't been following this conversation, but the following solution to the issue of clones occurs to me:
Leave it up to the clone.
Make suicide fully legal and easily available (possibly 'suicide of any copy of a person in cases where more than one copy exists', though that could allow twins greater leeway depending on how you define 'person' - perhaps also add a time limit: the split must have occurred within N years). When a clone is created, it's automatically given the rights to 1/2 of the original's wealth. If the clone suicides, the original 'inherits' the wealth back. If the clone decides not to suicide, it automatically keeps the wealth that it has the rights to.
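The division rule described above can be sketched as a simple settlement function (a minimal illustration; the function names and the even 50/50 split are assumptions for the skete of the sketch, not part of the proposal's fine print):

```python
def split_on_clone(original_wealth: float) -> tuple[float, float]:
    """On cloning, the clone automatically gets rights to half the
    original's wealth; each party holds an equal share."""
    half = original_wealth / 2
    return half, half


def settle(original_share: float, clone_share: float, clone_stays: bool) -> tuple[float, float]:
    """If the clone suicides, the original 'inherits' the clone's share back.
    If the clone decides to stay, each party keeps its own share."""
    if clone_stays:
        return original_share, clone_share
    return original_share + clone_share, 0.0
```

So for an original worth 100 units, cloning yields shares of (50, 50); if the clone opts out, settlement returns (100, 0), and otherwise both keep 50.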
Given that a clone is functionally the same person as the original, this should be an ethical solution (assuming that you consider suicide ethical at all) - someone would have to be very sure that they'd be able to go through with suicide, or very comfortable with the idea of splitting their wealth in half, in order to be willing to take the risk of creating a clone. The only problem that I see is with unsplittable things like careers and relationships. (Flip a coin? Let the other people involved decide?)
This seems like a good solution. If I cloned myself, I'd want it to be established beforehand which copy would stay around, and which copy would go away. For instance, if you're going to make a copy that goes to watch a movie to see if the movie is worth your time, the copy that watches the movie should go away, because if it's good the surviving version of yourself will watch it anyway.
I (and thus my clones) don't see it as suicide, more like amnesia, so we'd have no problem going through with it if the benefit outweighed the amnesia.
As for splitting wealth: if you keep the clone around, both copies can work and make money, so you'd get about twice the income for less than twice the expenses (you could share some things). In terms of relationships, you could always bring the clones into the relationship. A four-way relationship, made up of two copies of each original person, might be interesting.
I imagine it would be much like a case of amnesia, only with less disorientation.
Edit: Wait, I'm looking at the wrong half. One moment.
Edit: I suppose it would depend on the circumstances - "fear" is an obvious one, although mitigated to an extent by knowing that I would not be leaving a hole behind me (no grieving relatives, etc.).
Depends on how much it cost me to make it, and how much it costs to keep it around. I'm permanently busy, I'm sure I could use a couple of extra hands around the house ;)