I was saddened to learn of the recent death by suicide of Chris Capel, known here as pdf23ds. I didn't know him personally, but I was an occasional reader of his blog. In retrospect, I regret not having ever gotten into contact with him. Obviously, I don't know that I could have prevented his death, but, as one with mental-health issues myself, at least I could have made a friend, and been one to him. Now I feel a sense of disappointment that I'll never get that chance.
Having said that, I must say that I take his arguments here very seriously. I do not consider it to be automatic that every suicide is the "wrong" decision. We can all imagine circumstances under which we would prefer to die than live; and given this, we should also be able to imagine that these kinds of circumstances may vary for different people. And if one is already accepting of euthanasia for incurable physical suffering, it should not be that much of a leap to accept it for incurable psychological suffering as well.
Of course, as Chris acknowledges, this doesn't imply that everyone who is contemplating suicide is actually being rational. People may, for instance, be severely mistaken about their prospects for improvement, especially while in the midst of an acute crisis. (Conceivably, that could even have been his own situation.) Nonetheless, I think many of the usual arguments that people use to show that suicide is "wrong" are bad arguments. For example, consider what is probably the most common argument: that committing suicide will inflict pain upon friends and family. It frankly strikes me as absurd (and grotesquely unempathetic) to suppose that someone for whom life is so painful that they would rather die somehow has an obligation to continue enduring it just in order to spare other people the emotion of grief (which they are inevitably going to have to confront at some point anyway, at least until we conquer all death).
Ironically, society's demonization of suicide and suicidal people has negative consequences even from the standpoint of preventing suicide itself, as Chris points out:
I passionately hate that all of the mental health people are obligated by law to commit me to an asylum if they think I’m about to kill myself. They can’t be objective. You know, if they could talk to me without such stupid constraints, they might have prevented this very suicide
It seems to me very possible that our society's fervor to prevent suicide may result in denying severely depressed people the compassion they need. This could theoretically be worth it if it prevented enough suicides that turned out to be worth preventing, but cases like Chris's raise doubts about this in my mind. (From both angles: if Chris's decision was the right one for him, then the system is saving people it shouldn't be saving; if, on the other hand, it was the wrong decision, then we clearly see how the system failed him.)
Although I'm inclined to be sympathetic to Chris's view -- perhaps because I haven't always been maximally enthusiastic about my own existence -- there are some arguments that do worry me. Such as: if you think of future versions of yourself as separate agents, then suicide is a form of homicide. However, suicide is usually carried out on the belief that those future selves would approve of their nonexistence; and all of our decisions have consequences (often irreversible) for our future selves, so this is a general ethical problem that transcends the specific issue of suicide.
This post is a place to rationally discuss the ethics and rationality of suicide, as well as our attitudes (on an individual level, and as reflected in our institutions) toward suicidal people and, more generally, those suffering from psychological conditions such as depression.
I'm sad that Chris won't be able to participate.
This is correct, but in an overwhelming proportion of cases of non-altruistic suicide with cryonics, I consider the expected value of continuing to live (the probability of improvement multiplied by the utility of that improvement) to be overwhelmingly greater than the utility gained by avoiding the suffering that the agent would necessarily undergo before that improvement could be realized.
There are probably some exceptions, but they will be overwhelmingly rare. I haven't heard any examples of exceptions.
You are right that I was wrong to use "zero possibility of improvement" as my requirement.