
In secret, an unemployed man with poor job prospects uses his savings to buy a large term life insurance policy, and designates a charity as the beneficiary. Two years after the policy is purchased, it will pay out in the event of suicide. The man waits the required two years, and then kills himself, much to the dismay of his surviving relatives. The charity receives the money and saves the lives of many people who would otherwise have died.

Are the actions of this man admirable or shameful?


One thing to note is that the man would probably harm, not help, his chosen charity (in expectation).

If it was thought that the charity had encouraged the "really extreme altruism", or if it was simply thought that the charity was the sort of thing that fanatics like that liked, the charity would have serious problems attracting others' work or donations, since most people fear fanatical and suicidal mental states. It would need to refuse the money, and refusing the money wouldn't be enough to prevent serious damage.

8wedrifid13y
One would hope that in the two years between signing up for the insurance policy and offing himself he took the time to figure out how to make the donation suitably indirect and manage appearances. All it would take is one person you can trust.
3khafra13y
I don't know if "trust" is a sufficiently boolean property for this. One would need an executor trustworthy to:

* Handle large amounts of money with no oversight
* Deal with the legal system
* Maintain absolute discretion on the subject, basically forever
* Deal with the knowledge that a close, trusting friend is going to commit suicide for unconventional reasons

A good lawyer fits some of those criteria, but not all; and is difficult for the unemployed to retain. Frankly, I think that most people who could inspire that kind of loyalty in others could do more good alive.

Deal with the knowledge that a close, trusting friend is going to commit suicide for unconventional reasons

They do not need to know this. Their role is to execute your will. That is all.

Frankly, I think that most people who could inspire that kind of loyalty in others could do more good alive.

Will the money to someone else who is obsessed with the cause. In that case you don't need personal trust. Just game theory.

Saying "this will do more harm than good" sounds wise and sends the desired message of 'suicide is bad and I do not encourage it' but isn't actually accurate under examination.

-1rwallace13y
"This will do more harm than good" may not be accurate under examination, but I think it is accurate in reality. What you're talking about is a flimsy elaborate plan that requires some people to do exactly what they are supposed to do and nobody else to seriously interfere. The probability of such a plan working first time is small enough to be ignored. Something will go wrong that you didn't think of. In many contexts, that's not a showstopper: you wait until something does go wrong, then you fix it. But if step two of the plan was "you die", it's going to be a bit hard to fix what goes wrong in step three.
9wedrifid13y
I disagree. Especially with the way 'flimsy', 'elaborate' and 'reality' are used (or misused) and the straightforward complications of will-execution raised as though this is some sort of special case. I would consider an argument of the form "This is a f@$%@ing terrible idea because if you kill yourself you DIE" far more persuasive than anything that relied on technical difficulties. Flip. This is two years worth of preparation time. How long does it take to google "suicide look like accident"? The technical problem is utterly trivial. It is just one that you are better off not implementing. On account of life being better than death.
1rwallace13y
Well I agree with you that "if you kill yourself you die" is a sufficient and primary argument against the proposal. I was merely following the implied "what if somebody is in a suicidal mood and therefore not convinced by the primary argument, what arguments are there against the feasibility of the proposal on its own terms" of this subthread.
0Mestroyer12y
You could just split the money among a whole bunch of different charities. That way no one in particular is shamed by the news stories that result.

Shame? Is that the issue? Shame sounds like something that he can't feel because he's dead but that his relatives could feel regarding him because his actions indicate/are their lack of selective fitness. His actions aren't generally admirable because human preferences aren't set up to admire that sort of altruism.
His actions are generally "good" in that they lead to a better rank-order world by his criteria than non-action would, but are probably sub-optimal, because at the cost of his life he can probably produce a better world rank-ordering (I certainly hope he managed to at least donate all his organs, but unless the recipients are radical altruists too, he's still probably nowhere near optimal).

5TimFreeman13y
You can't usefully donate organs if you commit suicide. Suicide leads to autopsy leads to unusable organs. This covers donating brains too, so to a first approximation, cryonics won't work for you if you suicide. With that said, I agree that if we assume for the purposes of argument he could have donated his organs, and he cared enough about others to donate his life insurance to charity, he would probably want to donate his organs too.
8christopherj10y
I wonder how long before an insurance company decides to test cryonics as an excuse. "We respect his belief that he is not dead, but rather in suspended animation."
1DanielH10y
That would probably be a good thing. I think that the company says they pay out in the event of legal death, so this would mean that they'd have to try to get the person declared "not dead". By extension, all cryonics patients (or at least all future cryonics patients with similar-quality preservations) would be not dead. If I were in charge of the cryonics organization this argument was used against, I would float the costs of the preservation and try to get my lawyers working on the same side as those of the insurance company. If they succeed, cryonics patients aren't legally dead and have more rights, which is well worth the cost of one guy's preservation + legal fees. If they fail, I get the insurance money anyway, so I'm only out the legal fees. At least most cryonics patients have negligible income, so the IRS isn't likely to get very interested.

The man has done nothing shameful: (a) his life is his own; and (b) the insurance company bet, with its eyes open, that sufficient suicide-intenders would back down from their plans within two years that the policies would still be profitable. It lost its bet, but it was a reasonable bet.

The man has done nothing admirable, either; he has taken money from the shareholders of the insurance company, and given it to charity. Presumably this is something the shareholders could have done themselves, if they chose to. So from a libertarian standpoint, this is not an admirable act -- he forced the shareholders to do something they didn't want to do. Even though he did this through "voluntary" means.

However, if you're of the opinion that it's a good thing to take money from shareholders (who presumably are wealthier than average) and use it to save lives, then I can see how you would think this to be an admirable act.

You could also argue that the insurance company isn't stupid: it may have sold a thousand policies to intended-suiciders, and this was the only one who went through with it. In that case, the insurance company made a profit, and this man actually h... (read more)

"So from a libertarian standpoint, this is not an admirable act -- he forced the shareholders to do something they didn't want to do."

No, he didn't. They wanted to offer a life insurance policy. I'm confident that they're not thrilled about having to pay out, but they're not being forced to do anything against their will - only to keep to the obligations they freely entered into.

The man has done nothing admirable, either; he has taken money from the shareholders of the insurance company, and given it to charity. Presumably this is something the shareholders could have done themselves, if they chose to. So from a libertarian standpoint, this is not an admirable act -- he forced the shareholders to do something they didn't want to do. Even though he did this through "voluntary" means.

This paragraph indicates that you believe that forcing people to do something they don't want to do is wrong.

What he should have done was contingently committed to selling his organs on the black market before committing suicide. Then, there would have been a net benefit to his death, instead of it being zero-sum, and his actions would have been admirable.

This paragraph indicates that you believe it is morally beneficial to save lives--in this case, by donating organs.

Why is it that when these two moral principles contradict, you let the first one win?

What he should have done was contingently committed to selling his organs on the black market before committing suicide. Then, there would have been a net benefit to his death, instead of it being zero-sum, and his actions would have been admirable.

Does not follow - the breakup value of your organs is not necessarily greater than the value of your organs working together. Just because someone gets paid doesn't mean that game is positive-sum.

4110phil13y
Yes, I assumed that the breakup value of the organs was higher. That seems reasonable to me: two kidneys save two lives, one liver saves a third life, and so on. And only one life is lost, and that one voluntarily. Also, my argument was not contingent on anyone being paid ... donating organs on the black market works too.
5wedrifid13y
House MD doesn't seem to get that sort of conversion rate from organs to lives saved. Am I generalising from fictional evidence, or is your life-saving equation absurdly optimistic? Ok, I admit, both.
1110phil13y
I guess it's an empirical question. A death creates two kidneys. Are there usually two people on a waiting list who need the kidneys and would otherwise die? If not, then perhaps I am indeed being too optimistic.

I guess it's an empirical question.

Yes.

A death creates two kidneys. Are there usually two people on a waiting list who need the kidneys and would otherwise die?

Humans aren't lego. Yes, we can transplant organs, but they don't always work and they don't always last indefinitely. We also don't just use them to flip a nice integer 'life saved' up by one. It's ok if the spare organ just increases someone's chances. Or extends a life for a while. Or drastically improves the quality of life for someone who was scraping by with other measures.

If I recall correctly, kidneys are actually the easiest organ to transplant - the least likely to cause rejection. With the right donors it gets up into the 90s (%). But translating that into lives saved or 'years added to life' is a little tricky. Especially when the patients also happen to require transfusions of donor blood throughout the process. We like to say the blood transfusions are 'saving a life'. There are only so many times you can count a life as 'saved' in a given period of time.

1110phil13y
OK, fair enough. It sounds to me, though, like it should be possible to somehow quantify the benefit of donating a kidney, on some scale, at least. Or do you think the benefit is so small, relative to one suicide, that my original argument doesn't hold?
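
One way to sketch such a quantification (a toy model only; every number below is a placeholder assumed for illustration, not real transplant data) is to work in expected quality-adjusted life years:

```python
# Toy model with placeholder numbers (not real transplant statistics), just to
# show that the benefit of a donated kidney can be put on a quality-adjusted
# life year (QALY) scale rather than counted as a binary "life saved".

p_graft_success = 0.9      # assumed chance the transplant functions (placeholder)
years_gained = 10.0        # assumed extra life expectancy vs. dialysis (placeholder)
quality_with_graft = 0.8   # assumed quality-of-life weight with a working graft (placeholder)
quality_on_dialysis = 0.6  # assumed quality-of-life weight on dialysis (placeholder)
years_on_dialysis = 10.0   # assumed survival on dialysis, for comparison (placeholder)

qalys_with_transplant = (
    p_graft_success * (years_on_dialysis + years_gained) * quality_with_graft
    + (1 - p_graft_success) * years_on_dialysis * quality_on_dialysis
)
qalys_without_transplant = years_on_dialysis * quality_on_dialysis

gain = qalys_with_transplant - qalys_without_transplant
print(f"Expected QALY gain per donated kidney (under these assumptions): {gain:.1f}")
# With these made-up numbers the gain is a handful of QALYs per kidney:
# real, but well short of one kidney meaning one whole life saved.
```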
7michaelkeenan12y
From Wikipedia:

I think this post would count as a public statement that would invalidate your life insurance policy upon suicide. Insurance companies are in the business of not actually paying out their benefits.

However, I think we could do some advocacy related to this on the usenet hardcore suicide newsgroups. We might convince some people to delay their suicides long enough to not actually kill themselves, as this meme sounds different than most other memes trying to convince truly suicidal people to not do it.

I think this post would count as a public statement that would invalidate your life insurance policy upon suicide. Insurance companies are in the business of not actually paying out their benefits.

Under U.S. law, after two years, life insurance policies can't be revoked for any reason except non-payment of premiums. If they don't cancel the policy in those two years, they have to pay out regardless of how big a liar you were.

However, I think we could do some advocacy related to this on the usenet hardcore suicide newsgroups. We might convince some people to delay their suicides long enough to not actually kill themselves, as this meme sounds different than most other memes trying to convince truly suicidal people to not do it.

And if they kill themselves anyway, after the two years are over, at least they saved a lot of other lives. Do you know of a way to reach actual suicidal people?

I am not sure I can be rational about this at all, because I find suicide repulsive. Yet my society admires the bravery of a soldier who, say, throws himself on a grenade so that it will not kill the others in his dugout. I might see a tincture of dishonesty in the man's actions, and yet he enters a contract, with a free contracting party, and performs his part of the contract.

So. Something to practice Rationality on. To consider the value of an emotional response. Thank you. I am afraid I still have the emotional response: shameful. I cannot, now, see it as admirable.

0AlexanderRM9y
I was about to give the exact same example of the soldier throwing himself on a grenade. I don't know where the idea of his actions being "shameful" even comes up. The one thing I realize from your comment is that there's the dishonesty of his actions, and if lots of people did this, insurance companies would start catching on and it would stop working, plus it would make life insurance that much harder to work with. But it didn't sound like the original post was talking about that with "shameful"; it sounds like they were suggesting (or assuming people would think) that there was something inherently wrong with the man's altruism. At least that's what's implied by the title, "really extreme altruism". Edit: I didn't catch the "Two years after the policy is purchased, it will pay out in the event of suicide." bit until reading others' comments - so, indeed, he's not being dishonest; he made a bet with the insurance company (over whether he would still intend suicide two years later) and the insurance company lost. I don't know how many insurance companies have clauses like that, though.

Why does it matter if the man is admired or shamed?

Do generic charities accept and process suicide insurance payments or estates?

Are you planning to do this?

Note the recent movie Seven Pounds.

Admirable, presuming that he expects the lives saved to be happy ones.

[-][anonymous]15y50

I'll just come out and say that - if we're allowed to ignore poorly foreseen consequences like insurance premiums going up - then yes, the action is admirable.

Roko: "If I knew someone was capable of this, I wouldn't want them as a friend or partner."

All the more reason for the man to go through with it, since he's so unappreciated and unwelcome.

Marshall: "He needs [...] a real problem to work with."

People dying preventable deaths is not a real problem?

I like this post, because it nails down my moral preferences quite nicely. I would not, under any circumstances, do this. What does that tell me about my goals in life? It tells me that I place a very high priority upon my continued existence, and that even the donation of £10^6 to a very worthy charity, which might save a thousand lives, is not worth dying for.

3CronoDAS15y
Yes, but would you object to someone else attempting this?
5Roko15y
No, if a random person wants to sacrifice their life for the greater good, then I have no objection. I would, however, suggest that they are lacking somewhat in humanity. There is such a thing as being altruistic beyond the human norm, and this is an example of it. If I knew someone was capable of this, I wouldn't want them as a friend or partner. Who knows when they might make one utilitarian calculation too many and kill us both? Perhaps I am paranoid about this because... I used to be like that.

I would, however, suggest that they are lacking somewhat in humanity. There is such a thing as being altruistic beyond the human norm, and this is an example of it.

Reminds me of one of the 101 Zen Stories http://www.101zenstories.com/index.php?story=13 :

"Hello, brother," Tanzan greeted him. "Won't you have a drink?"

"I never drink!" exclaimed Unsho solemnly.

"One who does not drink is not even human," said Tanzan.

"Do you mean to call me inhuman just because I do not indulge in intoxicating liquids!" exclaimed Unsho in anger. "Then if I am not human, what am I?"

"A Buddha," answered Tanzan.

3Nebu15y
What if the friend shared the same core values as you? If my friend had the same core value as me (e.g. it is worth killing two people to save a billion people from eternal torture), and were utilitarian, then perhaps I'd be "ok"[1] with my friend making "one utilitarian calculation too many" and killing both of us. 1: By "ok", I guess I mean I'd probably be very upset during those final moments where I'm dying, and then my consciousness would cease, my final thoughts to be damning my friend. But if I allow myself to imagine an after-life, I could see eventually (weeks after my death? months?) eventually grudgingly coming to accept that his/her choice was probably the rational one, and agreeing that (s)he "did the right thing".
-2John_Maxwell15y
You're not answering the question of whether the man did something admirable or shameful.
[-][anonymous]15y30

A significant recurring theme in the comments is that the man is essentially forcing a redistribution of wealth.

Speaking for myself, I have no in-principle problem with that. I broadly support capitalism because it is probably the system that gives the best overall result. But I'm perfectly happy to support redistribution if the benefits genuinely outweigh the costs.

"So from a libertarian standpoint, this is not an admirable act -- he forced the shareholders to do something they didn't want to do."

But he also saved many people from having to do something they didn't want to do, namely, die. The balance is still in his favour: he chooses the lesser evil.

[-][anonymous]11y00

Would a middle ground option such as "permissible but not morally required" (i.e. neither admirable nor shameful) be valid?

Simple answer: Is the charity going to do more benefit with that money than he caused his family and friends? If so, then his actions were at least a net positive from a utilitarian standpoint. It doesn't necessarily follow that it was the best action, though. Could he have raised a comparable amount of money on his own to help people with, without resorting to killing himself? If so, then I am more inclined to believe that he simply had decided to kill himself, and took advantage of it in order to try to cause some benefit for the world, which I suppose I can give (limited) support to.

[-][anonymous]15y00

I would be concerned with the charity refusing to take 'blood money', or getting bad press if it does so.

1Roko15y
≠ least convenient possible world, but true.

Offering insurance against suicide seems pretty stupid to me. Like offering insurance against someone burning their own house down. So, presumably, this story is fictional.

I don't know of anyone who has actually done this, but it is indeed possible. At least in the United States, life insurance does cover death by suicide, as long as the policy was purchased two years before the suicide took place. Of course, the person purchasing the policy does have to disclose his medical history, including any past or ongoing treatment for depression, which insurers take into account when deciding how much to charge for a policy (or whether to offer one at all).

Yes, it's morbid, but I actually did the research on this; an otherwise healthy young man might be able to get a 10 year term life insurance policy with a payout of $1,000,000 for an annual premium of around $600 (and a $10 million policy for $6000).
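
To put that quoted premium in perspective, here is a minimal back-of-the-envelope sketch (a simplification: it ignores investment returns, expenses, policy lapses, and the fact that premiums stop at death) of the payout rate the insurer is implicitly betting on:

```python
# Back-of-the-envelope sketch only: ignores investment returns, expenses,
# lapses, and the fact that someone who dies early stops paying premiums.

payout = 1_000_000       # face value of the policy, in dollars (from the figures quoted above)
annual_premium = 600     # quoted annual premium, in dollars
term_years = 10

total_premiums = annual_premium * term_years
break_even_payout_probability = total_premiums / payout

print(f"Premiums collected over the full term: ${total_premiums:,}")
print(f"Implied break-even probability of paying out: {break_even_payout_probability:.2%}")
# Roughly 0.6%: the insurer is betting that well under 1 in 100 otherwise
# healthy young buyers will die, by any cause, during the ten-year term.
```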

3Mario15y
I think, then, that the harm associated with this man's suicide would have to take into account the rise in premiums he would be forcing on people in similar situations. His death may increase the amount a similar man would have to pay, decreasing the likelihood that he could afford insurance and increasing the harm that man's death would cause his dependents. Over time, those effects could swamp any short-term benefit to the charity.

Or, if the behavior became common, insurance companies could simply decline to cover suicide. The problems would arise if, say, a car accident were accused of being a covert suicide (but wouldn't we have this same problem before the 2-year limit?) Perhaps that's why insurance companies cover suicides - for peace of mind, so that you know they won't accuse your corpse of having done it on purpose.

8Nebu15y
I think we can consider the harm associated with this man's suicide causing a rise in premiums to be relatively negligible, seeing as people have committed suicide while insured in the past, and it hasn't made prices so incredibly high as to stop insurance companies from being able to sell similar policies today.
5jimmy15y
Not only that, but he never generated the wealth in the first place. His savings were his, sure, but the rest of the money was essentially conned from the insurance company. He did not make the world richer by sacrificing himself, he sacrificed himself to (dishonestly) reallocate resources. I'd say support his actions iff you would support stealing to give to charity.

the money was essentially conned from the insurance company.

I don't see it as "conned" (or perhaps I'm inferring some connotations that you don't intend to imply by that word?): The man took "suicide-insurance". That is to say, he signed a contract with the insurance company saying something along the lines of "I'll pay you $X per month for the rest of my life. If I don't commit suicide for 2 years, but then commit suicide after that, then you have to give me 1 million dollars."

I'm sure the insurance company fully understood the terms of the contract (in fact, it is practically certain that it was the insurance company itself which wrote out the contract). The insurance company fully understood the terms of the deal and agreed to it. They employ actuaries and lawyers to go over the drafts of their contracts to ensure they mean exactly what they think they mean. No party was misled or misunderstood the terms. So how is that a con?

4brazil8413y
I agree, I don't think it's a con. It only seems like a con because you are betting with the insurance company about the contents of your brain and most people naturally assume that they understand the contents of their own brain better than some outside agency. However, I think that assumption is pretty clearly false. It seems that institutions have the benefit of a lot of past experience and can use that experience to understand people better (and predict their behavior better) than they understand or could predict themselves.
5MichaelVassar15y
Most people could acquire much more near term wealth via insurance than via work but could not acquire more near term wealth via theft (expected value) than via work.
1John_Maxwell15y
How was he dishonest?
5MichaelHoward15y
Because he didn't disclose to the insurance company that he was planning to commit suicide at the time he took out the policy(!)
1John_Maxwell15y
So? Not revealing info != dishonesty. Unless he signed a contract that stated that he had no intent to commit suicide, I don't think he ever lied. Let's say I am proficient at counting cards while playing blackjack. I go to the casino to gamble and walk away richer--consistently. This case is actually very similar to the insurance one, in that in both cases I am making a bet with some sort of large organization, and I know more about the nature of the bet than the large organization does. Anyway, is the card counter dishonest? And if not, how is the man who commits suicide different?

Not revealing info != dishonesty.

Optimizing your decisions so that other people will form less accurate beliefs is dishonesty. Making literally false statements you expect other people to believe is just a special case of this.

If you decide not to reveal info because you predict that info will enable another person to accurately predict your behavior and decline to enter an agreement with you, you are being dishonest.

5John_Maxwell13y
Hm, I wrote that comment two years ago. My new view is that it's not much worth arguing over the definition of "dishonesty", so figuring out whether the guy is "dishonest" or not is just a word game--we should figure out if others having correct beliefs is a terminal value to us, and if so, how it trades off against other terminal values. (Or perhaps individually not acting in ways that give others incorrect beliefs is a terminal value.) As a consequentialist, I mostly say the ends justify the means. I am a little cautious due to the issues Eliezer discusses in this post, but I don't think I'm as cautious as Eliezer is--I have a fair amount of confidence in my ability to notice when my brain is going into a failure mode like he describes.
3Nornagest13y
I'm not entirely comfortable with this line of thinking. Drawing a distinction between withholding relevant information and providing false information is such a common feature of moral systems that I can't help but think any heuristic that eliminates the distinction is missing something important. It all has to reduce to normality, after all. That said, biases do exist, and if we can come up with a plausible mechanism by which it'd be psychologically important without being consequentially important then I think I'd be happier with the conclusion. It might just come down to how difficult it is to prove.
5Vladimir_Nesov13y
The pragmatic distinction is that lies are easier to catch (or make common knowledge), so the lying must be done more carefully than mere withholding of relevant information. Seeing withholding of information as a moral right is a self-delusion, part of normal hypocritical reasoning. Breaking it will make you a less effective hypocrite, all else equal.
0wedrifid13y
I assert that moral right overtly, embracing all relevant underlying connotations. I am in no way deluding myself regarding the basis for that assertion and it is not relevant to any hypocrisy that I may have.
2Vladimir_Nesov13y
You haven't unpacked anything, black box disagreements don't particularly help to change anyone's mind. We are probably even talking about different things (the idea of "moral right" seems confused to me more generally, maybe you have a better interpretation).
-2wedrifid13y
It seems to be your black box. I just claim the right to withhold information - and am not thereby deluded or hypocritical. (I am deluded and hypocritical in completely different ways.) It isn't language I use by preference, even if I am occasionally willing to go along with it when others are using it. I presented my rejection as a personal assertion for that reason. While I don't personally place much stock in objectively phrased morality I can certainly go along with the game of claiming social rights.
1Vladimir_Nesov13y
Should people in general withhold relevant information more or less? There is only hypocrisy here (bad conduct given a commons problem) if less is better and you act in a way that promotes more, and self-delusion if you also believe this behavior good.
-1wedrifid13y
It is no coincidence that one of the most effective solutions to a commons problem is the assignment of individual rights. People in general should not be obliged to share all relevant information with me, nor I with them. In the same way they should not be obliged to give me their stuff whenever I want it, because that kind of social structure is unstable and has a predictable failure mode of extreme hypocrisy. No, my asserted right, if adhered to consistently (and I certainly encourage others to assert the same right for themselves), reduces the need for hypocrisy. This is in contrast to the advocacy of superficially 'nice'-sounding social rules to be supported by penalty of shaming and labeling - that is where the self-delusion lies. I prefer to support conventions that might actually work and that don't unduly penalize those that abide by them.
0Vladimir_Nesov13y
Agreed that it's practical.
3[anonymous]13y
I agree that a distinction should be drawn but I disagree about where. I think the morally important distinction is not between withholding information and providing false information, but why and in what context you are misleading the other person. If he's trying to violate your rights, for example, or if he's prying into something that's none of his business, then lie away. If you are trying to screw him over by misleading him, then you are getting into a moral gray area, or possibly worse.
0Nornagest13y
Nah, that's just standard deontological vs. consequential thinking. If dishonesty is approached in consequential terms then it becomes just another act of (fully generalized) aggression -- something you don't want to do to someone except in self-defense or unless you'd also slash their tires, to borrow an Eliezer phrase, but not something that's forbidden in all cases. It only becomes problematic in general if there's a deontological prohibition against it. Looking at it that way doesn't clarify the distinction between lying by commission vs. lying by omission, though. There's something else going on there.
0[anonymous]13y
I don't know what you just said. For example you wrote: "that's just standard deontological vs. consequential thinking." What does that mean? Does that mean that I have in a single comment articulated both deontological and consequentialist thinking and set them at odds, simultaneously arguing both sides? Or are you saying I articulated one of these? If so, which one? For my part, I don't think my comment takes either side. Whether your view is deontological or consequentialist, you should agree on the basics, which include a right to self-defense. That is the context I am talking about in deciding whether the deception is moral. So I am not saying anything consequentialist here, if that's your point. A deontologist should agree on the right to self-defense, unless his moral axioms are badly chosen.
0Nornagest13y
I think your comment describes a consequentialist take on the subject of dishonesty and implicitly argues that the deontological version is incorrect. I agree with that conclusion, but I don't think it says anything unusual on the subject of dishonesty in particular.
0[anonymous]13y
You think the right to self defense is consequentialist? That's the first I've heard about that.
0Nornagest13y
In this context, and as a heuristic rather than a defining feature. Most systems of deontological ethics I've ever heard of don't allow for lying in self-defense; it's possible in principle to come up with one that does, but I've never seen a well-defined one in the wild. I was really looking more at the structure of your comment than at the specific example of self-defense, though: you described some examples of dishonesty aimed at minimizing harm and contrasted them with unambiguously negative-sum examples, which is a style of argument I associate (pretty strongly) with a pragmatic/consequential approach to ethics. My mistake if that's a bad assumption.
0[anonymous]13y
It's no different in principle from killing in self defense. If these systems don't allow lying in self defense, then they must not allow self defense at all, because lying in self defense is a trivial application of the general right to self defense. Anyway, the fact that my point triggered a memory in you of a consequentialist versus deontological dispute does not change my point. If we delete everything you said about deontologists versus consequentialists, have you actually said something to deflect my point?
5wedrifid13y
I don't think that follows. These are deontologists we are talking about. They are in the business of making up a set of arbitrary rules and saying that's what people should do. Remembering to include a rule about being allowed to defend yourself physically doesn't mean they will remember to also allow people to lie in self defense. We can't assume deontologists are sane or reasonable. They are humans talking about morality!
4Peterdjones13y
Well, that wasn't a caricature...!
-1wedrifid13y
I don't think it was. Just a fairly simple and non-technical description. A similar simplified description of consequentialist moralizing would not read all that much differently. The key sentence in the comment in terms of conveying perspective was "They are humans talking about morality!" I actually suggest the description errs on the side of a positive idealized spin. Morality just isn't that nice.
-2shokwave13y
That is actually how deontologists work, though. It's not a caricature when the people you're talking about say this is okay because it's Right and this isn't because it's Wrong and when you ask them why some things are Right and other things are Wrong, they try to conjure up the inherent Rightness and Wrongness of actions from nowhere. Seriously!
-2Alicorn13y
No.
-1shokwave13y
I have discussed this point with a few people, and the two who self-identified as non-religious deontologists explicitly assigned objective rightness and wrongness to actions. The kind of people who are using this word "deontologist" to refer to themselves actually are doing this.
2Alicorn13y
I use the word "deontologist" to refer to myself. I do assign objective rightness and wrongness to things (technically intentions, not actions, though I will talk loosely of actions). There is no meaningful sense in which murder could be wrong in a universe that did not contain any people (humans per se are not called for) because there would be no moral agents to commit wrong acts or be the victims of rights violations. In such an uninhabited universe, it would remain counterfactually wrong for any people to murder any other people if people were to come into existence. ("Counterfactually wrong" in much the same way that it would be wrong for me to steal my roommate's diamond tiara, if she had a diamond tiara, but since she doesn't it's a pointless statement.)
0Peterdjones13y
"Deontologist" and "Moral Objectivist" are not synonyms. Most deontologists are nonetheless objectivists. The reverse does not hold since, for instance, consequentialists are not deontologists but are subjectivists. It is still a caricature to say deontologists conjure up Right and Wrong out of nowhere. The most famous deontologist was probably Kant, who argued elaborately for his claims. The persistent problem in these discussions is the assumption that moral objectivism can only work like a quasi-empiricism, detecting some special domain of ethical facts. However, nobody seriously argues for it that way. As noted by Alicorn, moral laws can apply counterfactually just as easily as natural laws.
1wedrifid13y
That is certainly true, but for my part I attribute that to them being humans engaging in moralizing, not their deontology per se. The 'objective rightness of their morals' thing can just as well be applied to consequentialist values.
2shokwave13y
Right; I trusted them when they said it was deontology that gave them absolute values - but of course, a moralizing human would say that.
0thomblake13y
'Rights' are most usefully thought of in political contexts; ethically, the question is not so much "Do I have a right to self-defense?" as "Should I defend myself?". For Kant (the principal deontologist), lying is inherently self-defeating. The point of lying is to make someone believe what you say; but, if everyone would lie in that circumstance, then no one would believe what you say. And so lying cannot be universalized for any circumstance, and so is disallowed by the criterion of universalizability.
2[anonymous]13y
This is only true if the other party is aware of the circumstance. If they are not - if they are already deceived about the circumstance - then if everyone lied in the circumstance, the other party would still be deceived. Therefore lying is not self-defeating.
0thomblake13y
I was just pointing out how Kant might justify self-defense but not lying in self-defense, in summary. If you'd like to disagree with Kant, I suggest doing so against more than an off-the-cuff summary. Though I don't recommend bothering with it, as his ethics is based on his metaphysics and his metaphysics is false.
2[anonymous]13y
Understood.
0Nornagest13y
I don't disagree with your point. I just don't see it as relevant to mine. There are any number of ways we can slice up a moral question: initiation of harm's one, protected categories like the "not any of your business" you mentioned are another, and my omission/commission distinction is a third. Bringing up one doesn't invalidate another.
3[anonymous]13y
But I think lying by omission can indeed be very bad, if you are using the lie of omission to defraud the other party, and that seems to be what is occurring in the scenario in question. Generally speaking, we are not obligated to inform random people walking down the street of the facts. That would be active assistance, which we do not owe to random strangers. In contrast, telling random strangers active lies puts them at risk, because if they act on those lies they may be harmed. So there you have a moral distinction between failing to inform people of the truth, and informing them of lies. But if you are already interacting with someone, for example if you are buying life insurance from them with the intention of killing yourself, then they are no longer random strangers, and your obligations to them increase.
2Nornagest13y
I am not arguing that lying by omission cannot be bad. Neither am I arguing for a specific policy toward lies of omission. I am arguing that folk ethics sees them as consistently less bad than lies of commission with the same consequences, and that a general discussion of the ethics of honesty ought to reflect this either by including reasons to do the same or by accounting for non-ethical reasons for the folk distinction. Otherwise you've got a theory that doesn't match the empirical data.
0RHollerith13y
That is how I feel.
-2wedrifid13y
Only if dogs have five legs if you call a tail a leg. Optimising your decisions so that other people will form less accurate beliefs can only be legitimately construed as dishonest if you say or otherwise communicate that it is your intention to produce accurate beliefs.
2MichaelHoward15y
Now I've thought more about it, if there's nothing in the agreement about suicide being intended at the time of application, then I think you're right. I think of insurance policies as having clauses in about revealing any information that might affect the likelihood of a claim, but I can understand why that might not apply to life insurance policies.
1[anonymous]15y
Because he didn't disclose to the insurance company that he was planning to commit suicide at the time he took out the policy(!)
4AllanCrossman15y
The Straight Dope has looked at this: http://preview.tinyurl.com/apvljw