No, there's a lot of room between 'costs you nothing' and 'self-destructive'.
I got the impression that you aren't allowed any self-harm or evil acts. If you won't stop something for epsilon evil, then you care about it less than epsilon evil. If this is true for all epsilon, you only care an infinitesimal amount.
I don't mean "costs nothing" as in "no self-harm". I mean that a Kantian cares about not directly harming others, so directly harming others for some end would count as a cost of pursuing that end. You could measure how much they care about something by how much harm to others they're willing to inflict for it. If they're willing to inflict zero harm, they care zero about it.
Also, I was pretty careful to say that you can't have a DUTY to help others self-destructively. But it's certainly permissible to do so (so long as it's not aimed at self-destruction).
It's also permissible under nihilist ethics. I'm not going to say that nihilism is anti-suffering just because nihilism allows you to prevent it.
I judge an ethical system based on what someone holding to it must do, not what they can.
You are, however, prohibited from acting wrongly for the sake of others, or yourself. And that's just Kant saying "morality is the most important thing in the universe."
If you are prohibited from acting wrongly under any circumstances, then the most important thing is that you, personally, are moral. Everyone else acting immorally is an infinitely distant second.
No, we're using the same definition.
If someone insults me, I generally won't strike them or even respond, but that doesn't mean I'm not pissed off.
We are not using the same definition. When I say that someone following an ethical framework should care about suffering, I don't mean that it should make them feel bad. I mean that it should make them try to stop the suffering.
Although my exact words were "In what sense can you be considered to care about things you are not responsible for?", so technically the answer would be "In the sense that you feel bad about it."
I got the impression that you aren't allowed any self-harm or evil acts. If you won't stop something for epsilon evil, then you care about it less than epsilon evil. If this is true for all epsilon, you only care an infinitesimal amount.
This sounds right to me, so long as 'self-harm' is taken pretty restrictively, and not so as to include things like costing me $20.
In his discussion of the 'murderer at the door' case Kant takes pains to distinguish between 'harm' and 'wrong'. So while we should never wrong anyone, there's nothing intrinsically wrong with...
I have previously been saying things like "consequentialism is obviously correct". But this morning it occurred to me that this was gibberish.
I maintain that, for any consequentialist goal, you can construct a set of deontological rules which will achieve approximately the same outcome. The more fidelity you require, the more rules you'll have to make (so of course it's only isomorphic in the limit).
Similarly, for any given deontological system, one can construct a set of virtues which will cause the same behavior (e.g., "don't murder" becomes "it is virtuous to be the sort of person who doesn't murder").
The opposite is also true. Given a virtue ethics system, one can construct deontological rules which will cause the same things to happen. And given deontological rules, it's easy to get a consequentialist system by predicting what the rules will cause to happen and then calling that your desired outcome.
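To make the inter-translation claim concrete, here is a minimal toy sketch (the actions and utility values are mine, invented purely for illustration): start from a consequentialist utility, mechanically derive deontological rules from it, recast those rules as virtues, and check that all three frameworks permit exactly the same actions.

```python
ACTIONS = ["tell the truth", "lie", "keep a promise", "break a promise"]

# Consequentialist starting point: a utility over actions
# (collapsing outcomes into actions for simplicity).
utility = {"tell the truth": 1, "lie": -1,
           "keep a promise": 1, "break a promise": -1}

def consequentialist_permits(action):
    return utility[action] > 0

# Construct deontological rules: one "don't X" rule per bad action.
rules = {f"don't {a}" for a in ACTIONS if utility[a] <= 0}

def deontologist_permits(action):
    return f"don't {action}" not in rules

# Construct virtues from rules: "don't X" becomes
# "being the sort of person who doesn't X".
virtues = {rule.replace("don't", "being the sort of person who doesn't")
           for rule in rules}

def virtuous_person_permits(action):
    return f"being the sort of person who doesn't {action}" not in virtues

# All three frameworks endorse the same actions.
for a in ACTIONS:
    assert (consequentialist_permits(a)
            == deontologist_permits(a)
            == virtuous_person_permits(a))
print("all three frameworks agree on every action")
```

Of course this only captures a finite action space exactly; as noted above, richer goals need ever more rules, so the equivalence holds only in the limit.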
Given that you can phrase your desired (outcome, virtues, rules) in any system, it's really silly to argue about which system is the "correct" one.
Instead, recognize that some ethical systems are better for some tasks. Want to compute actions given limited computation? Better use deontological rules or maybe virtue ethics. Want to plan a society that makes everyone "happy" for some value of "happy"? Better use consequentialist reasoning.
Last thought: none of the three frameworks actually gives any insight into morality. Deontology leaves the question of "what rules?", virtue ethics leaves the question of "what virtues?", and consequentialism leaves the question of "what outcome?". The hard part of ethics is answering those questions.
(ducks before accusations of misusing "isomorphic")