This isn't true for Kant, at any rate. Kant would say that you have a duty to help people in need when it doesn't require self-destructive or evil behavior on your part.
In other words, if it costs you nothing. You consider having no self-destructive or evil behavior on your part to be infinitely more valuable.
This is true of any consistent ethical theory.
It is true by definition. That's what "forbidden" means.
And of course, Kant thinks we can and do care about lots of things we aren't morally responsible for.
We are not using the same definition of "care". I mean whatever motivates you to action. If you see no need to take action, you don't care.
In other words, if it costs you nothing. You consider having no self-destructive or evil behavior on your part to be infinitely more valuable.
No, there's a lot of room between 'costs you nothing' and 'self-destructive'. The question is whether or not a whole species or society could exist under universal obedience to a duty, and a duty that requires self-destruction for the sake of others would make life impossible. But obviously, helping others at some cost to you doesn't.
Also, I was pretty careful to say that you can't have a DUTY to help others self-...
I have previously been saying things like "consequentialism is obviously correct". But this morning it occurred to me that this claim was gibberish.
I maintain that, for any consequentialist goal, you can construct a set of deontological rules which will achieve approximately the same outcome. The more fidelity you require, the more rules you'll have to make (so of course it's only isomorphic in the limit).
Similarly, for any given deontological system, one can construct a set of virtues which will cause the same behavior (e.g., "don't murder" becomes "it is virtuous to be the sort of person who doesn't murder").
The opposite is also true. Given a virtue ethics system, one can construct deontological rules which will cause the same things to happen. And given deontological rules, it's easy to get a consequentialist system by predicting what the rules will cause to happen and then calling that your desired outcome.
Given that you can phrase your desired (outcome, virtues, rules) in any system, it's really silly to argue about which system is the "correct" one.
Instead, recognize that some ethical systems are better for some tasks. Want to compute actions given limited computation? Better use deontological rules or maybe virtue ethics. Want to plan a society that makes everyone "happy" for some value of "happy"? Better use consequentialist reasoning.
Last thought: none of the three frameworks actually gives any insight into morality. Deontology leaves the question of "what rules?", virtue ethics leaves the question of "what virtues?", and consequentialism leaves the question of "what outcome?". The hard part of ethics is answering those questions.
(ducks before accusations of misusing "isomorphic")