Y'all know the rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
So, the fully naive system? Killing makes you a bad person, letting people die is neutral; saving lives makes you a good person, letting people live is neutral. Giving to charity is good, because sacrifice and wanting to help make you a good person. There are sacred values (e.g. lives) and mundane ones (e.g. money), and trading between them makes you a bad person. What matters is being a good person, not effects like the expected number of deaths, so running cost-benefit analyses is at best misguided and at worst evil. Is this a fair description of folk ethics?
If so, I would argue that the bar for doing better is very, very low. There are a zillion biases that apply: scope insensitivity, loss aversion that flips decisions depending on framing, need for closure, pressure to conform, Near/Far discrepancies, fuzzy judgements that mix up the feasible and the desirable, outright wishful thinking, prejudice against outgroups, overconfidence, and so on. In ethics, unless you're going to get punished for defecting against a norm, you don't have a stake, so biases run free and never get any feedback.
Now, there are consequentialist arguments for virtue ethics, and general majoritarian-ish arguments for "norms aren't completely stupid", so this only argues for "keep roughly the same system but correct for known biases". But you at least need some kind of feedback. "QALYs per hour of effort" is pretty decent.
And this is a consequentialist argument. "If I try to kill some to save more, I'll almost certainly overestimate lives saved and underestimate knock-on effects" is a perfectly good argument. "Killing some to save more makes me a bad person"... not so much.
No, because we don't even know (yet?) how to formulate such a description. The actual decision procedures in our heads have still not been reverse-engineered, and even insofar as they have, they have still not been explained in game-theoretical and other important terms. We have only started to scratch the surface in this respect.
(Note also that there is a big difference between the principles that people will affirm in the abstract and those they apply in practice, and these inconsistencies are also still far ...