Will Crouch has written up a list of the most important unsolved problems in ethics:
The Practical List
- What’s the optimal career choice? Professional philanthropy, influencing, research, or something more common-sensically virtuous?
- What’s the optimal donation area? Development charities? Animal welfare charities? Extinction risk mitigation charities? Meta-charities? Or investing the money and donating later?
- What are the highest-leverage political policies? Libertarian paternalism? Prediction markets? Cruelty taxes, such as taxes on caged hens? Luxury taxes?
- What are the highest-value areas of research? Tropical medicine? Artificial intelligence? Economic cost-effectiveness analysis? Moral philosophy?
- Given our best ethical theories (or our best credence distribution over ethical theories), what’s the biggest problem we currently face?
The Theoretical List
- What’s the correct population ethics? How should we value future people compared with present people? Do people have diminishing marginal value?
- Should we maximise expected value when it comes to small probabilities of huge amounts of value? If not, what should we do instead?
- How should we respond to the possibility of creating infinite value (or disvalue)? Should that consideration swamp all others? If not, why not?
- How should we respond to the possibility that the universe actually has infinite value? Does it mean that we have no reason to do any action (because we don’t increase the sum total of value in the world)? Or does this possibility refute aggregative consequentialism?
- How should we accommodate moral uncertainty? Should we apply expected utility theory? If so, how do we make intertheoretic value comparisons? Does this mean that some high-stakes theories should dominate our moral thinking, even if we assign them low credence? (A sketch of the expected-utility proposal follows this list.)
- How should intuitions weigh against theoretical virtues in normative ethics? Is common-sense ethics roughly correct? Or should we prefer simpler moral theories?
- Should we prioritise the prevention of human wrongs over the alleviation of naturally caused suffering? If so, by how much?
- What sorts of entities have moral value? Humans, presumably. But what about non-human animals? Insects? The natural environment? Artificial intelligence?
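To make the moral-uncertainty item a little more concrete, here is a minimal sketch of what "apply expected utility theory" could mean. The notation is mine, not Crouch's: T_1, ..., T_n are rival moral theories, C(T_i) is one's credence in each, and CW_{T_i}(a) is how choiceworthy theory T_i rates action a, assuming all the CW functions can be put on a common scale (which is exactly what the intertheoretic-comparison question asks about).

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch: "maximise expected choiceworthiness" under moral uncertainty.
% Assumed notation (not from the post): T_1,...,T_n are rival moral theories,
% C(T_i) is the credence assigned to T_i, and CW_{T_i}(a) is how choiceworthy
% theory T_i rates action a -- assuming all CW functions share a common scale,
% which is exactly the intertheoretic-comparison problem.
\[
  \operatorname{EV}(a) = \sum_{i=1}^{n} C(T_i)\,\operatorname{CW}_{T_i}(a),
  \qquad
  a^{*} = \operatorname*{arg\,max}_{a}\, \operatorname{EV}(a).
\]
% Why low-credence, high-stakes theories can dominate: with two theories,
% C(T_1) = 0.99 and C(T_2) = 0.01, if T_2's choiceworthiness differences
% are 1000 times larger than T_1's, the 0.01-credence theory controls
% which action maximises EV(a).
\end{document}
```

The numerical comment at the end also shows why the item's last question arises: a theory we assign very low credence can still dominate the calculation if its stakes are large enough.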
What additional items should be on these lists?
Sorry, Dave (if I can call you Dave): I saw your question, but by the time I finished reading your comment I had forgotten to answer it.
If I didn't exist, those people would die. If I do nothing, those people will die. I don't think inaction is moral or immoral; it is just neutral.
It seems to me that justice applies only to actions. It would be unjust for me to kill 1 or 100 innocent people, but if 100 people die because I didn't kill 1, I did the just thing by not killing anyone personally.
This hypothetical, like most hypotheticals, leaves a lot of questions unanswered. To make a solid decision about the best action (or inaction), we need more information. Does a situation really exist in which killing 1 person is guaranteed to save the lives of 100? The thing about deterrence is that we are talking about counterfactuals. Might there not be another way to save those 100 lives without taking the 1? It seems to me that taking the 1 life would be the right choice only when there was absolutely no other way, but in life there are no absolutes, only probabilities.
I agree that in the real world the kind of situation I describe doesn't really arise. But asking about that hypothetical nevertheless reveals that our understandings of justice are very, very different, and clarity on that question is what I was looking for. So, thanks.
And, yes, as you say, consequentialist ethics don't take what you're calling "justice" into account. If a consequentialist values saving innocent lives, they would typically consider it unethical to allow a hundred innocent people to die so that one may live.
I consider this an advantage to consequentialism. Come to that, I also consider it unjust, typically, to allow a hundred innocent people to die so that one may live.