But if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let's scale the problem up for a moment. Say instead of 5 it's 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of "shut up and multiply."
Never, in my opinion. Put every other human being on the tracks (excluding other close family members, to keep this from being a Sophie's Choice "would you rather..." game). The mother should still act to protect her child. I'm not joking.
You can rationalize this post facto by weighing the kind of society where mothers are expected to sacrifice their kids, and indeed are encouraged to do so to save other lives, against the world where mothers simply always protect their kids no matter what.
But I don't think this is necessary -- you don't need to justify it on utilitarian grounds. Rather, it is perfectly okay for one person to value some lives more than others. We shouldn't want to change this, IMHO. And I think the OP's question about donating 100% to charity, to their own detriment, is symptomatic of the problems that arise from utilitarian thinking. After all, if the OP were not experiencing an internal conflict between their moral intuitions and supposedly rational utilitarian reasoning, they wouldn't have asked the question...
We've now drifted beyond the original topic -- which is okay; I'm just pointing that out.
I'm not quite sure what you mean by that. I'm a duster, not a torturer, which means there are some actions I just won't take, no matter how many utilons get multiplied on the other side. I consider it okay for one person to value another to such a degree that they are literally willing to sacrifice every other person to save the one, as in the mother-and-baby trolley scenario. Is that what you mean?
I also think that these scenarios usually devolve into a "would you rather..." game that does little to illuminate either underlying moral values or the validity of ethical frameworks.
If I can draw a political analogy, which may be more than an analogy: moral decision making via utilitarian calculus, with equal weights assumed for every (sentient, human) life, is analogous to the central planning of communism: from each what they can provide, to each what they need. Maximize happiness. With perfectly rational decision making and everyone sharing common goals, this should work. But of course, in reality we end up with, at best, an inefficient distribution of resources due to failures in planning or execution. The pragmatic reality is even worse: people on the whole don't work altruistically for the betterment of society, and so you end up with nepotistic, kleptocratic regimes that exploit the wealth of the country for the self-serving purposes of those at the top.
Recognizing and embracing the fact that people have conflicting moral values (even if the conflict is restricted to the weights they place on others' happiness) is akin to the enlightened self-interest of capitalism. People are given the agency to seek benefits for themselves and those they care about, and societal prosperity follows. Of course, in reality all non-libertarians know that there is a wide variety of market failures, and achieving maximum happiness requires careful crafting of incentive structures. It is easy to show, both mathematically and historically, that limiting yourself to multi-agent games with Pareto-optimal outcomes (capitalism with good incentives) prevents you from reaching every possible outcome. Central planning got us to the Moon. Non-profit-maximizing thinking is getting SpaceX to Mars. It's more profitable to mitigate the symptoms of AIDS with daily antiviral drugs than to cure the disease outright. Etc. But nevertheless, it is generally capitalist societies that experience the most prosperity, whether measured by quality of life, technological innovation, material wealth, or happiness surveys.
To finally circle back to your question, I'm not saying that it is right or wrong for the mother to care for her child to the exclusion of literally everyone else. Or even that she SHOULD think this way, although I suspect that is a position I could argue for. What I'm saying is that she should embrace the moral intuitions her genes and environment have impressed upon her, and not try to fight them via System 2 thinking. And if everyone does this, we can still live in a harmonious and generally good society, even though our neighbors don't exactly share our values (I value my kids; they value theirs).
I've previously been exposed to the writings and artwork of peasants who lived through the harshest period of Chairman Mao's Great Leap Forward, and it is remarkable how similar their thoughts, concerns, fears, and introspections can be to those of people who struggle with LW-style "shut up and multiply" utilitarianism. For example, I spoke with someone at a CFAR workshop who has had real psychological issues for a decade over the internal conflict between the selfless "save the world" work he feels he SHOULD be doing, or doing more of, and the basic fulfillment of his own needs on Maslow's hierarchy, a conflict that leaves him feeling guilty and thinking he's a bad person.
My own opinion and advice? Work your way up Maslow's hierarchy of needs using just your ethical intuitions as a guide. Once you have the luxury of being at the top of the pyramid, then you can start to worry about self-actualization by working to change the underlying incentives that guide our society's efforts and create our environmentally driven value functions in the first place.
I think I basically agree with the "embrace existing moral intuitions" bit.
Unpacking my first paragraph in the other post, you might get: I prefer people to have moral intuitions that value their kids equally with others, but if they value their own kids a bit more, that's not terrible; our values are mostly aligned; I expect optimisation power applied to those values will typically also satisfy my own values. If they value their kids more than literally everyone else, that is terrible; our values diverge too much; I expect optimisation power applied to their values has a good chance of harming my own.