I'm only vaguely connected to EA in the sense of donating more-than-usual amounts of money in effective ways (❤️ GiveDirectly), but this feels like a strawman. I don't think the average EA would recommend charities that hurt other people as side effects, work actively-harmful jobs to make money[1], or generally go Utilitarian-maxxing.
The EA trolley problem is that there are thousands (or millions) of trolleys of varying difficulty to stop, barreling toward varying groups of people. The problem isn't that stopping them hurts other people (it doesn't), it's just that you can't stop them all. You don't need to be a utilitarian to think that if it's raining planes, Superman should start by catching the 747s.
For example, high-paying finance jobs are high-stress and many people don't like working them, but they're not actually bad for the world.
❤️ thanks!
For example, high-paying finance jobs are high-stress and many people don't like working them, but they're not actually bad for the world.
That is debatable.
I think trying to be Superman is the problem, but I'm ok if that line of thinking doesn't work for you.
I think trying to be Superman is the problem, but I'm ok if that line of thinking doesn't work for you.
Do you mean in the sense that people who aren't Superman should stop beating themselves up about it (a real problem in EA), or that even if you are (financial) Superman, born in the red-white-and-blue light of a distant star, you shouldn't save people in other countries because that's bad somehow?
The latter. Superman's powers are magical, but our powers are intimately connected to the state of life for the less fortunate. We know that our economic prosperity is based on a mix of innovation and domination, and the more we reduce our involvement in the domination side of it, the more we address the real root of the problem.
As a full-throated defender of pulling the lever (given traditional assumptions such as a lack of an audience, complete knowledge of each outcome, and the productivity of the people on the tracks), there are numerous issues with your proposals:
1.) Vague alternative: You seem to be pushing towards some form of virtue ethics/basic intuitionism, but there are numerous problems with this approach. Besides determining whose basic intuitions count and whose don't, or which virtues are important, there are very real problems when these virtues conflict. For instance, imagine you are walking at night and trying to cross a street. The light is red, but no cars are around. Do you jaywalk? In this circumstance, one is forced to make a decision which pits two virtues/intuitions against each other. The beauty of utilitarianism is that it allows us to choose in these circumstances.
2.) Subjective Morality: Yes, utilitarianism may not be "objective" in the sense that there is no intrinsic reason to value human flourishing, but I believe utilitarianism to be the viewpoint which most closely conforms to what most people value. To illustrate why this matters, I take an example from Alex O'Connor. Imagine you need to decide what color to paint a room. Nobody has very strong opinions, but most people in your household prefer the color blue. Yes, blue might not be "objectively" the best, but if most of the people in your household like the color blue the most, there is little reason not to choose it. We are all individually going to seek what we value, so we might as well collectively agree to a system which reflects the preferences of most people.
3.) Altruism in Disguise:
Another thing to notice is that virtue ethics can be a form of effective altruism when practiced in specific ways. In general, bettering yourself as a person by becoming more rational, less biased, etc., will in fact make the world a better place, and giving time to form meaningful relationships, engage in leisure, etc., can actually increase productivity in the long run.
You also seem to advocate for fundamental changes in society, changes I am not sure I would agree with, but if your proposed changes are indeed the best way to increase the general happiness of the population, it would be, by definition, the goal of the EA movement. I think a lot of people look at the recent stuff with SBF and AI research and come to think the EA movement is only concerned with lofty existential risk scenarios, but there is a lot more to it than that.
Edit:
Almost forgot this, but citation: Alex O'Connor (in this video) formulated the blue room example. We use it differently (he uses it to argue against objective morality), but he verbalized it.
We are all individually going to seek what we value, so we might as well collectively agree to a system which reflects the preferences of most people.
Completely agree!
Hmm, maybe I spoke too soon. There have been times in different societies when the majority of people would say slavery was good, and times (even before the modern age) when the majority would say slavery was bad. But slavery is bad, and according to my argument it's because it feels bad, even if you are mostly unconscious of it.
So my theory of social change is that we individually learn to feel and understand our emotions better, because emotions are the only reason we care about anything at all. The way emotions feel in my body is certainly very objective to me, and the more I understand them the more I can recognize them in other people. I'm not sure I'm willing to claim that recognizing other people's emotions is entirely objective, but when we see an angry politician ranting and raving, no one disagrees that they are angry.
So I would say we should collectively agree to a system which reflects the true preferences of most people, but there is a process of understanding what those preferences really are.
Effective altruism (EA) defines itself as "a philosophical and social movement that advocates taking actions which maximally benefit sentient beings, as determined by evidence and reason."
In practice, effective altruists advocate making as much money as possible and then donating an (ideally large) percentage of it to charity. They also advocate choosing charities based on systematic evaluation of their effectiveness, in order to maximize the benefit of their dollars.
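To make "systematic evaluation" concrete, here is a minimal sketch of the kind of cost-per-outcome comparison charity evaluators perform. The charity names and every number below are hypothetical, invented purely for illustration:

```python
# A minimal, hypothetical cost-effectiveness comparison.
# Charity names and all numbers are invented for illustration only.

charities = {
    "Bednet Charity":    {"cost_usd": 10_000, "outcomes": 4.0},  # e.g. lives saved
    "Deworming Charity": {"cost_usd": 10_000, "outcomes": 2.5},
    "Cash Transfers":    {"cost_usd": 10_000, "outcomes": 3.0},
}

def cost_per_outcome(entry):
    """Dollars spent per unit of good achieved (lower is better)."""
    return entry["cost_usd"] / entry["outcomes"]

# Rank charities by how much good each marginal dollar buys.
for name, entry in sorted(charities.items(), key=lambda kv: cost_per_outcome(kv[1])):
    print(f"{name}: ${cost_per_outcome(entry):,.0f} per unit of good")
```

Of course, everything turns on the "outcomes" column: deciding what counts as a unit of good is precisely the question taken up below.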
EA has its roots in the philosophical theory of utilitarianism, which can be briefly described as the moral principle that one should take actions which maximize the well-being of all affected individuals. EA is often motivated by arguing that the classic "trolley problem" is an analogy for modern life, and then positing a utilitarian approach as the solution.
In the trolley problem, there is a runaway trolley barreling down a track towards five people who cannot get out of the way in time. You are standing in front of a lever which would divert the trolley onto a side track, but there is an unaware worker on the side track who would be killed if you do so. Do you pull the lever?
Effective altruists argue that this is the type of decision we are constantly faced with in the modern world. People are suffering and dying and those of us living in a situation of relative wealth and comfort must recognize that our action or inaction can make things better or worse. We should pursue the course of action which maximizes the good that we can effect in the world. Therefore, if we are in the position to do so, should we not pull the lever and save five lives, even though one may still die?
For example, getting a high paying job may contribute to income inequality, but if you donate a substantial amount to highly effective charities, then on the balance you are doing more good than harm. Or, getting elected for political office may require you to pander to lobbyists and corporate interests, but if you can push through legislation that addresses climate change or some other social good, then on the whole you are doing more good than harm. Lever pulled, lives saved.
Effective altruists then go on to describe systems and methods for determining exactly how to measure "good" and determine what action produces the "most" good. Here I am only concerned with the core motivations, so I won't go into details on any of that.
The Problem
The thing about moral decision making is that it requires some criteria on which to base an evaluation. We call these criteria values.
You cannot derive values from a purely logical standpoint, such as by asserting that you should pull the lever because 5 > 1. Why should we care at all? We know that we value life, but we don't really know what that means as a mathematical statement. And we might be able to explain how the value of life was created in our minds through the process of evolution, but that does not show whether this value is good or bad, or tell us whether or not we should adopt it.
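One way to make that point precise (a standard textbook formalization, sketched here only for illustration): utilitarianism says to choose the action

$$a^* = \arg\max_a \sum_i u_i(a),$$

where $u_i(a)$ is the well-being of person $i$ if action $a$ is taken. The maximization itself is pure logic, and a computer could carry it out. But the utility functions $u_i$ are inputs to the calculation, not outputs of it. Logic alone cannot supply them; they have to come from somewhere else.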
Our values are derived from the physical sensations in our body that we call emotion, which arises automatically in response to whatever is happening right now. Healthy values fulfill our emotional needs, and unhealthy values do not.
Moral decision making is achieved through a complex combination of logic and emotion. Logic is used to analyze and reason about a situation, while emotion is used to understand what is desired or not desired. Without the emotional input, we cannot really tell what is good from what is bad.
If we think about the trolley problem with the full capacity of our intelligence, both logic and emotion, we find that it is secretly a Zen koan. If we do nothing, we watch helplessly as five people die and feel terrible. If we pull the lever, we are responsible for killing an innocent person, and that also feels terrible. It turns out that the lesser of two evils, no matter how we define "lesser", still feels evil. How can we call that good? To fully engage with the hypothetical situation is to admit that it is a paradox, and maybe there can be no answer.
The Solution
I think the trolley problem is a brilliant analogy to our present condition. And while I think in its purest form the trolley problem is essentially unanswerable, when thinking about our present condition we can extend the analogy to add more context.
For one, how did it get to be that we are here standing in front of the lever while everyone else is out on the tracks? It's important to understand how we arrived at the place where we are forced into making this kind of decision.
For two, we are not facing one trolley problem, which we could probably deal with, but a continual succession of them, even if we are not aware of it. This is the real moral weight of the argument motivating EA: we are constantly pulling the lever or not, and facing every conceivable variation and complication of the basic dilemma.
The key insight found within our emotional response to the trolley problem is that having power over people's lives feels bad. Instead of trying to solve the endless succession of trolley problems, what if we tried to prevent ourselves from having to face them in the first place? What if it's not actually about making "less evil" decisions, but about how many or how few of those kind of decisions you have to make?
What's actually going on here is that we have completely forgotten how to let go of the lever and walk away.
It's time we admitted that the mere fact of making a lot of money or holding positions of authority is maintaining a system of oppression that is backed up by police, prisons, soldiers, and weapons of mass destruction. It is not merely that power corrupts, but that power is itself corrupted. I don't think it feels good to have power over other people's lives. Do you?
If we take the time to understand the emotional basis of effective altruism, we will find that it is rooted in emotional tones of guilt, fear, and anger. We feel we are caught in a perpetual trolley problem which we just can't seem to escape.
What if we chose to follow emotional tones of joy, love, and connection and see where it took us? And not in words and ideas as liberals do, but in the felt experience of our bodies. That is, what if we started following our conscience instead of trying to lead it where we think it should go?