Impartial ethics and personal decisions
Some moral questions I’ve seen discussed here:
- A trolley is about to run over five people, and the only way to prevent that is to push a fat bystander in front of the trolley to stop it. Should I?
- Is it better to allow 3^^^3 people to get a dust speck in their eye, or one man to be tortured for 50 years?
- Who should I save, if I have to pick between one very talented artist, and five random nobodies?
- Do I identify as a utilitarian? a consequentialist? a deontologist? a virtue ethicist?
Yet I spend time and money on my children and parents that might be “better” spent elsewhere under many moral systems. And if I cared no more about my parents and children than I do about random strangers, many people would see me as somewhat of a monster.
In other words, “commonsense moral judgements” find it normal to care differently about different groups, in roughly decreasing order:
- immediate family
- friends, pets, distant family
- neighbors, acquaintances, coworkers
- fellow citizens
- foreigners
- sometimes, animals
- (possibly, plants...)
In consequentialist / utilitarian discussions, a recurring question is “who counts as an agent worthy of moral concern?” (humans? sentient beings? intelligent beings? those who feel pain? how about unborn beings?), which covers the latter part of the spectrum. However, I have seen little discussion of the earlier part of the spectrum (friends and family vs. strangers), and it seems to be the one on which our intuitions agree most reliably - which is why I think it deserves more of our attention (and having clear ideas about it might help with the rest).
Let’s consider two rough categories of decisions:
- impersonal decisions: what should government policy be? By what standard should we judge moral systems? On which cause is charity money best spent? Who should I hire?
- personal decisions: where should I go on holidays this summer? Should I lend money to an unreliable friend? Should I take a part-time job so I can take care of my children and/or parents better? How much of my money should I devote to charity? In which country should I live?
Impartial utilitarianism and consequentialism (like the questions at the head of this post) make sense for impersonal decisions (including when an individual is acting in a role that requires impartiality - a ruler, a hiring manager, a judge), but clash with our usual intuitions for personal decisions. Is this because under those moral systems we should apply the same impartial standards to our personal decisions, or because those systems are only meant for discussing impersonal decisions, and personal decisions require additional standards?
I don’t really know, and because of that, I don’t know whether or not I count as a consequentialist (not that I mind much apart from confusion during the yearly survey; not knowing my values would be a problem, but not knowing which label I should stick on them? eh, who cares).
I also have similar ambivalence about Effective Altruism:
- If it means that I should care as much about poor people in third-world countries as I do about my family and friends, then it’s a bit hard to swallow.
- However, if it means that, assuming one is going to spend money to help people, one had better make sure that money helps them in the most effective way possible, then it’s much easier to accept.
Scott’s “give ten percent” seems like a good compromise on the first point.
So what do you think? How does "caring for your friends and family" fit into a consequentialist/utilitarian framework?
Other places this has been discussed:
- This was a big debate in ancient China, between the Confucians, who considered it normal to have “care with distinctions” (愛有差等), and Mozi, who preached “universal love” (兼愛) in opposition, claiming that care with distinctions was a source of conflict and injustice.
- “Impartiality” is a big debate in philosophy - the question of whether partiality is acceptable or even required.
- The philosophical debate between “egoism and altruism” seems like it should cover this, but it feels a bit like a false dichotomy to me (it’s not even clear whether “care only for one’s friends and family” counts as altruism or egoism)
- “Special obligations” (towards friends and family, or those one has made a promise to) are a common objection to impartial, impersonal moral theories
- The Ethics of Care seems to cover some of what I’m talking about.
- A middle part of the spectrum - fellow citizens versus foreigners - is discussed under Cosmopolitanism.
- Peter Singer’s “expanding circle of concern” presents moral progress as caring for a wider and wider group of people (counterpoint: Gwern's Narrowing Circle) (I haven't read it, so can't say much)
Other related points:
- The use of “care” here hides an important distinction between “how one feels” (my dog dying makes me feel worse than hearing about a schoolbus in China falling off a cliff) and “how one is motivated to act” (I would sacrifice my dog to save a schoolbus in China from falling off a cliff). Yet I think the gradations exist on both criteria.
- Hanson’s “far mode vs. near mode” seems pretty relevant here.
Should EAs be Superrational Cooperators?
Back in 2012 when visiting Leverage Research, I was amazed by the level of cooperation in daily situations I got from Mark. Mark wasn't just nice, or kind, or generous. Mark seemed to be playing a different game than everyone else.
If someone needed X, and Mark had X, he would provide X to them. This was true for lending, but also for giving away.
If there was a situation in which someone needed to direct attention to a particular topic, Mark would do it.
You get the picture. Faced with prisoner's dilemmas, Mark would cooperate. Faced with tragedies of the commons, Mark would cooperate. Faced with non-egalitarian distributions of resources, time or luck (which are convoluted forms of the dictator game), Mark would rearrange resources without any indexical evaluation. The action would be the same, and the consequentialist one, regardless of which side of a dispute Mark happened to be on.
I never got over that impression: the impression that I could try to be as cooperative as my idealized fiction of Mark.
In game theoretic terms, Mark was a Cooperational agent.
- Altruistic - MaxOther
- Cooperational - MaxSum
- Individualist - MaxOwn
- Equalitarian - MinDiff
- Competitive - MaxDiff
- Aggressive - MinOther
Under these definitions of kinds of agents, used in research on game-theoretic scenarios, what we call Effective Altruism would be called Effective Cooperation. The reason we call it "altruism" is that even the most parochial EAs care about a set containing a minimum of 7 billion minds, where to a first approximation MaxSum ≈ MaxOther.
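The definitions above can be written down directly. Here is a minimal sketch in Python; the payoff numbers and the example option set are illustrative assumptions of mine, not anything from the original discussion.

```python
# A minimal sketch of the agent types listed above, assuming a simple
# two-player outcome represented as (own payoff, other's payoff).
# The function names mirror the list; the example numbers are made up.

def altruistic(own, other):      # MaxOther
    return other

def cooperational(own, other):   # MaxSum
    return own + other

def individualist(own, other):   # MaxOwn
    return own

def equalitarian(own, other):    # MinDiff (maximize the negated gap)
    return -abs(own - other)

def competitive(own, other):     # MaxDiff
    return own - other

def aggressive(own, other):      # MinOther
    return -other

def choose(objective, options):
    """Pick the outcome that maximizes the agent's objective."""
    return max(options, key=lambda o: objective(*o))

# Example: lend a tool (small cost to me, large gain to you) vs. keep it.
options = [(-1, 5), (0, 0)]
print(choose(cooperational, options))   # (-1, 5): a Coo lends
print(choose(individualist, options))   # (0, 0): an Ind keeps it
```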
Locally, however, the distinction makes sense. In biology, altruism usually refers to a third concept, different from both the "A" in EA and Alt above: acting such that Other > Own, without reference to maximizing or minimizing, since evolution designs adaptation executors, not maximizers.
A globally Cooperational agent acts as a consequentialist globally. So does an Alt agent.
The question then is,
How should a consequentialist act locally?
The mathematical response is obviously to act as a Coo. What real people do is a mix of Coo and Ind.
My suggestion is that we use our undesirable yet unavoidable tribal instinct for moral distinctions, the one that separates Us from Them, and act always as Coos with Effective Altruists, mixing Coo and Ind only with non-EAs. That is what Mark did.
Utilitarianism and Relativity Realism
Introduction
Most people on Less Wrong seem to be some kind of hedonic consequentialist: they think states with less suffering and more joy are better. Moreover, it is intuitive that if you can cause some improvement in human well-being then (other things being equal) it is better to realize that improvement as soon as possible. Also, most people on this site seem to be realists about special relativity. That is, they assume that any inertial reference frame is an equally valid point from which to describe reality, rather than believing there is one true reference frame which offers a preferred description of reality. I will point out that these beliefs (plus some innocuous assumptions) lead quickly to paradox.
Relativity Realism
Before I continue I want to point out that empirical observations really are agnostic about the existence of a preferred reference frame. Indeed, it's a consequence of the theory of relativity itself that its predictions are equally well explained by postulating a single true inertial reference frame and simply using the Lorentz contraction and time dilation equations to compute the behavior of all moving objects. To see that this must be true, note that if we take relativity seriously the laws of physics must work correctly in any reference frame. In particular, if we imagine designating one reference frame to be the true reference frame, then relativity itself tells us that applying the laws of physics in that reference frame has to give us the correct results.
In other words, once we accept Einstein's equations for length contraction and time dilation with velocity, we can interpret those equations either as undermining the idea of a fixed ether against which objects move (any reference frame is equally valid), or as saying that there really is a fixed ether but objects in motion behave in such a manner that we can't empirically distinguish what is at rest.
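For reference (these formulas are standard and not quoted from the post): with v the relative velocity and c the speed of light, the time dilation and length contraction relations are

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t' = \gamma\,\Delta t, \qquad L' = \frac{L}{\gamma}.$$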
At first blush this second reading seems so jury-rigged that surely the simpler assumption is that there is no preferred reference frame. But this relies on a false description of the situation. The question isn't, "do we assign a low prior probability to the laws of physics conspiring to hide the true rest frame from us?" Presumably we do. The question should be, "given that the laws of physics do conspire to make a special rest frame empirically indistinguishable from any other inertial frame, what probability do we assign to such a frame existing?" After all, it is a mathematical truth that time dilation and length contraction perfectly conspire to prevent us from measuring motion relative to some true rest frame (if it existed), so in deciding whether to believe in a preferred rest frame we aren't deciding between laws that would and wouldn't hide such a frame from us. We are only deciding whether, given that we have such laws, we think such an undetectable true rest frame exists.
To make it even more plausible that there is some true rest frame, I will remark (but not argue) that relativity is a pretty general phenomenon that can be derived from any model in which momentum is conserved, the forces obey the inverse square law and all propagate at a constant speed relative to some fixed background, matter is held together in equilibrium states of these forces, and time is implicitly measured via the rate at which these forces propagate. In other words, if you have atoms held together by EM forces, and the time it takes physical processes to happen is governed by the time it takes either forces or matter to cross certain distances, then relativity comes for free. So it isn't amazing that we might have a true preferred reference frame and yet find it impossible to experimentally determine that frame.
(As an aside, this interpretation of relativity, fully consistent with all observations so far, makes for much better sci-fi, since FTL travel doesn't allow anyone to go back in time.)
A Paradox Resulting From Relativity Realism
Suppose we have two different brain implants that will be implanted in two different conscious but coma-bound individuals. After a delay of 10 minutes from implantation, the first device delivers an instantaneous burst of euphoria every second. The other delivers an instantaneous burst of discomfort every second. I assume we would all agree that (with sufficient additional assumptions) the world is a better place if we implant just a device of the euphoria-inducing kind, and a worse place if we implant just a device of the second kind. So assume the devices are appropriately calibrated so that the effect of implanting both is neutral (or very very nearly so). So far so good.
I think we can all agree that the world would be better off if we delayed implanting the discomforting device by 10 minutes (or, equivalently, implanted the pleasurable device 10 minutes earlier). If you dispute this conclusion then you get absurd results if you even admit the possibility of a universe that exists forever, since in such a universe it would be no better to permanently increase human welfare now than to delay that increase by 10 minutes or 10 centuries.
Now assume that the two individuals receiving the implants are actually on spaceships moving in opposite directions at high speed, and the implantation is done at the instant they pass by each other. For simplicity, assume everyone else dies at this instant (or add an irrelevance-of-identical-outcomes assumption and note that the two ships are moving at the same speed relative to everyone else).
From the reference frame of the individual who received the beneficial implant we can analyze the situation as follows. Without loss of generality, assume the ships are traveling at a speed such that for every second that passes in our reference frame only half a second passes on the other ship. Thus, in this reference frame, the first experience of discomfort is delayed by 10 minutes and then only occurs every other second. Now surely the world is no worse off because the discomfort occurs less frequently. But ignoring the fact that the discomforting device fires less frequently, this is exactly equivalent to implanting the desirable device 10 minutes before the undesirable one. Thus, since implanting both in the same reference frame was neutral, it is actually favorable (better than not implanting them) to do so when the recipients are in fast-moving reference frames moving in opposite directions. Note the same result holds if we assume each device fires only a single time, given the minor assumption that if two worlds differ only in events before time t, then what happens after time t is irrelevant to which one is preferable.
However, the same analysis done in the reference frame of the unpleasant implant gives the exact opposite conclusion.
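To make the bookkeeping behind these two opposite verdicts explicit, here is a toy calculation under assumptions of my own choosing rather than the author's formalization: an exponential discount rate stands in for "sooner is better", the relative speed corresponds to a time-dilation factor of 2 (about 0.866c), and the euphoria and discomfort pulses have equal magnitude, matching the calibration that implanting both in a shared rest frame is neutral.

```python
import math

# Toy illustration of the frame-dependence described above.
# Assumptions (mine, for illustration): exponential pure-time discounting,
# a time-dilation factor gamma = 2, and unit-magnitude pulses.
R = 0.001          # discount rate per second (illustrative)
GAMMA = 2.0        # time-dilation factor between the two ships
DELAY = 600        # 10 minutes, in the implantee's own proper time
PULSES = 100_000   # number of once-per-proper-second pulses to sum

def discounted_stream(delay, spacing, sign):
    """Discounted value of PULSES unit pulses of the given sign, starting
    after `delay` seconds and spaced `spacing` seconds apart, with both
    times measured in the evaluating frame's coordinate time."""
    return sum(sign * math.exp(-R * (delay + k * spacing))
               for k in range(PULSES))

# Evaluated from the euphoria recipient's frame: his own pulses are
# undilated, while the other ship's discomfort is stretched by gamma.
from_good_frame = (discounted_stream(DELAY, 1.0, +1)
                   + discounted_stream(GAMMA * DELAY, GAMMA, -1))

# Evaluated from the discomfort recipient's frame: the mirror image.
from_bad_frame = (discounted_stream(GAMMA * DELAY, GAMMA, +1)
                  + discounted_stream(DELAY, 1.0, -1))

print(from_good_frame)  # positive: the combination looks like a net gain
print(from_bad_frame)   # negative: the same combination looks like a net loss
```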
Avoiding the Paradox
Perhaps one might try and avoid the paradox by insisting that no experience truly occurs instantaneously. However, this is easily seen to be futile.
Assume that each device inflicts pleasure or discomfort for a duration epsilon << 1 second. If you assume that the total badness of the uncomfortable experience is somehow mediated by changes in neurochemistry or other physical properties, you are led to the conclusion that, even described from the reference frame of the desirable implant, the experience of 2*epsilon seconds of discomfort by the time-dilated individual is really no worse than the experience of epsilon seconds of discomfort would be for someone with that implant in your reference frame. In other words, when time is dilated, the experience of pain per unit time is diluted. This leads to the exact same result as above.
On the other hand, if we really do increase the weight we give to pain experienced by those undergoing time dilation, an even simpler pair of implants leads to paradox. These implants start working immediately, one generating a pleasant experience for 5 minutes, the other an unpleasant experience for 5 minutes, again calibrated so that installing both is overall neutral. Now, by assumption, from the reference frame of the beneficial implant things are overall worse (the longer duration of discomfort experienced by the other individual is worse than what someone in the same reference frame getting the undesirable implant would experience), and vice versa from the other reference frame.
The use of instantaneous experiences was merely a way to simplify the example, but it is irrelevant to the underlying inequalities. Those inequalities result from the implicit time discounting forced by the assumption that, other things being equal, it is better for improvements to occur now rather than later, combined with the fact that realism about relativity renders facts about simultaneity incoherent.
Personally, I think the only decent way of avoiding this paradox is to deny realism about relativity. Sure, it's a radical move. However, it's also a radical move to say it's not true that it's better to cure cancer now than in 10 centuries, even if the human race will continue to exist forever. Indeed, even if you don't assume a literally infinite duration of effects, an unbounded potential length of effect with probabilities that decrease sufficiently slowly is equally problematic.
Responses
I've deliberately avoided phrasing this dilemma as a formal paradox and listing the assumptions necessary to generate it. Partly this is laziness, but it's also a desire to see how people are inclined to respond before I attempt to draw up formal conditions. After all, I ultimately want to capture common views in the assumptions, and if I don't know what people's reactions are I can't pick the right assumptions.
Does Existential Risk Justify Murder? -or- I Don't Want To Be A Supervillain
A few days ago I was rereading one of my favourite graphic novels. In it the supervillain commits mass murder to prevent nuclear war - he kills millions to save billions. This got me thinking about how a lot of LessWrong/Effective Altruism people approach existential risks (xrisks). An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom 2002). I'm going to point out an implication of this approach, show how this conflicts with a number of intuitions, and then try to clarify the conflict.
I. Implication:
If murder would reduce xrisk, one should commit the murder. The argument for this is that, compared to billions or even trillions of future people, and/or the amount of valuable things they could instantiate (by experiencing happiness or pleasure, performing acts of kindness, creating great artworks, etc.), the importance of one present person, and/or the badness of committing (mass) murder, is quite small. The large number on the 'future' side outweighs or cancels the far smaller number on the 'present' side.
I can think of a number of scenarios in which the murder of one or more people could quite clearly reduce existential risk, such as killing the people who know the location of some secret refuge so that its location remains secret.
Indeed at the extreme it would seem that reducing xrisk would justify some truly terrible things, like a preemptive nuclear strike on a rogue country.
This implication does not just hold for simplistic act-utilitarians, or consequentialists more broadly - it affects any moral theory that accords moral weight to future people and doesn't forbid murder.
This implication is implicitly endorsed in a common choice many of us make between focusing our resources on xrisk reduction as opposed to extreme poverty reduction. This is sometimes phrased as being about choosing to save one life now or far more future lives. While bearing in mind some complications (such as the debate over doing vs allowing and the Doctrine of Double Effect), it seems that 'letting several people die from extreme poverty to try to reduce xrisk' is in an important way similar to 'killing several people to try to reduce xrisk'.
II. Simple Objection:
A natural reaction to this implication is that this is wrong, one shouldn't commit murder to reduce xrisk. To evade some simple objections let us assume that we can be highly sure that the (mass) murder will indeed reduce xrisk: maybe no-one will find out about the murder, or it won't open a position for someone even worse.
Let us try and explain this reaction, and offer an objection: The idea that we should commit (mass) murder conflicts with some deeply held intuitions, such as the intuition that one shouldn't kill, and the intuition that one shouldn't punish a wrong-doer before she/he commits a crime.
One response - the most prominent advocate of which is probably Peter Singer - is to cast doubt on our intuitions. We may have these intuitions, but they may have been induced by various means, e.g. by evolution or by society. Racist views were common in past societies. Moreover, there is some evidence that humans may have an evolutionary predisposition to be racist. Nevertheless we reject racism, and therefore (so the argument goes) we should reject a number of other intuitions. So perhaps we should reject the intuitions we have, shrug off the squeamishness, and agree that (mass) murder to reduce xrisk is justified.
[NB: I'm unsure about how convincing this response is. Two articles in Philosophy and Public Affairs dispute Singer's argument (Berker 2009) (Kamm 2009). One must also take into account the problem of applying our everyday intuitions to very unusual situations - see 'How Outlandish Can Imaginary Cases Be?' (Elster 2011)]
The trope of the supervillain justifying his or her crimes by claiming it had to be done for 'the greater good' (or similar) is well established. TV Tropes calls it Utopia Justifies The Means. I find myself slightly troubled when my moral beliefs lead me to agree with fictional supervillains. Nevertheless, is the best option to bite the bullet and side with the supervillains?
III. Complex Objection:
Let us return to the fictional example with which we started. Part of the reason his act seems wrong is that, in real life, the supervillain's mass murder was not necessary to prevent nuclear war - the Cold War ended without large-scale direct conflict between the USA and USSR. This seems to point the way to (some) clarification.
I find my intuitions change when the risk seems higher. While I'm unsure that murder is the right answer in the examples given above, it seems clearer in a situation where the disaster is in the midst of occurring, and murder or mass murder is the only way to prevent an existential disaster. The hypothetical that works for me is imagining some incredibly virulent disease or 'grey-goo' nano-replicator that has swept over Australia and is about to spread, and the only way to stop it is a nuclear strike.
One possibility is that my having a different intuition is simply because the situation is similar to hypotheticals that seem more familiar, such as shooting a hostage-taker or terrorist if that was the only way to prevent loss of innocent life.
But I'd like to suggest that it perhaps reflects a problem with xrisks: the idea of doing something awful for a very uncertain benefit. The problem is the uncertainty. If a (mass) murder would certainly prevent an existential disaster, then one should do it, but when it merely reduces xrisk it is less clear. Perhaps there should be some sort of probability threshold - if one has good reason to think the probability is over a certain limit (10%, 50%, etc.) then one is justified in committing gradually more heinous acts.
IV. Conclusion
In this post I've been trying to explain a troubling worry - to lay out my thinking - more than I have been trying to argue for or against an explicit claim. I have a problem with the claim that xrisk reduction is the most important task for humanity and/or me. On the one hand it seems convincing, yet on the other it seems to lead to some troubling implications - like justifying not focusing on extreme poverty reduction, or justifying (mass) murder.
Comments and criticism of the argument are welcomed. Also, I would be very interested in hearing people's opinions on this topic. Do you think that 'reducing xrisk' can justify murder? At what scale? Perhaps more importantly, does that bother you?
DISCLAIMER: I am in no way encouraging murder. Please do not commit murder.
What Deontology gets right
Let me preface this with an acknowledgement that Deontology has blind spots and that I'm not a Deontologist. Much like Logical Positivism, however, Deontology offers good lessons that many Consequentialist decision algorithms miss.
Social Considerations
Your decision has consequences beyond its direct results. More specifically, if you decide to tell a lie, people are more likely to view you as a liar. This portion of the consequences is easy to neglect when making a decision. So while Deontology over-corrects for this (for example, if you put a gun to my head and demand that I profess belief X, I'm going to say that I believe X, which a Deontological prohibition against lying forbids), it does so in a way that is better than many people's naive consequentialist thinking.
Deontological arguments are also better at convincing people that you have socially valued traits. People expect truth-tellers to tell the truth, so you want to be viewed as a truth-teller. "Lying doesn't work, so I don't lie" is a more awkward and involved argument than "lying is wrong". On a related note, Deontological reasoning is easier for other people to model. Deontology can screen off the cost-benefit analysis that someone makes when thinking about their decisions, since all you need to know is the rules they are following.
Habits and Policies
Decisions aren't made in a vacuum. They also form an implicit rule that people tend to follow. In other words, people form habits. They find it easier to do the same kinds of things that they've always done. Eating one piece of cake doesn't do measurable harm to your waistline, but having a policy of eating one piece of cake whenever you want to does.
If you're familiar with set theory, it's the distinction between {x|P(x)} and {x1, x2, x3...}. If you make decisions without consulting what policy P(x) you'd like to follow, you can make mistakes. Choosing x1 means not only having done x1, but also choosing a P(x) such that P(x1) is true.
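As a toy illustration of that distinction (the numbers are made up, not from the post): the value of one act can look fine while the value of the policy that generates it does not.

```python
# Act-level vs. policy-level evaluation, with invented numbers.
CAKE_UTILITY = 5           # enjoyment of one slice
WAISTLINE_COST_PER = 1     # marginal harm of one slice, barely noticeable
DAYS_POLICY_APPLIES = 365  # "eat cake whenever I want", over a year

def value_of_single_act():
    return CAKE_UTILITY - WAISTLINE_COST_PER

def value_of_policy():
    # The policy generates the act many times; here the harm is crudely
    # assumed to accumulate superlinearly with repetition.
    n = DAYS_POLICY_APPLIES
    return n * CAKE_UTILITY - (n ** 1.5) * WAISTLINE_COST_PER

print(value_of_single_act())  # positive: the lone slice looks fine
print(value_of_policy())      # negative: the policy it implies does not
```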
When I sign a gay marriage petition, it doesn't just increase the chance that gay marriage gets enacted. It also makes me more likely to do other things that support the gay marriage movement, as well as make me more likely to sign worthwhile-sounding petitions in general. This is part of why I avoid social movements: trying to fight rape culture or conservatives or racism means that I'm more likely to do similar kinds of things when they don't help (or, alternatively, to convince people to join whatever movement is in question even when more support for that movement isn't helpful).
In short, the Deontological focus on following rules can help people enact the kinds of policies that they want to follow, even if they are bad at evaluating the value gained from following certain policies. It's a way of implementing a Schelling point, in other words - a way to choose a better policy even if breaking the policy this one time seems to work better.
Enforcing pro-social behavior
It's fairly straightforward to tell whether or not someone has crossed an arbitrary line separating pro-social and anti-social behavior. Evaluating someone's consequentialist reasoning, on the other hand, is much more difficult. Take, for example, the case of Christopher Dorner, the former LAPD officer who decided to expose and fight what he saw as a corrupt LAPD by declaring a personal war on them. A Deontological "don't kill cops" definitively indicts him as anti-social, whereas it's much more ambiguous whether trading some dead cops for a better police force is a good deal.
Pro-social reasons for selfish actions are also rather cheap to make or say. If you want a millionaire lifestyle, it's easy to say that your immoral business practices are for feeding starving children in Africa. It's a lot harder to say that your immoral business practices don't violate the rule "don't use immoral business practices". In general, rule-breaking is much easier to detect than utility functions you don't want to have around.
[Link] Machiavelli in historical context
In modern usage, the name "Machiavelli" is a byword for cynical, selfish scheming. In this post, a Renaissance scholar places Machiavelli the human being into historical context, illuminating that Machiavelli was not cynical so much as desirous of an accurate map of the territory, and not selfish at all but rather relentlessly goal-oriented. (The post starts slowly -- that's historical context for ya.) In writing Il Principe, Machiavelli (quite possibly unintentionally) committed to posterity two major breakthroughs, which we would now call (i) the creation of modern political science and history and (ii) the introduction of utilitarian/consequentialist ethics.
Consequentialism
In 1498, at the age of 29, Machiavelli was made a high official of the Florentine analogue of the State Department/Ministry of Foreign Affairs. His job was to shut up and do the impossible:
- Goal: Prevent Florence from being conquered by any of 10+ different incredibly enormous foreign powers.
- Resources: 100 bags of gold, 4 sheep, 1 wood, lots of books and a bust of Caesar.
- Go!
Modern Political Science
1508. The Italian territories destabilized by the Borgias are ripe for conquest. Everyone in Europe wants to go to war with everyone else and Italy will be the biggest battlefield. Machiavelli’s job now is to figure out who to ally with, and who to bribe. If he can’t predict the sides there’s no way to know where Florence should commit its precious resources. How will it fall out? Will Tudor claims on the French throne drive England to ally with Spain against France? Or will French and Spanish rival claims to Southern Italy lead France to recruit England against the houses of Aragon and Habsburg? Will the Holy Roman Emperor try to seize Milan from the French? Will the Ottomans ally with France to seize and divide the Spanish holdings in the Mediterranean? Will the Swiss finally wake up and notice that they have all the best armies in Europe and could conquer whatever the heck they wanted if they tried? (Seriously, Machiavelli spends a lot of time worrying about this possibility.) All the ambassadors from the great kingdoms and empires meet, and Machiavelli spends frantic months exchanging letters with colleagues evaluating the psychology of every prince, what each has to gain, to lose, to prove. He comes up with several probable scenarios and begins preparations. At last a courier rushes in with the news. The day has come. The alliance has formed. It is: everyone joins forces to attack Venice. O_O Conclusion: must invent Modern Political Science.