In favour of total utilitarianism over average
This post will argue that, within the framework of hedonic utilitarianism, total utilitarianism should be preferred over average utilitarianism. Preference utilitarianism will be left to future work. We will imagine collections of single experience people (SEPs), each of whom has only a single experience that gains or loses them a certain amount of utility.
Both average and total utilitarianism begin with an axiom that seems obviously true. For total utilitarianism this axiom is: "It is good for a SEP with positive utility to occur if it doesn't affect anything else". This seems to be one of the most basic assumptions that one could choose to start with - it's practically equivalent to "It is good when good things occur". However, if it is true, then average utilitarianism is false, as a positive but low utility SEP may bring the average utility down. Average utilitarianism also leads to the sadistic conclusion: if a large number of SEPs have negative utility, we should add a SEP who suffers slightly less rather than adding no-one at all. Total utilitarianism does lead to the repugnant conclusion, but contrary to common perception, near-zero but still positive utility is not a state of terrible suffering like most people imagine. Instead it is a state where life is good and worth living on the whole.
On the other hand, average utilitarianism starts from its own "obviously true" axiom: that we should maximise the average expected utility for each person, independent of the total utility. We note that average utilitarianism depends on a statement about aggregations (expected utility), while total utilitarianism depends on a statement about an individual occurrence that doesn't interact with any other SEPs. Given the complexities of aggregating utility, we should be more inclined to trust the statement about individual occurrences than the one about a complex aggregate. This is far from conclusive, but I still believe that this is a useful exercise.
So why is average utilitarianism flawed? The strongest argument for average utilitarianism is the aforementioned "obviously true" assumption that we should maximise expected utility. Accepting this assumption would reduce the situation as follows:
Original situation -> expected utility
Given that we already exist, it is natural for us to really want the average expected utility to be high, and to prefer it over increasing the population, seeing as not existing is not inherently negative. However, while not existing is not negative in the absolute sense, it is still negative in the relative sense due to opportunity cost. It is plausibly good for more happy people to exist, so reducing the situation as we did above discards important information without justification. Another way of stating the situation is as follows: while it may be intuitive to reduce population ethics to a single lottery, this is incorrect; instead, it can only be reduced to n repeated lotteries, where n is the number of people. This situation can be represented as follows:
Original situation -> (expected utility, number of SEPs)
Since this is a tuple, it doesn't provide an automatic ranking of situations, but instead needs to be subject to another transformation before this can occur. It is now clear that the first model assumed away the possible importance of the number of SEPs without justification and therefore assumed its conclusion. Since the strongest argument for average utilitarianism is invalid, the question becomes: what other reasons are there for believing in average utilitarianism? As we have already noted, the repugnant conclusion is much less repugnant than it is generally perceived to be. This leaves us with very little in the way of logical reasons to believe in average utilitarianism. On the other hand, as already discussed, there are very good reasons for believing in total utilitarianism, or at least something much closer to total utilitarianism than to average utilitarianism.
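The disagreement between the two aggregation rules can be made concrete with a small sketch. The utility numbers below are my own illustrative assumptions, not figures from the post; the point is only to show that the two rules rank the same pairs of situations differently.

```python
# Minimal sketch: compare how total and average utilitarianism rank
# populations, each represented as a list of per-SEP utilities.

def total_utility(population):
    """Total utilitarianism: sum of all utilities."""
    return sum(population)

def average_utility(population):
    """Average utilitarianism: mean utility (an empty population scores 0)."""
    return sum(population) / len(population) if population else 0.0

base = [10, 10, 10]        # three happy SEPs
with_extra = base + [1]    # add one SEP with small but positive utility

# Total utilitarianism says adding the extra SEP is an improvement...
assert total_utility(with_extra) > total_utility(base)
# ...while average utilitarianism says it makes the situation worse.
assert average_utility(with_extra) < average_utility(base)

# The sadistic conclusion: given a suffering population, average
# utilitarianism prefers adding another sufferer (at -5) over adding no-one.
suffering = [-10, -10, -10]
assert average_utility(suffering + [-5]) > average_utility(suffering)
# Total utilitarianism instead prefers adding no-one.
assert total_utility(suffering + [-5]) < total_utility(suffering)
```

The second pair of assertions is exactly the sadistic conclusion from above: the extra sufferer raises the average while lowering the total.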
I made this argument using SEPs for simplicity, but there's no reason why the same result shouldn't also apply to complete people. I also believe that this line of argumentation has implications for the anthropic principle.
The following was originally towards the start of the article. I think this is still an interesting approach, but I'm not convinced that it has any benefits over noting that it seems absurd that creating SEPs with positive utility could be bad or criticising the sadistic conclusion. I also think that my attempted formalisation needs a bit more work to make it correct. I also think that imagining combining universes provides a very interesting method for thinking about this problem.
Let's begin by considering a relatively simple argument for total utilitarianism. If we have a group of SEPs who experience different amounts of utility, we can quantify how good or bad each group is in utilitarian terms by imagining how much negative or positive utility a single SEP would need to have in order to balance out the existence of the group, resulting in the existence of the new group being neither good nor bad. If we accept that groups can cancel out like this, then this pushes us towards an aggregate model of utilitarianism because, for example, doubling the number of SEPs in a group where all the members have positive utility seems very helpful when we want this group to cancel out a SEP with negative utility. Once we accept that sheer weight of numbers can allow a group with small but positive utility to cancel out SEPs with arbitrarily large amounts of negative utility, and that the SEP required to cancel out a group is a valid measure of how good a group is in utilitarian terms, we have pretty much proven the repugnant conclusion. Even if the actual aggregate function doesn't end up being a total function, proving the repugnant conclusion would provide us with a total-like function and would also disarm the most severe objection to total utilitarianism. (There is also a very convincing argument where you argue that it is better to have a million people with 99 utility than a smaller population with 100 utility, and then you keep repeating until you end up with a ridiculously large number of people with small utility. I'll add a link if I can find it.)
Let's try to state our assumptions more clearly and see if they are justified. Firstly, for a SEP with any amount of negative utility, is it the case that there will be some number of SEPs with small positive utility who would lead to a neutral universe if no other SEPs existed? I'll note that this happens in both average and aggregate utilitarianism. It also seems pretty much equivalent to the torture vs. dust specks problem: we can convert the large negative utility into dust specks, then let each SEP with a small positive utility cancel out a dust speck.
Secondly, is the ability to cancel out a SEP with negative utility a good metric for measuring how good a situation is in utilitarian terms? Let's suppose we were able to show something a bit more general: that if universe1 is better than universe2 and universe3 is better than universe4, then universe1&3 is better than universe2&4, where universeA&B has all the SEPs in universeA and universeB. Furthermore, if universe1 is just as good as universe2, then universe1&C is just as good as universe2&C. If these axioms are true, then it will be perfectly valid to cancel out groups equivalent to an empty universe before comparing the remaining SEPs, in order to determine whether one universe is better than another. Why would we believe this? Well, it seems rather strange to think that whether a SEP should occur or not depends on what is happening elsewhere in the universe, given that one SEP experiencing utility does not affect how another SEP experiences utility. It seems bizarrely inconsistent that we might want one half of the universe to be what would be a worse universe (an empty universe) if it existed on its own.
In defense of philosophy
The meaning of words
This article aims to challenge the notion that the meaning of words should and must be understood as their propositional or denotational content, in preference to their implied or connotational content. This is an assumption that I held for most of my life and one which I suspect a great many aspiring rationalists will naturally tend towards. But before I begin, I must first clarify the argument that I am making. When a rationalist is engaged in conversation, it is very likely that they are seeking truth and that they want (or would at least claim to want) to know the truth regardless of the emotions that it might stir up. Emotions are seen as something that must be overcome and subjected to logic. The person who would object to a statement due to its phrasing, rather than its propositional content, is seen as acting irrationally. And these beliefs are indeed true to a large extent. Those who hide from emotions are often coddling themselves, and those who object due to phrasing are often subverting the rules of fair play. But there are also situations where using particular words necessarily implies more than the strict denotational content, and trying to ignore these connotations is foolhardy. For many people, this last sentence alone may be all that needs to be said on this topic, but I believe that there is still some value in breaking down precisely what words actually mean.
So why is there a widespread belief within certain circles that the meaning of a word or sentence is its denotational content? I would answer that this is a result of a desire to enforce norms that result in productive conversation. In general conversation, people will often take offense in a way that derails the conversation into a discussion of what is or is not offensive, instead of substantive disagreements. One way to address this problem is to create a norm that each person should only be criticised on their denotations, rather than their connotations. In practice, it is considerably more complicated, as particularly blatant connotations will be treated as denotations, but this is a minor point. The larger point is that treating meaning as consisting purely of the denotations is merely a social norm within a particular context and not an absolute truth.
This means that when the social norms are different and people complain about connotations in other social settings, the issue isn't that they don't understand how words work. The issue isn't that they can't tell the difference between a connotation and a denotation. The issue is that they are operating within different social norms. Sometimes people are defecting from these norms, such as when they engage in an excessively motivated reading, but this isn't a given. Instead, it must be seen that operating within a framework of meaning-as-denotation is merely a social, not an objective, norm, regardless of this norm's considerable merits.
Creating lists
Suppose you are trying to create a list. It may be of the "best" popular science books, the most controversial movies of the last twenty years, tips for getting over a breakup, or the most interesting cat gifs posted in the last few days.
There are many reasons for wanting to create one of these lists, but only a few simple methods:
- Voting model - This is the simplest model, but popularity doesn't always equal quality. It is also particularly problematic for regularly updated lists (like Reddit), where a constantly changing audience can result in large amounts of duplicate content and where easily consumable content has an advantage.
- Curator model - A single expert can often do an admirable job of collecting high-quality content, but this is subject to their own personal biases. It is also effort intensive to evaluate different curators to see if they have done a good job.
- Voting model with (content) rules - This can cut out the irrelevant or sugary content that is often upvoted, but creating good rules is hard. Often there is no objective line between high and low-quality content. These rules can often result in conflict.
- Voting model with sections - This is a solution to some of the limitations of the plain voting model and the rules-based model. Instead of declaring some things off-topic outright, they can be thrown into their own section. This is the optimal solution, but it is usually neglected.
- Voting model with selection - This covers any model where only certain people are allowed to vote. Sometimes selection is extraordinarily rigorous; however, it can still be very effective when it isn't. As an example, Metafilter charges a one-time $5 fee, and that is sufficient to keep the quality high.
Mark Manson and Rationality
As those of you on the Less Wrong chat may know, Mark Manson is my favourite personal development author. I thought I'd share those articles that are most related to rationality, as I figured that they would have the greatest chance of being appreciated.
Immediately after writing this article, I realised that I left one thing unclear, so I'll explain it now. Why have I included articles discussing the terms "life purpose" and "finding yourself"? The reason is that I think that it is very important to provide linguistic bridges between some of the vague everyday language that people often use and the more precise language expected by rationalists.
Why I’m wrong about everything (and so are you):
“When looked at from this perspective, personal development can actually be quite scientific. The hypotheses are our beliefs. Our actions and behaviors are the experiments. The resulting internal emotions and thought patterns are our data. We can then take those and compare them to our original beliefs and then integrate them into our overall understanding of our needs and emotional make-up for the future.”
…
“You test those beliefs out in the real world and get real-world feedback and emotional data from them. You may find that you, in fact, don’t enjoy writing every day as much as you thought you would. You may discover that you actually have a lot of trouble expressing some of your more exquisite thoughts than you first assumed. You realize that there’s a lot of failure and rejection involved in writing and that kind of takes the fun out of it. You also find that you spend more time on your site’s design and presentation than you do on the writing itself, that that is what you actually seem to be enjoying. And so you integrate that new information and adjust your goals and behaviors accordingly.”
7 strange questions that can help you find your life purpose:
Mark Manson deconstructs the notion of “life purpose”, replacing it with a question that is much more tractable:
“Part of the problem is the concept of “life purpose” itself. The idea that we were each born for some higher purpose and it’s now our cosmic mission to find it. This is the same kind of shitty logic used to justify things like spirit crystals or that your lucky number is 34 (but only on Tuesdays or during full moons).
Here’s the truth. We exist on this earth for some undetermined period of time. During that time we do things. Some of these things are important. Some of them are unimportant. And those important things give our lives meaning and happiness. The unimportant ones basically just kill time.
So when people say, “What should I do with my life?” or “What is my life purpose?” what they’re actually asking is: “What can I do with my time that is important?””
5 lessons from 5 years travelling the world:
While this isn’t the only way that the cliché of “finding yourself” can be broken down into something more understandable, it is quite a good attempt:
“Many people embark on journeys around the world in order to “find themselves.” In fact, it’s sort of cliché, the type of thing that sounds deep and important but doesn’t actually mean anything.
Whenever somebody claims they want to travel to “find themselves,” this is what I think they mean: They want to remove all of the major external influences from their lives, put themselves into a random and neutral environment, and then see what person they turn out to be.
By removing their external influences — the overbearing boss at work, the nagging mother, the pressure of a few unsavory friends — they’re then able to see how they actually feel about their life back home.
So perhaps a better way to put it is that you don’t travel to “find yourself,” you travel in order to get a more accurate perception of who you were back home, and whether you actually like that person or not.””
Love is not enough:
Mark Manson attacks one of the biggest myths in our society:
“In our culture, many of us idealize love. We see it as some lofty cure-all for all of life’s problems. Our movies and our stories and our history all celebrate it as life’s ultimate goal, the final solution for all of our pain and struggle. And because we idealize love, we overestimate it. As a result, our relationships pay a price.
When we believe that “all we need is love,” then like Lennon, we’re more likely to ignore fundamental values such as respect, humility and commitment towards the people we care about. After all, if love solves everything, then why bother with all the other stuff — all of the hard stuff?
But if, like Reznor, we believe that “love is not enough,” then we understand that healthy relationships require more than pure emotion or lofty passions. We understand that there are things more important in our lives and our relationships than simply being in love. And the success of our relationships hinges on these deeper and more important values.”
6 Healthy Relationship Habits Most People Think Are Toxic:
Edit: Read the warning in the comments
I included this article because of the discussion of the first habit.
"There’s this guy. His name is John Gottman. And he is like the Michael Jordan of relationship research. Not only has he been studying intimate relationships for more than 40 years, but he practically invented the field.
His “thin-slicing” process boasts a staggering 91% success rate in predicting whether newly-wed couples will divorce within 10 years — a staggeringly high result for any psychological research.
...
Gottman devised the process of “thin-slicing” relationships, a technique where he hooks couples up to all sorts of biometric devices and then records them having short conversations about their problems. Gottman then goes back and analyzes the conversation frame by frame looking at biometric data, body language, tonality and specific words chosen. He then combines all of this data together to predict whether your marriage sucks or not.
And the first thing Gottman says in almost all of his books is this: The idea that couples must communicate and resolve all of their problems is a myth."
Others
I highly recommend these articles. They are based on research to an extent, but also upon his experiences, so they are not completely research based. If purely research-based material is what you want, then you should try looking for a review article instead.
Updating on hypotheticals
This post is based on a discussion with ChristianKl on Less Wrong Chat. Thanks!
Many people disagreed with my previous writings on hypotheticals on Less Wrong (link1, link2). For those who still aren’t convinced, I’ll provide another argument for why you should take hypotheticals seriously. Suppose you are discussing whether it’d be okay to flick a switch, ending one life, if a train was otherwise about to collide with and destroy an entire world, as a way to try to sell someone on utilitarian ethics (see the trolley problem). The other person objects that this is an unrealistic situation and so there is no point wasting time on this discussion.
This may seem unreasonable, but I suppose a person who believes that their time is very valuable may not feel that it is actually worth their while indulging in the hypothetical that A->B unless the other person is willing to explain why this result would relate to how we should act in the real world. This is especially likely to be true if they have had similar discussions before and so have a low prior that the other person will be able to relate it to the real world.
However, at this stage, they almost certainly have to update, in the sense that if you are following the rule of updating on new evidence, you have most likely already received new evidence. The argument is as follows: As soon as you have heard A->B (if it would save a world, I would flick a switch), your brain has already performed a surface level evaluation on that argument. Realistically, the thinker in the situation probably knows that it is really tough to make the argument that we should allow an entire world to be destroyed instead of ending one life. Now, the fact that it is tough to argue against something doesn’t mean that it should be accepted. For example, many philosophical proofs or halves of mathematical paradoxes seem very hard to argue against at first, but we may have an intuitive sense that there is a flaw there to be found if we are smart enough and look hard enough.
However, even if we aren’t confident in the logic, we still have to update our priors once we know that there is an argument for it that at least appears to check out. Obviously we will update to a much lesser degree than if we were confident in the logic, but we still have to update to some extent, even if we think the chance of A->B being analogous to the real world is incredibly small, as there will always be *some* chance that it is analogous, assuming the other person isn’t talking nonsense. So even though the analogy hardly seems to fit the real world, and even though you’ve perhaps spent only a second thinking about whether A->B checks out, you’ve still got to update. I'll add another quick note: you only have to update on the first instance; when you see the same or a very similar problem again, you don't have to update.
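The size of this obligatory update can be made concrete with a toy Bayesian calculation. All of the numbers below (the prior and both likelihoods) are my own illustrative assumptions rather than anything from the original discussion; they merely show that even weak evidence forces a nonzero shift.

```python
# Toy Bayesian sketch of the update described above.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H | E) from a prior and two likelihoods."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# H: "the hypothetical A->B is analogous to the real world in a way
# that should change how I act."
prior = 0.05

# E: "on a quick surface-level evaluation, I could not find a flaw in A->B."
# A sound, relevant argument almost always survives a quick check,
# but a flawed or irrelevant one often does too.
p_e_given_h = 0.95
p_e_given_not_h = 0.60

updated = posterior(prior, p_e_given_h, p_e_given_not_h)
print(round(updated, 3))  # prints 0.077: small, but strictly above 0.05
```

Because the evidence is more likely under H than under not-H, the posterior must rise, however slightly; that is the formal version of "you've still got to update".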
How does this play out? An intellectually honest response would be along the lines of: “Okay, your argument seems to check out on first glance, but I’m rather skeptical that it’d hold up if I spent enough time thinking about it. Anyway, supposing that it was true, why should the real world be anything like A?”. This is much more honest than simply trying to dismiss the hypothetical by stating that A is nothing like reality.
There’s one objection that I need to answer. Maybe you say that you haven’t considered A->B at all. I would be really skeptical of this. There is a small chance I’m committing the typical mind fallacy, but I’m pretty sure that your mind considered both A->B and “this is analogous with reality”, and you decided to contest the second because you didn’t find a strong counter-argument against A->B. And if you did actually find a strong counter-argument, but chose to challenge the hypothetical instead, why not use your counter-argument? Why not engage with your opponent directly and take down their argument, as this is more persuasive than dodging the question? There probably are situations where challenging the hypothetical seems reasonable, such as if the argument against A->B is very long and complicated, but you think it is much easier to convince the other person that the situation isn’t analogous. These situations might exist, but I suspect they are relatively rare.
Survey Articles: A justification
There seems to be a growing consensus within the community that while Less Wrong is great at improving epistemic rationality, it is rather lacking when it comes to resources for instrumental rationality. I've been thinking about how to address this. It can be very hard because many of the questions most important to instrumental rationality lack an objective answer and depend heavily on individual circumstances. Consider, for example, the question "How do I become a more interesting person?", which is the subject of the first survey article I've published. One person might easily have the resources to go travelling and gain new experiences, while another might be prevented by their financial situation. One person may enjoy the process of broadening their experience by reading, while another may simply detest books. Ignoring these individual circumstances will lead to much of the advice being unsuitable.
It therefore seems that in a general resource, which is forced by its very nature to ignore individual circumstances, the best response is to gather together as many ideas as possible. It is hoped that each rationalist has the capacity to critically examine each suggestion that is proposed and reject those that would be counterproductive. This differs from a standard list article in that, instead of limiting itself to an arbitrary number of ideas or only using ideas thought of by the author, it makes a comprehensive list and takes ideas from different sources. Taking ideas from different sources is extremely important - a single person can only possess so much creativity. It also decreases the influence of the author's subjective point of view - I might never have said something myself, but I might be willing to include it in a list of ideas. Another problem with lists is that if they are wordy, they take a long time to read through, while if they are concise, they may be misunderstood. Summarising whilst linking to a source means that extra detail is available for those who need it.
One flaw is that the production of these lists will always be greatly subjective. I really like Mark Manson and am probably going to quote him a lot in these lists, but another person might love The Secret and quote it everywhere instead. Regardless of this subjectivity, if you think that a particular source lacks value, you can choose to ignore that source and just read the rest of the article. If there is a noticeable omission, that can be addressed in the comments or, in extreme cases, by producing a rival list. So I think that these articles can work well regardless of subjectivity.
What problem is this designed to solve?
This has already been discussed above, but I want to go into more detail about the current process when someone has one of these subjective questions. It probably looks like Googling the question or searching for it on a trusted source (e.g. Quora or Reddit). There are many good answers and good ideas, but they are spread out all over the Internet. It is very possible for someone to fail to find a suggestion that would have helped them. Gathering together a large number of different resources helps to minimise this. It also helps people discover new sources that they might not have thought to look at.
What feedback am I after?
As well as general support or criticisms of the idea, I'd also like to see some suggestions on which questions you'd love to see a survey for.
Survey Article: How do I become a more interesting person?
This post surveys a number of different sources and opinions on how to be a more interesting person. This isn’t merely about improving yourself socially or making your interactions more enjoyable, but also about achieving your full potential as a human being. In this post, I mention specific activities, but it is important to choose activities that align with your personal interests as otherwise, you are much less likely to invest the time and effort required to master them.
Please read this article which explains the need that these survey articles fill.
Quora:
How do I become a more interesting person?
Moses Namkung argues that being interesting is about being curious, “restlessly seeking out knowledge” and accumulating new experiences. He claims that interesting people have “merged their personal interests with their work/main purpose in life” and pursue productive activities instead of just vegetating.
Kat Li suggests travelling, learning a language and experiencing foreign cultures. These will help you develop a new way of seeing the world. She notes that although it is good to have a wide variety of experiences, it is generally worthwhile to have at least one area in which you are a true expert, so that you have something that is unique.
Scott Danzig defines interesting as knowing something others don’t, being able to do something others can’t or something that is different. He further suggests that creating a sense of mystery by not revealing certain information can make you more interesting too.
How can you live an interesting life?
Leo Polovets suggests three rules. Firstly, to be willing to do things by yourself, instead of needing someone to come with you. Secondly, saying yes to as many opportunities as possible. Thirdly, to stop caring about what is normal or expected.
Emmet Meehan says that you should be different, but not different just for the sake of being different. He says that if you wear neon green shoe laces it should be because you want neon green shoe laces. He also suggests that something as simple as reading a new book or listening to a new radio station will make you more interesting.
Bud Hennekes explains that some of his best experiences have come from talking to strangers. He notes that if you surround yourself with interesting people, you are more likely to end up being interesting yourself. He also argues that you should step outside of your comfort zone and not be afraid of failure, as failure is often interesting in itself.
Many of Greg Strange‘s best experiences came as a result of avoiding preplanning or backup plans. “Go where you will have to depend on your wits, your bravado and your humor. No matter what happens, no matter how good or bad the trip turns out, no matter whether you regret it or not, you did it and you did it on your own.”
Michael Huggins tells some interesting stories of how minor events helped him discover new interests. For example, he saw someone play a few bars of harpsichord on television and when he investigated further he discovered he had a fascination with Baroque music.
Wikihow:
Wikihow suggests going to local events (such as markets or festivals) or reading a new book every month. You may also consider taking courses online (good sites include Khan Academy, Coursera or Udemy).
Lifehacker:
Career Sherpa notes that if you are interested in others, they are likely to reciprocate and be more interested in you. It is often stated that you should ask open-ended questions. For example, “Tell me about your family” gives your conversation partner more space to steer the conversation towards something interesting than “How many brothers do you have?”.
Mark Manson:
Mark Manson explains that in order to be interesting, you have to take risks. “If you live a non-polarizing life, then you are not going to be attractive or unattractive. You’re just going to be boring. More of the same. Dime a dozen”
He also suggests developing artistic taste. Compare “I really liked Terminator” to “Terminator was great. But what was more interesting to me is that it was the first movie in which you ended up rooting for the villain”. He suggests trying to appreciate the value of all kinds of film and music – as opposed to dismissing entire genres and to judge art based on its intentions, not just results. He suggests that the best way to get into a new genre is to start by consuming the media that is generally considered the best or most critically acclaimed.
Forbes
Forbes suggests embracing your “innate weirdness” and doing “something. anything”. It also suggests finding a cause that you care strongly about since even those who don’t care so much about it themselves can admire your passion.
Succeed Socially
Succeed Socially warns that you can be “extremely well-rounded and accomplished”, but you’ll still need some degree of social skills to be socially successful. It also discusses the issue of topics that are socially practical to know about. It argues that there are significant benefits to picking up this knowledge, but sometimes “even if it would be practical to learn about them, we still can’t be bothered, and we can live with the consequences”.
Further ideas:
- Keeping up with the news will give you easy topics of conversation
- Subscribing to a Word of the Day site. Sometimes all it takes to make an idea interesting is to say it in a different way
- Checking out a new style of music. People who share your tastes are likely to have similar personalities.
- Picking up a new hobby will make you more interesting. Several suggestions have been listed so far. A few more popular ones are dancing, cooking and learning to play an instrument.
- Becoming more interesting is about developing what is unique about yourself. The advice commonly given is “be yourself”, but I find it much clearer to say, “be the best version of yourself”. This clarifies that it is perfectly fine to change the things that are holding you back socially, so long as you don’t compromise your individuality.
Regarding suggestions: I want this article to focus more on being an interesting person, rather than being an interesting conversationalist. Some ideas about conversations slipped in here, but I'll probably shift them over to the article on conversation when I get around to writing it.
Philosophical schools are approaches not positions
One of the great challenges of learning philosophy is trying to understand the difference between different schools of thought. Often it can be almost impossible to craft a definition that is specific enough to be understandable, whilst also being general enough to convey the breadth of that school of thought. I would suggest that this is a result of trying to define a school as taking a particular position in a debate, when it would be better defined as taking a particular approach to answering a question.
Take for example dualism and monism. Dualists believe that there exist two substances (typically the material substance and some kind of soul/consciousness), while monists believe that there only exists one. The question of whether this debate is defined precisely enough to actually be answerable immediately crops up. Few people would object to labelling the traditional Christian model of souls which went to an afterlife as being a Dualist model or a model of our universe with no conscious beings whatsoever as being monist. However, providing a good, general definition of what would count as two substances and what would count as one seems extraordinarily difficult. The question then arises of whether the dualism vs. monism debate is actually in a form that is answerable.
In contrast, if Dualism and Monism are thought of as approaches, then there can conceivably exist some situations where Dualism is clearly better, some situations where Monism is clearly better and some situations where it is debatable. Rather than labelling the situation unanswerable, it would be better to call it possibly unanswerable.
Once it is accepted that dualism and monism are approaches, rather than positions, the debate becomes much clearer. We can define these approaches as follows: Monism argues for describing reality as containing a single substance, while dualism argues for describing reality as containing two substances: typically one being physical and the other being mental or spiritual. I originally wrote this sentence using the word ‘modelling’ instead of ‘describing’, but I changed it because I wanted to be neutral on the issue of whether we can talk about what actually exists or can only talk about models of reality. If it were meaningful to talk about whether one or two substances actually existed (as opposed to simply being useful models), then the monism and dualism approaches would collapse down to being positions. However, the assumption that they have a "real" existence, if that is actually a valid concept, should not be made at the outset, and hence we describe them as approaches.
Can we still have our dualism vs. monism debate? Sure, kind of. We begin by using philosophy to establish the facts. In some cases, only one description may match the situation, but in other cases, it may be ambiguous. If this occurs, we could allow a debate to occur over which is the better description. This seems like a positional debate, but simply understanding that it is a descriptional debate changes how the debate plays out. Some people would argue that this question isn’t a job for philosophers, but for linguists, and I acknowledge that there's a lot of validity to this point of view. Secondly, these approaches could be crystallised into actual positions. This would involve creating criteria for one side to win and the other to lose. Many philosophers who belong to monism, for example, would dislike the "crystallised" monism for not representing their views, so it might be wise to give these crystallised positions their own names.
We also consider free will. Instead of understanding the free will school of philosophy to hold the position that F0 exists, where F0 is what is really meant by free will, it is better to understand it as a general approach that argues that there is some aspect of reality accurately described by the phrase “free will”. Some people will find this definition unsatisfactory and almost tautological, but no more precise statement can be made if we want to capture the actual breadth of thought. If you want to know what a particular person actually believes, then you’ll have to ask them to define what they are using free will to mean.
This discussion also leads us to a better way to teach people about these terms. The first part is to explain how the particular approach tries to describe reality. The second is to explain why particular situations or thought experiments seem to make more sense with this description.
While I have maintained that philosophical schools should be understood as approaches, rather than positions, I admit the possibility that in a few cases philosophers might have actually managed to come to consensus and make the opposing schools of thought positions rather than approaches. This analysis would not apply to them. However, if these cases do in fact exist, they appear to be few and far between.
Note: I'm not completely happy with the monism, dualism example, I'll probably replace it later when I come across a better example for demonstrating my point.
The Trolley Problem and Reversibility
The most famous problem used when discussing consequentialism is the trolley problem. A trolley is hurtling towards five people on the track, but if you flick a switch it will change tracks and kill only one person instead. Utilitarians would say that you should flick the switch, as it is better for there to be a single death than five. Some deontologists might agree with this; however, many more would object and argue that you don’t have the right to make that decision. This problem has different variations, such as one where you push someone in front of the trolley instead of them being on the track, but we’ll consider this one, as accepting it moves you a large way towards utilitarianism.
Let’s suppose that someone flicks the switch, but then realises the other side was actually correct and that they shouldn’t have flicked it. Do they now have an obligation to flick the switch back? What is interesting is that if they had just walked into the room and the train was heading towards the one person, they would have had an obligation *not* to flick the switch, but, having flicked it, it seems that they have an obligation to flick it back the other way.
Where this gets more puzzling is when we imagine that Bob observed Aaron flicking the switch. Arguably, if Aaron had no right to flick the switch, then Bob would have an obligation to flick it back (or, if not an obligation, this would surely count as a moral good?). It is hard to argue against this conclusion, assuming that there is a strong moral obligation for Aaron not to flick the switch, along the lines of “Do not kill”. This logic seems consistent with how we act in other situations: if someone had tried to kill someone or steal something important from them, then most people would reverse or prevent the action if they could.
But what if Aaron reveals that he was only flicking the switch because Cameron had flicked it first? Then Bob would be obligated to leave it alone, as Aaron would be doing what Bob was planning to do: prevent interference. We can also complicate it by imagining that a strong gust of wind was about to come and flick the switch, but Bob flicked it first. Is there now a duty to undo Bob's flick of the switch, or does the fact that the switch was going to flick anyway abrogate that duty? This obligation to trace back the history seems very strange indeed. I can’t see any pathway to a logical contradiction, but I can’t imagine that many people would defend this state of affairs.
But perhaps the key principle here is non-interference. When Aaron flicks the switch, he has interfered and so he arguably has the limited right to undo his interference. But when Bob decides to reverse this, perhaps this counts as interference also. So while Bob receives credit for preventing Aaron’s interference, this is outweighed by committing interference himself - acts are generally considered more important than omissions. This would lead to Bob being required to take no action, as there wouldn’t be any morally acceptable pathway with which to take action.
I’m not sure I find this line of thought convincing. If we don’t want anyone interfering with the situation, couldn’t we glue the switch in place before anyone (including Aaron) gets the chance or even the notion to interfere? It would seem rather strange to argue that we have to leave the door open to interference even before we know anyone is planning to do so. Next suppose that we don’t have glue, but we can install a mechanism that will flick the switch back if anyone tries to flick it. In principle, this doesn’t seem any different from installing glue.
Next, suppose we don’t have a machine to flick it back, so instead we install Bob. It seems that installing Bob is just as moral as installing an actual mechanism. It would seem rather strange to argue that “installing” Bob is moral, but any action he takes is immoral. There might be cases where “installing” someone is moral, but certain actions they take will be immoral. One example would be “installing” a policeman to enforce a law that is imperfect. We can expect the decision to hire the policeman to be moral if the law is generally good, but, in certain circumstances, flaws in this law might make enforcement immoral. But here, we are imagining that *any* action Bob takes is immoral interference. It therefore seems strange to suggest that installing him could somehow be moral, and so this line of thought seems to lead to a contradiction.
We consider one last situation: that we aren't allowed to interfere and that setting up a mechanism to stop interference also counts as interference. We first imagine that Obama has ordered a drone attack that is going to kill a (robot, just go with it) terrorist. He knows that the drone attack will cause collateral damage, but it will also prevent the terrorist from killing many more people on American soil. He wakes up the next morning and realises that he was wrong to violate the deontological principles, so he calls off the attack. Are there any deontologists who would argue that he doesn’t have the right to rescind his order? Rescinding the order does not seem to count as "further interference"; instead it seems to count as "preventing his interference from occurring". Flicking the switch back seems functionally identical to rescinding the order. The trolley hasn’t hit the intersection, so there isn’t any causal entanglement, and it seems like flicking the switch back is best characterised as preventing the interference from occurring. If we want to make the scenarios even more similar, we can imagine that flicking the switch doesn't force the trolley to go down one track or another, but instead orders the driver to take one particular track. It doesn't seem like changing this aspect of the problem should alter the morality at all.
This post has shown that deontological objections to the Trolley Problem tend to lead to non-obvious philosophical commitments that are not very well known. I didn't write this post so much to try to show that deontology is wrong, as to start a conversation and help deontologists understand and refine their commitments better.
I also wanted to include one paragraph I wrote in the comments: Let's assume that the trolley will arrive at the intersection in five minutes. If you pull the lever one way, then pull it back the other, you'll save someone from losing their job. There is no chance that the lever will get stuck, or that you won't be able to complete the operation once you try. Clearly pulling the lever, then pulling it back, is superior to not touching it. This seems to indicate that the sin isn't pulling the lever, but pulling it without the intent to pull it back. If the sin is pulling it without intent to pull it back, then it would seem very strange that gaining the intent to pull it back, then pulling it back, would be a sin.