Moral Anti-Epistemology

0 Lukas_Gloor 24 April 2015 03:30AM

This post is a half-baked idea that I'm posting here in order to get feedback and further brainstorming. There seem to be some interesting parallels between epistemology and ethics.

Part 1: Moral Anti-Epistemology

"Anti-Epistemology" refers to bad rules of reasoning that exist not because they are useful/truth-tracking, but because they are good at preserving people's cherished beliefs about the world. But cherished beliefs don't just concern factual questions, they also very much concern moral issues. Therefore, we should expect there to be a lot of moral anti-epistemology. 

Tradition as a moral argument, tu quoque, opposition to the use of thought experiments, the noncentral fallacy, slogans like "morality is from humans for humans" – all these are instances of the same general phenomenon. This is trivial and doesn't add much to the already well-known fact that humans often rationalize, but it does add the memetic perspective: Moral rationalizations sometimes concern more than a singular instance; they can affect the entire way people reason about morality. And as with religion or pseudoscience in the epistemology of factual claims, there could be entire memeplexes centered around moral anti-epistemology.

A complication is that metaethics itself is unsettled: it is unclear what exactly moral reasoning is, and whether everyone is trying to do the same thing when they engage in what they think of as moral reasoning. Labelling something "moral anti-epistemology" would suggest that there is a correct way to think about morality. Is there? As long as we always make sure to clarify what it is that we're trying to accomplish, it seems possible to differentiate between valid and invalid arguments with regard to the specified goal. And this is where moral anti-epistemology might cause trouble.

Are there reasons to assume that certain popular ethical beliefs are a result of moral anti-epistemology? Deontology comes to mind (mostly because it's my usual suspect when it comes to odd reasoning in ethics), but what is it about deontology that relies on "faulty moral reasoning", if indeed there is something about it that does? How much of it relies on the noncentral fallacy, for instance? Is Yvain's personal opinion that "much of deontology is just an attempt to formalize and justify this fallacy" correct? The perspective of moral anti-epistemology would suggest that it is the other way around: Deontology might be the by-product of people applying the noncentral fallacy, which is done because it helps protect cherished beliefs. Which beliefs would that be? Perhaps the strongly felt intuition that "some things are JUST WRONG", which doesn't handle fuzzy concepts/boundaries well and therefore has to be combined with a dogmatic approach? It sounds somewhat plausible, but also really speculative.

Part 2: Memetics

A lot of people are skeptical of these memetic just-so stories. They argue that the points made are either too trivial or too speculative. I have the intuition that a memetic perspective often helps clarify things, and my thoughts about applying the concept of anti-epistemology to ethics seemed like an insight, but I have a hard time pinning down how my expectations about the world have changed because of it. What, if anything, is the value of the idea I just presented? Can I now form a prediction to test whether deontologists primarily want to formalize and justify the noncentral fallacy, or whether they instead want to justify something else by making use of the noncentral fallacy?

Anti-epistemology is a more general model of what is going on in the world than rationalizations are, so it should all reduce to rationalizations in the end; it shouldn't be worrying that I don't magically find more stuff. Perhaps my expectations were too high and I should be content with having found a way to categorize moral rationalizations, the knowledge of which will make me slightly quicker at spotting or predicting them.

Thoughts?

Comment author: jkaufman 09 December 2014 08:47:47PM 4 points

I'm not seeing where in Dagon's comment they indicate preference utilitarianism vs. (e.g.) hedonic?

Comment author: Lukas_Gloor 10 December 2014 12:41:09PM 0 points

I see what you mean. Why I thought he meant preference:

1) talks about "utility of all humans", whereas a classical utilitarian would more likely have used something like "well-being". However, you can interpret it as a general placeholder for "whatever matters".

3) is also something that is usually mentioned in economics, where it is associated with preference models. Here again, though, it is true that diminishing marginal utility also applies to classical utilitarianism.

Comment author: MathiasZaman 09 December 2014 09:20:53AM 2 points

If you want to completely optimize your life for creating more global utilons then, yes, utilitarianism requires extreme self-sacrifice. The time you spend playing that video game or hanging out with friends netted you utility/happiness, but you could have spent that time working and donating the money to an effective charity. That tasty cheese you ate probably made you quite happy, but it didn't maximize utility. Better switch to the bare minimum you need to work the highest-paying job you can manage and give all the money you don't strictly need to an effective charity.

Of course, humans (generally) can't manage that. You won't be able to function at a high-paying job if you can't occasionally indulge in some tasty food or if your Fun-bar is in the red all the time. (Or, for that matter, most of your other bars.) You'll probably spend a lot of time lying on the floor crying if you live like this.

While it might be morally optimal for you to ignore your own needs and work on the biggest gains you can manage, this isn't something that can be required of (most) people. You can use utilitarianism as a framework to base your decisions on without giving up everything. Giving up 100% of your income to a good charity might be morally optimal, but [giving 10% still makes a huge impact](https://www.givingwhatwecan.org) and allows you a comfortable life yourself.

I don't think being perfectly utilitarian is something (most) humans should strive for. Use it as a guideline to influence the world around you, but don't let it drive you crazy.

Or, to quote someone on Skype:

[Considering yourself a bad person because utilitarianism] is like saying Usain Bolt is slow because he runs at such a tiny fraction of the speed of light.

Comment author: Lukas_Gloor 09 December 2014 02:59:41PM *  0 points

That's a great quote! Despite its brevity, it explains a big part of what I used hundreds of words to explain. :)

Comment author: Dagon 09 December 2014 09:17:09AM *  4 points

"Utilitarianism" for many people includes a few beliefs that add up to this requirement.

  • 1) Utility of all humans is more-or-less equal in importance.
  • 2) It's morally required to make decisions that maximize total utility.
  • 3) There is declining marginal utility for resources.

#3 implies that moving wealth from someone who has more to someone who has less increases total utility. #1 means that this includes your wealth. #2 means it's obligatory.
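
To make the arithmetic behind that implication concrete, here is a minimal sketch in Python. It assumes a logarithmic utility function purely as a stand-in for "declining marginal utility" (#3); the wealth figures are hypothetical.

```python
import math

def utility(wealth):
    # A concave function: each additional unit of wealth adds less utility
    # than the previous one (#3). Log utility is just one possible stand-in.
    return math.log(wealth)

def total_utility(wealths):
    # #1: everyone's utility counts equally, so simply sum it.
    return sum(utility(w) for w in wealths)

before = [100_000, 1_000]   # a richer person and a poorer person
after = [90_000, 11_000]    # after transferring 10,000 from rich to poor

print(total_utility(before))  # ~18.42
print(total_utility(after))   # ~20.71 -- the transfer raised total utility
```

Under these assumptions, the transfer lowers the richer person's utility by less than it raises the poorer person's, so total utility goes up; #2 then makes the transfer obligatory.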

Note that I'm not a utilitarian, and I don't believe #1 or #2. Anyone who actually does believe these, please feel free to correct me or rephrase to be more accurate.

Comment author: Lukas_Gloor 09 December 2014 02:52:12PM -1 points

This sounds like preference utilitarianism, the view that what matters for a person is the extent to which her utility function ("preferences") is fulfilled. In academic ethics outside of LessWrong, "utilitarianism" refers to a family of ethical views, of which the most commonly associated one is Bentham's "classical utilitarianism", where "utility" is very specifically defined as the "happiness minus suffering" that a person experiences over time.

Comment author: Lukas_Gloor 09 December 2014 02:34:29PM *  7 points

I thought about this question a while ago and have been meaning to write about it sometime. This is a good opportunity.

Terminology: Other commenters are pointing out that there are differing definitions of the word "utilitarianism". I think it is clear that the article in question is talking about utilitarianism as an ethical theory (or rather, a family of ethical theories). As such, utilitarianism is a form of consequentialism, the view that doing "the right thing" is what produces the best state of affairs. Utilitarianism is different from other forms of consequentialism in that the thing people consider good/valuable/worth achieving is directly tied to conscious beings. An example of a non-utilitarian consequentialist theory would be the belief that knowledge is the most important thing, and that we should all strive to advance science (at all costs).

In regard to the question, there are two interesting points that are immediately worth pointing out:

1) Utilitarianism (and any sort of consequentialism), if it is indeed demanding, is only demanding in certain empirical situations. If the world is already perfect, you don't have to do anything!

2) For every consequentialist view, there are empirical situations where achieving the best consequences is extremely demanding. Just imagine that the desired state of affairs is really hard to attain.

So my first reply to people who criticise utilitarianism for being too demanding is the following: Yes, it's very unfortunate that the world is so messed up, but it's not the fault of the utilitarians!

Further, the quoted statement in bold speaks of certain actions being "not just admirable, but morally obligatory". I find this framing misleading. I believe that people should taboo words like "morally obligatory" in ethical discussions. It makes it seem like there is some external moral standard that humans are supposed to obey, but what would it be, and more importantly, why should we care? In my disclaimer on terminology, I wrote that I'm referring to utilitarianism as an ethical theory. I don't intend this to mean that utilitarians are committed to the claim that there are universally valid "ethical truths". I would define "utilitarian" as: "Someone who would voluntarily take a pill that turns them into a robot that goes on to perfectly maximize expected utility", with "utility" defined as "world-states that are good for sentient individuals", and with "good" defined in non-moral terms, depending on which branch of utilitarianism one subscribes to (it could be that e.g. preference-fulfillment is important to you, or contentment, or the sum of happiness minus suffering). According to this interpretation, a utilitarian would not be committed to the view that non-utilitarian people are "making a mistake" -- perhaps they just care about different things!

According to the metaethical view I just sketched, which is metaethical anti-realism, the demandingness of utilitarianism loses its scariness. If something is requested of you against your will, you're going to object all the more if the request is more demanding. However, if you have a particular goal in life and find out that the circumstances are unfortunately quite dire, so that achieving your goal will be very hard, your objection will be directed towards the state of the world, not towards your own goal (hopefully, anyway; sometimes people irrationally do the other thing).

Yes, utilitarianism ranks actions according to how much expected utility they produce, and only one action will be "best". However, it would be very misleading to apply moral terms like "only the best action is right, all the others are wrong". Unlike deontology, where all you need to do is not violate a set of rules, utilitarianism should be thought of as an open-ended game where you can score points and you try to score as many as possible. Yes, there is just one best course of action, but it can still make a huge difference whether you e.g. take the fifteenth best action or the nineteenth. For utilitarians, moral praise is merely instrumental: They want to blame and praise people in a way that produces the best outcome. This includes praising people for things that are less than perfect, for instance.

So in part, the demandingness objection against utilitarianism relies on an uncharitable interpretation/definition of "utilitarianism", one which commits utilitarians to a belief in moral realism. (I consider this interpretation uncharitable because I think the entire concept of "moral realism" is, like libertarian free will, a confused idea that cannot be defined in clear terms without losing at least part of the connotations we intuitively considered important.)

Another reason why I think the demandingness objection is a bad objection is that people usually apply it in a naive, short-sighted way. The author of the quote in question did so, for instance: "It also appears to imply that donating all your money to charity beyond what you need to survive (…)" This is wrong. It only implies donating all your money to charity beyond what you need to be maximally productive in the long run. Empirical studies show that being poor decreases the quality of your decision-making. Further, putting too much pressure on yourself often leads to burnout, which leads to a significant loss of productivity in the long run. I find that people tend to overestimate how demanding a typical utilitarian life is. But they are right insofar as there could be situations where trying to achieve the utilitarian goal results in significant self-sacrifice. Such situations are definitely logically possible, but I think they are much rarer than people assume.

The reason is that people tend to conflate "trying to act like a perfectly rational, super-productive utilitarian robot would act" and "trying to maximise expected utility given all your personal constraints". Utilitarianism implies the latter, not the former. Utilitarianism refers to desiring a specific overall outcome, not to a specific decision-procedure for every action you are taking. It is perfectly in line with utilitarianism to come to a conclusion such as: "My personality happens to be such that thinking about all the suffering in the world every day is just too much for me; I literally couldn't keep it up for more than two months. I want to make a budget for charity once every year, donate what's in that budget, and for the rest of the time try not to worry much about it." If it is indeed the case that doing things differently would lead to this person giving up the entire endeavour of donating money, then this is literally the best thing for this person to do. Humans need some degree of happiness and luxury if they want to remain productive and clear-headed in the long run.

The whole thing is also extremely person-dependent. For some people, "trying to maximise expected utility given all your personal constraints" will look more like "trying to act like a perfectly rational, super-productive utilitarian robot would act" than for other people. Some people are just naturally better at achieving a given goal than others; this depends both on the goal and on the personality traits and assets of the person in question.

Finally, let's ask whether "trying to maximise expected utility given all your personal constraints" will, on average, given real-world circumstances, prove to be demanding or not. I suggest defining "demanding" as follows: goal A is more demanding than goal B if people who try to rationally achieve A have a lower average happiness across a time period than people who try to rationally achieve goal B. If you were to empirically measure this, I would suggest contacting people at random times during the day or night and asking them to report how they are feeling at that very moment. When it comes to momentary happiness, it is trivial that trying to maximise your momentary happiness will lead to you being happier than trying to be utilitarian. Utilitarians might object, citing the paradox of hedonism: When people only focus on their own personal happiness, their life will soon feel sad. However, this would be making the exact same mistake I discussed earlier. If it is truly the case that explicitly focusing on your personal happiness makes you miserable, then of course the rational thing for a person with this goal to do would be to self-modify and convince themselves to follow a different goal.
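
As a minimal sketch of how this definition could be operationalized (the sampling procedure and all numbers below are hypothetical illustrations, not real data):

```python
# Hypothetical momentary-happiness reports on a 0-10 scale, collected by
# pinging each person at random times of day and night. These values are
# made up purely to illustrate the comparison.
goal_a_reports = [6.1, 5.4, 7.0, 4.8, 6.3]  # people rationally pursuing goal A
goal_b_reports = [7.2, 6.8, 7.5, 6.0, 7.1]  # people rationally pursuing goal B

def average(reports):
    return sum(reports) / len(reports)

# Under the suggested definition, goal A counts as more demanding than
# goal B if its pursuers report lower average momentary happiness.
print(average(goal_a_reports))                             # 5.92
print(average(goal_b_reports))                             # 6.92
print(average(goal_a_reports) < average(goal_b_reports))   # True -> A is more demanding
```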

There is a distinction between the experiencing self and the remembering self, which is why it would be a completely different question to ask people "how happy are you with your life on the whole?". For instance, I read somewhere that mothers (compared to women without children) tend to be less happy in the average moment, but happier with their life as a whole. What is it that you care about more? I would assume that people are happy with their life on the whole if they know what they want in life, if they think they made good choices regarding the goals that they have, and if they keep getting closer to those goals. At least for the first part of this, knowing what you want in life, utilitarianism does very well.

Comment author: Lukas_Gloor 24 November 2014 05:36:17PM 3 points

According to how I understand the proposed view (which might well be wrong!), there seems to be a difficulty in how the natural zero affects tradeoffs with the welfare of pre-existing beings. How would the view deal with the following cases:

Case_1: Agent A has the means to bring being B into existence, but if no further preparations are taken, B will be absolutely miserable. If agent A takes away resources from pre-existing being C in order to later give them to B, thereby causing a great deal of suffering to C, B's life-prospects can be improved to a total welfare of slightly above zero. If the natural zero is sufficiently negative, would such a transaction be permissible?

Case_2: If it's not permissible, it seems that we must penalize cases where the natural zero starts out negative. However, how about a case where the natural zero is just slightly negative, but agent A only needs to invest a tiny effort in order to guarantee being B a hugely positive life? Would that always be impermissible?

Comment author: Lukas_Gloor 06 August 2014 07:12:56PM *  2 points

"facts about what preferences one should have"

The "should" here is not defined clearly enough (or at all!), even though this seems to be the central point in the debate. We have the intuition that the question is meaningful, but I suspect that it really isn't. I don't understand what this could possibly mean -- expect for trivial cases where you already specify a goal. I would leave it at "Most intelligent beings in the multiverse share similar preferences", with perhaps adding a qualifier like "evolved/intelligently designed". Note that this would then be answering a slightly different question than 3., 4. and 5.

My own view is roughly a 4.3 on the spectrum from 4. to 5.

The way "complexity of value" is used by Eliezer seems to suggest that he adheres to view 3, although I could well imagine him also going for 4 or 5.

I'm unsure about 6; I suspect/hope that you can just define "winning" clearly enough in whatever utility function you're interested in and decision theory will sort itself out. But maybe it's more complicated.

Comment author: mwengler 30 March 2014 03:00:51PM *  5 points

"I'm of the belief that the central problem in modern society is that we inherited a bad moral philosophy"

So you gave up consequentialism because virtue ethics had better consequences?

Comment author: Lukas_Gloor 31 March 2014 09:52:59AM 0 points

Exactly; the way people talk about this on LW confuses me. I think I agree with everything, but it is framed in a weird way.

Comment author: blacktrance 27 March 2014 04:31:03PM *  4 points

I can personally attest that thinking about ethics has significantly affected my life and has given me a lot of insight.

"My personal experience of late has also been that thinking in terms of 'what does utilitarianism dictate I should do' produces recommendations that feel like external obligations"

This is a problem only if you assume that morality is external, as it is in utilitarianism, Kantianism, and similar ethical systems. If you take an internal approach to morality, as in contractarianism, virtue ethics, and egoism, this isn't a problem.

Comment author: Lukas_Gloor 31 March 2014 09:51:14AM *  -1 points

Agreed, but I'd like to point out that this is a false dichotomy: Utilitarianism can be the conclusion when following an internal approach. And seen that way, it doesn't feel like you need to pressure yourself to follow some external standard. You simply need to pressure yourself to follow your own standard, i.e. make the best of the akrasia, addictions and sub-utility functions of your non-rational self that you would choose to get rid of if you had a magic pill that could do so.

Comment author: Qiaochu_Yuan 27 March 2014 04:38:31PM 27 points

Agreed. In general, I think a lot of the discussion of ethics on LW conflates ethics-for-AI with ethics-for-humans, which are two very different subjects and which should be approached very differently (e.g. I think virtue ethics is great for humans but I don't even know what it would mean to make an AI a virtue ethicist).

Comment author: Lukas_Gloor 31 March 2014 09:49:03AM 2 points

If I paraphrased your position as "In order to act well according to some consequentialist goals, it makes sense for humans to follow a virtue-ethical decision-procedure", would you agree?
