6 min read · 25th Apr 2011 · 562 comments

When I say I think I can solve (some of) metaethics, what exactly is it that I think I can solve?

First, we must distinguish the study of ethics or morality from the anthropology of moral belief and practice. The first one asks: "What is right?" The second one asks: "What do people think is right?" Of course, one can inform the other, but it's important not to confuse the two. One can correctly say that different cultures have different 'morals' in that they have different moral beliefs and practices, but this may not answer the question of whether or not they are behaving in morally right ways.

My focus is metaethics, so I'll discuss the anthropology of moral belief and practice only when it is relevant for making points about metaethics.

So what is metaethics? Many people break the field of ethics into three sub-fields: applied ethics, normative ethics, and metaethics.

Applied ethics: Is abortion morally right? How should we treat animals? What political and economic systems are most moral? What are the moral responsibilities of businesses? How should doctors respond to complex and uncertain situations? When is lying acceptable? What kinds of sex are right or wrong? Is euthanasia acceptable?

Normative ethics: What moral principles should we use in order to decide how to treat animals, when lying is acceptable, and so on? Is morality decided by what produces the greatest good for the greatest number? Is it decided by a list of unbreakable rules? Is it decided by a list of character virtues? Is it decided by a hypothetical social contract drafted under ideal circumstances?

Metaethics: What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?

Others prefer to combine applied ethics and normative ethics, so that the breakdown becomes normative ethics vs. metaethics, or 'first-order' moral questions (normative ethics) vs. 'second-order' questions (metaethics).

Mainstream views in metaethics

To illustrate how people can give different answers to these questions, let me summarize some of the mainstream philosophical positions in metaethics.

Cognitivism vs. non-cognitivism: This is a debate about what is happening when people engage in moral discourse. When someone says "Murder is wrong," are they trying to state a fact about murder, that it has the property of being wrong? Or are they merely expressing a negative emotion toward murder, as if they had gasped aloud and said "Murder!" with a disapproving tone?

Another way of saying this is that cognitivists think moral discourse is 'truth-apt' - that is, moral statements are the kinds of things that can be true or false. Some cognitivists think that all moral claims are in fact false (error theory), just as the atheist thinks that claims about gods are usually meant to be fact-stating but in fact are all false because gods don't exist.1 Other cognitivists think that at least some moral claims are true. Naturalism holds that moral judgments are true or false because of natural facts,2 while non-naturalism holds that moral judgments are true or false because of non-natural facts.3 Weak cognitivism holds that moral judgments can be true or false not because they agree with certain (natural or non-natural) opinion-independent facts, but because our considered opinions determine the moral facts.4

Non-cognitivists, in contrast, tend to think that moral discourse is not truth-apt. Ayer (1936) held that moral sentences express our emotions ("Murder? Yuck!") about certain actions. This is called emotivism or expressivism. Another theory is prescriptivism, the idea that moral sentences express commands ("Don't murder!").5 Or perhaps moral judgments express our acceptance of certain norms (norm expressivism).6 Or maybe our moral judgments express our dispositions to form sentiments of approval or disapproval (quasi-realism).7

Moral psychology: One major debate in moral psychology concerns whether moral judgments require some (defeasible) motivation to adhere to the moral judgment (motivational internalism), or whether one can make a moral judgment without being motivated to adhere to it (motivational externalism). Another debate concerns whether motivation depends on both beliefs and desires (the Humean theory of motivation), or whether some beliefs are by themselves intrinsically motivating (non-Humean theories of motivation).

More recently, researchers have run a number of experiments to test the mechanisms by which people make moral judgments. I will list a few of the most surprising and famous results:

  • Whether we judge an action as 'intentional' or not often depends on the judged goodness or badness of the action, not the internal states of the agent.8
  • Our moral judgments are significantly affected by whether we are in the presence of freshly baked bread or a concentration of fart spray too faint to notice consciously.9
  • Our moral judgments are greatly affected by applying magnetic stimulation to the brain region that processes theory of mind.10
  • People tend to insist that certain things are right or wrong even when a hypothetical situation is constructed such that they admit they can give no reason for their judgment.11
  • We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from evolutionarily older parts of our brains.12
  • People give harsher moral judgments when they feel clean.13

Moral epistemology: Different views on cognitivism vs. non-cognitivism and moral psychology suggest different views of moral epistemology. How can we know moral facts? Non-cognitivists and error theorists think there are no moral facts to be known. Those who believe moral facts answer to non-natural facts tend to think that moral knowledge comes from intuition, which somehow has access to non-natural facts. Moral naturalists tend to think that moral facts can be accessed simply by doing science.

Tying it all together

I will not be trying very hard to fit my pluralistic moral reductionism into these categories. I'll be arguing about the substance, not the symbols. But it still helps to have a concept of the subject matter by way of such examples.

Maybe mainstream metaethics will make more sense in flowchart form. Here's a flowchart I adapted from Miller (2003). If you don't understand the bottom-most branching, read chapter 9 of Miller's book or else just don't worry about it.
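Since the flowchart image itself is not reproduced here, a rough sketch of the same branching in code may help. This is an illustrative reconstruction from the positions summarized above, not Miller's actual diagram; the question wording, the labels, and the stopping point are all simplifications.

```python
# Rough sketch of the main branch points in mainstream metaethics,
# reconstructed from the positions summarized above. This is NOT Miller's
# actual flowchart; wording and labels are simplified for illustration.

def classify_position(
    truth_apt: bool,           # Are moral claims the kind of thing that can be true or false?
    some_claims_true: bool,    # Are at least some moral claims true?
    opinion_independent: bool, # Do the truth-makers hold independently of our considered opinions?
    natural_facts: bool,       # Are those truth-makers natural facts?
) -> str:
    if not truth_apt:
        # Emotivism/expressivism, prescriptivism, norm-expressivism, quasi-realism
        return "non-cognitivism"
    if not some_claims_true:
        return "error theory"
    if not opinion_independent:
        return "weak cognitivism (considered opinions determine the moral facts)"
    if natural_facts:
        return "naturalism (e.g. Cornell realism, Railton, Jackson)"
    return "non-naturalism (e.g. Moore, McDowell, Wiggins)"

# Example: an error theorist holds that moral claims are truth-apt but none are true.
print(classify_position(True, False, True, True))  # -> error theory
```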

Next post: Conceptual Analysis and Moral Theory

Previous post: Heading Toward: No-Nonsense Metaethics

Notes

1 This is not quite correct. The error theorist can hold that a statement like "Murder is not wrong" is true, since he thinks that murder is neither wrong nor right. Rather, the error theorist claims that all moral statements which presuppose the existence of a moral property are false, because no such moral properties exist. See Joyce (2001). Mackie (1977) is the classic statement of error theory.

2 Sturgeon (1988); Boyd (1988); Brink (1989); Brandt (1979); Railton (1986); Jackson (1998). I have written introductions to the three major versions of moral naturalism: Cornell realism, Railton's moral reductionism (1, 2), and Jackson's moral functionalism.

3 Moore (1903); McDowell (1998); Wiggins (1987).

4 For an overview of such theories, see Miller (2003), chapter 7.

5 See Carnap (1937), pp. 23-25; Hare (1952).

6 Gibbard (1990).

7 Blackburn (1984).

8 The Knobe Effect. See Knobe (2003).

9 Schnall et al. (2008); Baron & Thomley (1994).

10 Young et al. (2010). I interviewed the author of this study here.

11 This is moral dumbfounding. See Haidt (2001).

12 Greene (2007).

13 Zhong et al. (2010).

References

Baron & Thomley (1994). A Whiff of Reality: Positive Affect as a Potential Mediator of the Effects of Pleasant Fragrances on Task Performance and Helping. Environment and Behavior, 26(6): 766-784.

Blackburn (1984). Spreading the Word. Oxford University Press.

Boyd (1988). How to be a Moral Realist. In Sayre-McCord (ed.), Essays on Moral Realism (pp. 181-228). Cornell University Press.

Brandt (1979). A Theory of the Good and the Right. Oxford University Press.

Brink (1989). Moral Realism and the Foundations of Ethics. Cambridge University Press.

Carnap (1937). Philosophy and Logical Syntax. Kegan Paul, Trench, Trubner & Co.

Gibbard (1990). Wise Choices, Apt Feelings. Clarendon Press.

Greene (2007). The secret joke of Kant's soul. In Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development. MIT Press.

Haidt (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108: 814-834.

Hare (1952). The Language of Morals. Oxford University Press.

Jackson (1998). From Metaphysics to Ethics. Oxford University Press.

Joyce (2001). The Myth of Morality. Cambridge University Press.

Knobe (2003). Intentional Action and Side Effects in Ordinary Language. Analysis, 63: 190-193.

Mackie (1977). Ethics: Inventing Right and Wrong. Penguin.

McDowell (1998). Mind, Value, and Reality. Harvard University Press.

Miller (2003). An Introduction to Contemporary Metaethics. Polity.

Moore (1903). Principia Ethica. Cambridge University Press.

Schnall, Haidt, Clore, & Jordan (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34(8): 1096-1109.

Railton (1986). Moral realism. Philosophical Review, 95: 163-207.

Sturgeon (1988). Moral explanations. In Sayre-McCord (ed.), Essays on Moral Realism (pp. 229-255). Cornell University Press.

Wiggins (1987). A sensible subjectivism. In Needs, Values, Truth (pp. 185-214). Blackwell.

Young, Camprodon, Hauser, Pascual-Leone, & Saxe (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proceedings of the National Academy of Sciences, 107: 6753-6758.

Zhong, Strejcek, & Sivanathan (2010). A clean self can render harsh moral judgment. Journal of Experimental Social Psychology, 46(5): 859-862.

Comments (562 total; some of the comments below are truncated)

Hm. What is this post for? It doesn't explain the ideas it refers to in any detail sufficient to feel what they mean, and from what it does tell, the ideas seem pretty crazy/simplistic, paying attention to strange categories, like that philpapers survey. (The part before "Mainstream views in metaethics" section does seem to address the topic of the post, but the rest is pretty bizarre. If that was the point, it should've been made, I think, but it probably wasn't.)

My posts are now going to feel naked to me whenever they lack a comment from you complaining that the post isn't book-length, covering every detail of a given topic. :)

Like I said, I don't have much interest in fitting my views into the established categories, but I wanted to give people an overview of how metaethics is usually done so they at least have some illustrations of what the subject matter is.

And if you find mainstream metaethics bizarre, well... welcome to a diseased discipline.

Amanojack (6 points, 13y):
Since you understand how diseased the discipline of ethics is, I'm hoping the next post in the series will focus heavily on clearing up the semantic issues that have made it so diseased. I don't think any real sense can be made of metaethics until the very nature of what someone is doing when they utter an ethical statement is covered.

We use language to do a lot of things: express emotions, make other people do stuff, signal, intimidate, get our thoughts into other people's minds, parrot what someone else said - and often more than one of these at a time. Since we presumably are trying to get at the speaker's intention, we really can't know the "meaning" without asking the speaker, yet various metaethical theorists call themselves emotivists, error theorists, prescriptivists, and so on. It seems to me the choice of a meta-ethical theory boils down to a choice of what the theorist wants to presume people are trying to do when they use the word ought. Surely no one can deny that sometimes some people do indeed intend "You ought not steal" as a command, or as a way of expressing disgust at the notion of theft, or simply as a means of intimidation. My meta-meta-ethical theory is that it all depends on what the person uttering the statement intends to accomplish by saying it. A debate between these meta-ethical theories sounds very likely to revolve around whose definition of ought is "correct".

In short, I think the main reason ethics is so diseased as a discipline is that the theorists are trying to argue whose definition is better, rather than acknowledging that it is pretty hard for anyone to know what each person intends by their moralistic language.
Clippy (5 points, 13y):
My definition of "ought" is correct.
lukeprog (3 points, 13y):
Yes, I agree with all this.
[anonymous] (0 points, 13y):
Maybe the preoccupation with "statements" is part of the disease. After all, there would probably be ethics even without language or with a very different language. And after all, when investigating x, you should investigate x, not statements about x.
CuSithBell (3 points, 13y):
But first you need to identify x. Which is a question about the meaning of a word.
Amanojack (0 points, 13y):
Though Bongo is surely right there would be moral sentiments even without language, now we are dealing with something identified: specific emotions like empathy, sense of justice, disgust, indignation, pity. Yeah those would exist without language. And yes, language has made things much more complicated, and the preoccupation with analyzing sentences makes it even even worse. If people can realize all that without looking at the very nature of communication, that would be great, but in my experience most people feel hesitant about scrapping so many centuries of philosophy and need to see how the language makes such a mess of things before they can truly feel comfortable with it. If Bongo is ready to scrap language analysis now and drop all the silly -isms, I'm preaching to the choir.
Amanojack (2 points, 13y):
Ethics is unique, at least to me, in that I still have no idea what the heck people are even referring to most of the time when they use moralistic language. I can't investigate X until I know what X is even supposed to be about. Most of the time there is a fundamental failure to communicate, even regarding the definition of the field itself. And whenever there isn't such a failure, the problem disappears and all discussants agree as if nothing.
Gray (-9 points, 13y):
lukeprog (2 points, 13y):
I should add that nobody who has read and understood the sequences should be surprised by what I'll describe as 'pluralistic moral reductionism.' I'm writing this sequence because I think this basic view on standard metaethical questions hasn't yet been articulated clearly enough for my satisfaction. And then, I want to make a bit of progress on the hard questions of 'metaethics' (it depends where you draw the boundary around 'metaethics') - but only after I've swept away the easy questions of metaethics.

This post covered at least as much material as my old college moral philosophy classes did in a month. It also left me feeling more confident that I understood all the terms involved than that month of classes did. Thank you for being able to explain difficult things clearly and concisely.

I request an explanation of why my comment telling Luke he did a good job is more highly upvoted than the post Luke did a good job on. If you agree with me that Luke did a good job strongly enough to upvote the statement, why not upvote Luke?

Couldn't that just be due to a higher number of total votes (both up and down) for the OP? I would assume fewer people read each comment, and downvoters may have decided to only weigh in on the OP. A hypothetical controversial post could have a karma of 8, with 10 downvotes negating 10 upvotes, and a supportive comment could have 9 upvotes due to half of the upvotes of the first post giving it their vote. The comment has higher karma, but lower volatility, so to speak.

wedrifid (0 points, 13y):
Good explanation.
prase (4 points, 13y):
I have upvoted your comment because it gives feedback to the author, which should be encouraged (negative feedback leads to improvement, but surely we don't want to read only disapproval, do we?). I don't always agree with the content of the comments I upvote.
TheOtherDave (3 points, 13y):
Oddly, the comment is now less upvoted than the post, but your request for an explanation is being downvoted. I'm kinda curious as to the underlying thought processes now, myself.
NancyLebovitz (6 points, 13y):
This is making me wonder if karma can cause people to model LW as having a group mind, and if people generally think of social groups which are too large to model each individual as being group minds.
TheOtherDave (1 point, 13y):
I'm not sure if it's related to what you're wondering, but if it helps clarify anything I'll add that I don't exactly know what a group mind is, or what exactly it means to model a group as one, but that when I ask questions of a forum (or, as in this case, mention to a forum that I'm curious about something) I expect that a large number of individuals will read the question, decide individually whether they have a useful answer and whether they feel like providing it, and act accordingly. In this case, more specifically, I figured that the people whose voting patterns matched the group-level behavior -- e.g., the ones who upvoted Yvain but not Luke at first, or who downvoted Yvain's request for explanation -- might address my curiosity with personal anecdotes... and potentially that various other people would weigh in with theories.
NancyLebovitz (1 point, 13y):
What I was thinking of with the "group mind" is that it can be tempting if one is flamed by a few people in a group, to feel as though the whole group is on the attack.
wedrifid (1 point, 13y):
For my part I model karma interactions and group thinking processes here via subgroups (which are not necessarily mutually exclusive). There are also a few who get their own model - which is either a compliment, insult or in some cases both.
[anonymous] (0 points, 13y):
Tolerate tolerance? For example, I downvoted the post, but not your comment.
RobinZ (0 points, 13y):
I expect to upvote this after I can see how it fits into the sequence better.
Emile (2 points, 13y):
WrongBot said something similar, but I found it a bit hard to follow, especially since I'm unfamiliar with some of the terminology like "natural facts", and also because keeping track of a lot of newly-introduced terminology describing the various positions is not easy.
lukeprog (0 points, 13y):
Thanks!

I still have a hard time seeing how any of this is going to go somewhere useful.

lukeprog (4 points, 13y):
Luckily, for the moment, some people are already finding it useful.
thomblake (2 points, 13y):
Here is my understanding: Ethics is the study of what one has most reason to do or want. On that definition, it is directly relevant to instrumental rationality. And if we want to discover the facts about ethics, we should determine what sort of things those facts would be, so that we might recognize them when we've found them - this, on one view, is the role of metaethics. This post is an intro to current thought on metaethics, which should at least make more clear the scope of the problem to any who would like to pursue it.

What does "natural fact" mean?

lukeprog (4 points, 13y):
It means different things to different people. Moore (1903) wrote: Alternatively, Baldwin (1993) suggests: Warnock's (1960) interpretation of Moore was: Miller (2003) concludes:
Will_Newsome (8 points, 13y):
If you plan on using the word 'naturalistic' to describe your meta-ethics at some point, I hope you give a better definition than these philosophers have given. "Naturalistic" often seems to be a way of saying "there is no magic involved!", but it's not like metaphysical phenomena are necessarily magical. Using logical properties of symmetric decision algorithms to solve timeless coordination problems, for instance, doesn't fit into Miller's definition of natural properties, but it's probably somewhat tied up into some facets of meta-ethics (or morality at the very least, but that line is easily blurred and probably basically shouldn't exist in a correct technical solution). I'm really just trying to keep a relevant distinction between "naturalistic" and "metaphysical" which are both interesting and valid, instead of having two categories "naturalistic" and "magical" where you get points for pointing out how non-magical and naturalistic a proposed solution is. This stems from a general fear of causal / timeful / reductionist explanations that could miss important points about teleology / timelessness / pattern attractors / emergence, e.g. the distinction between timeless and causal decision theory or between timeless and causal validity semantics (if there is one), which have great bearing on reflective/temporal consistency and seem very central to meta-ethics. I don't think you're heading there with your solution to meta-ethics, but as an aside I'm still confused about what it is you're trying to solve if you're not addressing any of these questions that seem very central. Your past selves' utility functions are just evidence. Meta-ethics should tell you how to treat that evidence, just as it should tell you how future selves should treat your present utility function as evidence. Figuring out what my past selves' or others' utility functions are in some sense is of course a necessary step, but even after you have that data you still need to figure out what the
torekp (3 points, 13y):
A better answer than any that Luke cited would start with the network of causal laws paradigmatically considered "natural," such as those of physics and chemistry, then work toward properties, relations, objects and facts. There might (as a matter of logical possibility) have been other clusters of causal laws, such as supernatural or non-natural laws, but these would be widely separated from the natural laws with little interaction (pineal gland only?) or very non-harmonious interaction (gods defying physics). We had a discussion about this earlier. I will try to dig up a link.

I am increasingly getting the perception that morality/ethics is useless hogwash. I already believed that to be the case before Less Wrong and I am not sure why I ever bothered to take it seriously again. I guess I was impressed that people who are concerned with 'refining the art of rationality' talk about it and concluded that after all there must be something to it. But I have yet to come across a single argument that would warrant the use of any terminology related to moral philosophy.

The article Say Not "Complexity" should have been about mo...

[anonymous] (3 points, 13y):
I would argue that the problem is not with morality, but with how it is being approached here. This is a starting point for understanding morality. is utilitarianism, which seems to be the house approach to morality - the very approach which you find unpersuasive. Not quite. It's possible to wish a person dead, while being reluctant to kill him yourself, and even while considering anyone who does kill him a murderer who needs to be caught and brought to justice. Morality derives from preferences in a way, but it is indirect. An analogous phenomenon is the market price. The market price of a good derives from the preferences of everyone participating in the market, but the derivation is indirect. The price of a good isn't merely what you would prefer to pay for it, because that's always zero. Nor is it merely what the seller would prefer to be paid for it, because there is no upper limit on what he would charge if he could. Rather, the market price is set by supply and demand, and supply and demand depend in large part on preferences. So price derives from preferences, but the derivation is indirect, and it is mediated by interaction between people. Morality, I think, is similar. It derives from preferences indirectly, by way of interaction. This leaves open the possibility that morality is as variable as prices, but I think that because of the preferences that it rests on, it is much, much less variable, though not invariable. Natural selection holds these preferences largely in check. For example, if some genetic line of people were to develop a preference for being slaughtered, they would quickly die out.
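As a toy illustration of the indirectness being described, the sketch below computes a market-clearing price from made-up linear supply and demand schedules. Every number here is an arbitrary assumption; the only point is that the resulting price is set by the interaction, not by any single participant's preferred price.

```python
# Toy illustration: the market-clearing price emerges from the interaction of
# buyers and sellers; no single participant dictates it. The linear curves and
# all numbers are arbitrary assumptions made for this example.

def demand(price: float) -> float:
    """Quantity buyers want at a given price (falls as price rises)."""
    return max(0.0, 100.0 - 2.0 * price)

def supply(price: float) -> float:
    """Quantity sellers offer at a given price (rises as price rises)."""
    return max(0.0, 3.0 * price - 20.0)

# Scan a grid of candidate prices for the one where supply best matches demand.
clearing_price = min(
    (p / 10.0 for p in range(0, 1001)),
    key=lambda p: abs(demand(p) - supply(p)),
)
print(clearing_price, demand(clearing_price))  # 24.0, with 52 units traded
```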
XiXiDu (-1 point, 13y):
This just shows that human wants are inconsistent, that humans are holding conflicting ideas simultaneously, why invoke 'morality' in this context? People or road blockades, what's the difference? I just don't see why one would talk about morality here. The preferences of other people are simply more complex road blockades on the way towards your goal. Some of those blockades are artistically appealing so you try to be careful in removing them...why invoke 'morality' in this context?
[anonymous] (2 points, 13y):
But these two desires are not inconsistent, because for someone to die by, say, natural causes, is not the same thing as for him to die by your own hand. You could say the same thing about socks. E.g., "I just don't see why one would talk about socks here. Socks are simply complex arrangements of molecules. Why invoke "sock" in this context?" What are you going to do instead of invoking "sock"? Are you going to describe the socks molecule by molecule as a way of avoiding using the word "sock"? That would be cumbersome, to say the least. Nor would it be any more true. Socks are real. They aren't imaginary. That they're made out of molecules does not stop them from being real. All this can be said about morality. What are you going to do instead of invoking "morality"? Are you going to describe people's reactions as a way of avoiding using the word "morality"? That would be cumbersome, to say the least. Nor would it be any more true. Morality is real. It isn't imaginary. That it's made out of people's reactions doesn't stop it from being real. Denying the reality of morality simply because it is made out of people's reactions, is like denying the reality of socks simply because they're made out of molecules.
XiXiDu (3 points, 13y):
Consider the trolley problem. Naively you kill the fat guy if you care about other people and also if you only care about yourself, because you want others to kill the fat guy as well because you are more likely to be one of the many people tied to the rails than the fat guy. Of course there is the question about how killing one fat guy to save more people and similar decisions could erode society. Yet it is solely a question about wants, about the preferences of the agents involved. I don't see how it could be helpful to add terminology derived from moral philosophy here or elsewhere.
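A quick back-of-the-envelope version of that self-interest argument, assuming the usual footbridge numbers (five people on the track, one on the bridge) and that you are equally likely to be any of the six people involved:

```python
# Expected chance of survival under two norms, assuming the standard case of
# five people on the track and one fat man, and that you are equally likely
# to occupy any of the six positions.

n_track, n_bridge = 5, 1
total = n_track + n_bridge

p_survive_if_pushed = n_track / total       # the five survive; you die only as the fat man
p_survive_if_not_pushed = n_bridge / total  # the fat man survives; you die on the track

print(p_survive_if_pushed, p_survive_if_not_pushed)  # 0.833... vs 0.166...
```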
Peterdjones (-4 points, 13y):
It is meaningful wherever it is meaningful to discuss whether there are wants people should and shouldn't have.
XiXiDu (1 point, 13y):
I am going to use moral terminology in the appropriate cultural context. But why would one use it on a site that supposedly tries to dissolve problems using reductionism as a general heuristic? I am also using the term "free will" because people model their decisions according to that vague and ultimately futile concept. But if possible (if I am not too lazy) I avoid using any of those bogus memes. Of course, it is real. Cthulhu is also real, it is a fictional cosmic entity. But if someone acts according to their fear of Cthulhu I am not going to resolve their fear by talking about it in terms of the Lovecraft Mythos but in terms of mental illness. How so? Can you give an example where the use of terminology derived from moral philosophy is useful instead of obfuscating?
XiXiDu (-1 point, 13y):
Consider the Is–ought problem. The basis for every ought statement is what I believe to be correct with respect to my goals. If you want to reach a certain goal and I want to help you and believe to know a better solution than you do then I tell you what you ought to do because 1.) you want to reach a goal 2.) I want you to reach your goal 3.) my brain does exhibit a certain epistemic state making me believe to be able to satisfy #1 & #2.
[anonymous] (-2 points, 13y):
It is no more a philosophical puzzle that needs dissolving than prices are a philosophical puzzle that need dissolving. I think that the concept of "free will" may indeed be more wholly a philosopher's invention, just as the concept of "qualia" is in my view wholly a philosopher's invention. But the everyday concepts from which it derives are not a philosopher's invention. I think that the everyday concept that philosophers turned into the concept of "free will" is the concept of the uncoerced and intentional act - a concept employed when we decide what to do about people who've annoyed us. We ask: did he mean to do it? Was he forced to do it? We have good reason for asking these questions. Philosophers invent bogus memes that we should try to free ourselves of. I think that "qualia" are one of those memes. But philosophers didn't invent morality. They simply talked a lot of nonsense about it. Morality is real in the sense that prices are real and in a sense that Cthulhu is not real. Some people talk about money in the way that you want to talk about morality, so that's a nice analogy to our discussion and I'll spend a couple of paragraphs on it. They say that the value of money is merely a collective delusion - that I value a dollar only because other people value a dollar, and that they value a dollar only because, ultimately, I value a dollar. So they say that it's all a great big collective delusion. They say that if people woke up one day and realized that a dollar was just a piece of paper, then we would stop using dollars. But while there is a grain of truth to that (especially about fiat money), there's also much that's misleading in it. Money is a medium of exchange that solves real problems. The value of money may be in a sense circular (i.e., it's valued by people because it's valued by people), but actually a lot of things are circular. A lot of natural adaptations are circular, for example symbiosis. Flowers are the way they are because bees are th
Peterdjones (-5 points, 13y):
Morendil (1 point, 13y):
Otherwise known as The True Knowledge.
hairyfigment (0 points, 13y):
You just did use it. Now, in this case we could probably rephrase your statement without too much trouble. But it does not seem at all obvious that doing this for all of our beliefs has positive expected value if we just want to maximize epistemic or instrumental rationality.
endoself (0 points, 13y):
I agree with most of this. The only reason for using the word morality is when talking to someone who does not realize that "Whatever you want." is the only answer that really can be given to the question of "What should I do next?". (Does that sentence make sense?) The main thing I have to add to this is what Eliezer describes here. The causal 'reason' that I want people to be happy is because of the desires in my brain, but the motivational 'reason' is because happiness matches {happiness + survival + justice + individuality + ...}, which sounds stupid, but that is how I make decisions; I look for what best matches against that pattern. These two reasons are important to distinguish - "If neutrinos make me believe '2 + 3 = 6', then 2 + 3 = 5". Here, people use that world 'morality' to describe an idealized version of their decision processes rather than to describe the desires embodied in their brain in order to emphasize that point, and also because of the large number of people that find this pseudo-equivalence nonobvious.
XiXiDu (0 points, 13y):
If you are confused about facts in the world then you are talking about epistemic rationality, why would one invoke 'morality' in this context?
endoself (-2 points, 13y):
I'm not sure I understand this. Are you objecting to my use of the word 'idealized', on the grounds that preferences and facts are different things and uncertainty is about facts? I would disagree with that. Someone might have two conflicting but very strong preferences. For example, someone might be opposed to homosexuality based on a feeling of disgust but also have a strong feeling that people should have some sort of right to self-determination. Upon sufficient thought, they may decide that the latter outweighs the former and may stop feeling disgust at homosexuals as a result of that introspection. I believe that this situation is one that occurs regularly among humans.
Peterdjones (-1 point, 13y):
But that is not the answer if someone wants to murder someone. What you have here is actually a reductio ad absurdam o the simplistic theory that morals=desires.
NMJablonski (4 points, 13y):
It only isn't the answer if you have a problem with that particular person being murdered, or perhaps an objection to killing as a principle. I also would object to wanton, chaotic, and criminal killings, but that is because I have a complex network of preferences that inform that objection, not because murder has some intrinsic property of absolute "wrongness". It is all preferences, and to think otherwise is the most frequent and absurd delusion still prevalent in rationalist communities. Even when a moralistic rationalist admits that moral truths and absolutes do not exist, they continue operating as if they do. They will say: "Well, there may not be absolute morality, but we can still tell which actions are best for (survival of human race / equality among humans / etc)." The survival of the human race is a preference! One which not all possible agents share, as we are all keenly aware of in our discussions of the threat posed by superintelligent AI's that don't share our values. There is no obligation for any mind to adopt any values. You can complain about that reality. You can insist that your preferences are the one, true, good and noble preferences, but no rational agent is obligated, in any empirical sense, to agree with you.
Peterdjones (-5 points, 13y):
Amanojack (3 points, 13y):
It just depends on if "should" is interpreted as "what would best fulfill my wants now" or "what would best fulfill your wants now" (or as something else entirely). We can't make sense of ethical language until we realize different people mean different things by it.
Gray (2 points, 13y):
And that's what morality always was in the first place. It's a way of getting other people to do otherwise than what they wanted to do. No one would be convinced by "I don't want you to kill people", but if you can convince someone that "It is wrong to kill people", then you've created conflict in that person's desires. I wonder, in the end, if people here truly want to "be rational" about morality. Myself, I'm not rational about morality, I go along with it. I don't critique it in my personal life. For instance, I refuse to murder someone, no matter how rational it might be to murder someone. Stick to epistemic rationality, and instrumental rationality, but avoid at all costs normative rationality, is my opinion.
[anonymous] (3 points, 13y):
This is a widespread but mistaken theory of morality. After all, we don't - and can't - convincingly say that just any old thing is "wrong". Here, I'll alternate between saying that actually wrong things are wrong, and saying that random things that you don't want are wrong. Actually wrong: "it's wrong to kill people." Yup, it is. You just don't want it: "it's wrong for you to arrest me just because I stabbed this innocent bystander to death." Yeah, right. Actually wrong: "it's wrong to mug people." No kidding. You just don't want it: "it's wrong for you to lock your door when you leave the house, because it's wrong for you to do anything to prevent me from coming into your house and taking everything you own to sell on the black market". Not convincing. If there were nothing more to things being wrong than that you use the word "wrong" to get people to do things, then there would be no difference between these four attempts to get people to do something. But there is: in the first and third case, the claim that the action is wrong is true (and therefore makes a convincing argument). In the second and fourth case, the claim is false (and therefore makes for an unconvincing argument). Sure, you can use the word "wrong" to get people to do things that you want them to do, but you can use a lot of words for that. For example, if you're somebody's mother and you want them to avoid driving when they're very sleepy, you can tell them that it's "dangerous" to drive in that condition. But as with the word "wrong", you can't use the word "dangerous" for just any situation, because it's not true in just any situation. When a proposed action is really dangerous - or really wrong - then you can use that fact to convince them not to pursue that action. But it's still a fact, independent of whether you use it to get other people to do things you want.
Amanojack (0 points, 13y):
Objective ethics on LW? I'm a little shocked. This whole post is basically argument from popularity (perhaps more accurate to call it argument from convincingness). Judgments of valuation may be universal or quasi-universal, but they are always subjective. Words like "right" and "wrong" (and "innocent" and "own") and other objective moralistic terms obscure this, so let me do some un-obscuring. You have this backwards: The claim makes a convincing argument (to you and many others), therefore you call the claim "right"; or the claim makes an unconvincing argument against the action, therefore you call the claim "wrong." Notice you had to tuck in the word "innocent," which already implies your conclusion that it is "actually wrong" to harm the bystander. Here you used the word "own," which again already implies your conclusion that it is wrong to steal it. Both examples are purely circular. Most people are disgusted by killing and theft, and they may be counterproductive from most people's points of view, but that is just about all we can say about the matter - and all we need to say. We are disgusted, so we ban such actions. Moral right and wrong are not objective facts. The fact that you and I subjectively experience a moral reaction to killing and theft may be an objective fact, but the wrongness itself is not objective, even though it may be universal or near-universal (that is, even though almost everyone else may feel the same way). Universal subjective valuation is not objective valuation (this latter term is, I contend, completely meaningless - unless someone can supply a useful definition). Although he was speaking in the context of economics, Ludwig von Mises gave the most succinct explanation of why all valuation is subjective when he said, "We originally want or desire an object not because it is agreeable or good, but we call it agreeable or good because we want or desire it."
[anonymous] (-2 points, 13y):
You could say that about any word in the English language. Let's try this with the word "rain". On many occasions, a person may say "it's raining and therefore you should take an umbrella". On some occasions this claim will be false and people will know that it's false (e.g. because they looked out a window and saw that it wasn't raining), and so the argument will not be convincing. What you're doing here can be applied to this rain scenario. You could say: That is, the claim that it's raining makes a convincing argument on some occasions, and on those occasions you call the claim "right". On other occasions, the claim makes an unconvincing argument, and on those occasions you call the claim "wrong". So there, we've applied your theory about the concept of morality, to the concept of rain. Your theory could equally well be applied to any concept at all. That is, your theory is that when we are convinced by arguments that employ claims about morality, then we call the claims "right". But you could equally well come up with the theory that when we are convinced by arguments that employ claims about rain, then we call the claims "right". So what have we demonstrated? With your help, we have demonstrated that in this respect, morality is like rain. And like everything else. Morality is like atoms. Morality is like gravity - in this respect. You have highlighted a property of morality which is shared by absolutely everything else in the universe that we have a word for. And this property is, that you can come up with this reverse theory of it, according to which we call claims employing the term "right" when we are convinced by arguments using those claims. For me to be guilty of begging the question I would have to be trying to prove that a murder was committed in the hypothetical scenario. But it's a hypothetical scenario in which it is specified that the person committed murder. Here's the hypothetical scenario, more explicit: someone has just committed a murder
Amanojack (1 point, 13y):
You misread me, though perhaps that was my fault. Does the bold help? I was talking about you (Constant), not "you" in the general sense. I wasn't presenting a theory of morality; I was shedding light on yours by suggesting that you are only calling these things right or wrong because you find the arguments convincing. No, you'd have to be trying to justify your statement that "it is wrong to kill people," which it seems you were (likewise for the theft example). Maybe your unusual phrasing confused me as to what you were trying to show with that. Anyway, the daughter posts seem to show we agree on more than it appears here, so bygones. As for the rest about "[my] reverse theory of morality," that's all from the above misunderstanding. (Sorry to waste time with my unclear wording.)
[anonymous] (-2 points, 13y):
Okay, but even on this reading you could "shed" similar "light" on absolutely any term that I ever use. You're not proving anything special about morality by that. To do that would require finding differences between morality and, say, rain, or apples. But if we were arguing about apples you could make precisely the same move that you made in this discussion about morality. Here's a parallel back-and-forth employing apples. Somebody says: I reply: Here, let me construct an example with apples. Somebody goes to Tiffany's, points to a large diamond on display, and says to an employee, "that is an apple, therefore you should be willing to sell it to me for five dollars, which is a great price for an apple." This claim is false, and therefore makes for an unconvincing argument. Somebody replies: * I interpret "right" and "wrong" here as meaning "true" and "false", because claims are true or false, and these are referring to claims here. To which they follow up: ** I am continuing the previous interpretation of "right" and "wrong" as meaning, in context here, "true" or "false". If this is not what you meant then I can easily substitute in what you actually meant, make the corresponding changes, and make the same point as I am making here. What all this boils down to is that my interlocutor is saying that I am only calling claims about apples true or false because I find the arguments that employ these claims convincing or unconvincing. For example, if I happen to be in Tiffany's and somebody points to one of the big shiny glassy-looking things with an enormous price tag and says to an employee, "that is an apple, and therefore you should be happy to accept $5 for it", then I will find that person's argument unconvincing. My interlocutor's point is that I am only calling that person's claim (that that object is an apple) false because I find his argument (that the employee should sell it to him for $5) unconvincing. Whereas my own account is as follows: I first o
Amanojack (1 point, 13y):
It seems to me that right and wrong being objective, just like truth and falsehood, is what you've been trying to prove all this time. To equate "right and wrong" with "true and false" by assumption would be to, well you know, beg the question. It's not surprising that it always comes back to circularity, because a circular argument is the same in effect as an unjustified assertion, and in fact that's become the theme of not just our exchange here, but this entire thread: "objective ethics are true by assertion." I think we agreed elsewhere that ethical sentiments are at least quasi-universal; is there something else we needed to agree on? Because the rest just looks like wordplay to me.
[anonymous] (-2 points, 13y):
I'm not equating moral right and wrong with true and false. I was disambiguating some ambiguous words that you employed. The word "right" is ambiguous, because in one context it can mean "morally righteous", and in another context it can mean "true". I disambiguated the words in a certain direction because of the immediate textual context. Apparently that was not what you meant. Okay - so ideally I should go back and disambiguate the words in the opposite direction. However, I can tell you right now it will come to the same result. I don't really want to belabor this point so unless you insist, I'm not actually going to write yet another comment in which I disambiguate your terms "right" and 'wrong" in the moral direction.
CuSithBell (0 points, 13y):
But, ah, you can observe the properties of the object in question, and see that it has very few in common with the set of things that has generated the term "apple" in your mind, and many in common with "diamond". Is this the same sense in which you say we can simply "recognize" things as fundamentally good or evil? That would make these terms refer to "what my parents thought was good or evil, perturbed by a generation of meaning-learning". The problem there is - apples are generally recognizable. People disagree on what is right or wrong. Are even apples objective?
[anonymous] (2 points, 13y):
People can disagree about gray areas between any two neighboring terms. Take the word "apple". Apple trees are, according to Wikipedia, the species "Malus domestica". But as evolutionary biologists postulated (correctly, as it turns out), species are gradually formed over hundreds or thousands or millions of years, and the question of what is "the first apple tree" is a question for which there is no crystal clear answer, nor would there be even if we had a complete record of every ancestor of the apple tree going back to the one-celled organisms. Rather, the proto-species that gave rise to the apple tree gradually evolves into the apple tree, and about very early apple trees two fully informed rational people might very well disagree about which ones are apple trees and which ones are proto-apple trees. This is nothing other than the sorites problem, the problem of the heap, the problem of the vagueness of concepts. It is universal and is not specifically true about moral questions. Morality is, I have argued, an aspect of custom. And it's true that people can disagree, on occasion, about whether some particular act violates custom. So custom is, like apples, vague to some degree. Both apples and custom can be used as examples of the sorites problem, if you're sick of talking about sand heaps. But custom is not radically indeterminate. Customs exist, just as apples exist.
Amanojack (2 points, 13y):
Well I agree with this basically, and it reminds me of John Hasnas writing about customary legal systems. I find that when showing this to people I disagree with about ethics we usually end up in agreement:
[anonymous] (2 points, 13y):
The quote from John Hasnas seems to be very close to my own view.
CuSithBell (0 points, 13y):
Ah, okay! We don't disagree then. Thanks for clearing that up! ETA: Actually, with that clarification, I'd expect many others to agree as well - at least, it seems like what you mean by "custom" and what other posters have called "stuff people want you to do" coincide.
[anonymous] (0 points, 13y):
An important point is that nobody gets to unilaterally decide what is or is not custom. That's in contrast to, say, personal preference, which each person does get to decide for themselves.
CuSithBell (0 points, 13y):
Right. Though I'd argue that custom implies that morality is objective, and therefore that custom can be incorrect, so that someone can coherently say that their own society's customs are immoral (though probably from within a subculture that supports those alternate customs).
wedrifid (-3 points, 13y):
Not a good analogy. The objective element of 'wrong' is entirely different in nature to that of 'dangerous' even though by many definitions it does, in fact, exist.
[anonymous] (2 points, 13y):
The word "danger" illustrates a point about logic. The logical point is that the fact that X is often used to persuade people does not mean that the nature of X is that it is " a way of getting other people to do otherwise than what they wanted to do". The common use of the word "danger" is an illustration of this logical point. The illustration is correct.
[anonymous] (-1 point, 13y):
The objectivity of 'danger' is entirely different to that of 'wrong'. As such using it as an argument here is misleading and confused.
NMJablonski (3 points, 13y):
Upvoted to both of you for an interesting discussion. It has reached the point it usually does in metaethics where I have to ask for someone to explain: What the hell does it mean for something to be objectively wrong? (This isn't targeted at you specifically wedrifid, it just isn't clear to me what the objectivity of "wrongness" could possibly refer to)
Amanojack (3 points, 13y):
Yeah, no one can ever seem to explain what "objectively wrong" would even mean. That's because to call an action wrong is to imply that there is a negative value placed on that action, and for that to be the case you need a valuer. Someone has to do the valuing. Maybe a large group of people - or maybe everyone - values the action negatively, but that is still nothing more than a bunch of individuals engaging in subjective valuation. It may be universal subjective valuation, or maybe they think it's God's subjective valuation, but if so it seems better to spell that out plainly than to obscure it with the authoritative- and scientific-sounding modifier objective.
Peterdjones (0 points, 13y):
The fact that something is done by a subject doesn't necessarily make it subjective. It takes a subject to add 2 and 2, but the answer is objective. There are many ideas as to what "objectively right" could mean. Two of Kant's famous suggestions are "act only on that maxim you would wish to be universal law" and "treat people always as ends and never as means".
NMJablonski (0 points, 13y):
This encapsulates my thoughts on metaethics entirely.
[anonymous] (1 point, 13y):
A hard question. But I will try to give a brief answer. Morality is an aspect of social custom. Roughly, it is those customs that are enforced especially vigorously. But an important point here is that while some customs are somewhat arbitrary and vary from place to place, other customs are much less arbitrary. It is these least arbitrary moral customs that we most commonly think of as universal morality applicable to and recognized by all humanity.

Here's an example: go anywhere in the world as a tourist, and (in full view of a lot of typical people who are minding their own business, maybe traveling, maybe buying or selling, maybe chatting) push somebody in front of a train, killing them. Just a random person. See how people around you react. Recommendation: do this as a thought experiment, not an actual experiment. I'll tell you right now how people around the world will react: they'll be horrified, and they'll try to detain you or incapacitate you, possibly kill you. They will have a word in their language for what you just did, which will translate very well to the English word "murder".

But why is this? Why aren't customs fully arbitrary? This puzzle, I think, is best understood if we think of society as a many-player game. That is, we apply the concepts of game theory to the problem. Custom is a Nash equilibrium. To follow custom is to act in accordance with your equilibrium strategy in this Nash equilibrium. Nash equilibria are not fully arbitrary - and this explains right away at least the general point that customs are not fully arbitrary. While not arbitrary, Nash equilibria are not necessarily unique, particularly since different societies exist in different environmental conditions, and so different societies can have different sets of customs. However, the customs of all societies around the world, or at least all societies with very few exceptions, share common elements. People across the world will be appalled if you kill someone arbitrarily. People ac
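The 'custom as a Nash equilibrium' claim can be illustrated with a toy game. The payoff matrix below is an assumption invented for illustration, not anything from the comment; the point is only that mutual custom-following is a profile from which neither player gains by deviating alone, and that such equilibria need not be unique.

```python
# Toy illustration of "custom as a Nash equilibrium". The payoff numbers are
# arbitrary assumptions; the point is that (follow, follow) is stable because
# neither player can do better by unilaterally deviating.

ACTIONS = ["follow custom", "violate custom"]

# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("follow custom", "follow custom"):   (3, 3),
    ("follow custom", "violate custom"):  (0, 1),
    ("violate custom", "follow custom"):  (1, 0),
    ("violate custom", "violate custom"): (1, 1),
}

def pure_nash_equilibria(payoffs):
    """Return the action pairs from which neither player benefits by deviating alone."""
    equilibria = []
    for (a, b), (u_row, u_col) in payoffs.items():
        row_ok = all(payoffs[(alt, b)][0] <= u_row for alt in ACTIONS)
        col_ok = all(payoffs[(a, alt)][1] <= u_col for alt in ACTIONS)
        if row_ok and col_ok:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(payoffs))
# [('follow custom', 'follow custom'), ('violate custom', 'violate custom')]
# Equilibria are not unique: which one a society sits in is partly a matter of history.
```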
Amanojack (0 points, 13y):
I don't know about the Nash equilibria, but I agree with most everything you've written here. I'd just prefer to call that (quasi-)universal subjective ethics, and to use language that reflects that, as there are exceptions - call them psychopaths or whatever, but in the interest of accuracy. And the other problem with the objectivist interpretation of custom is that sometimes customs do have to change, and sometimes customs are barbaric. It seems that what you were getting at with "actually wrong" in your initial post was the idea that these kind of moral sentiments are universal, which I can buy, but even that is a bit of a leaky generalization.
wedrifid (-2 points, 13y):
Pardon me. I deleted my comment before I noticed that someone had replied. (I didn't think replying to Constant was going to be beneficial. To be honest I didn't share your perception of interestingness of the conversation, even though I was a participant.) Very little practically speaking. It is a somewhat related concept to subjectively objective. It doesn't make the value judgements any less subjective it is just that they happen to be built into the word definitions themselves. It doesn't make words like 'should' and 'wrong' any more useful when people with different values are arguing it just takes one of the meanings of 'should' as it is used practically and makes it explicit. I think the sophisticated name may be something related to moral cognitivism, probably with a 'realism' thrown in somewhere for good measure.
[anonymous] (-1 point, 13y):
I am not comparing the objectivity of "danger" to the objectivity of "wrong". I am not stating or implying that their objectivity is the same or similar. I am using the word "danger" as an illustration of a point. The point is correct, and the illustration is correct. That "danger" has different objectivity from "wrong" is not relevant to the point I was illustrating.
wedrifid (-1 point, 13y):
There is an objective sense in which an analogy is good or bad, related closely to the concept of reference class tennis. Having one technical similarity does not make an analogy an appropriate one and certainly does not prevent it from being misleading. This example of 'for example' is objectively 'bad'.
Amanojack (0 points, 13y):
That's one of the things morality has been, and it could indeed be the main thing, but my point above is it all depends on what the person means. Even though getting other people to do something might be the main and most important role of moral language historically, it only invites confusion to overgeneralize here - though I know how tempting it is to simplify all this ethical nonsense floating around in one fell swoop. Some people do simply use "ought" to mean, "It is in your best interest to," without any desire to get the person to do something. Some people mean "God would disapprove," and maybe they really don't care if that makes you refrain from doing it or not, but they're just letting you know. These little counterexamples ruin the generalization, then we're back to square one. I think the only way to really simplify ethics is to acknowledge that people mean all sorts of things by it, and let each person - if anyone cares - explain what they intended in each case. No, scratch that. The reason ethics is so confused is precisely because people have tried to simplify a whole bunch of disparate-but-somewhat-interrelated notions into a single type of phrasing. A full explanation of everything that is called "ethics" would require examination of religion, politics, sociology, psychology, and much more. For most things that we think we want ethics for, such as AI, instead of trying to figure out that complex of sundry notions shoehorned into the category of ethics, I think we'd be better off just assiduously hugging the query for each question we want to answer about how to get the results we want in the "moral" sphere (things that hit on your moral emotions, like empathy, indignation, etc.). Mostly I'm interested in this series of posts for the promise it presents for doing away with most of the confusion generated by wordplay such as "objective ethics," which I consider to be just an artifact of language.
0Clippy13y
What if "should" is interpreted as "Instantiators of decision theories similar to this one would achieve a higher value on their utility function if similar decision theories would yield this action as output"?
0CuSithBell13y
One thing it seems to be used for around here is "what you should never do even if you think you should". E.g. it's usually a really bad idea (wrt your own wants) to murder someone, even in a large proportion of cases where you think it's a good idea.
0endoself13y
If you don't want someone to murder, you can try to stop them, but they aren't going to agree to not murder unless they want to.
-1Peterdjones13y
Want to before they have had their preferences rearranged by moral exhortation, or after?
0endoself13y
I was referring only to fully logical arguments. Obviously it is possible to prevent someone from murdering by expressing extreme disapproval or locking them up.
0FAWS13y
I agree with you that morality can mostly be framed in terms of volition and an adequate decision theory, but I think you are oversimplifying. For example, consider people talking about what other people should want purely for their own good. That might be explainable in terms of projecting their own wants in some way (or perhaps selfish self-delusion), but it doesn't seem like something you could easily predict in advance from reasoning about wants if you were unfamiliar with how people act among each other.
-1Morendil13y
Talking about wants isn't necessarily any simpler than talking about shoulds. We seem to be just as confused about either. For instance, how many people say they want to be thin, yet overeat and avoid exercise?
2Alicorn13y
I think "I want to be thin" has an implied "ceteris paribus". Ceteris ain't paribus. You could as well say, "How many people say they want to have money, yet spend it on housing, feeding, and clothing themselves and avoid stealing?"
4Clippy13y
I want to have money, I don't spend it on clothing, and I do avoid stealing. Edit: This information may or may not be relevant to anyone's point.
0Morendil13y
There seems to be a difference here - how much money you earn isn't perceived as entirely a matter of choice, or at any rate there will be a significant and unavoidable lead time between deciding to earn more and actually earning more. Whereas body shape is within our immediate sphere of control: if we eat less and work out more, we'll weigh less and bulk up muscle mass, with results expected within days to weeks.

When I say "I can move my arm if I want", this is readily demonstrated by moving my arm. Is this the same sense of "want" that people have in mind when they say "I want to eat less" or "I want to quit smoking"?

The distinction that seems to appear here is between volition - making use of the connection between our brains and our various actuators - and preference - the model we use to evaluate whether an imagined state of the world is more desirable than another. We conflate both in the term "want". We are often quite confused as to what volitions will bring about states of the world that agree with our preferences. (How many times have you heard "That's not what I wanted to say/write"?)
5Alicorn13y
I categorically reject your disanalogy from both directions.

I have been eating about half as much as usual for the past week or so, because I'm on antibiotics that screw with my appetite. I look the same. Once, I did physically intense jujitsu twice a week for months on end, at least quadrupling the amount of physical activity I got in each week. I looked the same. If "eating less and working out more" put my shape under my "immediate sphere of control" with results "within days to weeks", this would not be the result. You are wrong. Your statements may apply to people with certain metabolic privileges, but not beyond.

By contrast, if I suddenly decide that I want more money, I have a number of avenues by which I could arrange that, at least on a small scale. It would be mistaken of me to conclude from this abundance of available financial opportunity that everyone chooses to have the amount of money they have, and that people with less money are choosing to take fewer of the equally abundant opportunities they share with the rich.
0Morendil13y
OK, allowing that the examples may have been poorly chosen - the main point I'm making is that people often a) say they want something, b) act in ways that do not bring about what they say they want.

Your response above seems to be that when people say "I want to be thin", they are speaking strictly in terms of preference: they are expressing that they would prefer their world to be just as it is now, with the one amendment that they are a certain body type rather than their current one. Similarly when saying they want money.

There are other cases where volition and preferences appear at odds more clearly. People say "I want to quit smoking", but they don't, when it's their own voluntary actions which bring about an undesired state. The distinction seems useful, even if we may disagree on the specifics of how hard it is to align volition and preference in particular cases.

I'm not the first to observe that "What do you want" is a deeper question than it looks like, and that's what I meant to say in the original comment. When you examine it closely, "do people actually want to smoke" isn't a much simpler question than "should there be a law against people smoking" or "is it right or wrong to smoke". It is possible that these questions are in fact entangled in such a way that to fully answer one is also to answer the others.
3Alicorn13y
I think people sometimes use wanting language strictly in terms of preferences. I think people sometimes have outright contradictory wants. I think people are subject to compulsive or semi-compulsive behaviors that make calling "revealed preference!" on their actions a risky business. The post you linked to (I can't quite tell by your phrasing if you are aware that I wrote it) is about setting priorities between various desiderata, not about declaring some of those desiderata unreal because they take a backseat.
0Morendil13y
Yup. Not sure if you mean to imply I've been saying that. That wasn't my intention.
0wedrifid13y
This all seems true with the exception of 'by contrast'. You seem to have clearly illustrated a similarity between weight loss and financial gain. There are things that are under people's control, but which things are under a given person's control varies with the individual and the circumstances. In both cases people drastically overestimate the extent to which the outcome is a matter of 'choice'.
0Alicorn13y
The "by contrast" paragraph is meant to illustrate how and why I reject the disanalogy "from both directions".
0Amanojack13y
The real distinction is between what you want to do now and what you want your future self to do later, though there's some word confusion obscuring that point. English is pretty bad at dealing with these types of distinctions, which is probably why this is a recurring discussion item.
1Amanojack13y
People aren't confused about what they want in any given moment. They want to eat donuts, but they don't want to have eaten donuts. They don't want to exercise, but they do want to have exercised.
3[anonymous]13y
This is a pretty good reason to model humans not as a single moral agent, but as a collection of past, present, and future moral agents.
-2XiXiDu13y
Oughts are instrumental and wants are terminal. See my comments here and here.
5timtyler13y
Disagree - I don't think that is supported by the dictionary. For instance, I want more money - which is widely regarded as being instrumental. Maybe you need to spell out what you actually meant here.
1XiXiDu13y
Oughts and wants are not mutually exclusive in their first-order desirability. "You ought to do what you want" is a basic axiom of volition. That implies that you also want what you ought. Yet a distinction, if a minor one, between ought and want is that the former is often a second-order desire, as it is instrumental to the latter, the primary goal.
-2timtyler13y
Wants are fairly straightforward, but oughts are often tangled up with society, manipulation and signalling. You appear to be presuming some other definition of ought - without making it terribly clear what it is that you are talking about.
0XiXiDu13y
When it comes to goals, an intelligent agent is in a sense similar to a stone rolling down a hill: both are moving towards a sort of equilibrium. The difference is that intelligence follows more complex trajectories, as its ability to read and respond to environmental cues is vastly greater than that of a stone. And that is the reason why we perceive oughts to be mainly a fact about society: you ought not to be indifferent about the goals of other agents if they are instrumental to what you want. "Ought" statements are subjectively objective, as they refer to the interrelationship between your goals and the actions necessary to achieve them. "Ought" statements point out the necessary consistency between means and ends. If you need to pursue action X to achieve "want" Y, you ought to want to do X.
-3Peterdjones13y
Again, whether you ought to do what you want depends on what you want.
4NMJablonski13y
Can you demonstrate that what you just said is true? EDIT: And perhaps provide a definition of "ought"?
-3[anonymous]13y
This idea (utilitarianism) is old, and fraught with problems. Firstly, there is the question of what the correct thing to optimize really is. Should one optimize total happiness or average happiness? Or would it make more sense, for example, to maximize the happiness of the most unhappy person in the population - a max-min problem, i.e. a "worst case" optimization procedure? (Note that this is in essence the difference between considering "human rights" and "total happiness", which do not always go hand in hand.) And even with all these three things to optimize considered, there's a whole spectrum of weighted optimization problems which sit between worst case and average case. Who chooses what is best and most fair? Is the happiness of everybody weighted equally? Or are some people more deserving of happiness than others? Does everybody have an equal capacity for happiness? Does a higher population equate to more happiness in total? How does time factor into the equation? Do you maximize happiness now? Or do you put effort into developing a perfect society now, for the greater happiness to come?

Not to mention the obvious problem of utility. Let's be charitable and assume that utility means something, and can be measured - already a leap of faith. But then, ask yourself - why assume utility is one-dimensional? And if utility were many-dimensional, how will one trade off the different dimensions of utility? Is it more important to minimize suffering than to increase happiness - are the two things really numerical values which lie on the same scale? And what if we found a pleasure center in the brain which produces "utility"? Would it be better for us to discard our corporeal bodies, and all the rest of these silly and irrational "goals", "dreams" and "aspirations", in favor of forever pushing and stimulating this part of the brain for a little bit more meaningless satisfaction?

But what I really want to get at, and here I start to get preachy, is that existential meaning is n
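(A minimal sketch of the aggregation question raised above - total vs. average vs. worst-case happiness, plus the weighted spectrum between worst case and average case - with made-up utility numbers; the only point is that the ranking of societies can flip with the aggregation rule:)

```python
# Different ways of collapsing individual utilities into "the" social utility.
# The societies and numbers are invented purely for illustration.

def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

def maximin(utilities):
    # Worst-case optimization: judge a society by its worst-off member.
    return min(utilities)

def blended(utilities, alpha=0.5):
    # alpha = 0 recovers the average case, alpha = 1 the worst case.
    return alpha * min(utilities) + (1 - alpha) * average(utilities)

society_a = [10, 10, 10]  # equal, modest happiness
society_b = [30, 5, 1]    # higher total, one person badly off

for rule in (total, average, maximin, blended):
    print(rule.__name__, rule(society_a), rule(society_b))
# total and average prefer B; maximin (and blended at alpha=0.5) prefer A.
```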
5Sniffnoy13y
You correctly point out problems with classical utilitarianism; nonetheless, downvoted for equating utilitarianism in general with classical utilitarianism in particular, as well as for being irrelevant to the comment it was replying to. And a few other things.
-3timtyler13y
I'm not clear from your comment what your beef is. No: morality is also to do with how to behave yourself and with ways of manipulating others, in addition to its signalling role.

I'm taking a college metaethics class right now, and you have just neatly summarized everything it covers. Thanks!

I must confess I'm having trouble with that flowchart, specifically the first question about whether a moral judgment expresses a belief, and emotivism being on the "no" side. Doesn't, "Ew, murder" express the belief that murder is icky?

To put it another way, I'm having trouble reconciling the map of what people argue about the nature of morality, with what I know of how at least my brain processes moral belief and judgment.

That is, ISTM that moral judgments at the level where emotion and motivation are expressed do not carry any factu... (read more)

6wedrifid13y
No. The belief and that feeling and expression will be correlated, but one is not the other. It isn't especially difficult or unlikely for them to differ. It would be possible to declare a model in which the "Ew, murder" reaction is defined as an expression of belief. But it isn't a natural one and would not fit with the meaning of natural language.
4pjeby13y
That depends on how you define "belief". My definition is that a "belief" is a representation in your brain that you use to make predictions or judgments about reality. The emotion experienced in response to thinking of the prohibited or "icky" behavior is the direct functional expression of that belief. I have noticed that sometimes people on LW use the term "alief" to refer to such beliefs, but I don't consider that a natural usage. In natural usage, people refer to intellectual vs. emotional beliefs, rather than artificially limiting the term "belief" to only include verbal symbolism and abstract propositions.
0wedrifid13y
The definition as you actually write it here isn't bad. The conclusion just doesn't directly follow the way you say it does unless you modify that definition with some extra bits to make the world a simpler place.
1lukeprog13y
wedrifid is correct. Another way to grok the distinction: Imagine that you were testifying at a murder trial, and somebody asked you if you had killed your mother with a lawnmower. You reply "Lawnmower!" with a disgusted tone. Now, the prosecutor asks, "Do you mean to claim that lawnmower is X, or that the thought of killing somebody with a lawnmower is disgusting?" And you could rightly reply, "It may be the case that I believe that lawnmower is X, or that the thought of killing somebody with a lawnmower is disgusting, but I have claimed no such things merely by saying 'Lawnmower!'"
7pjeby13y
You're speaking of claims in language; I'm speaking of brain function. Functionally, I have observed that the emotions behind such statements are an integral portion of the "belief", and that verbal descriptions of belief such as "murder is bad" or "you shouldn't murder" are attempts to explain or justify the feeling. (In practice, the things I work with are less morally relevant than murder, but the process is the same.) (See also your note that people continue to justify their judgments on the basis of confabulated consequences even when the situation has been specifically constructed to remove them as a consideration.)
3ata13y
I don't think that's a belief. What factual questions would distinguish a world where murder is icky from one where murder is not icky?
2pjeby13y
Beliefs can be wrong, but that doesn't make them non-beliefs. Any belief of the form "X is Y" (especially where Y is a judgment of goodness or badness) is likely either an instance of the mind projection fallacy, or a simple by-definition tautology. Again, however, this doesn't make it not-a-belief, it's just a mistaken or poorly-understood belief. (For example, expansion to "I find murder to be icky" trivially fixes the error.)

Where do the views expressed in the book The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It fit in? I'm assuming this is some form of non-cognitivism?

0lukeprog13y
It has been a few years since I read Greene's dissertation. I need to re-read it sometime. Perhaps someone else can answer...

Non-cognitivists, in contrast, think that moral discourse is not truth-apt.

Technically, that's not quite right (except for the early emotivists, etc.). Contemporary expressivists and quasi-realists insist that they can capture the truth-aptness of moral discourse (given the minimalist's understanding that to assert 'P is true' is equivalent to asserting just 'P'). So they will generally explain what's distinctive about their metaethics in some other way, e.g. by appeal to the idea that it's our moral attitudes rather than their contents that have a certain central explanatory role...

0lukeprog13y
Fair enough. I adjusted the wording in the original post. Thanks.

lukeprog, where would you place David Gauthier in your flow chart?

Some cognitivists think that [...] Other cognitivists think that [...]

Is there a test of the real world that could tell us that some of them are right and others wrong? If not, what is the value of describing their thoughts?

It's clear to me that applied and normative ethics deal with real and important questions. They are, respectively, heuristics for certain situations, and analysis of possible failure modes of these heuristics.

But I don't understand what metaethics deals with. You write:

Metaethics: What does moral language mean? Do moral fa

... (read more)
5[anonymous]13y
This is nicely put. I second the request: what is a metaethical question that could have a useful answer? It would be especially nice if the usefulness was clear from the question itself, and not from the answer that lukeprog is preparing to give.
0Peterdjones13y
Exact definitions are easy to come by, so long as you are not bothered about correctness. Let morality=42, for instance. If you are bothered about correctness, you need to solve metaethics, the question of what morality is, before you can exactly and correctly define "morality". I can understand the impatience with philosophy -- "why can't they just solve these problems" -- because that was my reaction when I first encountered it some 35 years ago. Did I solve philosophy? I only managed to nibble away at some edges. That's all anyone ever manages.
9wedrifid13y
How dare you! 42 isn't even prime, let alone right.
4DanArmak13y
The problem isn't that I don't know the answer. The problem is that I don't understand the question.

"Morality" is a word. "Understanding morality" is, first of all, understanding what people mean when they use that word. I already know the answer to that question: they mean a complex set of evolved behaviors that have to do with selecting and judging behaviors and other agents. Now that I've answered that question, if you claim there is a further unanswered question, you will need to specify what it is exactly. Otherwise it's no different from saying we must "solve the question of what a Platonic ideal is".

There are many important questions about morality that need to be answered - how exactly people make moral decisions, how to predict and manipulate them, how to modify our own behavior to be more consistent, etc. But these are part of applied and normative ethics. I don't understand what metaethics is.
-2Peterdjones13y
Understanding morality is, second of all, deciding what, if anything, it actually is. Water actually is H2O, but you can use the word without knowing that, and you can't find out what water is just by studying how the word is used.
2DanArmak13y
I think you don't understand my question. "Water" is H2O. And we can study H2O. "Morality" is a complex set of evolved behaviors, etc. We can study those behaviors. This is (ETA:) descriptive ethics. What is metaethics, though? And do you think there are questions to be asked about morals which are not questions about the different human behaviors that are sometimes labeled as morally relevant? Do you think there exists something in the universe, independent of human beings and the accidents of our evolution, that is called "morals"? The original post indicated that some philosophers think so.
-3Peterdjones13y
The study of those behaviours is descriptive ethics. The prescription of those behaviours is normative ethics. We can ask whether some de facto behaviour we have observed is really moral. And that raises the question of what "really moral" means. And that is metaethics, which has a number of possible solutions, positive and negative, clearly outlined in the original posting. And metaethics does not vanish just because the Platonic approach is rejected.
4DanArmak13y
We can also ask whether some de facto behavior is really vorpal. That raises the question of what "really vorpal" means. Luckily, I can tell you what it really means: nothing at all. If you claim the word "moral" means something that I - and most people who use that word - don't know that it means, then 1) you have to tell us what it means as the start of any discussion instead of asking us what it means, and 2) you should really use a new word for your new idea. Thanks for the correction.
0Peterdjones13y
Negative solutions are possible, as I said. I didn't claim that. I did say that a precise and correct definition requires coming up with a correct theory. But coming up with a correct theory only requires the imprecise pretheoretical definition, and everyone already has that. (I wasn't asking for it because I don't know it; I was asking for it to remind people that they already have it.) If I had promised a correct theory, I would have implicitly promised a post-theoretic definition to go with it. But I didn't make the first promise, so I am not committed to the second. The whole thing is aimed as a correction to the ideas that you need to have, or can have, completely clear and accurate definitions from the get-go. People should read carefully, and note that I never claimed to have a New Idea.
1DanArmak13y
I take it you mean negative solutions to the question: does "morality" have a meaning we don't precisely know yet? What I'm saying is that it's your burden to show that we should be considering this question at all. It's not clear to me what this question means or how and why it arises in your mind.

It's as if you said you were going to spend a year researching exactly what cars mean. And I asked: what does it mean for cars to "mean" something that we don't know? It's clearly not the same as saying the word "cars" refers to something, because it can't refer to something we don't know about; a word is defined only by the way we use it. And cars at least exist as physical objects, unlike morality.

So before we talk about possible answers (or the lack of them), I'm asking you to explain to me the question being discussed. What does the question mean? What kind of objects can be the answer - can morality "mean" that ice cream is sweet, or is that the wrong type of answer? What is the test used to judge if an answer is true or false? Is there a possibility two people will never agree even though one of their answers is objectively true (like in literature, and unlike in mathematics)? If we only have an inaccurate definition for morality right now, and someone proposes an accurate one, how can we tell if it's correct?
0Peterdjones13y
No, by negative answers, I mean things like error theories in metaethics. I think your other questions don't have obvious answers. If you think that the lack of obvious answers should lead to something like "ditch the whole thing", we could have a debate about that. Otherwise, you're not saying anything that hasn't been said already.

Where does pluralistic moral reductionism go on the flowchart?

6Wei Dai13y
Given that Luke named his theory "pluralistic moral reductionism", Eliezer said his theory is closest to "moral functionalism", and Luke said his views are similar to Eliezer's, I think one can safely deduce that it belongs somewhere around the bottom of the chart, not far away from "analytic moral functionalism" and "standard moral reductionism". :)
1endoself13y
Based on how I would answer the questions listed, and on the fact that my views are similar to Eliezer's, I agree. The last question, as I understand it, is equivalent to "If you had a full description of all possible worlds, could you then say which choices are right in each world? Say "no" if you instead think that you would have to additionally actually observe the real world to make moral choices." I might be misunderstanding something, since this seems like an obvious "yes", but I might be understanding 'too much', perhaps by conflating two things that some philosophers claim to be different due to their confusion.
0lukeprog13y
It doesn't fit anywhere on the chart cuz it's just so freaking meta, yo. :)

But don't most philosophers do that: try to assemble all the other philosophers' positions in a chart while maintaining that their own position is too nuanced to be assigned a point on a chart? :)

3lukeprog13y
My tone was facetious, but the content of my sentence above was literal. I don't think it's an advantage that my theory does or doesn't fit neatly on the above chart. It's just that my theory of metaethics doesn't quite have the same aims or subject matter as the theories presented on this chart. But anyway, you'll see what I mean once I have time to finish writing up the sequence...
0Amanojack13y
Perhaps, but another general trend in philosophy seems to be that people spend centuries arguing over definitions. Anyone who points that out will be necessarily making a meta-critique and hence not be a point on a chart (not that lukeprog's theory will necessarily be like that; just have to wait and see).
2gjm8y
It isn't; someone might perfectly well hold that ethical sentences express both propositions and emotional attitudes. But those people would not be classified as emotivists. It happens that some people hold the more specific position called emotivism, and it's useful to have a word for it.
0Richard_Kennaway8y
Because most people cannot count any higher than one.

Sometimes one hears the term "moral realism," and in fact that term appears pretty often in your bibliography but not in the main text of your post. Would I be right to think that it comprises everything on the flowchart downstream of the answer "Yes" to the question "Are those beliefs about facts that are constituted by something other than human opinion?"?

1lukeprog13y
There are many definitions of moral realism. See here. But yes, your intuition here is roughly the one about the meaning of 'moral realism' shared by mainstream philosophers.

Flowchart is gone :|

Watching this series with interest. I liked the taboo thing in the first post; reminds me of my favorite Hume quote:

'tis usual for men to use words for ideas, and to talk instead of thinking in their reasonings.

Tangent: I think Ayer's observation was correct but he had the implication backwards. The English sentence "Yuck!" contains the assertion "That is bad." and is truth-apt.

I have launched into arguments with people after they expressed distaste, and I think it was at least properly grammatical. A start: "What's yucky about that?"

4Scott Alexander13y
When I was in Thailand, I saw some local tribesmen eat a popular snack of giant beetles. I said "Yuck!" and couldn't watch them. However, I recognize that there's nothing weirder about eating a bug than about eating a chicken and that they're perfectly healthy and nutritious to people who haven't been raised to fear eating them.
0Amanojack13y
To interpret "Yuck!" as "That is bad/yucky" is to turn what is ostensibly an expression of subjective experience into an ostensibly "objective" statement. You may as well keep it subjective and interpret it as "I am experiencing revulsion." But you'd have to be a pretty cunning arguer to get into a debate about whether another person is really having a subjective experience of revulsion!
1thomblake13y
It's both - expressing revulsion has a normative component, and so does even experiencing revulsion. To illustrate: If I eat something and exclaim, "Oishii!", that not only expresses that I am "experiencing deliciousness", but also that the thing I'm tasting "is delicious" - my wife can try it out with the expectation that when she eats it she will also "experience deliciousness". It is a good-tasting thing.
4Amanojack13y
It still sounds just like two people experiencing subjective deliciousness. What if a third person, or a dog, or Clippy, finds it not so delicious?

This is not quite correct. The error theorist can hold that a statement like "Murder is not wrong" is true, for they think that murder is neither wrong nor right.

Should that be "The error theorist can't hold that a statement like 'Murder is not wrong' is true"?

(Also, it's not clear to me that classifying error theory as cognitivist is correct. If it claims that all moral statements are based on a fundamentally mistaken intuition, so that "Murder is wrong" has no more factual content than "Murder is flibberty", then is ... (read more)

3Alicorn13y
No. The error theorist may hold "murder is not wrong" and "murder is not right" to be true. Ey just has to hold "murder is wrong" and "murder is right" to be false, and if ey wants to endorse the "not" statements I guess a rule that "things don't have to be either right or wrong" must operate in the background.
1lukeprog13y
ata,

In this case, I managed to say it correctly the first time. :) If you're not sure about this stuff, you can read the first chapter of Joyce's 'The Myth of Morality', the central statement of contemporary error theory.
5ata13y
I can see how an error theorist would agree with "Murder is not wrong" in the same sense in which I'd agree with "Murder is not purple", but it's a strange and not very useful sense. My impression had been that error theorists claim that there are no "right" or "wrong" buckets to sort things into in the first place, rather than proposing that both buckets are there but empty — more like ignosticism than atheism. Am I mistaken about that?
9Larks13y
Error theorists believe that when people say "Murder is wrong", those people are actually trying to claim that it is a fact that murder has the property of being wrong. However, those people are incorrect (error theorists think) because murder does not have the property of being wrong - because nothing has the property of being wrong. It's not about whether or not there are buckets - error theory just says that most people think there is stuff in the buckets, but they're incorrect.
5prase13y
I smell a peculiar odour of inconsistency. (That means: add some modifier, such as "morally", to the second "wrong", else it sounds really weird.)
0lukeprog13y
Exactly.

How neat is the dichotomy between cognitivists and non-cognitivists? Are there significant philosophical factions holding positions such as

  • "Murder is wrong" is a statement of belief, but it also expresses an emotion (and morality's peculiar charm comes from appealing both to a person's analytical mind and to their instincts)
  • Some people approach morality as a system of beliefs, others as gut reactions, and this is connected to their personalities in interesting ways
  • Or perhaps the same person can shift over time from gut reactions to believing
... (read more)

I'm wondering whether emotive responses lack logical content, and also whether belief-based morality requires emotive backing (failure of utilitarianism--yuck!) to move people to action.

[-][anonymous]13y00

Off-Topic: At least for me, your text feels like it is "cut off" -- it does not seem to have a closure -- like a classical solo concerto which is stopped after the final cadenza of the soloist, before the orchestra comes back in. Is this intended?

One major debate in moral psychology concerns whether moral judgments require some (defeasible) motivation to adhere to the moral judgment (motivational internalism), or whether one can make a moral judgment without being motivated to adhere to it (motivational externalism).

One of the first two "moral judgements" in this confusing sentence is probably a typo. "Defeasible" just makes things more confusing. Maybe follow the vein of your linked Wikipedia paragraph more closely?

Our moral judgments are greatly affected by pointing magnets at the point in our brain that processes theory of mind.

The way this is worded makes it seem that the result is produced by static magnetic fields. And that makes it sound like 19th century pseudo-science.

We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from our older 'chimp' brains.

And the way this is worded makes it seem that you think that the neo-cortex is something that evolved since we separated from the chimps.

Moral nat

... (read more)
1lukeprog13y
I was trying to make use of Greene's phrase: 'inner chimp.' But you're right; it's not that accurate. I've adjusted the wording above.
3Perplexed13y
I don't think it is Greene's phrase. I spent some time searching, and can find only one place where he used it - a 2007 RadioLab interview with Krulwich. I would be willing to bet that he was primed to use that phrase by the journalist. He doesn't even use the word chimp in the cited paper.

In any case, Greene's arguments are incoherent even by the usual lax standards of evolutionary psychology and consequentialist naturalistic ethics. He suggests that a consequentialist foundation for ethics is superior to a deontological foundation because 'consequentialist moral intuitions' flow from a more recently evolved portion of the brain. Now it should be obvious that one cannot jump from 'more recently evolved' to 'superior as a moral basis'. You can't even get from 'more recently evolved' to 'more characteristically human'. Maybe you can get to 'more idiosyncratically human'.

But even that only helps if you are comparing moral judgements on which deontologists and consequentialists differ. But Greene does not do that. Instead of comparing two different judgements about the same situation, he discusses two different situations, in both of which pretty much everyone's moral intuitions agree. He calls the intuitions that everyone has in one situation 'consequentialist' and the intuitions in the other situation 'deontological'! Now, most people would object that deontology has nothing to do with intuition. Greene has an answer:

And so, having completely restructured the playing field, he reaches the following conclusions:

Let me get this straight. The portions of our brains that generate what Greene dubs 'deontological intuitions' are evolutionarily ancient, present in all animals. So Greene dismisses those intuitions as "morally irrelevant" since they ultimately arise from "factors having to do with the constraints and circumstances of our evolutionary history". But our 'consequentialist intuitions' are morally relevant because they come from the neo-cortex; a region o
6lukeprog13y
I remember Greene's position being more nuanced than that, but it's been a while since I read his dissertation. In any case, I'm not defending his view. I only claimed that (in its revised wording) "We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from evolutionarily older parts of our brains."
4AlephNeil13y
That's obvious to my prefrontal cortex, but my inner chimp finds the idea desperately appealing.
-1Peterdjones13y
That's a distinction that makes sense if deontology is hardwired whilst consequentialism varies with evidence.