This article is a set of major questions concerning morality, each broken into sub-questions meant to help somebody answer the major question. It's not a criticism of any morality in particular, but rather what I hope is a useful way to consider any moral system, and to help people challenge their assumptions about their own moral systems.  I don't expect responses to try to answer these questions; indeed, I'd prefer you don't.  My preferred responses would be changes, additions, clarifications, or challenges to the questions or to the objective of this article.


First major question: Could you morally advocate that other people adopt your moral system?


This isn't as trivial a question as it seems on its face.  Take a strawman hedonism, for a very simple example.  Is a hedonist's pleasure maximized by encouraging other people to pursue -their- pleasure?  Or would it be better served by convincing them to pursue the pleasure of other people (a class of which our strawman hedonist is a member)?


It's not merely selfish moralities which suffer meta-moral problems.  I've encountered a few near-Comtean altruists who will readily admit their morality makes them miserable; the idea that other people are worse off than them fills them with a deep guilt which they cannot resolve.  If their goal is truly the happiness of others, spreading their moral system is a short-term evil.  (It may be a long-term good, depending on how they do their accounting, but non-moral altruism isn't actually a rare quality, so I think an honest accounting would suggest their moral system doesn't add much additional altruism to the system, only a lot of guilt about the fact that not much altruistic action is taking place.)


Note: I use the word "altruism" here in its modern, non-Comtean sense.  Altruism is that which benefits others.


Does your moral system make you unhappy, on the whole?  Does it, like most moral systems, place a value on happiness?  Would it make the average person less or more happy, if they and they alone adopted it?  Are your expectations of the moral value of your moral system predicated on an unrealistic scenario of universal acceptance?  Maybe your moral system isn't itself very moral.


Second: Do you think your moral system makes you a more moral person?


Does your moral system promote moral actions?  How much of the effort you devote to your morality is spent feeling good because you've effectively promoted your moral system, rather than promoting the values inherent in it?


Do you behave any differently than you would if you operated under a "common law" morality, such as social norms and laws?  That is, does your ethical system make you behave differently than if you didn't possess it?  Are you evaluating the merits of your moral system solely on how it answers hypothetical situations, rather than how it addresses your day-to-day life?


Does your moral system promote behaviors you're uncomfortable with and/or could not actually do, such as pushing people in the way of trolleys to save more people?


Third: Does your moral system promote morality, or itself as a moral system?


Is the primary contribution of your moral system to your life adding outrage that other people -don't- follow your moral system?  Do you feel that people who follow other moral systems are immoral even if they end up behaving in exactly the same way you do?  Does your moral system imply complex calculations which aren't actually taking place?  Is the primary purpose of your moral system encouraging moral behavior, or defining what the moral behavior would have been after the fact?


Considered as a meme or memeplex, does your moral system seem better suited to propagating itself than to encouraging morality?  Do you think "The primary purpose of this moral system is ensuring that these morals continue to exist" could be an accurate description of your moral system?  Does the moral system promote the belief that people who don't follow it are completely immoral?


Fourth: Is the major purpose of your morality morality itself?


This is a rather tough question to elaborate with further questions, so I suppose I should try to clarify a bit first: Take a strawman utilitarianism where "utility" -really is- what the morality is all about, where somebody has painstakingly gone through and assigned utility points to various things (this is kind of common in game-based moral systems, where you're just accumulating some kind of moral points, positive or negative).  Or imagine (tough, I know) a religious morality where the sole objective of the moral system is satisfying God's will.  That is, does your moral system define morality to be about something abstract and immeasurable, defined only in the context of your moral system?  Is your moral system a tautology, which must be accepted to even be meaningful?


This one can be difficult to identify from the inside, because to some extent -all- human morality is tautological; you have to identify it with respect to other moralities, to see if it's a unique island of tautology, or whether it applies to human moral concerns in the general case.  With that in mind, when you argue with other people about your ethical system, do they -always- seem to miss the point?  Do they keep trying to reframe moral questions in terms of other moral systems?  Do they bring up things which have nothing to do with (your) morality?


Is this an accurate summary or restatement?

  1. A system of morals should endorse its adherents teaching it to other people; it should not be exclusive or socially self-undermining. You should not think, "I mustn't teach the true morality to those people, because the world will be worse if I did."
  2. People who learn a system of morals should subsequently behave better, according to that system's own evaluation; it should not be personally self-undermining. You should not think, "Wow, I wish I had never learned about the true morality; since believing it, I've become worse at what it says I should do!"
  3. A system of morals should actually talk about actions, not just about believing that system; it should not be solafideist.
  4. A system of morals should not be for its own sake.

On 1, is that some other people, or all other people?

Larks

Anyone interested in such issues should read Parfit's Reasons and Persons, or use "self-effacing" as a keyword in the literature.

"Self-effacing" is a very useful keyword I hadn't been aware of. Thank you!

Interesting post, but I did not have any problems answering any of the above questions. To be honest, it's been a long while since I have had a good, honest conversation which challenged my moral system.

As you probably saw in the post before, I am a deontologist.

Note: I use the word "altruism" here in its modern, non-Comtean sense. Altruism is that which benefits others.

Who means that by "altruism"? Are movie stars altruistic because people enjoy their movies? Is anyone who produces something someone else values altruistic?

The word for that which produces value is "productive", not "altruistic".

Thank you for the post. My current moral system is freshly baked and only a few years old. I have not yet thrown many challenging questions at it, and you provided a concentrated blast of said questions in an easy-to-use format.

I did not have trouble answering the questions, other than really grasping the gist of the 'tautology' section at the end. I am comfortable with the answers and don't currently see any obvious holes in my moral system that make me nervous.

For the record, I am a 'particles and forces' nihilist, as in everything in the universe derives from a small number of fairly simple equations, and there's no magic, meaning, or purpose to anything. I happen to be alive because my genetics happen to be set up that way; I would prefer to keep living for the same reason. That preference to keep living, and to keep doing the things I happen to be configured to find enjoyable, is the ultimate long-term goal of my moral system.

It just so happens that the path which maximizes my ultimate long-term goal is one that generally obeys laws, generally engages in long-term behaviour, and generally cooperates in modern society. While it also encourages others to do the same, the mechanism by which they do so may be arbitrary so long as the results are the same.

If anything means something to you, then the universe has meaning for you. If you have a purpose for any things, then those things have a purpose for you. The universe is full of meanings and purposes that people have.

I think it is a common error, one I've shared myself, to deny meaning to things (meaning, purpose, morality) because the world is full of crazy people associating crazy meanings with those terms. But why let the crazy people own all the words, when there are perfectly sane and useful senses of those words?

In my case, I used to call myself amoral because I rejected the various notions of objective morality as the stuff of lunatics. But on later consideration, I found I had a number of preferences about the behavior of others that sure looked much like the morality of others; I just didn't assert my moral preferences as imperatives from God/The Universe/Reason/Truth, but as imperatives from me. Wouldn't it be handy to have a word for moral preferences without crazy and confused conceptual baggage? I think so, so I now assert a non-crazy conception of morality as morality.

For similar reasons, I think of morality as descriptive rather than philosophical. That is, humans have certain moral sentiments, on average, that have evolved into place because cooperation is so darned important a part of the survival advantage of humans. Attempts to introspect on the moral feelings and derive "ought" from them are a side effect of the sentiments and rationality that have evolved to serve humans so well. Looking for general moral truths beyond "this is right because it feels right" is at best beside the point and at worst impossible.

So you're proposing that morality should be catholic?

TimS

Since just about every moral system claims to be universal, isn't that some evidence that whatever moral system is true / valid should also be universal?

(I assume you mean universal when you said "catholic" - if you meant the organization headquartered in Rome, nevermind)

Since just about every moral system claims to be universal, isn't that some evidence that whatever moral system is true / valid should also be universal?

For this to work would imply some sort of sense by which people acquire information about True Morality, which is then reflected in what they call morality. How does this sense collect its information?

TimS

If there is a True Morality, I think that "True Morality --> sensus moralitus --> morally correct choices" is how it would work.

Our complete inability to identify anything like a universal "sensus moralitus" is evidence that there's no such thing as True Morality. Mirror neurons or similar candidates seem unable to explain the wide divergence in actual moral practices across time and culture.

I could make a similar argument from the different beliefs about the physical world to argue that there is no such thing as Truth.

I could make a similar argument from the different tastes in food to argue that there is no such thing as Yummy.

If I'm interpreting your comment correctly, I think you are confusing my argument with its converse.

If there is a True Morality, I think that "True Morality --> sensus moralitus --> morally correct choices" is how it would work.

Why? Our knowledge of descriptive truths about the external world operates on this sort of perceptual model, but not our knowledge of all truth. For instance, our knowledge of mathematical truths does not appear to rely on a sensus mathematicus. Why think the perceptual model would be the appropriate one for moral truth?

TimS

It's a fair question. In brief, my sense is that moral disputes look like empirical disputes from an outside view. Moral disputants look like empirical disputants, not mathematical disputants.
Falsely believing we have a "sensus moralitus" when we actually don't seems like a complete explanation of the politics-is-the-mindkiller phenomenon.

Empirical disputes tend to move from generalizations to particulars, since perception is regarded as the ultimate arbiter, and our perception is of particulars. So if two people disagree about whether oppositely charged objects attract or repel one another (a generalization), one of them might say, "Well, let's see if this positively charged metal block attracts or repels this negatively charged block." We rely on the fact that agents with similar perceptual systems will often agree about particular perceptions, and this is leveraged to resolve disagreement about general claims.

Moral disputes, on the other hand, tend to move in the opposite direction, from particulars to generalizations. Disputants start out disagreeing about the right thing to do in a particular circumstance, and they attempt to resolve the disagreement by appeal to general principles. In this case, we think that agents with similar biological and cultural backgrounds will tend to agree about general moral principles ("avoidable suffering is bad", "discrimination based on irrelevant characteristics is bad", etc.) and leverage this agreement to attempt to resolve particular disagreements. So the direction of justification is the opposite of what one would expect from the perceptual model.

This suggests to me that if there are moral truths, then our knowledge of them is probably not best explained using the perceptual model. I do agree that moral disagreements aren't entirely like mathematical disagreements either, but I only brought up the mathematical case as an example of there being other "kinds of truth". I didn't intend to claim that morality and mathematics will share an epistemology. I would say that knowing moral truths is a lot more like knowing truths about, say, the rules for rational thinking.

Moral disputes, on the other hand, tend to move in the opposite direction, from particulars to generalizations.

Nah. It's bi-directional, in roughly equal proportions.

Straw-man relativism is not catholic (with a lower-case c, though "universal" would be the better word). Strict utilitarianism is subjective: what is applicable in one situation is not universally applicable, and it isn't universal in that way.

smk

On #4, I'm fine with my morality existing for its own sake. I don't need a justification for the things from which I derive justification.

[anonymous]

Are you using tautology in some non-standard way here? If all human morality is tautological, then (at least as the word is generally used) all human morality is true, because all tautologies are true. But this seems impossible, since they will contradict one another.

A green sky will be green. A pink invisible unicorn is pink. A moral system would be moral. All tautologies, none are true.

[anonymous]

Either they're true, or they're not tautologies. Tautologies are always true. In this case, the first two seem to me at least to be true, and I don't understand the third.

Think of it this way:

"If a system is moral, then it is moral"

For example:

"If murder is moral, then murder is moral."

[anonymous]

I guess I don't understand what you're getting at.

TimS

I think there may be a terminological dispute. Part of the most recent sequence discusses the distinction between true and valid.

In brief, the colloquial word "true" is overloaded. Defining true as "corresponds with reality," we recognize that mathematics and first-order logic are not true in that sense. Instead, we should use the word valid.

Under that definition, tautologies are never true, they are only valid. Essentially, resurrecting logical positivism. I think the resurrection fails, so I'm unimpressed. But usage of that terminology is the best steelman I can see for Petruchio's post.

[anonymous]

But usage of that terminology is the best steelman I can see for Petruchio's post.

Fair enough. I also don't think this use of 'valid' is a good idea (how do we distinguish between sound mathematical arguments and, at the extreme, arguments that validly take contradictions for premises?). Also, what happened to dear old Tarski?

I guess the answer to my original question is 'yes, "tautology" is being used in a non-standard way'.

Tautologies are always true.

That's what they drill into us, but I really don't like that definition.

I prefer to say that statements which accurately describe this universe are "true". Tautologies are just tautological.

That's not a bad version. What I got from some of the sequences was that tautologies are true in all possible universes, they are true by definition, and that makes them useless.

Yeah...it's really just a semantic thing.

I like my technical jargon to match up with everyday use. In everyday life, when we say "that's true", we mean that that's real, not just that it's logically consistent and self justifying.

When your jargon doesn't match the everyday use, people get confused...like above, where hen reached the conclusion that all morality is "true" because it is a tautology, with the implication being that all moral statements are right - that's an example of the sort of confusion that can occur.

statements which accurately describe this universe

You might want to add “but inaccurately describe at least one different universe”, otherwise tautologies are also true.

Hmm...maybe I shouldn't redefine truth after all, because you just used my new definition of truth in conjunction with the old definition of "accurate". Which is my fault, for using "accurate" in the definition...I've gotten too accustomed to using the common laymen's definition for "true" and "accurate" in my mind after internally redefining what they mean to fit the lay notion.

I guess we need a one syllable word for "statement which increases one's knowledge about the universe within which one exists". Thus "statements which are [insert word here] restrict the set of universes one is in" would be a tautology.

I just instinctively put "true" into "[insert word here]"...I really wish we had just originally started out using "true" to mean this...Accurate even means the act of precisely interacting with a point in space. I don't understand why we chose to define "tautologies" as true and accurate, when they pinpoint nothing whatsoever...

OTOH, we're not logically omniscient, so certain statements are useful to hear even if they are correct in all universes (e.g. “3107418240490043721350750035888567930037346022842727545720161948823206440518081504556346829671723286782437916272838033415471073108501919548529007337724822783525742386454014691736602477652346609 equals 1634733645809253848443133883865090859841783670033092312181110852389333100104508151212118167511579 times 1900871281664822113126851573935413975471896789968515493666638539088027103802104498957191261465571”).
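To make that concrete, here is a minimal Python sketch (the integers are copied verbatim from the example above) of the mechanical work a non-omniscient reasoner has to do; the snippet tests the claimed equality rather than assuming it:

```python
# Check the claimed factorization from the comment above.
# A logically omniscient agent would already know the answer;
# the rest of us have to run the multiplication.
n = 3107418240490043721350750035888567930037346022842727545720161948823206440518081504556346829671723286782437916272838033415471073108501919548529007337724822783525742386454014691736602477652346609
p = 1634733645809253848443133883865090859841783670033092312181110852389333100104508151212118167511579
q = 1900871281664822113126851573935413975471896789968515493666638539088027103802104498957191261465571
print(p * q == n)
```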

You are right - I deliberately avoided the use of the term "useful statements" for this reason.

1) There are tautological statements

2) There are [insert word here] statements.

3) There are useful statements (these can be tautological, [insert word here], or false).

But we don't have a word for [insert word here]...well, prior to taking logic 101, laymen usually insert "true" into the slot, but for some reason we've decided to define the term "true" such that it refers to both tautologies and [insert word here], while neglecting to create a term exclusively for [insert word here].

That's my objection. Approaching [insert word here] is the goal of the sciences...we practically worship [insert word here], in a way that we do not worship tautologies. We ought to have a word that refers to it exclusively. I'd prefer that word to be "Truth", but then the mathematicians went and permanently broadened the meaning of that word, and now we can't have nice things anymore, so we need some other word.

“Empirically true statements”?

[anonymous]

[insert word here]

Informative?

See above discussion.

It means: a statement which is true in our universe, but is not a tautology.

I guess we need a one syllable word for "statement which increases one's knowledge about the universe within which one exists". Thus "statements which are [insert word here] restrict the set of universes one is in" would be a tautology.

[anonymous]

Right, I was suggesting the word 'informative'.

Oh, sorry.

Yeah, there are a few candidates - "informative", "real", etc.

The trouble is that we are smashing through the layman's definition again. If we define "informative" as [insert word here], then we must also say that a calculus textbook is not at all informative.

[anonymous]

Well, how about these tautologies: All green apples are green. All sunny days are days. All cats are cats.

All of those are true, and they accurately describe this (and every) universe.

I don't know if they drill anything into us: a tautology is just a technical term for a proposition that's true on every interpretation (that is, whatever 'cat' means in the above, the sentence is always true). This is just standard use; I'm not trying to defend it as the only possible answer to the question. But to say something like 'moral systems are tautological' as in the OP is sort of nonsensical. On the standard use of 'tautology' that statement is obviously false. And I don't know what non-standard use does better. We can use 'tautological' to mean anything we want, I'm just asking the OPoster "what are we using this word to mean?"
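As a minimal illustration of that standard use, here is a short Python sketch that checks tautologyhood by brute force over every interpretation; the cats example is first-order rather than propositional, but the idea is the same:

```python
from itertools import product

def is_tautology(formula, n_vars):
    """True iff the formula holds under every assignment of truth values."""
    return all(formula(*vals) for vals in product([True, False], repeat=n_vars))

# "If murder is moral, then murder is moral" (C -> C): true on every interpretation.
print(is_tautology(lambda c: (not c) or c, 1))  # True

# "Murder is moral" (C): true on some interpretations, false on others.
print(is_tautology(lambda c: c, 1))             # False
```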

And all moral systems are, by definition, moral. (If they weren't, they'd be immoral systems!)

"Cats are cats" doesn't tell us anything about this universe. It doesn't tell us whether cats exist, it doesn't tell us what cats would be like if they did exist. It's just a self referencing label.

You can call it "true" if you want to use that definition of truth, but it isn't describing anything real.

[anonymous]

And a moral system is, by definition, moral.

I don't think that's true (and it doesn't sound like a tautology either). For example, Aristotle has a moral system, and in it, he endorses slavery. Suppose slavery is evil. From this, we can say 'his moral system, insofar as it endorses slavery, is evil'. Another way to put that would be 'Aristotle's moral system is immoral.'

Is that true? I think it's at least in the ball park. In any case, it's not a contradiction, which the denial of a tautology would be. It sounds to me like 'moral systems are tautological' is just kind of an incoherent claim. I can't even tell what the thought is supposed to be.

Oh, that's just a case of two words sounding the same (like "I can" vs "pick up the can").

Aristotle's moral[1] system is immoral [2].

Moral[1] - of or pertaining to good and bad

Moral[2] - good

"Aristotle's system of identifying good and bad is not good".

[anonymous]

Ah! Okay, so the original claim should be read as something like "All good systems are tautological".

...Could you explain what that means?

Wait, how did you get from the original claim to there?

The original claim should go: "All moral[2] systems are good" or "All moral[1] systems lie somewhere on the spectrum of goodness and badness."

I mean, you could certainly argue that all "good systems are tautological", in a sense. You'd be saying that good and bad are defined solely by the speaker, and when I say that "murder is bad", it is only "bad" because I define it so. What you'd really be saying is that all moral systems are tautological (as in, they do not represent objective statements about the universe, and are arbitrarily defined).

But that statement doesn't follow from the original claim, does it?

[anonymous]

Well, the original claim I got confused about was this:

...to some extent -all- human morality is tautological...

So do you think the original claim is saying that (to some extent) all moral statements are arbitrarily defined?

Yeah, I do think that is what the author meant.

The point that he is making is that even though morality is arbitrarily defined, it is important that moral systems map onto real world things.

For example, if you are a utilitarian you've arbitrarily decided that you want to maximize utility. But you aren't done yet - "increasing utility" has got to actually mean something. Where/what is the utility, in the real world? How can you know if you have increased or decreased net utility?

Or, if you are a religious person and are against "sin"... What does sin look like, in the real world? How can you measure sin? Etc....

Or, if you are a paper-clip maximizer...what exactly constitutes a paperclip?

There are no "utility" molecules, or "sin" molecules, or "paperclip" molecules. None of these things have coherent, ontologically fundamental definitions - they exist largely in your own head. You yourself must try to figure out what these things are in the real world, if you plan on using them in your moral systems.

In other words, there's got to be a conceptual umbilical cord connecting arbitrarily defined morality to things that happen in the world. It can't just be an abstract system ... it has to consider meat and neurons and circuits, it has to answer hard questions like "what is a person" and "what is pain and pleasure" before it's complete.
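To make the "umbilical cord" point concrete, here is a toy Python sketch; every name and weight in it is hypothetical, invented purely for illustration, and it shows only that an abstract utility score means nothing until it is defined over quantities you could actually measure:

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    people_alive: int          # measurable, at least in principle
    reported_happiness: float  # e.g. an average survey score from 0.0 to 1.0

def utility(state: WorldState) -> float:
    # The weights are arbitrary; choosing and justifying them is
    # exactly the unfinished moral work the comment above describes.
    return 1.0 * state.people_alive + 100.0 * state.reported_happiness

before = WorldState(people_alive=7_000_000_000, reported_happiness=0.6)
after = WorldState(people_alive=7_000_000_000, reported_happiness=0.7)
print(utility(after) > utility(before))  # True: net utility "increased"
```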

[anonymous]

I see. I disagree with the claim, but I think I do understand it now. Thanks for taking the time to explain (and for being so patient with my incredulity).

A green sky will be green

This is true

A pink invisible unicorn is pink

This is a meaningless sequence of squiggles on my computer screen, not a tautology

A moral system would be moral

I'm unsure what this one means

I've encountered a few near-Comtean altruists who will readily admit their morality makes them miserable; the idea that other people are worse off than them fills them with a deep guilt which they cannot resolve.

Interesting morality, that makes those who follow it miserable. Why is it that they want to have a morality, when the one they have makes them miserable?

Most of what we strive for has survival value for the species. Yet striving is not fun. The human moral instinct is filled with features which facilitate humans living together in large numbers and working together in a unified fashion. We don't kill each other when we get mad at each other (mostly, and that is what morality pushes us towards even when we fail): this makes it way easier to live together in large groups. For the most part we are motivated to speak honestly to each other and to keep our promises and commitments. This allows groups of humans to function together to get things done that individual humans could not.

So you ask:

Why is it that they want to have a morality, when the one they have makes them miserable?

Pardon my answering your question with a rhetorical question, but why would you think that what we have and what we get would be influenced at all by what we want or what does or does not make us miserable? But to answer more straightforwardly: if the result of having a morality is that we can function effectively together in large numbers, then evolution will select for such moralities if it can; and if being miserable does not fully negate the benefit of human cooperation, evolution will not fail to pursue morality just because it also makes some people miserable.

want to have a morality

This implies that you get a choice in the matter. Ultimately, preferences simply exist - they aren't chosen.

I might say that, but I doubt that those miserable Comtean altruists would see it that way. For them, I suspect morality is a truth, not a preference.

Well, it is a truth about one's preferences, isn't it?

They'd say that Altruism is good regardless of whether they prefer it or not.

No, I mean even regardless of that.

Regardless of the debate surrounding whether Morality and "Good" are somehow embedded in how the universe works, you can't change whether or not you prefer to behave that way.

For example, I can't help but care about human suffering. You can't ask me why I would want to care about it - it's a terminal value. I care because I was "programmed" to care...and it wouldn't matter whether or not it was "good" to care.

I can't help it in the same way I can't help preferring sweet to bitter. Asking why I would want to have those preferences is like asking someone why they find junk food tasty.

I think this helps raise some elemental problems with morality in itself. I, myself, don't have a moral system, nor do I want one. I see a moral system as worth having only if it is in some way useful to the individual: if it makes the person happy, for instance, or if it gives the person some blind sense of meaning or purpose. It might also be a source of peace and resolve to plunge forward in something, regardless of doubt, under the belief that it is 'right'. It might also be useful for attempting to persuade and influence others by attacking their conscience or guilt complex. Even these uses, to me, however, seem dysfunctional. I would prefer to see things as they are, devoid of moral abstractions, even if those moral ideas did cloud my mind with a false sense of righteousness or superiority. They also seem dysfunctional with regard to doing something "that's right", because it is better, to me, to have doubts and question one's actions instead of blindly acting under the guise of morality. They are also dysfunctional for attempting to convert others to a moral system, as this system would also be clouded with a false sense of objective right or wrong, and could easily turn on the person who started it if that person, according to the moral system's subjects, acted 'immorally'.

Cool, I'm glad I've finally found someone who won't object to my hobby of boiling babies alive.

I would certainly have an objection. I would just make sure my objection wasn't on the fragile grounds of moral objection. Moral objection is fragile because there is no collective or objective definition of it. Using subjective morals to object would be like making up rules to a game you never asked to play with me.

Welcome to LessWrong.

I would just make sure my objection wasn't on the fragile grounds of moral objection.

What would an example of that objection be?

(Other than: "Here is a gun, and unless you stop boiling babies I'm gonna shoot you.")