My intended next OB post will, in passing, distinguish between moral judgments and factual beliefs. Several times before, this has sparked a debate about the nature of morality. (E.g., Believing in Todd.) Such debates tend to repeat themselves, reinventing the wheel each time instead of building on previous arguments. To avoid this, I suggest consolidating the debate: whenever someone feels tempted to start a debate about the nature of morality in the comments thread of another post, the comment should be made to this post instead, with an appropriate link to the article being commented upon. Otherwise the topic does tend to take over discussions like kudzu. (This isn't the first blog/list where I've seen it happen.)
I'll start the ball rolling with ten points to ponder about the nature of morality...
- It certainly looks like there is an important distinction between a statement like "The total loss of human life caused by World War II was roughly 72 million people" and "We ought to avoid a repeat of World War II." Anyone who argues that these statements are of the same fundamental kind must explain away the apparent structural differences between them. What are the exact structural differences?
- We experience some of our morals and preferences as being voluntary choices, others as involuntary perceptions. I choose to play on the side of Rationality, but I don't think I could choose to believe that death is good any more than I could choose to believe the sky is green. What psychological factors account for these differences in my perceptions of my own preferences?
- At a relatively young age, children begin to believe that while the teacher can make it all right to stand on your chair by giving permission, the teacher cannot make it all right to steal from someone else's backpack. (I can't recall the exact citation on this.) Do young children in a religious environment believe that God can make it all right to steal from someone's backpack?
- Both individual human beings and civilizations appear to change at least some of their moral beliefs over the course of time. Some of these changes are experienced as "decisions", others are experienced as "discoveries". Is there a systematic direction to at least some of these changes? How does this systematic direction arise causally?
- To paraphrase Alfred Tarski, the statement "My car is painted green" is true if and only if my car is painted green. Similarly, someone might try to get away with asserting that the statement "Human deaths are bad" is true if and only if human deaths are bad. Is this valid?
- Suppose I involuntarily administered to you a potion which would cause you to believe that human deaths were good. Afterward, would you believe truly that human deaths were good, or would you believe falsely that human deaths were good?
- Although the statement "My car is painted green" is presently false, I can make it true at a future time by painting my car green. However, I can think of no analogous action I could take which would make it right to kill people. Does this make the moral statement stronger, weaker, or is there no sense in making the comparison?
- There does not appear to be any "place" in the environment where the referents of moral statements are stored, analogous to the place where my car is stored. Does this necessarily indicate that moral statements are empty of content, or could they correspond to something else? Is the statement 2 + 2 = 4 true? Could it be made untrue? Is it falsifiable? Where is its content?
- The phrase "is/ought gap" refers to the notion that no ought-statement can be logically derived from any number of is-statements without at least one ought-statement in the mix. For example, suppose I have a remote control with two buttons, and the red button kills an innocent prisoner, and the green button sets them free. I cannot derive the ought-statement "I ought not to press the red button" without both the is-statement "If I press the red button, an innocent will die" and the ought-statement "I ought not to kill innocents." Should we distinguish mixed ought-statements like "I ought not to press the red button" from pure ought-statements like "I ought not to kill innocents"? If so, is there really any such thing as a "pure" ought-statement, or do they all have is-statements mixed into them somewhere?
- The statement "This painting is beautiful" could be rendered untrue by flinging a bucket of mud on the painting. Similarly, in the remote-control example above, the statement "It is wrong to press the red button" can be rendered untrue by rewiring the remote. Are there pure aesthetic judgments? Are there pure preferences?
Anna : For you.
No, this is not a personal opinion; that ethical behavior is in large part innate is shown by psychological studies, and by philosophy going back millennia, to Confucius and others.
Anna : A religious person may feel that their ethical behavior is governed by their faith.
Young children "feel" that some of the gifts they get are from Santa Claus or the Tooth Fairy, because they are told that.
Anna : I believe in Something as opposed to Nothing but I am not a theist. I don't believe in Gods or Goddesses. I don't see how that's relevant.
FYI, atheists aren't believers in Nothing. (Please show us an instance of Nothing.) It is relevant to ask about theism because it is commonplace for religionists to claim that there can be no morality without faith.
Anna : I agree that it's futile [bringing "rationality" to moral dilemmas]
Certainly not for the same reasons as me; you likely didn't read the comment I linked to, so I will reproduce the substance below:
[bringing "rationality" to moral dilemmas] is the typical legacy of the Greeks, who "invented" logic for this very purpose. Yet it does not make any sense, because our actual decision making (in moral matters as well as in anything else) is NOT rational. And George Ainslie's thesis that we use hyperbolic discounting to evaluate distant rewards, instead of the "rational" exponential discounting, casts serious doubt on our ability to EVER come up with consistent judgments on any matter. That may be the real cause of most occurrences of akrasia, which are always noticed ex post facto. AND... trying to enforce "rationality" upon our decisions may lead to severe psychiatric problems:
Intertemporal bargaining also predicts four serious side effects of willpower: A choice may become more valuable as a precedent than as an event in itself, making people legalistic; signs that predict lapses should become self-confirming, leading to failures of will so intractable that they seem like symptoms of disease; there is motivation not to recognize lapses, which might create an underworld much like the Freudian unconscious; and concrete personal rules should recruit motivation better than subtle ones, a difference which could impair the ability of will-based strategies to exploit emotional rewards.
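Ainslie's contrast can be sketched numerically. The point is that a hyperbolic discounter reverses its preference between a smaller-sooner and a larger-later reward as the rewards draw near, while an exponential discounter never does. The curve shapes are Ainslie's; the specific reward sizes, delays, and discount parameters below are my own illustrative assumptions:

```python
import math

def exponential(value, delay, r=0.02):
    # The "rational" consistent curve: value * e^(-r * delay).
    return value * math.exp(-r * delay)

def hyperbolic(value, delay, k=0.1):
    # Ainslie-style curve: value / (1 + k * delay).
    return value / (1 + k * delay)

small_soon = 50   # smaller reward, available at time t
large_late = 100  # larger reward, available at time t + 30

# Evaluate the same choice from far away (t=60) and up close (t=1).
for t in (60, 1):
    for name, f in (("exponential", exponential), ("hyperbolic", hyperbolic)):
        choice = "large-late" if f(large_late, t + 30) > f(small_soon, t) else "small-soon"
        print(f"t={t:2d} {name:11s} -> prefers {choice}")
```

With these parameters the exponential discounter prefers the large-late reward at both distances, but the hyperbolic discounter switches to the small-soon reward as it approaches: exactly the inconsistency (and the akrasia noticed only ex post facto) described above.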
Anna : Rationality is about looking at it from someone else's point of view and deciding if it is "right for you" or "wrong for you", without judgement.
Either I don't grasp at all what you mean, or you have strange ideas about rationality. To me, rationality isn't (primarily) about someone else's point of view but about using evidence and sound inferences to build a model of perceived reality. It has nothing to do with "right for [me]" or "wrong for [me]", or with judgment. If I am using rationality, I can expect to be able to share the resulting model with other people (intersubjectivity), or to amend or replace this model if I am shown factual errors or omissions in either my evidence or my inferences. This does not seem to be possible when the "other people" invoke "faith" to assert or deny facts without evidence, or to condone faulty inferences.
Anna : Morals are about beliefs and faith.
You are just asserting the point you supposedly want to debate!
Anna : Ethical behavior is about "right or wrong".
You are making up your own definitions; there is not much chance of reaching agreement on the substance of the debate if the terminology is uncertain. I would ask anyway: "right or wrong" about what, and for whom?
Anna : "How can I believe myself to be rational and logical and still believe in something that I can't see, hear, touch, taste or smell."
Physicists "believe" in quarks, and cosmologists in dark matter, neither of which they can see, hear, touch, taste or smell. Yet they are pretty hardcore rationalists; how do they manage?
Anna : I apologize, (yes, that's weird to you, I know) if my post was too long.
How funny: you seem to feel blameworthy for the length of your post, but not for sneaking in a snide remark. I surmise that you feel guilty about "breaking the rules" but don't really care about fellow humans.