Followup to: Moral Complexities

In the dialogue "The Bedrock of Fairness", I intended Yancy to represent morality-as-raw-fact, Zaire to represent morality-as-raw-whim, and Xannon to be a particular kind of attempt at compromising between them.  Neither Xannon, Yancy, nor Zaire represents my own views—rather they are, in their disagreement, showing the problem that I am trying to solve.  It is futile to present answers when the questions are still lacking.

But characters have independent life in the minds of all readers; when I create a dialogue, I don't view my authorial intent as primary.  Any good interpretation can be discussed.  I meant Zaire to be asking for half the pie out of pure selfishness; many readers interpreted this as a genuine need... which is as interesting a discussion to have as any, though it's a different discussion.

With this in mind, I turn to Subhan and Obert, who shall try to answer yesterday's questions on behalf of their respective viewpoints.

Subhan makes the opening statement:

Subhan:  "I defend this proposition: that there is no reason to talk about a 'morality' distinct from what people want."

Obert:  "I challenge.  Suppose someone comes to me and says, 'I want a slice of that pie you're holding.'  It seems to me that they have just made a very different statement from 'It is right that I should get a slice of that pie'.  I have no reason at all to doubt the former statement—to suppose that they are lying to me about their desires.  But when it comes to the latter proposition, I have reason indeed to be skeptical.  Do you say that these two statements mean the same thing?"

Subhan:  "I suggest that when the pie-requester says to you, 'It is right for me to get some pie', this asserts that you want the pie-requester to get a slice."

Obert:  "Why should I need to be told what I want?"

Subhan:  "You take a needlessly restrictive view of wanting, Obert; I am not setting out to reduce humans to creatures of animal instinct.  Your wants include those desires you label 'moral values', such as wanting the hungry to be fed—"

Obert:  "And you see no distinction between my desire to feed the hungry, and my desire to eat all the delicious pie myself?"

Subhan:  "No!  They are both desires—backed by different emotions, perhaps, but both desires.  To continue, the pie-requester hopes that you have a desire to feed the hungry, and so says, 'It is right that I should get a slice of this pie', to remind you of your own desire.  We do not automatically know all the consequences of our own wants; we are not logically omniscient."

Obert:  "This seems psychologically unrealistic—I don't think that's what goes through the mind of the person who says, 'I have a right to some pie'.  In this latter case, if I deny them pie, they will feel indignant.  If they are only trying to remind me of my own desires, why should they feel indignant?"

Subhan:  "Because they didn't get any pie, so they're frustrated."

Obert:  "Unrealistic!  Indignation at moral transgressions has a psychological dimension that goes beyond struggling with a struck door."

Subhan:  "Then consider the evolutionary psychology.  The pie-requester's emotion of indignation would evolve as a display, first to remind you of the potential consequences of offending fellow tribe-members, and second, to remind any observing tribe-members of goals they may have to feed the hungry.  By refusing to share, you would offend against a social norm—which is to say, a widely shared want."

Obert:  "So you take refuge in social wants as the essence of morality?  But people seem to see a difference between desire and morality, even in the quiet of their own minds.  They say things like:  'I want X, but the right thing to do is Y... what shall I do?'"

Subhan:  "So they experience a conflict between their want to eat pie, and their want to feed the hungry—which they know is also a want of society.  It's not predetermined that the prosocial impulse will be victorious, but they are both impulses."

Obert:  "And when, during WWII, a German hides Jews in their basement—against the wants of surrounding society—how then?"

Subhan:  "People do not always define their in-group by looking at their next-door neighbors; they may conceive of their group as 'good Christians' or 'humanitarians'."

Obert:  "I should sooner say that people choose their in-groups by looking for others who share their beliefs about morality—not that they construct their morality from their in-group."

Subhan:  "Oh, really?  I should not be surprised if that were experimentally testable—if so, how much do you want to bet?"

Obert:  "That the Germans who hid Jews in their basements, chose who to call their people by looking at their beliefs about morality?  Sure.  I'd bet on that."

Subhan:  "But in any case, even if a German resister has a desire to preserve life which is so strong as to go against their own perceived 'society', it is still their desire."

Obert:  "Yet they would attribute to that desire, the same distinction they make between 'right' and 'want'—even when going against society.  They might think to themselves, 'How dearly I wish I could stay out of this, and keep my family safe.  But it is my duty to hide these Jews from the Nazis, and I must fulfill that duty.'  There is an interesting moral question, as to whether it reveals greater heroism, to fulfill a duty eagerly, or to fulfill your duties when you are not eager.  For myself I should just total up the lives saved, and call that their score.  But I digress...  The distinction between 'right' and 'want' is not explained by your distinction of socially shared and individual wants.  The distinction between desire and duty seems to me a basic thing, which someone could experience floating alone in a spacesuit a thousand light-years from company."

Subhan:  "Even if I were to grant this psychological distinction, perhaps that is simply a matter of emotional flavoring. Why should I not describe perceived duties as a differently flavored want?"

Obert:  "Duties, and should-ness, seem to have a dimension that goes beyond our whims.  If we want different pizza toppings today, we can order a different pizza without guilt; but we cannot choose to make murder a good thing."

Subhan:  "Schopenhauer:  'A man can do as he wills, but not will as he wills.'  You cannot decide to make salad taste better to you than cheeseburgers, and you cannot decide not to dislike murder.  Furthermore, people do change, albeit rarely, those wants that you name 'values'; indeed they are easier to change than our food tastes."

Obert:  "Ah!  That is something I meant to ask you about.  People sometimes change their morals; I would call this updating their beliefs about morality, but you would call it changing their wants.  Why would anyone want to change their wants?"

Subhan:  "Perhaps they simply find that their wants have changed; brains do change over time.  Perhaps they have formed a verbal belief about what they want, which they have discovered to be mistaken. Perhaps society has changed, or their perception of society has changed.  But really, in most cases you don't have to go that far, to explain apparent changes of morality."

Obert:  "Oh?"

Subhan:  "Let's say that someone begins by thinking that Communism is a good social system, has some arguments, and ends by believing that Communism is a bad social system.  This does not mean that their ends have changed—they may simply have gotten a good look at the history of Russia, and decided that Communism is a poor means to the end of raising standards of living.  I challenge you to find me a case of changing morality in which people change their terminal values, and not just their beliefs about which acts have which consequences."

Obert:  "Someone begins by believing that God ordains against premarital sex; they find out there is no God; subsequently they approve of premarital sex.  This, let us specify, is not because of fear of Hell; but because previously they believed that God had the power to ordain, or knowledge to tell them, what is right; in ceasing to believe in God, they updated their belief about what is right."

Subhan:  "I am not responsible for straightening others' confusions; this one is merely in a general state of disarray around the 'God' concept."

Obert:  "All right; suppose I get into a moral argument with a man from a society that practices female circumcision.  I do not think our argument is about the consequences to the woman; the argument is about the morality of these consequences."

Subhan:  "Perhaps the one falsely believes that women have no feelings—"

Obert:  "Unrealistic, unrealistic!  It is far more likely that the one hasn't really considered whether the woman has feelings, because he doesn't see any obligation to care.  The happiness of women is not a terminal value to him.  Thousands of years ago, most societies devalued consequences to women.  They also had false beliefs about women, true—and false beliefs about men as well, for that matter—but nothing like the Victorian era's complex rationalizations for how paternalistic rules really benefited women. The Old Testament doesn't explain why it levies the death penalty for a woman wearing men's clothing.  It certainly doesn't explain how this rule really benefits women after all.  It's not the sort of argument it would have occurred to the authors to rationalize!  They didn't care about the consequences to women."

Subhan:  "So they wanted different things than you; what of it?"

Obert:  "See, now that is exactly why I cannot accept your viewpoint.  Somehow, societies went from Old Testament attitudes, to democracies with female suffrage.  And this transition—however it occurred—was caused by people saying, 'What this society does to women is a great wrong!', not, 'I would personally prefer to treat women better.'  That's not just a change in semantics—it's the difference between being obligated to stand and deliver a justification, versus being able to just say, 'Well, I prefer differently, end of discussion.'  And who says that humankind has finished with its moral progress?  You're yanking the ladder out from underneath a very important climb."

Subhan:  "Let us suppose that the change of human societies over the last ten thousand years, has been accompanied by a change in terminal values—"

Obert:  "You call this a supposition?  Modern political debates turn around vastly different valuations of consequences than in ancient Greece!"

Subhan:  "I am not so sure; human cognitive psychology has not had time to change evolutionarily over that period.  Modern democracies tend to appeal to our empathy for those suffering; that empathy existed in ancient Greece as well, but it was invoked less often.  In each single moment of argument, I doubt you would find modern politicians appealing to emotions that didn't exist in ancient Greece."

Obert:  "I'm not saying that emotions have changed; I'm saying that beliefs about morality have changed.  Empathy merely provides emotional depth to an argument that can be made on a purely logical level:  'If it's wrong to enslave you, if it's wrong to enslave your family and your friends, then how can it be right to enslave people who happen to be a different color?  What difference does the color make?'  If morality is just preference, then there's a very simple answer:  'There is no right or wrong, I just like my own family better.'  You see the problem here?"

Subhan:  "Logical fallacy:  Appeal to consequences."

Obert:  "I'm not appealing to consequences.  I'm showing that when I reason about 'right' or 'wrong', I am reasoning about something that does not behave like 'want' and 'don't want'."

Subhan:  "Oh?  But I think that in reality, your rejection of morality-as-preference has a great deal to do with your fear of where the truth leads."

Obert:  "Logical fallacy:  Ad hominem."

Subhan:  "Fair enough.  Where were we?"

Obert:  "If morality is preference, why would you want to change your wants to be more inclusive?  Why would you want to change your wants at all?"

Subhan:  "The answer to your first question probably has to do with a fairness instinct, I would suppose—a notion that the tribe should have the same rules for everyone."

Obert:  "I don't think that's an instinct.  I think that's a triumph of three thousand years of moral philosophy."

Subhan:  "That could be tested."

Obert:  "And my second question?"

Subhan:  "Even if terminal values change, it doesn't mean that terminal values are stored on a great stone tablet outside humanity.  Indeed, it would seem to argue against it!  It just means that some of the events that go on in our brains, can change what we want."

Obert:  "That's your concept of moral progress?  That's your view of the last three thousand years?  That's why we have free speech, democracy, mass street protests against wars, nonlethal weapons, no more slavery—"

Subhan:  "If you wander on a random path, and you compare all past states to your present state, you will see continuous 'advancement' toward your present condition—"

Obert:  "Wander on a random path?"

Subhan:  "I'm just pointing out that saying, 'Look how much better things are now', when your criterion for 'better' is comparing past moral values to yours, does not establish any directional trend in human progress."

Obert:  "Your strange beliefs about the nature of morality have destroyed your soul.  I don't even believe in souls, and I'm saying that."

Subhan:  "Look, depending on which arguments do, in fact, move us, you might be able to regard the process of changing terminal values as a directional progress.  You might be able to show that the change had a consistent trend as we thought of more and more arguments.  But that doesn't show that morality is something outside us.  We could even—though this is psychologically unrealistic—choose to regard you as computing a converging approximation to your 'ideal wants', so that you would have meta-values that defined both your present value and the rules for updating them.  But these would be your meta-values and your ideals and your computation, just as much as pepperoni is your own taste in pizza toppings.  You may not know your real favorite ever pizza topping, until you've tasted many possible flavors."

Obert:  "Leaving out what it is that you just compared to pizza toppings, I begin to be suspicious of the all-embracingness of your viewpoint.  No matter what my mind does, you can simply call it a still-more-modified 'want'.  I think that you are the one suffering from meta-level confusion, not I.  Appealing to right is not the same as appealing to desire.  Just because the appeal is judged inside my brain, doesn't mean that the appeal is not to something more than my desires.  Why can't my brain compute duties as well as desires?"

Subhan:  "What is the difference between duty and desire?"

Obert:  "A duty is something you must do whether you want to or not."

Subhan:  "Now you're just being incoherent.  Your brain computes something it wants to do whether it wants to or not?"

Obert:  "No, you are the one whose theory makes this incoherent.  Which is why your theory ultimately fails to add up to morality."

Subhan:  "I say again that you underestimate the power of mere wanting.  And more:  You accuse me of incoherence?  You say that I suffer from meta-level confusion?"

Obert:  "Er... yes?"

To be continued...


Part of The Metaethics Sequence

Next post: "Is Morality Given?"

Previous post: "Moral Complexities"

44 comments

I'm not sure you are framing the key questions quite as directly and clearly as you could. Both morality and our wants are things we can be uncertain about, and so change our minds about. The claim that there is a morality beyond our mere wants, "what the universe wants" if you will, seems coherent and hard to exclude. The claim that many if not most people want, at least in part, to act morally also seems coherent and hard to exclude. So to me the key questions are:

  1. How much do we actually want to be moral? I suggest we pretend to want to be moral more than we actually do.

  2. What evidence do we really have for our beliefs about which acts actually are moral? What we know about the causal origins of our moral intuitions doesn't obviously give us reason to believe they are correlated with moral truth. Some claim it is incoherent to not want to always act morally, but I find that view hard to understand. Intuition seems to do a lot of the heavy lifting here, and we should know it's not a very reliable support.

    I argued here with Mencius Moldbug whether society is best described as on a random walk (my view and possibly his months ago) or a pre-charted path of decline.

    A duty is half of a contract--it comes from some obligation assumed (perhaps implicitly) in the past. A man may in general assign a very high priority to keeping his promises. He may feel a moral obligation to do so, independent of the specific nature of the promise. Should keeping a promise be difficult or unpleasant, he will balance his desire to avoid unpleasantness with his desire to be the sort of person who repays what was given.

    For example, a man who has enjoyed the rights and privileges of a citizen may feel he has a duty to support the interests of his country. Certainly many citizens of the various States felt so, two hundred and thirty-two years ago.

    Obert: "That's your concept of moral progress? That's your view of the last three thousand years? That's why we have free speech, democracy, mass street protests against wars, nonlethal weapons, no more slavery -"
    What does 'moral progress' mean? I could speak of 'scientific progress' and refer to the tendency of scientific change to produce models that are increasingly specific and accurate. Even if they're not ultimately correct, they're useful - predictive utility has increased.

    What desirable condition is increased by free speech, democracy, and the like?

    """Obert: "A duty is something you must do whether you want to or not." """

    Obey gravity. It's your duty!

    --Obert

    Why is it a mystery (on the morality-as-preferences position) that our terminal values can change, and specifically can be influenced by arguments? Since our genes didn't design us with terminal values that coincide with their own (i.e., "maximize inclusive fitness"), there is no reason why they would have made those terminal values unchangeable.

    We (in our environment of evolutionary adaptation) satisfied our genes' terminal value as a side-effect of trying to satisfy our own terminal values. The fact that our terminal values respond to moral arguments simply means that this side-effect was stronger if our terminal values could change in this way.

    I think the important question is not whether persuasive moral arguments exist, but whether such arguments form a coherent, consistent philosophical system, one that should be amenable to logical and mathematical analysis without falling apart. The morality-as-given position implies that such a system exists. I think the fact that we still haven't found this system is a strong argument against this position.

    You cannot decide to make salad taste better to you than cheeseburgers...

    Tangentially, if Seth Roberts's Shangri-La diet theory or something like it turns out to be correct*, it may indeed be possible to enact a plan that ends with salad tasting better to you than cheeseburgers.

    * I lost 25 lb on it, so I think something's going on there.

    I think there may be 3 essential types:

    1. The whim type, who thinks whatever he wants is automatically right.
    2. The authoritarian type who thinks that whatever society or religion says is automatically right.
    3. The scientific type who thinks he has found a morality based in fact.

    I don't think 3s should be lumped in with 2s. Yes, he is following an external standard, but it is because he thinks there is a reason to do so, and is open to reason to change his mind, unlike the 2s (or 1s for that matter).

    @Caledonian:

    "What desirable condition is increased by free speech, democracy, and the like?"

    Justice. Without liberty there cannot be full justice.

    Please re-read Machiavelli's Discourses: you will find he answers these questions beautifully.

    Wow, what a long post. Subhan doesn't have a clue. Tasting a cheeseburger like a salad isn't Morality. Morality refers to actions in the present that can initiate a future with preferred brain-states (the weaselly response would be to ask what these are, as if torture and pleasure weren't known, and initiate a conversation long enough to forget the initial question). So if you hypnotize yourself to make salad taste like cheeseburgers for health reasons, you are exercising Morality.

    I've got a forestry paper open in the other window. It is very dry, but I'm hoping I can calculate a rate of spread for an invasive species to plan a logging timeline to try to stop it. There is also a football game on. Not a great game, but don't pull a Subhan and try to tell me I'm reading the forestry paper because I like it more than the football game. I'm reading it because I realize there are brain-states of tourists and loggers and AGW-affected people who would rather see the forests intact than temporarily dead. That's really all it boils down to.

    After gaining enough expertise over your own psyche sometime in childhood (i.e. most 10-year-olds would not waste time with this conversation—a developmental psychologist would know just when), you (a mentally healthy individual) realize there are other people who experience similar brain-states. Yes, mirror neurons and the like are probably all evolutionary in origin; that doesn't change anything. There really are local universe configurations that are "happier" in the net than other configurations. There is a ladder of morality, certainly not set in stone (torture me and all of a sudden I probably start valuing myself a lot more).

    I'd guess the whole point of this is to teach an AGI where to draw the line in upgrading human brain architectures (either that, or I really do enjoy reading forestry over watching a game, and really like salad over pizza and Chinese food). I don't see any reason why human development couldn't continue as it does now, voluntarily, the way human psyches are now developed (i.e. trying pizza and dirt, and noting the preference for pizza in the future). Everyone arguing against morality-as-given is saying salad tastes better than pizza, as if there weren't some other reason for eating salad. The other reasons (health, dating a vegetarian, personal finances) maybe deserve a conversation, but not one muddled with this.

    Honestly, if you follow Subhan's flawed reasoning methodology (as it seems Transhumanists and Libertarians are more likely to do than average, for whatever reason), you get to the conclusion that consciousness doesn't exist. I think the AGI portion of this question depends a lot more on the energy resources of the universe than upon how to train an AGI to be a psychologist—unless there is some hurry to hand the teaching/counselling reins to an AGI, what's the rush?

    The difference between duty and desire is that some desires might harm other people, while duty (you can weasel the definition to mean Nazi duty, but then you are asking an entirely different question) always helps other people. "Terminal values" as defined are pretty weak. There are e=mc^2 co-ordinates that have maximized happiness values. Og may only be able to eat tubers, but most literate people are much higher on the ladder, and thus have a greater duty. In the future, presumably, the standards will be even higher. At some point, assuming we don't screw it up, the universe will be tiled with happy people, depending on the energy resources of the universe and how accurately they can be safely charted. Subhan is at a lower level on the ladder of Morality. All else equal (it never is, as uploading is a delusion), Obert has a greater duty.

    Hear hear to Dynamically Linked's last paragraph.

    There is a subsystem in our brains called "conscience". We learn what is right and what is wrong in our early years, perhaps with certain priors ("causing harm to others is bad"). These things can also change over time (slowly!) per person, for example if the context of the feelings dramatically changes (oops, there is no God).

    So, agreeing with Subhan, I think we just do what we "want", maximizing the good feelings generated by our decisions. We ("we" = the optimization process trying to accomplish that) don't have access to the lower level (the on/off switch of conscience), so in many cases the best solution is to avoid doing "bad" things. (And it really feels different (a) to want something because we like it, and (b) to want something to avoid the bad feelings generated by conscience.) What our thoughts can't control directly seems like an objective, higher-level truth—that's how the algorithm feels from the inside.

    Furthermore, see psychopaths. They don't seem to have the same mental machinery of conscience, so the utility of their harmful intentions doesn't get the same correction factor. And so immoral they become.

    Why don't we separate the semantic from the metaphysical question? In the question "is morality preference?", 'morality' can mean "moral language" or "moral facts". So there are two possible questions: (i) What is the nature and status of moral claims—do moral claims have truth values at all, or are they just expressions of preference which, like exclamations ("boo!"), do not have truth values? (ii) Are there moral facts, or are there just 'brute', natural facts?

    Both questions are related but can be separated.

    (i) Do moral statements make claims to truth? (ii) Are there moral facts?

    Now there are four possible combinations of answers to the two questions:

    1) yes/yes — the classic realist position of Platonism and Realism: our moral statements do have truth values, and there are moral facts.
    2) no/no — expressivist non-cognitivism: our moral language is just an expression of our preferences and does not even have truth values; "murder is morally bad" is equivalent to "boo murder!", so to speak of moral facts is nonsensical.
    3) yes/no — Mackie's error theory: if we understand our moral language correctly, it does make truth claims, but since there are no moral facts, all such claims are false.
    4) no/yes — an unlikely position for anyone to hold: the claim that there are moral facts, but our moral language does not even try to express them.

    I guess that this will lead to your concept of volition, won't it?

    Anyway, is Obert really arguing that morality is entirely outside the mind? Couldn't the "fact" of morality that he is trying to discover be derived from his (or humanity's, or whatever) brain design? And if you tweak Subhan's definition of "want" enough, couldn't they actually reach agreement?

    Justice. Without liberty there cannot be full justice.

    Please re-read Machiavelli's Discourses: you will find he answers these questions beautifully.

    You are ALL violating the primary commandment of reasoned argument: Thou Shalt Operationally Define Your Terms.

    I ask for explanations, and you give me labels. How am I supposed to know what you mean by the label? How is anyone else supposed to? I'm sure everyone will be in favor of 'justice', but everyone will attach a different meaning to the term, and be in favor of their own private interpretation above others'.

    Don't Taboo the word. You'll just replace one word with another. Provide an operational definition.

    Regarding the first question,

    Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?
    I think the meaning of "it is (morally) right" may be easiest to explain through game theory. Humans in the EEA had plenty of chances for positive-sum interactions, but consistently helping other people runs the risk of being exploited by defection-prone agents. Accordingly, humans may have evolved a set of adaptations to exploit non-zero-sumness between cooperating agents, but also to avoid cooperating with defectors. Treating "X is (morally) right" as a warning of the form "If you don't do X, I will classify that as defection" explains a lot. Assume a person A has just (honestly) warned a person B that "X is the right thing to do":

    • If B continues not to do X, A will likely be indignant; indignation means A will be less likely to help B in the future (which makes sense according to game theory), and A might also recommend the same to other members of the tribe.
    • B might accept the claim about rightness; this will make it more likely for him to do the "right" thing. Since, in the EEA, being ostracized by the tribe would result in a significant hit to fitness, it's likely for there to be an adaptation predisposing people to evaluate claims about rightness in this manner.
    • B's short-term desires might override his sense of "moral rightness", leading to him doing the (in his own conception) "wrong" thing. While B can choose to do the wrong thing, he cannot change which action is right by a simple individual decision, since the whole point of evaluating rightness at all is to evaluate it the same way as other people you interact with.

    According to this view, moral duties function as rules which help members of a society to identify defectors (by defectors violating them).
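
    (A minimal toy model of this framing—my own sketch in Python, not the commenter's; the payoff numbers and the single-warning setup are made up for illustration. The only mechanism modeled is that indignation withdraws future cooperation, so over enough repeated rounds the defector comes out behind:)

        # Toy model (hypothetical payoffs) of "moral claims as defection warnings".
        # A has warned B that sharing is "right"; refusal counts as defection.

        def play(b_cooperates, rounds=10):
            """Return total payoffs (A, B) over repeated interactions.

            Each cooperative round is positive-sum: +2 to each player.
            A defection grabs a one-time +3 for B, after which A turns
            indignant and withholds cooperation in every later round.
            """
            a_total = b_total = 0
            a_indignant = False
            for _ in range(rounds):
                if a_indignant:
                    continue              # A no longer helps B: 0 for both
                if b_cooperates:
                    a_total += 2          # positive-sum exchange
                    b_total += 2
                else:
                    b_total += 3          # B exploits A once...
                    a_indignant = True    # ...and is classified as a defector
            return a_total, b_total

        print(play(True))    # (20, 20): steady cooperation
        print(play(False))   # (0, 3): defection forfeits the long-run surplus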

    "I think the meaning of "it is (morally) right" may be easiest to explain through game theory."

    Game theory may be useful here, but it is only a low-level efficient means to an end. It might explain social hierarchies in our past or in other species, it might explain the evolution of law, and it might be the highest rung on the Moral ladder some stupid or mentally impaired individuals can achieve. For instance, a higher Morality system than waiting for individuals to turn selfish before punishing them is to ensure parents aren't abusive and childhood cognitive development opportunities exist. A basic pre-puberty or pre-25 social safety net is an improvement on game theory in reaching that tiled max-morality place.

    This no-morality line of reasoning might have some relevance if that happy place is a whole volume of different states. There are likely trade-offs between novel experiences and known preferences, quite apart from harvesting unknown/dangerous energy resources. I know someone who likes cop shows and takes sleeping pills. This individual can sometimes watch all his favourite Law + Order reruns as if they were original. Maybe I'm a little jealous here, in that I know every episode of Family Guy off by heart.

    Just because you don't know if there are Moral consequences doesn't mean there aren't. The key question is whether you have the opportunity to easily learn about your moral sphere of influence. An interesting complication mentioned is how to know whether what you think is a good act isn't really bad. In my above forest example, cutting a forest into islands makes those islands more susceptible to invasive species, and suppressing a natural insect species might make forests less sustainable over the long term. But that is a question of scientific method and epistemology, not ontology. Asking whether setting fire to an orphanage is Morally equivalent to making a difficult JFK-esque judgement is silly. Assuming they are equivalent assumes that because you don't know the answer to any given question, everyone else doesn't know either. I'm sure they cover this at some point in the Oxford undergraduate curriculum.

    What we know about the causal origins of our moral intuitions doesn't obviously give us reason to believe they are correlated with moral truth.

    But what we know about morality, we know purely thanks to the causal origin. If you see no obvious connection to moral truth, then either it is purely a coincidence that we happen to believe correctly, or else it is not and you're failing to see something. If it is purely a coincidence, then we may as well give up now.

    Philosophical dialogues are the best fiction. :)

    So agreeing with Subhan, I think we just do what we "want", maximizing the good feelings generated by our decisions.

    Maximize the satisfaction of wants, maybe, but not just good feelings.

    Subhan: "Now you're just being incoherent. Your brain computes something it wants to do whether it wants to or not?"

    One subself computes something it wants to do whether other subselves want to or not.

    Subhan's explanation is coherent and believable, but he has to bite a pretty big bullet. I happen to like helping people, Hitler happens to like hurting people, and we can both condemn each other if we want but both of our likes are equally valid.

    I think most people who think about morality have long realized Subhan's position is a very plausible one, but don't want to bite that bullet. Subhan's arguments confirm that the position is plausible, but they don't make the consequences any more tolerable. I realize that appeal to consequences is a fallacy and that reality doesn't necessarily have to be tolerable, but I don't feel anywhere near like the question has been "dissolved".

    Disagreeing with Mr. Huggan, I'd say Obert is the one without a clue.

    Obert seems to be trying to find some external justification for his wants, as if it's not sufficient that they are his wants; or as if his wants depend on there being an external justification, and his mental world would collapse if he were to acknowledge that there isn't an external justification.

    I would compare morality to patriotism in the sense of the Onion article that Robin Hanson recently linked to. Much like patriotism, morality is something adopted by people who like to believe in Great Guiding Ideas. Their intellect drives them to recognize that the idea of a god is ridiculous, but the religious need remains, so they try to replace it with a principle. A self-generated principle which they try to think is independent and universal and not self-generated at all. They create their own illusion as a means of providing purpose for their existence.

    I happen to like helping people, Hitler happens to like hurting people, and we can both condemn each other if we want but both of our likes are equally valid.
    Rather, "validity" equally fails to be meaningful for both of you. I think the "equally valid" phrasing makes it too easy to slip into saying silly things like "because all values are equally valid, you must never condemn another's." On the pure preference view, there's no reason not to condemn and even fight others because of their wants if doing so increases satisfaction of your own.

    It's possible that if Hitler had known more (about the Jews, the causal origins of nationalism, and, most importantly, what it was like to be in Auschwitz) and thought better (rejected nationalism as arbitrary, suffered less political self-deception, recognized the expected consequences of war) he would have done very differently. If so, I see this as sufficient to say he was wrong. The math of rationality says nothing about empathy for women or people with different-colored skin (or anybody), but humans, or at least human societies in the long run, have a hard time maintaining such arbitrary distinctions. That the circle of empathy doesn't expand quicker just shows the strength of self-interested biases/subselves.

    I don't know where this leaves genuine psychopaths, though.

    denis bider: I would compare morality to patriotism in the sense of the Onion article that Robin Hanson recently linked to. Much like patriotism, morality is something adopted by people who like to believe in Great Guiding Ideas.

    Daniel B. Klein calls this the People's Romance (I'm not sure whether this idea has been explored in meta-ethics or moral philosophy, though of course it's well known among sociologists). But for such a morality to be sustainable it would have to be a Schelling coordination point, so it would still be "independent" and not self-generated.

    Thanks for the link to The People's Romance!

    (Constant quoted from someone:)"What we know about the causal origins of our moral intuitions doesn't obviously give us reason to believe they are correlated with moral truth."

    Yes, but to a healthy intelligent individual not under duress, these causal origins (I'm assuming the reptilian or even mammalian brain centres are being referenced here) are much less a factor than is abstract knowledge garnered through education. I may feel on some basic level like killing someone who gives me the evil eye, but these impulses are easily subsumed by social conditioning and my own ideals of myself. Claiming there is a very small chance I'll commit evil is far different from claiming I'm a slave to my reptilian desires. Some people are slaves to those impulses; courts generally adjust for mental illness.

    (denis bider wrote:) "Obert seems to be trying to find some external justification for his wants, as if it's not sufficient that they are his wants; or as if his wants depend on there being an external justification, and his mental world would collapse if he were to acknowledge that there isn't an external justification."

    To me, this reads as saying that if solipsism were true, Obert would have to become a hedonist. Correct. Or are you claiming Obert needs some sort of status? I didn't read that at all. Patriotism doesn't always seek utilitarianism, as one's nation is only a small portion of the world's population. Morality does. Denis, are you claiming there is no way to commit acts that make others happy? Or are you claiming such an act is always out of self-interest? The former position is absurd; the latter runs into the problem that people who jump on grenades, die.

    I'm guessing there is a cognitive bias found in some/many of this blog's readers and thread-starters: because they know they are in a position of power vis-a-vis the average citizen, they are looking for any excuse not to accept moral responsibility. This is wrong. A middle-class Western individual, all else equal, is morally better by donating conspicuous-consumption income to charity than by exercising the Libertarian market behaviour of buying luxury goods. I'm not condemning the purchasing behaviour; I'm condemning the Orwellian justification of trying to take (ego) pleasure in not owning up to your own consumption. If you are smart enough to construct such double-think, you can be smart enough to live with your conscience.

    Obert does not take the Morally correct position just to win the argument with idiot Subhan. There are far deeper issues that can be debated on this blog about these issues, further up the Moral ladder. For instance, there are active legal precedents being formed in real-world law right now that could be influenced, were this content to avoid retracing what is already known.

    On the question of what voluntary transaction you engage in with your money, libertarianism is silent (I'm pretty sure the Libertarian party is as well).

    Hitler did think he was helping people. He thought the Jews were out to get him and immiserate mankind; the end result in his vision was world peace. We usually think of him as evil-for-evil's-sake because we fought a large successful war against him.

    Mackie's Error sounds odd to me. How can a meaningless statement be false? If I said "Overcoming Bias is scrumtrulescent!", is that false?

    Sorry TGGP I had to do it. Now replace the word "charity" with "taxes".

    Phillip Huggan: "Denis, are you claiming there is no way to commit acts that make others happy?"

    Why the obsession with making other people happy?

    Phillip Huggan: "Or are you claiming such an act is always out of self-interest?"

    Such acts are. Stuff just is. Real reasons are often unknowable; and if known, would be trivial, technical, mundane.

    In general, I wouldn't say self-interest. It is not in your self interest to cut off your penis and eat it, for example. But some people desire it and act on it.

    Desire. Not necessarily logical. Does not necessarily make sense. But drives people's actions.

    Reasons for desire? Unknowable.

    And if known?

    Trivial. Technical. Mundane.

    Phillip Huggan: "The former position is absurd, the latter runs into the problem that people who jump on grenades, die."

    I can write a program that will erase itself.

    Doesn't mean that there's an overarching morality of what programs should and should not do.

    People who jump on grenades do so due to an impulse. That impulse comes from cached emotions and thoughts. You prime yourself that it's romantic to jump on a grenade, you jump on a grenade. Poof.

    Stuff is. Fitting stuff that happens into a moral framework? A hopeless endeavor for misguided individuals seeking to fulfil the romantic notion that things should make sense.

    Phillip Huggan: "A middle class western individual, all else equal, is morally better by donating conspicious consumption income to charity, than by exercising the Libertarian market behaviour of buying luxury goods."

    Give me a break. You gonna contribute to a charity to take care of all the squid in the ocean? The only justification not to is if you invent an excuse why they are not worth caring about. And if not the squid, how about gorillas, then? Baboons, and chimpanzees?

    If we're going to intervene because a child in Africa is dying of malaria or hunger - both thoroughly natural causes of death - then should we not also intervene when a lion kills an antelope, or a tribe of chimpanzees is slaughtered by their neighbors?

    You have to draw a line somewhere, or else your efforts are hopeless. Most people draw the line at homo sapiens. I say that line is arbitrary. I draw it where it makes sense. With people in my environment.

    "Why the obsession with making other people happy?"

    Not obsessed. Just pointing out the definition of morality. High morality is making yourself and other people happy.

    Phillip Huggan: "Or are you claiming such an act is always out of self-interest?" (D.Bider:) Such acts are. Stuff just is. Real reasons are often unknowable; and if known, would be trivial, technical, mundane.

    That's deep.

    "Stuff is. Fitting stuff that happens into a moral framework? A hopeless endeavor for misguided individuals seeking to fulfil the romantic notion that things should make sense."

    To me, there is nothing unintelligible about the notion that my acts can have consequences. Generally I'm not preachy about it, as democracy and ethical investing are appropriate forums to channel my resources towards in Canada. But the flawed line of reasoning that knowledge can never correlate with reality only finds salvation in solipsism, not a very likely scenario IMO. These kinds of reasoning are used by tyrants, for the record (it is God's will, it is for the national good, etc.).

    "If we're going to intervene because a child in Africa is dying of malaria or hunger - both thoroughly natural causes of death - then should we not also intervene when a lion kills an antelope, or a tribe of chimpanzees is slaughtered by their neighbors?"

    Natural doesn't make it good. I'd value the child more highly because his physiology is better known (language and written records help) in how to keep him happy, and more importantly because he could grow up to invent a cure for malaria. Yes, eventually we should intervene by providing the chimps with mechanical dummies to murder, if murder makes them happy. Probably centuries away from that. It's nice that you draw the line around at least a group of others, but you seem to be using your own inability to understand Morality as evidence that others who have passed you on the Moral ladder should come back down. You shouldn't be so self-conscious about this, and certainly shouldn't be spreading the meme. I don't understand chemistry well, or computer programming at all, but I don't go loudly proclaiming fake computer-programming syntax or claiming that atoms don't exist, like EY is inciting here and like you are following. I'm not calling you evil. I'm saying you probably have the capacity to do more good, assuming you are middle class and blowing money on superfluous status-symbol consumer goods. Lobbying for a luxury tax is how I would voice my opinion, a pretty standard avenue I learned from a Maclean's magazine back issue. Here, my purpose is to deprogram as many people as possible stuck in a community devoted to increasing longevity, but using means (such as lobbying for the regression of law) whose meme-spread promotes the opposite.

    @huggan

    "But the flawed line of reasoning that knowledge can never correlate with reality only finds salvation in solipsism."

    It's not going to get him even that far, actually. The view he espouses doesn't seem exactly as you define it—that knowledge can never correlate with reality—but I think, based on Bider's overall postings, he is attempting cynicism, which is of course a self-contradicting philosophy prima facie, as Cicero noted, unless ironic.

    Bider seems sincere in his comment, not ironic, so thus he appears a classic cynic, altho' without the wit or intensity of say, Juvenal, to recommend him.

    This takes us of course to Robin's famed meditation on cynicism. . . .I recommend that to Bider. Now, I'm outta here!

    Another example of a real-world Moral quandary that the real world would love H+ discussion lists to take on is the issue of how much medical care to invest in end-of-life patients. Medical advances will continue to make more expensive treatment options available. In Winnipeg, there was a case recently where a patient in a terminal coma had his family insist on not taking him off life support. In Canada in the last decade or so, the decision was based on a doctor's prescription; now it also encompasses the family and the patient's previous wishes. Three doctors quit over the case. My first instinct was to suggest doctors be trained exclusively to be coma experts, but it seems medical boards might already have accomplished this. I admire a fighting spirit, and one isolated case doesn't tax the healthcare system much. But if this becomes a regular occurrence... This is another of many real-world examples that require intelligent thought.

    Subhan's position has already been proven wrong many, many times. There are cognitive biases, but they aren't nearly as strong or all-encompassing as is being suggested here. For example, I'd guess every reader on this list is aware that other people are capable of suffering and feeling happiness that corresponds with their own experiences. This isn't mirror neurons or some other "bias"; it is simple grade-school deduction that refutes Subhan's position. You don't have to be highly Moral to admit it's out there in some people. For instance, most children get what Subhan doesn't.

    Phillip Huggan - let me just say that I think you are an arrogant creature that does much less good to the world than he thinks. The morality you so highly praise only appears to provide you with a reason to smugly think of yourself as "higher developed" than others. Its benefit to you, and its selfish motivation, is plainly clear.

    frelkins: Should I apologize, then, for not yet having developed sufficient wit to provide pleasure with style to those readers who are not pleased by the thought?

    Cynicism is warranted to the extent that it leads to a realistic assessment and a predictive model of the world.

    Cynicism is exaggerated when it produces an unrealistic, usually too pessimistic, model of the world.

    But to the extent that cynicism is a negative evaluation of "what is", I am not being a cynic in this topic.

    I am not saying, bitterly, how sad it is that most people are really motivated by their selfishness, and how sad the world is because of this, etc.

    What I am saying is that selfishness is okay. That recognizing your selfishness is the healthiest state. I am saying not that people who are selfish are corrupting the world. I am saying that people who are self-righteous are.

    I understand people who want to reshape the world because they want it to be different, and are honest about this selfish preference and endeavor. I respect that.

    What I don't respect is people who are self-righteous in thinking that they know how to reshape the world to make other people happy, and do not see how self-anchored their motivation is. They are trying to do the same thing as those people who want to reshape the world selfishly. But the self-righteous ones, they sell what they are doing as being "higher on a moral ladder", because, obviously, they know what is good for everyone.

    I think that sort of behavior is just pompous, arrogant, and offensive.

    Be honest. Do things because of you. Don't do things because of others. Then, we can all come together and agree sensibly on how to act as to not step on each other's toes.

    But don't be running around "healing" the world, pretending like you're doing it a favor.

    Hi,

    Very interesting blog.

    I have a question that I would really like people's input on.

    We know that people have a tendency to be against foreigners (the out-group) and to be for their own countrymen (the in-group). However, there are plenty of examples where citizens dislike their countrymen and do not associate with them.

    Obert in the thread above had the example of German resistance during WWII, and we have Aborigines in many countries, and perhaps punkers and other movements.

    My first question is whether you can think of more examples where long-term citizens in a country dislike (probably even more than they dislike other countries!) that country (their other countrymen)?

    My second question is what we may call this: is it dissociation, aversion, disaffiliation, or what concept best describes and covers this phenomenon?

    Thanks, Peter

    I am reading through the meta-ethics sequence for the first time. One thing I couldn't help but observe in this dialogue which I thought was interesting:

    Obert: "Duties, and should-ness, seem to have a dimension that goes beyond our whims. If we want different pizza toppings today, we can order a different pizza without guilt; but we cannot choose to make murder a good thing."

    It seemed odd to me that Subhan didn't mention regret at having made a difficult choice between competing wants—such as wondering whether you should've taken up piano playing instead of plumbing, or whatever—as being possibly something like the kind of negative feelings we get from guilt. We can't always order a different pizza without some sense of loss.

    If you have no desire for anything but cheese on your pizza, you will not have any regrets if you order cheese.

    (I'm going through the Sequences myself. Nice to not be the gravedigger every time. Hopefully this will be a useful contribution, not refuted by the next two posts in the Metaethics sequence.)

    Although I appreciated the actual point of the post, I was hung up on one part in the beginning: why can't I decide to like salad better than cheeseburgers? I don't see any process which would prevent one from over-writing (over time) one's current preferences. Many people (in the USA at least) make their food decisions based on how many exclamation points are on the front (GLUTEN-FREE!! Less MSG!!!) or other purely psychological reasons (brand name—I've talked with many people who prefer one milk over another when I know for a fact that they come from the same company within the same hour of each other), which have nothing to do with their taste receptors. Similarly, pleasure and pain are not just based on nociceptors—would the tattoo-covered extreme man have been so eager to endure the tattooing process when he was a five-year-old boy? In these cases, it seems to me, the end (body health (no matter how misguided), and much-desired attention) increases the desirability of the means (unpalatable foods, relatively unnecessary pain and risk of infection). Stockholm Syndrome, anyone?

    If I could achieve some wonderful thing by showing in an fMRI that I prefer eating salad and avoiding cheeseburgers (say, a million dollars or a free mind-upload, not just reduced risk of heart disease), I'll be first in line.


    "If I could achieve some wonderful thing by showing in an fMRI that I prefer eating salad and avoiding cheeseburgers, I'll be first in line."

    It was "make salad taste better than cheeseburgers", not "prefer to eat salad". This analogy may be muddled by the fact that tastes can in fact be deliberately changed over time; wherein belief in belief can actually become belief, become reality. But the fact remains that if someone offered you a millions dollars, right now, to truthfully claim you prefer the taste of salad when you in fact did not, you would fail.

    I enjoy the sequences in story/dialog form most of all. But all too often the points conveyed in this style seem less significant and helpful to me; typically they are hashing out something that is largely semantics or else describing philosophical tropes.

    It doesn't seem to me that Obert is really a moral objectivist, as the duo's names would suggest—I think their argument is really one of semantics. When he says, "Duties, and should-ness, seem to have a dimension that goes beyond our whims," he is using the word "whim" as a synonym for "want". Subhan merely has a more inclusive definition of want: "what a brain ultimately decides to do". It does not seem that Obert would object to the idea that moral constructs are created and stored in the mind, nor would Subhan reject that the brain's utility function has many differently ordered terms.

    The brain sometimes arrives at decisions contrary to immediate whims.


    To be continued...

    A link here would be nice.

    Continued in the next post of the sequence.

    This sounds like a "tree falling in the woods" type argument, at least the way you have it laid out here. They're using the word "want" to mean fundamentally different things. Subhan is using "want" to include all mental processes that encourage you to behave in a certain way, which I think is a categorization error that is causing him to come to wrong conclusions.

    You cannot decide to make salad taste better to you than cheeseburgers

    Sure you could; just eat emetic-laced cheeseburgers at half a dozen random times over the next week.

    Why was this reposted?