Followup to: The Bedrock of Fairness

Discussions of morality seem to me to often end up turning around two different intuitions, which I might label morality-as-preference and morality-as-given.  The former crowd tends to equate morality with what people want; the latter to regard morality as something you can't change by changing people.

As for me, I have my own notions, which I am working up to presenting.  But above all, I try to avoid avoiding difficult questions.  Here are what I see as (some of) the difficult questions for the two intuitions:

  • For morality-as-preference:   
    • Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?  Why are the two propositions argued in different ways?   
    • When and why do people change their terminal values?  Do the concepts of "moral error" and "moral progress" have referents?  Why would anyone want to change what they want?   
    • Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"?  Does the notion of morality-as-preference really add up to moral normality?
  • For morality-as-given:   
    • Would it be possible for everyone in the world to be wrong about morality, and wrong about how to update their beliefs about morality, and wrong about how to choose between metamoralities, etcetera?  So that there would be a morality, but it would be entirely outside our frame of reference?  What distinguishes this state of affairs, from finding a random stone tablet showing the words "You should commit suicide"?   
    • How does a world in which a moral proposition is true differ from a world in which that moral proposition is false?  If the answer is "not at all", how does anyone perceive moral givens?   
    • Is it better for people to be happy than sad?  If so, why does morality look amazingly like godshatter of natural selection?   
    • Am I not allowed to construct an alien mind that evaluates morality differently?  What will stop me from doing so?

 

Part of The Metaethics Sequence

Next post: "Is Morality Preference?"

Previous post: "The Bedrock of Fairness"

40 comments

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?
Because they believe the answer to "is it right that I want the pie?" isn't always "yes."

Is it ok to mash the two options together? I'd take the position that morality is about what people want, but that since it is about something real (wants), and thus objective/quantifiable/etc., you can make statements about these real things that are actually true or false and not subject to whims.

To take a stab at a few of these...

Some terminal values can't be changed (or only very slightly); they are the ones we are born with. Aversion to pain, desire for sex, etc. The more malleable ones that can be changed are never changed through logic or reasoning. They are changed through things like praise, rewards, condemnation, punishments. I'm not sure if it's possible for people to change their own malleable terminal values. But people can change others' malleable terminal values (and likewise, have their own terminal values changed by others) through such methods. Obviously this is much easier to do very early in life.

I'd also like to propose that all terminal values can also be viewed as instrumental values, based on their tendency to help fulfill or prevent the realization of other values. "Staying alive", for example.

Moral progress is made by empirical observation of what desires/aversions have the greatest tendency to fulfill other desires, and then by strengthening these by the social tools mentioned above.

You can very easily want to change your desires when several of your desires are in conflict. I have a desire to inhale nicotine, and a desire to not get lung cancer, and I realize these two are at odds. I'd much prefer to not have the first desire. If one of your wants has significant consequences (loss of friends, shunning by your family) then you often would really like that want to change.

"Doing something they shouldn't" or "wanting something they know is wrong" are demonstrations of the fact that all entities have many desires, and sometimes these desires are in conflict. A husband might want to have an extra-marital affair due to a desire for multiple sexual partners, and yet "know it's wrong" due to an aversion to hurting his wife, or losing his social status, or alienating his children, or various other reasons.

Am I not allowed to construct an alien mind that evaluates morality differently? What will stop me from doing so?

Can you elaborate on this? You seem to be using "allowed" in a strange way. If you have the means to do this, and others lack the means to physically restrain you from doing so, then the only thing that would stop you would be your own desires and aversions.

I don't think you're talking about my sort of view* when you say "morality-as-preference", but:

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?

A commitment to drive a hard bargain makes it more costly for other people to try to get you to agree to something else. Obviously an even division is a Schelling point as well (which makes a commitment to it more credible than a commitment to an arbitrary division).

When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?

I think humans tend not to have very clean divisions between instrumental and terminal values. Although there is no absolute moral progress or error, some moralities may be better or worse than others by almost any moral standard a human would be likely to use. Through moral hypocrisy, humans can signal loyalty to group values while disobeying them. Since humans don't self modify easily, a genuine desire to want to change may be a cost-effective way to improve the effectiveness of this strategy.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?

See above on signaling and hypocrisy.

*moral nihilist with instrumental view of morality as tool for coordinating behaviour.

It seems to me that the problems of morality-as-preference come from treating humans as monolithic. Real humans have internal complex ecosystems of agents and are embedded in larger social systems of agents. As such, they would be expected to shift dominant values, maybe even to shift terminal values, as an agent builds a new agent to accomplish its goals by optimizing for some proxy to its goals and eventually finds that new agent pursuing that proxy against its explicit preferences. Plato viewed morality as well-ordered relationships between agents, that is, presumably some sort of attractor in the space of possible such relationships which leads to most of the reasonably high-level agents flourishing in the medium term and the very high-level ones flourishing in the long term.

Consistent with the above, morality as a given can simply be part of the universe or multiverse as a given, but this is hard to express. It is a given that certain configurations are perceptions of moral "wrongness" and others are perceptions of moral "rightness".

I think the answer (to why this behavior adds up to normality) is in the spectrum of semantics of knowledge that people operate with. Some knowledge is primarily perception, and reflects what is clearly possible or what clearly already is. Other kind of "knowledge" is about goals: it reflects what states of environment are desirable, and not necessarily which states are in fact possible. These concepts drive the behavior, each pushing in its own direction: perception shows what is possible, goals show where to steer the boat. But if these concepts have similar implementation and many intermediate grades, it would explain the resulting confusion: some of the concepts (subgoals) start to indicate things that are somewhat desirable and maybe possible, and so on.

In the case of moral argument, what a person wants corresponds to pure goals and has little feasibility part in it ("I want to get the whole pie"). "What is morally right" adds a measure of feasibility, since such a question is posed in the context of many people participating at the same time; since everyone getting the whole pie is not feasible, it is not an answer in that case. Each person is a goal-directed agent, operating towards certain a priori infeasible goals, plotting feasible plans towards them. In the context of society, these plans are developed so as to satisfy the real-world constraints that it imposes.

Thus, "morally right" behavior is not the content of goal-for-society, it is an adapted action plan of individual agents towards their own infeasible-here goals. How to formulate the goal-for-society, I don't know, but it seems to have little to do with what presently forms as morally right behavior. It would need to be derived from goals of individual agents somehow.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?

It's all about delicious versus nutritious. That is, these conflicts are conflicts between different time horizons, or different discount values for future costs and benefits. Evolution has shaped our time horizon for making relatively short term decisions (Eat the pie now. It will taste good. There may not be another chance.), but we live in a world where a longer term is more appropriate (The pie may taste good, but it isn't good for my health. Also, I may benefit in the long term by giving the pie to somebody else.).
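To make the time-horizon point concrete, here is a minimal sketch in Python, with made-up payoffs and simple exponential discounting (both are assumptions for illustration, not anything stated in the comment). The same pie decision flips depending on how steeply future costs and benefits are discounted:

    # Toy illustration: the same choice flips with the discount factor.
    def discounted_value(payoffs, discount):
        """Sum of payoff_t * discount**t for t = 0, 1, 2, ..."""
        return sum(p * discount ** t for t, p in enumerate(payoffs))

    eat_pie_now   = [10, -4, -4, -4]   # tasty now, small health costs later (invented numbers)
    give_pie_away = [0, 5, 5, 5]       # nothing now, longer-term benefits (invented numbers)

    for discount in (0.3, 0.9):        # short vs. long effective time horizon
        eat = discounted_value(eat_pie_now, discount)
        give = discounted_value(give_pie_away, discount)
        better = "eat it now" if eat > give else "give it away"
        print(f"discount={discount}: eat={eat:.2f}, give={give:.2f} -> {better}")

With the steep discount (short horizon) eating the pie now wins; with the shallow discount the longer-term option wins, which is the "delicious versus nutritious" conflict in miniature.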

All good questions.

Try replacing every instance of 'morality' with 'logic' (or 'epistemic normativity' more broadly). Sure, you could create a mind (of sorts) that evaluated these things differently -- that thought hypocrisy was a virtue, and that contradictions warranted belief -- but that's just to say that you can create an irrational mind.

Venu

I share neither of those intuitions. Why not stick with the obvious option of morality as the set of evolved (and evolving) norms? This is it; looking for the "ideal" morality would be passing the recursive buck.

This does not compel me to abandon the notion of moral progress though; one of our deepest moral intuitions is that our morality should be (internally) consistent, and moral progress, in my view, consists of better reasoning to make our morality more and more consistent.

"moral progress, in my view, consists of better reasoning to make our morality more and more consistent"

Right, so morality is not our [actual, presently existing] "set of evolved norms" at all, but rather the [hypothetical, idealized] end-point of this process of rational refinement.

The questions posed by Eliezer are good but elementary. Since there is an entire class of people--moral philosophers--who have been professionally debating and (arguably) making progress on these issues for centuries, why do we believe that we can make much progress in this forum?

I claim that it is highly unlikely that anyone here has an exceptional insight (because of Bayesianism or whatever) that could cause a rational person to assign appreciable importance to this discussion for the purposes of forming moral beliefs. In other words, if we want to improve our moral beliefs, shouldn't we all just grab a textbook on introductory moral philosophy?

Or is this discussion merely an exercise?

Lake

There's at least one other intuition about the nature of morality to distinguish from the as-preference and as-given ideas. It's the view that there are only moral emotions - guilt, anger and so on - plus the situations that cause those emotions in different people. That's it. Morality on this view might profitably be compared with something like humour. Certain things cause amusement in certain people, and it's an objective fact that they do. At the same time, if two people fail to find the same thing funny, there wouldn't normally be any question of one of them failing to perceive some public feature of the world. And like the moral emotions, amusement is sui generis - it isn't reducible to preference, though it may often coincide with it. The idea of being either a realist or a reductionist about humour seems, I think, absurd. Why shouldn't the same go for morality?

Venu

@Richard I agree with you, of course. I meant there exists no objective, built-into-the-fabric-of-the-universe morality which we can compute using an idealised philosopher program (without programming in our own intuitions that is).

Jess - "shouldn't we all just grab a textbook on introductory moral philosophy?"

That would seem ideal. I'd recommend James Rachels' The Elements of Moral Philosophy for a very engaging and easy-to-read introductory text. Though I take it Eliezer is here more interested in meta-ethics than first-order moral inquiry. As always, the Stanford Encyclopedia of Philosophy is a good place to start (then follow up Gibbard and Railton, especially, in the bibliography).

On the other hand, one shouldn't let the perfect be the enemy of good discussion. Better to reinvent the wheel than to go without entirely!

poke

My response to these questions is simply this: Once the neurobiology, sociology, and economics are in, these questions will either turn out to have answers or to be the wrong questions (the latter possibility being the much more probable outcome). The only one I know how to answer is the following:

Do the concepts of "moral error" and "moral progress" have referents?

The answer being: Probably not. Reality doesn't much care for our ways of speaking.

A longer (more speculative) answer: The situation changes and we come up with a moral story to explain that change in heroic terms. I think there's evidence that most "moral" differences between countries, for example, are actually economic differences. When a society reaches a certain level of economic development the extended family becomes less important, controlling women becomes less important, religion becomes less important, and there is movement towards what we consider "liberal values." Some parts of society, depending on their internal dynamics and power structure, react negatively to liberalization and adopt reactionary values. Governments tend to be exploitative when a society is underdeveloped, because the people don't have much else to offer, but become less exploitative in productive societies because maintaining growth has greater benefits. Changes to lesser moral attitudes, such as notions of what is polite or fair, are usually driven by the dynamics of interacting societies (most countries are currently pushed to adopt Western attitudes) or certain attitudes becoming redundant as society changes for other reasons.

I don't give much weight to people's explanations as to why these changes happen ("moral progress"). Moral explanations are mostly confabulation. So the story that we have of moral progress, I maintain, is not true. You can try to find something else and call it "moral progress." I might argue that people are happier in South Korea than North Korea and that's probably true. But to make it a general rule would be difficult: baseline happiness changes. Most Saudi Arabian women would probably feel uncomfortable if they were forced to go out "uncovered." I don't think moral stories can be easily redeemed in terms of harm or happiness. At a more basic level, happiness just isn't the sort of thing most moral philosophers take it to be: it's not something I can accumulate, and it doesn't respond in the ways we want it to. It's transient and it doesn't track supposed moral harm very well (the average middle-class Chinese is probably more traumatized when their car won't start than they are by the political oppression they supposedly suffer). Other approaches to redeeming the kinds of moral stories we tell are similarly flawed.

I think I'm echoing Eneasz when I ask: how does Preference Utilitarianism fit into this scheme? In some sense, it's taken as given that the aim is to satisfy people's preferences, whatever those are. So which type of morality is it?

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?

These really are different statements. "I am entitled to fraction x of the pie" means more or less the same as "a fair judge would assign me fraction x of the pie".

But a fair judge just means the judge has no personal relationship with any of the disputing parties and makes his decision based on some rational process, not arbitrarily. It isn't necessarily true that there's a unique solution that a fair judge would decide upon. One could say that whoever saw it first or touched it first is entitled to the whole pie, or that it should be divided strictly equally, or that it should be divided on the basis of need or merit, or he could even go for the gods-must-be-crazy/idiocy-of-Solomon solution and say it's better that the pie be destroyed than allowed to exist as a source of dissent. In my (admittedly spotty) knowledge of anthropology, in most traditional pie-gathering societies, if three members of a tribe found a particularly large and choice pie they would be expected to share it with the rest of the tribe, but they would have a great deal of discretion as to how the pie was divided, and they'd keep most of it for themselves and their allies.

This is not to say that morality is nothing but arbitrary social convention. Some sets of rules will lead to outcomes that nearly everyone would agree are better than others. But there's no particular reason to believe that there could be rules that everyone will agree on, particularly not if they have to agree on those rules after the fact.

Poke - "most 'moral' differences between countries, for example, are actually economic differences"

I'd state that slightly differently: not that moral differences just are economic differences (they could conceivably come apart, after all), but rather, moral progress is typically caused by economic progress (or, even more likely, they are mutually reinforcing). In other words: you can believe in the possibility of moral progress, i.e. of changes that are morally better rather than worse, without buying into any particular explanatory story about why this came to be.

(Compare: "Most 'height' differences between generations... are actually nutritional differences." The fact that we now eat better doesn't undo the fact that we are now taller than our grandparents' generation. It explains it.)

Richard: I would say that moral 'progress' is caused by economics as well, but in a complex manner. Historically, in Western Civilization, possibly due to the verbalized moral norm "do unto others as you would have others do unto you" plus certain less articulate ideas of justice as freedom of conscience, truth, and vaguely 'equality', there is a positive feedback cycle between moral and economic 'progress'. We could call this "true moral progress".

However, the basic drive comes from increased wealth driving increased consumption of the luxury "non-hypocrisy", which surprisingly turns out to be an unrecognized factor of production. Economic development can cause societies with other verbalized governing norms to travel deeper into the "moral abyss", e.g. move away from the attractor that Western Civilization moves towards instead. Usually, this movement produces negative feedback, as it chokes off economic progress, which happens to benefit from movement towards Western moral norms within a large region of possibility space stretching out from the evolutionary psychology emergent default.

In rare cases, however, it may be possible for positive feedback to drive a culture parasitically down into the depths of the "moral abyss". This could happen if a culture discovers a road to riches in the form of decreased production, which is possible if that culture is embedded in an international trade network and highly specialized in the production of a good with inelastic supply. In this case, the productivity losses that flow from moral reform can serve as a form of collusion to reduce production, driving up the price.

vasser, you're conflating 'morality' both with non-hypocrisy AND some vaguely-alluded-to social interaction preferences.

Having enough wealth to be able to afford to abolish class divisions only permits non-hypocrisy if you've been proclaiming that class division should be abolished. You seem to be confusing certain societal political premises with 'morality', then calling the implementation of those premises 'moral progress'.

I fall closer to the morality-as-preference camp, although I'd add two major caveats.

One is that some of these preferences are deeply programmed into the human brain (e.g. "Punish the cheater" can be found in other primates too), as instincts which give us a qualitatively different emotional response than the instincts for direct satisfaction of our desires. The fact that these instincts feel different from (say) hunger or sexual desire goes a long way towards answering your first question for me. A moral impulse feels more like a perception of an external reality than a statement of a personal preference, so we treat it differently in argument.

The second caveat is that because these feel like perceptions, humans of all times and places have put much effort into trying to reconcile these moral impulses into a coherent perception of an objective moral order, denying some impulses where they conflict and manufacturing moral feeling in cases where we "should" feel it for consistency's sake. The brain is plastic enough that we can in fact do this to a surprising extent. Now, some reconciliations clearly work better than others from an interior standpoint (i.e. they cause less anguish and cognitive dissonance in the moral agent). This partially answers the second question about moral progress— the act of moving from one attempted framework to one that feels more coherent with one's stronger moral impulses and with one's reasoning.

And for the last question, the moral impulses are strong instincts, but sometimes others are stronger; and then we feel the conflict as "doing what we shouldn't".

That's where I stand for now. I'm interested to see your interpretation.

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?

"I want the pie" is something that nobody else is affected by and thus nobody else has an interest in. "I should get the pie" is something that anybody else interested in the pie has an interest in. In this sense, the moral preferences are those that other moral beings have a stake in, those that affect other moral beings. I think some kind of a distinction like this explains the different ways we talk about and argue these two kinds of preferences. Additionally, evolution has most likely given us a pre-configured and optimized module for dealing with classes of problems involving other beings that were especially important in the environment of evolutionary adaptedness, which subjectively "feels" like an objective morality that is written into the fabric of the universe.

When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?

I think of preferences and values as being part of something like a complex system (in the sense of http://en.wikipedia.org/wiki/Complex_system) in which all the various preferences are inter-related and in constant interaction. There may be something like a messy, tangled hierarchy where we have terminal preferences that are initially hardwired at a very low-level, on top of which are higher-level non-terminal preferences, with something akin to back-propagation allowing for non-terminal preferences to affect the low-level terminal preferences. Some preferences are so general that they are in constant interaction with a very large subset of all the preferences; these are experienced as things that are "core to our being", and we are much more likely to call these "values" rather than "preferences", although preferences and values are not different in kind.

I think of moral error as actions that go against the terminal (and closely associated non-terminal (which feedback to terminal)) and most general values (involving other moral beings) of a large class of human beings (either directly via this particular instance of the error affecting me or indirectly via contemplation of this type of moral error becoming widespread and affecting me in the future). I think of moral progress as changes to core values that result in more human beings having their fundamental values (like fairness, purpose, social harmony) flourish more frequently and more completely rather than be thwarted.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"?

Because the system of interdependent values is not a static system and it is not a consistent system either. We have some fundamental values that are in conflict with each other at certain times and in certain circumstances, like self-interest and social harmony. Depending on all the other values and their interdependencies, sometimes one will win out, and sometimes the other will win out. Guilt is a function of recognizing that something we have done has thwarted one of our own fundamental values (but satisfied the others that won out in this instance) and thwarted some fundamental values of other beings too (not thwarting the fundamental values of others is another of our fundamental values). The messiness of the system (and the fact that it is not consistent) dooms any attempt by philosophers to come up with a moral system that is logical and always "says what we want it to say".

Does the notion of morality-as-preference really add up to moral normality?

I think it does add up to moral normality in the sense that our actions and interactions will generally be in accordance with what we think of as moral normality, even if the (ultimate) justifications and the bedrock that underlies the system as a whole are wildly different. Fundamental to what I think of as "moral normality" is the idea that something other than human beings supplies the moral criterion, whereas under the morality-as-preference view as I described it above, all we can say is that IF you desire to have your most fundamental values flourish (and you are a statistically average human in terms of your fundamental values including things like social harmony), THEN a system that provides for the simultaneous flourishing of other beings' fundamental values is the most effective way of accomplishing that. It is a fact that most people DO have these similar fundamental values, but there is no objective criterion from the side of reality itself that says all beings MUST have the desire to have their most fundamental values flourish (or that the fundamental values we do have are the "officially sanctioned" ones). It's just an empirical fact of the way that human beings are (and probably many other classes of beings that were subject to similar pressures).

These are difficult questions, but I think I can tackle some of them:

Why would anyone want to change what they want?

If a person wants to change from valuing A to valuing B, they are simply saying that they value B, but it requires short-term sacrifices, and in the short term, valuing A may feel psychologically easier, even though it sacrifices B. They thus want to value B so that it is psychologically easier to make the tradeoff.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"?

That's recognizing that they are violating an implicit understanding with others that they would not want others to do to them, and would perhaps hope others don't find out about. They are also feeling a measure of psychological pain from doing so as a result of their empathy with others.

GNZ

People quite often hold contradictory positions simultaneously. There isn't an incentive for people to be entirely consistent; in fact, it is prohibitively expensive in a psychological sense (e.g. voters). It would be even easier for the inconsistencies to occur between what you choose to do and what you think you should do.

Am I not allowed to construct an alien mind that evaluates morality differently? What will stop me from doing so?

No. Morality (and its rules that promote intra-group strength) is an almost mathematical consequence of evolutionary game theory applied to the real world of social animals in the context of Darwinian evolution.

The outcome of the application of EGT to nature is what is called "multilevel selection theory" (Sloan Wilson, E. O. Wilson). This theory has a "one foot" description: within groups, selfish individuals prevail over selfless ones; between groups, the group with selfless individuals prevails over the ones with selfish individuals.

This is the essence of our evolved moral judgements, moral rules, and of all our internal and external moral conflicts.

http://ilevolucionista.blogspot.com/2008/05/entrevista-david-sloan-wilson.html

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways? This seems to be because "I want the pie" is too direct, too obvious; it violates a social convention of anti-selfishness which demands a more altruistic morality. "It is right that I should get the pie" implies some quasi-moral argument to cover the naked selfishness of the proposal in more acceptable, altruistic terms; while the truth of the matter is that either argument is a selfish one, intended merely to take possession of the pie... and the second one, by virtue of being sneakier, seems more workable than actual honesty. An honest declaration of selfish intent can be rejected on apparent altruistic grounds, socially deemed more acceptable... while a dishonest argument based on altruistic terms cannot be rejected on selfish terms, only counter-argued with more altruistic refutation. Thus the game of apparent 'morality' engaged in by equally selfish agents attempting to pretend that they are not, in fact, as selfish as they are, in order to avoid the social stigma implicit in admitting to selfishness. A more honest statement "I want the pie" needs no such justification. It's a lot more direct and to the point, and it doesn't need to bother with the moral sophistry a pseudo-altruistic argument would need to cloak itself in. Once it is acknowledged that both parties think they should have the pie for no reason other than that they both want it, they can then move on to settling the issue... whether by coming to some kind of agreement over the pie, or by competing against one another for it. Either case is certainly more straightforward than trying to pretend that there is a 'right' behind desire for pie.

When and why do people change their terminal values? Change in terminal value is a question of an imperfect agent reevaluating their utility function. Although most individuals would find it most personally beneficial, in any given situation, to take the most selfish choice, the result of becoming known for that kind of selfishness is the loss of the tribe's support... and perhaps even the enmity of the tribe. Hence, overt selfishness has less long-term utility, even when considered by a purely selfish utility function, than the appearance of altruism. Attempting to simulate altruism is more difficult than a degree of real altruism, so most utility functions, in actual practice, mix both concepts. Few would admit to such selfish concerns if pressed, but the truth of the matter is that they would rather have a new television than ensure that a dozen starving children they've never seen or heard of are fed for a year. Accordingly, their morality becomes a mix of conflicting values, with the desire to appear 'moral' and 'altruistic' competing with their own selfish desires and needs. As the balance between the two competing concepts shifts as the individual leans one way or another, their terminal values are weighted differently... which is the most common sort of change in terminal values, as the concepts are fundamental enough not to change lightly. It takes a more severe sort of reevaluation to drop or adopt a terminal value; but, being imperfect agents, humans certainly have plenty of opportunities in which to make such reevaluations.

Do the concepts of "moral error" and "moral progress" have referents? No. "Morality" is a term used to describe the altruistic cover used to disguise people's inherent selfishness. Presented with an example of destructive selfishness, one may say "Moral Error" in order to proclaim their altruism; just as they may say "Moral Progress" to applaud an act of apparent altruism. Neither really means anything, they're merely social camouflage, a way of proclaiming one's allegiance to tribal custom.

Why would anyone want to change what they want? Because they find themselves dissatisfied with their circumstances, for any reason. As imperfect agents, humans can't entirely conclude the end consequences of all of their desires... the best they can usually do is, on finding that those desires have led them to a place they do not want to be, desire to change those circumstances, and in realizing that their old desires were responsible for the undesired circumstances, attempt to abandon the old desires in favor of a new set of desires, which might be more useful.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? That's easy, once the idea of morality as social cover against apparent selfishness is granted. The person in question has selfish desires, but has convinced themselves that their selfishness is wrong, and that they should be acting altruistically instead. However, as a path of pure altruism has poor overall fitness, they must perform selfish actions instead on occasion, and the more selfish those actions are, without being overtly selfish enough to incur the tribe's displeasure, the better it is for the individual. Accordingly, any action selfish enough to seem immoral will be something a person will have reason to want, and do, despite the fact that it seems to be wrong.

Does the notion of morality-as-preference really add up to moral normality? I do not believe that there is such a thing as a normal morality; morality is merely a set of cultural ideas devised to limit the degree to which the selfishness of individuals can harm the tribe. The specifics of morality can and do vary widely given a different set of tribal circumstances, and are not on the same level as individual preferences... but this in no way implies that morality is in any sense an absolute, determined by anything at all outside of reality. If morality is a preference, it is a group preference, shaped by those factors which are best for the tribe, not for the individuals within it.

[anonymous]

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways? This seems to be because "I want the pie" is too direct, too obvious; it violates a social convention of anti-selfishness which demands a more altruistic morality.

People use both moralistic and altruistic claims hypocritically, but altruism and moralism aren't the same thing. Morality is based on the idea that people deserve one thing or another (whether riches or imprisonment); what they deserve is in some sense part of the objective world: moral judgments are thought to be objective truths.

For the argument that morality is a social error that most of us would be better off were it abolished, a good source is Ian Hinckfuss, "The Moral Society." AND my "morality series."

Uh... did you just go through my old comments and upvote a bunch of them? If so, thanks, but... that really wasn't necessary.

It's almost embarrassing in the case of the above; it, like much of the other stuff that I've written at least one year ago, reads like an extended crazy rant.

[anonymous]

I read some of your posts because, having agreed with you on some things, I wondered whether I would agree on others. Actually, I didn't check the date. When I read a post I want to approve of, I don't worry whether it's old.

If I see a post like this one espousing moral anti-realism intelligibly, I'm apt to upvote it. Most of the posters are rather dogmatic preference utilitarians.

Sorry I embarrassed you.

No worries; it's just that here, in particular, you caught the tail end of my clumsy attempts to integrate my old Objectivist metaethics with what I'd read thus far in the Sequences. I have since reevaluated my philosophical positions... after all, as tidy an explanation as it may superficially seem, I no longer believe that the human conception of morality can be entirely based on selfishness.

When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?

Suppose I currently want pie, but there is very little pie and lots of cake. Wouldn't I have more of what-I-want if I could change what I want from pie to cake? Sure, that doesn't get me more pie, but it increases the valuation-of-utility-function.

Suppose I currently want pie for myself, but wanting pie for everyone will make Omega give me more pie without changing how much pie everyone else gets? Then I want to change what-I-want to wanting-pie-for-everyone because that increases the valuations of both utility functions.

Suppose I currently want pie for myself, but wanting pie for everyone will make Omega give me pie at the expense of everyone else? Now I have no stable solution. I want to change what I want, but I don't want to have changed what I want. My head is about to melt because I am very confused. "Rational agents should WIN" doesn't seem to help when Omega's behaviour depends on my definition of WIN.
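The first scenario (scarce pie, plentiful cake) can be put into toy numbers. This is only an illustrative sketch with invented quantities and a deliberately crude "utility = amount of the wanted thing available" rule, not anything from the comment itself:

    # Invented numbers for illustration: very little pie, lots of cake.
    available = {"pie": 1, "cake": 10}

    def satisfaction(wants):
        """Crude utility: how much of the wanted thing the world supplies."""
        return available[wants]

    old_wants, new_wants = "pie", "cake"

    # Scored by the OLD utility function, the change of wants gains nothing:
    print("pie obtained before change:", satisfaction(old_wants))   # 1
    print("pie obtained after change :", satisfaction(old_wants))   # still 1

    # Scored by the NEW utility function, the change looks like a large win:
    print("cake obtained after change:", satisfaction(new_wants))   # 10

Which of those two scores should govern the decision is exactly the instability the third (Omega) scenario runs into.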

I know this is an old post, I just wanted to write down my answers to the "morality as preference" questions.

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?

Do the statements, "I liked that movie" and "That movie was good" sound different? The latter is phrased as a statement of fact, while the former is obviously a statement of preference. Unless the latter is said by a movie critic or film professor, no one thinks it's a real statement of fact. It's just a quirk of the English language that we don't always indicate why we believe the words we say. In English, it's always optional to state whether it's a self-evident fact, the words of a trusted expert or merely a statement of opinion.

When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?

"Moral progress" doesn't really refer to individuals. The entities we refer to making "moral progress" tend to be community level, like societies, so I don't really get the first and last questions. As for the concept of moral progress, it refers to the amount of people who have their moral preferences met. The reason democracy is a "more ethical" society than totalitarianism is because more people have a chance to express their preferences and have them met. If I think a particular war is immoral, I can vote for the candidate or law that will end that war. If I think a law is immoral I can vote to change it. I think this theory lines up pretty well with the concept of moral progress.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?

Usually people who do something they "know is wrong" are just doing something that most other people don't like. The only reason it feels like it's wrong to steal is that society has developed, culturally and evolutionarily, in such a way that most people think stealing is wrong. That's really all it is. There's nothing in physics that encodes what belongs to whom. Most people just want stuff to belong to them because of various psychological factors.

[anonymous]

For Morality-As-Preference

Some people realize the difference between their immediate/naive desires and long-term/societal desires.

What you want is not always changed by you. We do after all run on corrupted hardware.

Some people realize they are acting against their long-term/societal desires in favor of their immediate/naive desires. And they judge this locally good enough to do immediately, while simultaneously feeling guilty about having damaged the long-term goal. Our brains run massively parallel, and different threads do not always agree.

For Morality-As-Given

It is not possible for a thing to both be, and not have any access to our frame of reference. That puts us instead in a glitchless, escapeless, agentless Matrix, which then is indistinguishable by premise from being "The Actual Universe" in a way that invokes the law of identity to make it BE the actual universe. A delusion which never differs in any way from being an objective reality is in fact that objective reality. The difference is that we can find the stone, and test its morality to see if "You should commit suicide." is moral.

The world in which the moral proposition is true and the world in which the moral proposition is false differ in the physical structure of their inhabitants' neurologies and in their physical laws, such that the outcomes of the proposition in the universe where the proposition is true inflict upon the neurologies of its inhabitants a moral effect, and in the other universe an immoral or at least suboptimal one.

Insufficient information for meaningful answer. Prerequisites include at least the "Understanding All Biomechanical and Neuropsychological Details of All Potentially Sentient Organisms" project, and the "Unified Perfect Laws of Physics" project. That said, there can still be some statement made of immoral things even without a complete morality assembled. Without doing a lot of computation, I don't know what 43875623746 x 3429856746 is. But with very little computation or further information, I already know the answer isn't 5.

How any particular sentient evaluates morality is utterly irrelevant. Space Hitler can think as much as he likes that eliminating all the Space Jews is moral, and still be wrong about it if there is a morality-as-given. It just means he disagrees with reality, and is as wrong about morality as some fervently religious people are about the origins of the universe. Nothing prevents you from constructing such an entity, it will merely be wrong.

[This comment is no longer endorsed by its author]
[anonymous]

Oh wow. I seem to have predicted the tone of the argument in the next section. >_>

Am I detecting a pattern on my own, or is EY leading me intentionally, or is there even a difference?

[This comment is no longer endorsed by its author]

Morality-as-preference, I would argue, is oriented around the use of morality as a tool of manipulation of other moral actors.

Question one: "It is right that I should get the pie" is more convincing, because people respond to moral arguments. Why they do so is irrelevant to the purpose (to use morality to get what you want).

Question two: People don't change their terminal values (which I would argue are largely unconscious, emotional parameters), though they might change how they attempt to achieve them, or one terminal value might override a different one based on mood-affecting-circumstance ("I am hungry, therefore my food-seeking terminal value has priority"). This, btw, answers why it is less morally wrong for a starving man to steal to eat versus a non-starving man.

Question three: "I want this, though I know it's wrong" under this view maps to "I want this, and have no rhetoric with which I can convince anyone to let me have it." This might even include the individual themselves, because people can criticize their own decisions as if they were separate actors, even to the point where they must convince a constructed 'moral actor' that has their own distinct motives, using moral arguments, before permitting themselves to engage in an action.

Going through the Metaethics sequence looking for nitpicks. The Morality-as-Preferences ones aren't that hard with the philosophy of mind theory that by default people have the delusion that Right and Wrong actually exist. Although the concept of Right and Wrong without clarification is incoherent, this doesn't stop people referring to them anyway.

As a result, they refer to things being Right or Wrong, are persuaded about notions of Right or Wrong based on their own understanding, and sometimes violate their own notions.

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?  Why are the two propositions argued in different ways?

  • I want to consider this question carefully.
    • My first answer is that arguing about morality is a political maneuver that is more likely to work for getting what you want than simply declaring your desires.
      • But that begs the question, why is it more likely to work? Why are other people, or non-sociopaths, swayed by moral arguments?
        • It seems like they, or their genes, must get something out of being swayed by moral arguments.
        • You might think that it is better coordination or something. But I don't think that adds up. If everyone makes moral arguments insincerely, then the moral arguments don't actually add more coordination.
          • But remember that morality is enforced...?
          • Ok. Maybe the deal is that humans are loss averse. And they can project, in any given conflict, being in the weaker party's shoes, and generalize the situation to other situations that they might be in. And so, any given onlooker would prefer norms that don't hurt the loser too badly? And so, they would opt into a timeless contract where they would uphold a standard of "fairness"?
            • But also the contract is enforced.
          • I think this can maybe be said more simply? People have a sense of rage at someone taking advantage of someone else iff they can project that they could be in the loser's position?
            • And this makes sense if the "taking advantage" is likely to generalize. If the jerk is pretty likely to take advantage of you, then it might be adaptive to oppose the jerk in general?
              • For one thing, if you oppose the jerk when he bullies someone else, then that someone else is more likely to oppose him when he is bullying you.
          • Or maybe this can be even more simply reduced to a form of reciprocity? It's adaptive to do favors for non-kin, iff they're likely to do favors for you?
            • There's a bit of bootstrapping problem there, but it doesn't seem insurmountable.
          • I want to keep in mind that all of this is subject to scapegoating dynamics, where some group A coordinates to keep another group B down, because A and B can be clearly differentiated and therefore members of A don't have to fear the bullying of other members of A. 
            • This seems like it has actually happened, a bunch, in history. Whites and Blacks in American history is a particularly awful example that comes to mind.

Perhaps even simpler: it is adaptive to have a sense of fairness because you don't want to be the jerk. 'cuz then everyone will dislike you, oppose you, and not aid you.

The biggest, meanest monkey doesn't stay on top for very long, but a big, largely fair monkey does?

TAG

It's not difficult to see why groups would mutually believe in fairness: the alternative is fighting over resources, and fighting over resources destroys resources and kills people. But "being a jerk" is only instrumental ... groups enforce norms of fairness and rule following by giving negative feedback to those who don't follow them.

Not wanting to be a jerk is individually adaptive, but it's more important to have an established system for avoiding conflict and allocating resources; that is adaptive at the group level.

TAG

Who gets how much of the pie is really two different questions, depending on whether pies are in short supply. If no one is starving, it might as well go to the hungriest person. If everyone is starving, it would be mutually agreeable to divide it equally... that way, no one is the loser.

You can fight over resources, or you can do one of about three things to avoid a fight:

  1. Appeal to unwritten, traditional rules, i.e. ethics

  2. Appeal to written rules, i.e. law

  3. Appeal to the system that assigns winners and losers, i.e. politics.