If Dennis just strides around saying, "I want the whole pie! Give me the whole pie! What's fair is for me to get the whole pie! Not you, me!" then that's not going to persuade anyone else in the tribe. Dennis has not managed to frame his desires in a form which enables them to leap from one mind to another. His desires will not take wings and become interpersonal. He is not likely to leave many offspring.
I must come from a different planet than EY. The critters that took all the food of the other critters had more baby critters than the no-food critters, and the no-food critter babies were sickly. After an exceptionally long string of critters, many of which got the whole pie (surrounded by no-pie critters), my world ended up with humans. And among the humans, I'm confident some evil or heroic ancestor of mine got roast beef while the humans around him got none. It would be a different world indeed if greed, violence, waste and cruelty didn't win over kindness and weakness with such consistency.
Inquiries into where morality is from can be interesting, but they are not important. It doesn't matter where the morals we're born among, or can envision, come from. Placing value on where morals come from is the argument from authority, be it nature or utility or god or logic or Ayn Rand or anything else. The important moral question is: how can we identify and minimize moral errors?
It would be a different world indeed if greed, violence, waste and cruelty didn't win over kindness and weakness with such consistency.
A different world than me too, it seems. Humans generally win through cooperation, and a human operating alone against their fellows is hardly a human at all, and certainly not successful.
Greed wins over kindness, and violence wins over weakness, yes.
Betray wins over cooperate every single time, but the winners of the iterated prisoners' dilemma don't betray people who cooperate.
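That claim about the iterated prisoners' dilemma can be checked with a small simulation. This is my own illustrative sketch, not from the original discussion: it uses the standard payoff matrix and two stock strategies, and shows that while always-defect outscores tit-for-tat in their head-to-head match, tit-for-tat wins the round-robin tournament overall.

```python
# Standard prisoners' dilemma payoffs, from the row player's perspective:
# mutual cooperation 3, mutual defection 1, sucker 0, temptation 5.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's previous move.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tournament(strategies, rounds=200):
    # Round-robin (including self-play), summing each strategy's totals.
    totals = {name: 0 for name in strategies}
    for name_a, strat_a in strategies.items():
        for name_b, strat_b in strategies.items():
            score_a, _ = play(strat_a, strat_b, rounds)
            totals[name_a] += score_a
    return totals

strategies = {"always_defect": always_defect, "tit_for_tat": tit_for_tat}
print(tournament(strategies))
```

Head to head, always-defect beats tit-for-tat (it pockets the one-round temptation payoff), yet its total across the tournament is lower, because tit-for-tat earns the mutual-cooperation payoff against cooperators while always-defect grinds out mutual defection.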
Dennis is not going to get any pie. If anyone resorts to violence, Dennis is going to lose. This is not the general case of "there is a pie"; this is the case of "there is a pie with no clear owner". If Dennis bakes a pie and declares "I will have the whole pie", then there is no more moral discussion.
If Dennis bakes a pie and declares "I will have the whole pie", then there is no more moral discussion.
If I bake a pie and eat the whole thing while my neighbor starves, there are those who would argue that I've acted immorally. Depending on what a moral discussion is, that might entail a moral discussion.
Those people are wrong, in the sense that they are either not following a rational morality or they do not believe that every interaction should be consensual. It cannot be moral for your neighbor to compel you to bake a pie for them. Nor can it be moral, given that you have baked a pie, that you should be forced to give it to them.
You're slipping directly from "I've acted immorally" to "I can/should be compelled / forced to stop". Don't confuse morality with law/politics.
No, I'm treating a moral imperative as a compulsion. If something is immoral, it is necessarily prohibited for a moral person. If a lack of an action is immoral, it is necessarily compulsory for a moral person.
I should have phrased it in the passive voice, however: "It cannot be moral for you to be compelled against your consent to bake a pie for your neighbor, nor, given that you have baked a pie, can it be moral for you to be compelled against your consent to give that pie to your neighbor."
But now I've described choosing to share the pie as an amoral action, and lots of people who wish me to be compelled to share pie object to that categorization.
So... say I bake a pie, and my neighbor is starving.
Let us accept for the sake of argument that everything you say here is correct, and it is therefore immoral for me to be compelled to give that pie to my neighbor against my consent.
Let us suppose further that I wish to behave morally in this situation.
If I understand you correctly, my wish to behave morally is essentially irrelevant to the question of what I do with the pie... it gives me no guidance, because morality provides no basis for choice here. Either I eat the pie, or I give it to my neighbor, or I throw it in the trash, or I do something else, and there are no moral grounds to prefer one of those acts over another. (There might be other reasons to prefer one over the other... e.g., if I really like pie that's a reason to eat the pie, and if I don't want my other neighbors to think I'm a meanie that's a reason to give the pie to my neighbor, and so on and so forth, but none of these things have anything to do with morality.)
Have I understood your position?
Yes. What I choose to do with my pie is an amoral decision, unless I have entered into some prior contract regarding the pie (using a very broad definition of 'contract': a contract is anything that I have consensually agreed to be morally obligated to do in the future, and includes obedience to laws created by a government if I have consented to be governed).
If I have consented to be part of a community with certain social norms, does that constitute a contract that includes obedience to those norms? (And, more importantly, how might I figure out the answer to that question for myself?)
Have you accepted any treatment based on your assent to be bound by those norms? Have you represented yourself as someone who will be bound by those norms and influenced others thereby? Is obedience to the social norms a required part of being a part of that community? If you didn't know about the norm when you became a member of the community, did you disavow your membership immediately upon learning of it? Or did you ever agree to follow the norms when you didn't know what they were?
It's hard for me to answer the general question of whether you have agreed to do something.
Implicit contracts are valid only for an implicit period of time and for implicitly defined circumstances, and may have an implicit penalty clause (if there is a penalty clause for nonperformance of other duties, then the agreement may be fulfilled by fulfilling the penalty clause instead of the other portion of the contract). Typically for an implicit contract, disavowing the contract and returning or renegotiating all of the value that you have gained from it is sufficient. For any explicit contract, refer to the terms of the contract.
What do you suspect that you might have agreed to do, and why do you suspect that you have agreed to do it?
It's hard for me to answer the general question of whether you have agreed to do something.
Yeah, absolutely. The thing is, I'm in the same boat with respect to my own agreements, for the most part.
I mean, I don't think I've ever actually formally said "I consent to be governed by the governments of my nation, my state, my town, etc." to anyone, and yet it seems relatively clear that I am being governed by them. Similarly, there are many government services that I benefit from (many of which I'm unaware of benefiting from) that might be provided under the assumption that I'm consenting to being so governed, with neither my being aware of that assumption, nor the provider being aware of its falsehood (supposing it's false).
It's hard to say unambiguously whether I've agreed to any of that, and if so exactly what.
This is even more true when it comes to communities less formal than governments. When I join a community I don't ever say "I will be bound by your social norms," nor has anyone ever said "I offer you this treatment if and only if you agree to be bound by our social norms." Indeed, often the social norms themselves are never explicitly stated.
More generally: in practice, many of the important consensual agreements in my life are implicit.
All of that seems reasonable enough to me, but then I'm of the opinion that there exist acts which are morally obligatory or morally impermissible in the absence of consent. Of course, people disagree on which acts those are, exactly, or how I could tell. Which causes problems.
You are proposing what seems to be a simpler approach, in which the only determinant of moral status is consent. Which sounds great.
But if I understand that to mean explicit consent, then the moral status of most of my life is utterly ambiguous, as the underlying consent is not at all explicit... and it's really not at all clear to me that it could ever be otherwise among humans. In which case that moral framework fails to do the work I want a moral framework to do (that is, allow me to decide what the right thing to do is in various situations).
Conversely, if I understand that to mean implicit consent, then we've simply replaced the question of which acts are morally obligatory or morally impermissible in the absence of consent with the question of which acts I am understood to have implicitly consented to.
What do you suspect that you might have agreed to do, and why do you suspect that you have agreed to do it?
Well, for example... there have been periods of my life where I was unable to feed myself, and others fed me. And there was never an explicit contract governing the terms of that feeding, at least not one that I ever agreed to. And my understanding of the implicit social norms that underlay the decisions of others to do so (which often go by names like "being a decent human being") do seem to assume that I am agreeing to be bound by similar norms.
So is it possible that I've implicitly agreed to an implicit contract that stipulates that I will give my pie to my starving neighbor under certain circumstances?
Perhaps.
Then again, perhaps not.
Admittedly, your model has what seems like a simple way out... I can just disavow the implicit contract.
Except that to do so I must return the value I've gained from it, or renegotiate that value.
And, well, the value I gained from being fed when I couldn't feed myself is, well, my life.
So, that doesn't seem to work too well.
If you are asking whether duress is a factor in an implicit contract, the answer is "It can be". Personally, I don't think that any contract can be made where one party has placed the other party under duress. But the system of universal consent fails to robustly account for people who choose not to consent to the government controlling the area where they happen to be physically present, and who lack the ability to leave without using the infrastructure provided by that government.
Returning the value you received involves returning the cost to the other parties, not removing the benefit to yourself. What cost was incurred as part of the implicit contract "I will help you live if you accept unstated obligations afterward", what part of the cost of raising you was incurred as part of the social and legal obligation to care for children, and what part was provided freely and without obligation?
I'm not asking about duress at all.
Returning the value you received involves returning the cost to the other parties
Ah, I see. Well, that's far more convenient. Does this include opportunity costs?
what part of the cost of raising you was incurred as part of the social and legal obligation to care for children, and what part was provided freely and without obligation?
Beats me... what does the social and legal obligation to care for children comprise?
Ah, I see. Well, that's far more convenient. Does this include opportunity costs?
Either it includes only opportunity costs, or it includes only direct costs. You don't have to give them back what they spent AND also give them what they could have gotten with that. Unfortunately, costs also include time and other things that are hard to quantify.
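The no-double-counting point can be made concrete with hypothetical numbers (all figures here are invented for illustration, not from the discussion):

```python
# Invented figures: suppose the other party spent $100 on your behalf
# (direct cost), resources that could otherwise have earned them $120
# (opportunity cost).
direct_cost = 100.0
opportunity_cost = 120.0

# Repaying under either single measure is coherent:
repay_by_direct = direct_cost            # return what was spent
repay_by_opportunity = opportunity_cost  # return what was forgone

# Summing both double-counts: the $120 forgone already subsumes the
# $100 spent, so paying $220 would overcompensate the other party.
double_counted = direct_cost + opportunity_cost

assert double_counted > max(repay_by_direct, repay_by_opportunity)
print(repay_by_direct, repay_by_opportunity, double_counted)
```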
Beats me... what does the social and legal obligation to care for children comprise?
All of it, I think.
Ah.
Well, if all of it is obligatory, then it follows that none of it is provided freely and without obligation.
That's easy enough to calculate, at least.
Assuming that math applies in this case, it is. I was using the counterquestion "How much of it could have been withheld without sanctions being applied?". Actual values may vary.
If something is immoral, it is necessarily prohibited for a moral person. If a lack of an action is immoral, it is necessarily compulsory for a moral person.
I'm confused. Does this "moral person" wish to do things that are immoral, but is compelled not to? (by whom?) And it's a problem that morality wishes to compel the person to do something without their consent?
I think I'm missing what definition you're using for morality, unless you simply mean:
I'm treating a moral imperative as a compulsion.
to be merely definitional. In that case, we're down to disputing definitions, and this discussion will not be very fruitful, though I'm left to note that you're using the word "morality" in a very strange way, and should expect to be misunderstood if you keep it up.
But now I've described choosing to share the pie as an amoral action, and lots of people who wish me to be compelled to share pie object to that categorization.
More likely, those people are operating under a different definition of "morality", and think that you should share the pie even if nobody wishes you were compelled to do so.
Do they think that I should share the pie, or do they hope that I will? My claim is that there is no moral imperative for me to share, their claim is that there is a moral imperative.
There is no way to claim that a moral imperative to share exists without positing something more important than 'all interactions should be mutually consensual'.
Generally in both ethical jargon and common usage, "morality" is just "what one has most reason to do"; tautologically, subject S should perform action x iff x is moral for S to do.
So there is no contradiction if I think Annie should eat less salt, and yet believe no one should force Annie to eat less salt. I can believe that you should share your pie, and yet believe that no one should force you to share your pie. If I believe that you should share your pie and you don't do so, then that suggests you're a bad person. But there are still good reasons not to require people to share their pies.
I disagree. So does Sidgwick, and, following him, Parfit (and, I believe, Bertrand Russell). The SEP (more specifically Bernard Gert) seems to think it refers to "codes of conduct", but that still does not include compelling others to do things. See also a comment noting some uses of 'morality' and 'ethics' in the wild.
The first definition you linked is different from the usage you provided, and the second does not claim to be about the jargon.
Morality describes the rules that govern behavior, ethics describe the principles which inform those rules, and meta-ethics describes the reason(s) why those principles and not others are the ones which inform morality.
For example, in the statement "I am the Lord Thy God; thou shalt have no other gods before Me", the moral statement is "have no other gods", the ethical principle behind that is "because God says so", and the meta-ethical reason is "God is the ultimate ethical adjudicator".
If you claim that seeking to optimize for a certain universe is what agents 'should' do, then you are positing a system of meta-ethics where "These acts will bring the universe closer to optimal conditions" is a justified ethics. Following that ethos, actions which result in the desired universe are what people should do and the only moral actions. In such a system there is no reason to perform an amoral action (by definition, anything that brings you closer to your desired state is moral, and anything that brings you further from your desired state is immoral; only things which you do not have a preference for are amoral).
Other systems of meta-ethics permit an individual to have a preference regarding an amoral act.
Sidgwick doesn't define moral behavior as what an individual has the most reason to do. He does say that all agents have a reason to take a moral action, but doesn't define moral action to mean that. He then goes on to say that what is moral
"cannot, without error, be disapproved by any other mind"
Which means that he is using a different definition of 'rational' in the phrase 'rational morality' than I am.
If you think it is immoral for Annie to eat so much salt, that is different from saying that she would be happier/healthier/closer to reaching her optimization goals/should eat less salt.
I don't think this line of argument can progress further without one or both of us giving citations, and I'm not sure what good that would accomplish.
I could use your given definition to interpret what you said, and vice versa. It doesn't matter to me what definition you use, but it is critical to me that I know what definition you use.
Nor can it be moral, given that you have baked a pie, that you should be forced to give it to them.
That is a particular moral position. Why do you call it the only "rational" one?
I don't. I claim that any contrary position either isn't rational OR requires nonconsensual interactions.
This position comes from my intermediate conclusion that "All interactions should be mutually consensual", which comes from my premise that "All people are metaphysically equal" and some other unstated assumptions that I am still in the process of identifying.
I claim that any contrary position either isn't rational OR requires nonconsensual interactions.
Right, sorry, I misread.
I must come from a different planet than EY. The critters that took all the food of the other critters had more baby critters than the no-food critters, and the no-food critter babies were sickly.
Eliezer's point is that those critters didn't get the whole pie by arguing that they should get the whole pie. Instead, they just took it. Thus, people are evolved to appeal to fairness to convince others to give them one-nth of the pie (where n is the number of people), but not to give them more than that. Dennis could try saying that he should get the whole pie, but other people are unlikely to agree, so this approach will leave Dennis hungry.
Just taking the whole pie by force is an altogether different approach, which Eliezer wasn't talking about.
Is this a fair summary:
group morality is built up from personal morality
we assume that people are roughly similar, and so is their personal morality
this restricts personal moralities by rejecting those unsuitable for building up group morality
Or, even shorter, useful personal morality is Lego-shaped.
I have a number of issues with this, but I first wanted to know if I missed anything.
Start from the intermediate steps: "All interactions should be mutually consensual between all participants", "Ownership of property consists of the right to consent or deny consent for it to be used for a particular purpose or by a particular person ", and "Contracts, once entered, should not be broken".
If three people discover a pie, then the only moral use of that pie is that which all three people agree to. No other solution is general: Perhaps one of them has expended significant value in the search for the pie, while the other two have not. Is it more fair to divide the results of the search evenly, when the costs of the search were not divided evenly? Suppose that they spent the same on the search, and are all starving to death, but one of them is allergic to the apples used in the pie, and demands that his third be destroyed rather than divided among the other two: is that 'fair'?
Until they reach a consensual decision, nobody may eat the pie. If they agree to decide by some other means, such as chance or violence, then the issue is resolved as agreed. If one party unilaterally resorts to violence or theft, then the immediate issue has been resolved in an extra-moral manner, and the other parties have no moral recourse (absent a government with laws operating under the principle of consent by the governed, which would have solved the problem by owning the pie to begin with).
The question becomes thorny when applied to the resources for which the pie is a metaphor, but it remains an issue where it is difficult to determine who has property rights, rather than an issue of the moral distribution of property. The typical solution is to declare that the government or landowner has property rights in undeclared cases, and that assigning those rights is something done explicitly.
Today's post, Interpersonal Morality, was originally published on 29 July 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Meaning of Right, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.