That is a very interesting dialogue.
It does not seem to come to any definite conclusion, instead simply presenting arguments and leaving the three participants in the dialogue with beliefs that are largely unchanged from their original position.
I am unable to come up with anything of substance to add, other than praise, but I feel compelled to comment anyway.
At first I tend to side with Zaire. The pie should be divided according to everyone's needs. But what if Zaire has a bigger body and generally needs to eat more? Should he always get more? Should the others receive less and be penalized because Zaire happens to be bigger? This is not easy, sigh...
Does the point of the story have anything to do with the object of desire switching from a pie to a cake and back again?
I'm about to make a naked assertion with nothing to back it up, just to put it out there.
The purpose of morality is to prevent such an argument from ever occurring in the first place. If the moral engine of society is working correctly, then all its members will have a desire for everyone to get an equally sized portion of the pie (in this example). If there is a Zaire who believes he should get 1/2 of the pie, then there was a malfunction when morality was being programmed into him. This malfunction will lead to conflict.
View it like you would view programming a f...
What's the point?
You realize, incidentally, that there's a huge literature in political philosophy about what procedural fairness means. Right? Right?
Eneasz: You say that Zaire is broken. What broke him, though, was the fact that he hasn't eaten a dew drop in a week. Hunger does weird things to people, cut him some slack.
It licenses me to resist being murdered; which I might not do, if I thought that my desire to avoid being murdered was wrong, and the murderer's desire to kill me was right.
Licenses relative to what authority? Himself, I presume. Of course the murderer would say the same.
Blind? What is being seen, what sees it?
Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim's experience, ve wouldn't do it (at least if ve's human or similar; a paperclip maximizer won't care).
Tiiba - Sure, I got no problem with that. There are often extenuating circumstances which change how any particular interaction occurs. That was not the case presented in this hypothetical, though. :) But as a baseline that everyone should start with (and work forward from), an equally sized portion for all is the ideal, as it will lead to the least conflict.
Gowder, I'm talking to the people who say unto me, "Friendly to who?" and "Oh, so you get to say what 'Friendly' means." I find that the existing literature rarely serves my purposes. In this case I'm driving at a distinction between the object level and the meta level, and the notion of bedrock (the Buck Stops Immediately). Does the political philosophy go there? - for I am not wholly naive, but of course I have only read a tiny fraction of what's out there. I fear that much political philosophy is written for humans by humans.
Rola...
Eliezer, what if they are all poisoned, and the only antidote is a full blueberry pie? Is the obvious fair division still 1/3 to each?
What if only one is poisoned? Is it fair for the other two to get some of the (delicious) antidote?
A bit of unfairness is acceptable, if that is needed to get us all back to fairness. Example: Zaire should get a bigger piece of pie if they are on a lifeboat and if he is the only one who can row the boat back ashore, and needs some extra carbs to do that. Xannon and Yancy should agree that this is a useful distribution in this context.
This is not a cultural argument per se.
Say x and y come from, respectively: a tribe of quasi-eugenicists that settle distributions based on "fitness" rankings (using something like IQ - probably largely arbitrary - but that doesn't matter), and a tribe of equal-sharers (who subscribe to y's conclusion in the dialogue). Within each culture the relevant version of "fairness" (or the 'core distributive principle') is intuitive, much like y's system is for us. In the x culture people with low rankings intuit that their superiors are 'enti...
Why not divide the pie equally among cells, which make up the agglomerations we call "persons"? And if there is a distinction between voluntary and fair so that Xannon and Yancy honestly couldn't comfortably eat another bite and gave extra to Zaire, would that be unfair?
We've already got a society in which living things are treated like farm animals, by which of course I speak of farm animals themselves. They are of course privileged over a more defenseless living being that they live as parasites off of, which are plants. Some Swiss officials ar...
Eliezer, to the extent I understand what you're referencing with those terms, the political philosophy does indeed go there (albeit in very different vocabulary). Certainly, the question about the extent to which ideas of fairness are accessible at what I guess you'd call the object level are constantly treated. Really, it's one of the most major issues out there -- the extent to which reasonable disagreement on object-level issues (disagreement that we think we're obligated to respect) can be resolved on the meta-level (see Waldron, Democracy and Disagr...
Okay, how does standard political philosophy say you should fairly / rightly construct an ultrapowerful superintelligence (not to be confused with a corruptible government) that can compute moral and metamoral questions only given a well-formed specification of what is to be computed?
After you've carried out these instructions, what's the standard reply to someone who says, "Friendly to who?" or "So you get to decide what's Friendly"?
That's a really fascinating question. I don't know that there'd be a "standard" answer to this -- were the questions taken up, they'd be subject to hot debate.
Are we specifying that this ultrapowerful superintelligence has mind-reading power, or the closest non-magical equivalent: access to every mental state that an arbitrary individual human has, even the stuff that now gets lumped under the label "qualia", via an ability to perfectly simulate the neurobiology of such an individual?
If so, then two approaches seem defensible to me. ...
First: let's assume there is an answer out there to moral questions, in a form that is accessible to a superintelligence, and let's just assume the hard problem away.
Let's not. See, this is what I mean by saying that political philosophy is written for humans by humans.
Your other answer, "ideal democracy", bears a certain primitive resemblance to this, as you'd know if you were familiar with the Friendliness literature...
Okay, sorry about that, just emphasizing that it's not like I'm making all this up as I go along; and also, that there's a hell of a lot of literature out there on everything, but it isn't always easy to adapt to a sufficiently different purpose.
Why doesn't Zaire just divide himself in half, let each half get 1/4 of the pie, then merge back together and be in possession of half of the pie?
Or, Zaire might say: Hey guys, my wife just called and told me that she made a blueberry pie this morning and put it in this forest for me to find. There's a label on the bottom of the plate if you don't believe me. Do you still think 'fair' = 'equal division'?
Or maybe Zaire came with his dog, and claims that the dog deserves an equal share.
I appreciate the distinction Eliezer is trying to draw between the object level and the meta level. But why the assumption that the object-level procedure will be simple?
I was expecting Xannon and Yancy to get into an exchange, only to find that Zaire had taken half the pie while they were talking. Xannon is motivated by consensus, Yancy is motivated by fairness, and Zaire is motivated by pie. I know who I bet on to end up with more pie.
(The cake was an honest mistake, not a lie.)
And then they discover, in the center of the clearing, a delicious blueberry pie.
If the pie is edible then it was recently made and placed there. Whoever made it is probably close at hand. That person has a much better claim on the pie than these three and is therefore most likely rightly considered the owner. Let the owner of the pie decide. If the owner does not show up, leave the pie alone. Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.
This post reminds me a lot of DialogueOnFriendliness.
There's at least one more trivial mistake in this post:
Is their nothing more to the universe than their conflict?
s/their/there/
Constant wrote:
Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.
If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whomever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?
If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whomever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?
If, indeed, it requires that we imagine a flying pie-baking monster in order to come up with a situation in which the concept of 'fairness' is actually relevant (e.g. not immediately trumped by an external factor), then it suggests that the concept of 'fairness' is virtually irrelevant in the real world. I notice also that the three have arrived separately and exactly simultaneously, another rarity, but also important to make 'fairness' an issue.
I notice also that the three have arrived separately and exactly simultaneously, another rarity, but also important to make 'fairness' an issue.
Yet most people in a situation of near simultaneity find it easier (or perhaps just safer?) to assume they had arrived simultaneously and come to agreement on dividing the pie 'fairly', rather than argue over who got there first.
It seems that the 1/3 each is what the recursive buck ends with, anyhow. Upon learning that Zaire claims half for him/herself and Xannon insists on averaging fairness algorithms, Xannon and Yancy merely update their claims to equal Zaire's at all times. That way, the average of the three desires will always turn out 1/3 apiece. Perhaps an argument for why an equal share is most fair. If not, Zaire could just wait until the other two had stated their desires and claim the whole pie for him/herself, thus always skewing the final average in his/her favor.
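A quick numeric check of that claim, assuming (my reading, not stated outright above) that the final split is the component-wise average of the three stated splits:

```python
from fractions import Fraction

def average_split(claims):
    """Person i's final share is the average of what the three stated
    splits assign to person i -- Xannon's averaging rule."""
    n = len(claims)
    return [sum(split[i] for split in claims) / n for i in range(n)]

q, t, h = Fraction(1, 4), Fraction(1, 3), Fraction(1, 2)

# If Xannon and Yancy mirror Zaire's self-favoring shape (1/2 for self,
# 1/4 for each other), symmetry pulls the average back to a third apiece:
print(average_split([[h, q, q], [q, h, q], [q, q, h]]))  # [1/3, 1/3, 1/3]

# But a last mover who waits and then claims everything skews the result:
print(average_split([[t, t, t], [t, t, t], [0, 0, 1]]))  # [2/9, 2/9, 5/9]
```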
I don't have an argument here; rather, I just want to see if I understand each position taken in the dialogue. After all, it would be a dreadful waste of time to argue one way or the other against our three musketeers while completely misunderstanding some key point. As far as I can tell, these are the essential arguments being made:
Yancy's position: that fairness is a rational (mathematical) system. There is no moral factor; rather than "to each according to his need," it is "to each according to the equation." This presumes fairness i...
As for the question "Friendly to who?"/"So you get to decide what's Friendly?", may I suggest Who Gets to Decide? as a reasonable answer? To summarize (while of course skipping a lot of the detail in the original post), no one gets to decide what's Friendly just like no one gets to decide the speed of light. There are simply facts that can be discovered (or that we can be wrong about). Certain desires help the human race, other desires hurt the human race, and these can be discovered in the same way we discover any other facts about the universe.
Does anyone think that this disagreement can be resolved without threat-signalling? I think valuing a particular model of 'fairness' over another (the Xers and Yers from Leif's post) ultimately boils down to the cost/benefit of being accepted/rejected by a particular social group.
So does this disagreement take place in a universe consisting only of the entities Xannon, Yancy, and Zaire, or do they all go back to the same village afterward and reminisce about what happened, or do they each go back to their separate villages?
Yet most people in a situation of near simultaneity find it easier (or perhaps just safer?) to assume they had arrived simultaneously and come to agreement on dividing the pie 'fairly', rather than argue over who got there first.
You are claiming it is a common practice. But common practice is common practice - not necessarily "fairness". We often do things precisely because they are commonly done. One common practice which is not equal is, if two cars arrive at the same intersection at right angles, then the car on the right has the right of way....
This dialogue leads me to conclude that "fairness" is a form of social lubricant that ensures our pies don't get cold while we're busy arguing. The meta-rule for fairness rules would then be: (1) fast; (2) easy to apply; and (3) everybody gets a share.
Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim's experience, ve wouldn't do it
The murderer may have all the facts, understand exactly what ve is doing and what the experience of the other will be, and just decide that ve doesn't care. Which fact is ve not aware of? Ve may understand all the pain and suffering it will cause, ve may understand that ve is wiping out a future for the other person and doing something that ve would prefer not to be on the receiving end of, may realize that it is beh...
I tend to agree with Xannon, that 'fairness' is defined by society. So the question is if the societal moral norms still affect the three opponents. If Xannon decides "we are still members of society where equal shares for everyone are considered fair" he might side with Yancy, share the pie into 1/3's and label Zaire to be a criminal. If he decides "we are out in the desert with no society around to push its moral values unto us" he might side with Zaire, divide the pie in 1/2's and tell Yancy to shove his ideas of equality up his behi...
Certain desires help the human race, other desires hurt the human race, and these can be discovered in the same way we discover any other facts about the universe.
You simply passed the recursive buck to "help" and "hurt". I will let you take for granted the superintelligence's knowledge of, or well-calibrated probability distribution over, any empirical truth about consequences; but when it comes to the valuation of those consequences in terms of "helping" or "hurting" you must tell me how to compute it, or run a computation that computes how to compute it.
Eliezer,
The resemblance between my second suggestion and your thing didn't go unnoticed -- I had in fact read your coherent extrapolated volition thing before (there's probably an old e-mail from me to you about it, in fact). I think it's basically correct. But the method of justification is importantly different, because the idea is that we're trying to approximate something with epistemic content -- we're not just trying to do what you might call a Xannon thing -- we're not just trying to model what humans would do. Rather, we're trying to model and i...
Eliezer: as you are aware yourself, we don't know how to compute it, nor how to run a computation that computes how to compute it. If we leave it up to the superintelligence to decide how to interpret "helping" and "hurting," it will be in a position no worse than our own, and possibly better, seeing that we are not superintelligent.
Paul: Responsiveness to which reasons? For every mind in mind design space that sees X as a reason to value Y, there are other possible minds that see X as a reason to value ~Y.
Right, but those questions are responsive to reasons too. Here's where I embrace the recursion. Either we believe that ultimately the reasons stop -- that is, that after a sufficiently ideal process, all of the minds in the relevant mind design space agree on the values, or we don't. If we do, then the superintelligence should replicate that process. If we don't, then what basis do we have for asking a superintelligence to answer the question? We might as well flip a coin.
Of course, the content of the ideal process is tricky. I'm hiding the really ha...
The only reasons that exist for taking any actions at all are desires. Specifically, the desires of the being taking the action. Under any given condition the being will always take the action that best fulfills the most/strongest of its desires (given its beliefs). The question isn't which action is right/wrong based on some universal bedrock of fairness, but rather what desires we want the being to have. We can shape many desires in humans (and presumably all the desires of an AI) and thus we want to give it the desires that best help and least hurt h...
Paul: Sounds like you're just describing the "thought faster" part of the CEV process, i.e., "What would you decide if you could search a larger argument space for reasons?" However, it seems to me that you're idealizing this process very highly, and overlooking such questions as "What if different orderings of the arguments would end up convincing us of different things?" which a CEV has to handle somehow, e.g. by weighting the possibilities by e.g. length, combining them into a common superposition, and acting only where s...
Eliezer,
Things like the ordering of arguments are just additional questions about the rationality criteria, and my point above applies to them just as well -- either there's a justifiable answer ("this is how arguments are to be ordered,") or it's going to be fundamentally socially determined and there's nothing to be done about it. The political is really deeply prior to the workings of a superintelligence in such cases: if there's no determinate correct answer to these process questions, then humans will have to collectively muddle through to ...
Eliezer: Are you looking for a new definition of "fairness" which would reconcile the partisans of existing definitions? Or are you just pointing out that this is a sort of damned-if-you-do, damned if-you-don't problem, and that any rule for establishing fairness will piss somebody or other off? If the latter, from the point of view of your larger project, why not just insert a dummy answer for this question - pick any definition that grabs you - and see how it fits with the rest of what you need to work out. Or work through several different obv...
I suppose that's just to second Paul Gowder's point that the political problem is insurmountable. But I imagine few things would resolve a political problem faster than the backing of an all-powerful supermind.
@Paul: You seem to suggest that we all take the same things to be reasons, perhaps even the same reasons. Is this warranted?
Things like the ordering of arguments are just additional questions about the rationality criteria
...which problem you can't hand off to the superintelligence until you've specified how it decides 'rationality criteria'. Bootstrapping is allowed, skyhooking isn't. Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?
Poke has it exactly right. Thinking further along the lines suggested by his "social lubricant" idea, I'd suggest that fairness is no more than efficiency. Or, at the very least, if two prevailing doctrines of fairness exist, the more efficient doctrine will—ceteris paribus—in the long run prevail.
This leaves open the question of how closely to efficiency our notions of fairness have actually evolved, but that's an empirical question.
This question, of what is fairness / morality, seems a lot easier (to me) than the posters here appear to feel.
Isn't the answer: You start with purely selfish desires. These sometimes cause conflict over limited resources. Then you take Rawls's Veil of Ignorance, and come up with social rules (like "don't murder") that result in a net positive outcome for society. It's not a zero-sum game. Cooperation can result in greater returns for everybody than constant conflict.
Individuals breaking agreed morality are shunned, in much the same way as so...
I see no reason to believe there is such a thing as an objective definition of "fair" in this case. The idea that an equal division is "fair" is based on the assumption that none of the three has a good argument as to why he should receive more than either of the others. If one has a reasonable argument as to why he should receive more, the fairness argument breaks down. In fact, none of the three really has a good argument as to why he is entitled to any of it, and I can't see why it would be wrong for any one of them to grab it ...
Why not divide the pie according to who will ultimately put the pie to the best use? If X and Y intend to take a nap after eating the pie, but Z is willing to plant a tree, wouldn't the best outcome for the pie favor Z getting more?
Before you dismiss the analogy, consider this - what if the pie was $1800.00 that none of the three had earned? What if the $1800.00 had been BORROWED with a certain expectation of its utility? Should X, Y, and Z each get $600.00, even though there is no stipulation as to what each of them must DO with that money? If X intends t...
Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?
Again, this is why it's irreducibly social. If there isn't a procedure that yields a justified determinate answer to the rationality of that order, then the best we can do is take what is socially accepted at the time and in the society in which such a superintelligence is created. There's nowhere else to look.
Early in the story, Z is hungry, and X and Y are not. Z says that he thinks that because he is hungry, 'fair' is defined with him getting more pie, while X and Y disagree. This seems like a slightly strange story to me, but here's a much stranger one:
Z is hungry, and X and Y are not. X thinks that it would be fair to give Z 1/2 the pie, but Z and Y both think it would be fair to split the pie 1/3;1/3;1/3. In other words, the person who is arguing the fairness of the unequal distribution is not the person who would benefit from it. This feels much less...
Joe Mathes: I thought it was fairly obvious that a fair distribution is in this case synonymous with a moral distribution (was I wrong?). In this context, the word fair doesn't have any meaning if one tries to remove the concept of morality.
However I don't think that arguing for fairness when one is not the beneficiary is that unusual. The civil rights movement was supported by a lot of white people, and the women's liberation movement was supported by a lot of males. In both cases these people were losing an advantage they previously held in order to ...
They are motivated by a love of fairness and a desire to promote fairness, which has been inculcated into them by their programmers.
Unlikely. The basic principles of fairness are constant between human cultures and societies, and seem to be intuitively understood by humans. What changes is the status of categories of people - but humans agree on what behavior is fair towards an equal.
To deal with the question "what is moral", we need first to establish the purpose of "morality". How can you evaluate the effectiveness of a design unless you first understand what it is intended to do and not do?
Eneasz: you're ignoring "moral benefits". Let's say Joe is crossing a desert with enough food and water to live comfortably until he reaches his destination. Midway through, he comes across Bob, who is dying of thirst. If Joe gives Bob sufficient food and water to save his life, Joe can still make it across the desert, but not as comfortably. Giving Bob food and water represents a loss of benefits for Joe; withholding food and water represents a more significant loss, though. Most people would be wracked by guilt at leaving someone to die when th...
For every possible division of pie into three pieces (including pieces of 0 size), take each person and ask how fair they would think the division if they received each of the three slices. Average those together to get each person's overall fairness rating for a given pie distribution.
Average those per-person results into an "overall fairness" rating for each pie distribution.
This includes:
- You can have people involved who don't like pie and don't want any. It seems pointless to say that division into thirds is the only fair division, if one ...
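Taking that scheme literally, a minimal sketch; the grid over divisions and the two example raters are my own hypothetical choices:

```python
from itertools import product

def fairness_ratings(raters, steps=10):
    """For each candidate division (on a grid of tenths), ask each person
    how fair they'd find it from each of the three seats, average per
    person, then average across people (the comment's two-stage average).
    Each rater is a function (my_share, division) -> fairness in [0, 1]."""
    results = {}
    for i, j in product(range(steps + 1), repeat=2):
        if i + j > steps:
            continue
        division = (i / steps, j / steps, (steps - i - j) / steps)
        per_person = [
            sum(rate(share, division) for share in division) / 3
            for rate in raters
        ]
        results[division] = sum(per_person) / len(per_person)
    return results

# Hypothetical raters: two prize equality, one just wants a big piece
# (and so rates every division 1/3 on average across the three seats).
egalitarian = lambda mine, div: 1 - (max(div) - min(div))
hungry = lambda mine, div: mine

ratings = fairness_ratings([egalitarian, egalitarian, hungry])
print(max(ratings, key=ratings.get))  # -> (0.3, 0.3, 0.4), as even as this grid allows
```

Note that a rater who doesn't want pie can simply return a high rating regardless of their own share, so divisions that give them nothing aren't penalized.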
A possible mathematical rule for fairness in this situation.
1. Select who gets to cut the pie into three pieces by a random process.
2. That individual can cut it into any size sections he chooses, as long as there are three sections.
3. The order of choice selection again is determined by a random process.
Result: on average everyone receives 1/3 share.
Fairness = underlying intuitive mathematical rules. QED
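A short simulation of the proposed rule; the uneven example cut is hypothetical, and the point is that the random role and choice order alone produce the 1/3 average:

```python
import random

def run_round(pieces):
    """One round of the proposed rule: the cut is fixed (three pieces
    summing to 1), the choosing order is drawn at random, and each
    person greedily takes the largest remaining piece."""
    order = random.sample(["X", "Y", "Z"], 3)
    remaining = sorted(pieces, reverse=True)
    return {person: remaining[i] for i, person in enumerate(order)}

def average_shares(pieces, trials=100_000):
    totals = {"X": 0.0, "Y": 0.0, "Z": 0.0}
    for _ in range(trials):
        for person, share in run_round(pieces).items():
            totals[person] += share
    return {p: round(t / trials, 3) for p, t in totals.items()}

# Even with a wildly uneven cut, the random choosing order makes the
# long-run average 1/3 apiece: fair in expectation, not in any one round.
print(average_shares([0.7, 0.2, 0.1]))  # ~{'X': 0.333, 'Y': 0.333, 'Z': 0.333}
```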
A variant on demiurge: A standard way of dividing something into two parts is to have one person divide and the other choose. Alice cuts the slice of cake in half, and Bob takes whichever piece he likes. If Alice is unhappy with her piece, she should have cut the two more evenly. You can apply the same rule to three people by adding an extra step: glide the knife along the edge to create an increasingly large piece, and any of the three can call a stop and take that piece (then divide the rest as for two people). (For a pie, you might make an initial c...
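For the two-person base case, a compact sketch with hypothetical valuation functions (the grid search and the particular valuations are mine):

```python
def divide_and_choose(alice_value, bob_value, grid=10_000):
    """Two-person cut-and-choose over the interval [0, 1].
    alice_value(a, b) and bob_value(a, b) give each person's subjective
    value of the piece [a, b], normalized so the whole pie is worth 1."""
    # Alice cuts where her two pieces are (nearly) equal in her own eyes.
    cut = min(
        (i / grid for i in range(1, grid)),
        key=lambda x: abs(alice_value(0.0, x) - alice_value(x, 1.0)),
    )
    left, right = (0.0, cut), (cut, 1.0)
    # Bob chooses first, taking whichever piece he values more.
    bob_piece = max(left, right, key=lambda piece: bob_value(*piece))
    alice_piece = right if bob_piece == left else left
    return alice_piece, bob_piece

# Hypothetical valuations: Alice values the pie uniformly; Bob values the
# right-hand half twice as much as the left.
alice = lambda a, b: b - a
bob = lambda a, b: ((min(b, 0.5) - min(a, 0.5))
                    + 2 * (max(b, 0.5) - max(a, 0.5))) / 1.5

print(divide_and_choose(alice, bob))  # ((0.0, 0.5), (0.5, 1.0))
```

Alice cuts at her own halfway point, so she is content with either piece; Bob takes the piece he prefers, so neither has cause to complain by their own lights.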
Yancy: "If someone wants to murder me, and the two of us are alone, then I am still in the right and they are still in the wrong, even if no one else is present."
So the trick here is to realize that fairness is defined with respect to an expected or typical observer -- when you try to murder me, and I scream "Foul play!", the propositional content of my cry is that I expect any human who happens to pass by to agree with me and to help stop the murder. If nobody passes by this time, well, that's just my bad luck, and I can go to my g...
My favorite answer to this problem comes from "How to Cut a Cake: And Other Mathematical Conundrums." The solution in the book was that "fair" means "no one has cause to complain." It doesn't work in the case here, since one party wants to divide the pie unevenly, but if you were trying to make even cuts, it works. The algorithm was:
At the end, anyone who thinks they got too little (meaning, someone else got too much) could have said "cut" before that other person's cut got too big.
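Assuming the book's procedure is the classic moving-knife scheme (consistent with the "said 'cut'" description above), a discrete sketch with hypothetical valuations:

```python
def moving_knife(values, grid=10_000):
    """Dubins-Spanier-style moving knife over [0, 1]. values[i](a, b) is
    player i's subjective value of the piece [a, b]. The knife glides
    rightward; the first player for whom the growing left piece is worth
    1/n of what remains (n = players still in) calls 'cut', takes the
    piece, and exits. Ties go to the lower-numbered player."""
    players = list(range(len(values)))
    left, allocation = 0.0, {}
    while len(players) > 1:
        n = len(players)
        caller, cut = None, None
        for step in range(1, grid + 1):
            x = left + (1.0 - left) * step / grid
            for p in players:
                if values[p](left, x) >= values[p](left, 1.0) / n:
                    caller, cut = p, x
                    break
            if caller is not None:
                break
        allocation[caller] = (left, cut)
        players.remove(caller)
        left = cut
    allocation[players[0]] = (left, 1.0)  # the last player takes the rest
    return allocation

uniform = lambda a, b: b - a
print(moving_knife([uniform, uniform, uniform]))
# -> roughly {0: (0, 1/3), 1: (1/3, 2/3), 2: (2/3, 1)}
```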
Xannon decides how much Zaire gets. Zaire decides how much Yancy gets. Yancy decides how much Xannon gets.
If any is left over, they go through the process again for the remainder, ad infinitum, until practically all of the pie has been eaten.
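A sketch of this proposal, modeling each person's "decision" as a fixed, hypothetical fraction of the remainder at the start of each round:

```python
def circular_allocation(generosity, rounds=30):
    """Xannon decides Zaire's share, Zaire decides Yancy's, Yancy decides
    Xannon's; whatever is left over is re-divided the same way. Here each
    decision is a fixed fraction of the round's starting remainder
    (the fractions must sum to at most 1)."""
    gives_to = {"Xannon": "Zaire", "Zaire": "Yancy", "Yancy": "Xannon"}
    shares = {name: 0.0 for name in gives_to}
    remainder = 1.0
    for _ in range(rounds):
        pot = remainder
        for giver, receiver in gives_to.items():
            gift = generosity[giver] * pot
            shares[receiver] += gift
            remainder -= gift
    return shares, remainder

# Hypothetical: everyone awards 30% of the round's remainder onward.
shares, leftover = circular_allocation({n: 0.3 for n in ("Xannon", "Yancy", "Zaire")})
print(shares, leftover)  # converges toward 1/3 apiece; leftover ~ 0.1**30
```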
Xannon and Yancy offer Zaire 1/3 of the pie, if he'll accept that.
If he won't, they split the pie 50-50 between them, and leave Zaire with nothing.
Does that sound fair?
When people get this embroiled in philosophy, I usually start eating pie.
However as I don't like blueberries, we will split the pie into thirds fairly as Yancy wants, then I will give 1/6th of my pie to Zaire so he has the half he wants, and I'll leave the other 1/6th where I found it since A PIE WE FOUND IN THE FOREST AND KNOW NOTHING ABOUT ISN'T NECESSARILY MINE TO STEAL FROM.
A great post. It captured a lot of intriguing questions I currently have about ethics. One question I have, which I am curious to see addressed in further posts in this sequence, is: Once we dissolve the question of "fairness" (or "morality" or any other such term) and taboo the term, is there a common referent that all parties are really discussing, or do the parties have fundamentally different and irreconcilable ideas of what fairness (or morality, etc.) is? Is Xannon's "fairness" merely a homonym for Yancy's "fairness...
Interesting. As far as I can tell, the moral is that most definitions in an argument are supplied such that the arguer gets their way, instead of being a solid fact that can be followed in a logical sequence in order to deduce the correct course of action.
But I think using the rationalists' Taboo would benefit the three, as the word "fair" is defined differently by each of them: Xannon defines fairness as a compromise between the involved parties. Yancy defines fairness as an objective equality wherein everyone receives the same treatmen...
There's another compromise position. Namely, two can form a coalition against the third and treat the problem as dividing a pie between two individuals with different claims. For example, Xannon and Yancy have a combined claim of 2/3 to Zaire's 1/2. Proportional division according to those terms would give Zaire 3/7 to the duo's 4/7, which they can then split in half to get the distribution {2/7, 2/7, 3/7}. As it turns out, you get this same division no matter how the coalitions form. This sort of principle dates back to the Talmud.
Of course, this only wor...
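The arithmetic checks out; a small helper (the function and names are mine) reproduces the division both ways:

```python
from fractions import Fraction

def proportional_split(claims):
    """Divide one pie in proportion to the stated claims."""
    total = sum(claims.values())
    return {name: claim / total for name, claim in claims.items()}

third, half = Fraction(1, 3), Fraction(1, 2)

# Coalition view: Xannon+Yancy claim 2/3 against Zaire's 1/2 ...
print(proportional_split({"duo": third + third, "Zaire": half}))
# duo 4/7, Zaire 3/7

# ... and splitting the duo's 4/7 in half matches dividing among all
# three proportionally in one step:
print(proportional_split({"Xannon": third, "Yancy": third, "Zaire": half}))
# Xannon 2/7, Yancy 2/7, Zaire 3/7
```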
Wow. This creates a real moral conflict for me, much better than the clash of three worlds (where the problem is that I really agree more with the super happy than with the people, and even more so with those who killed themselves).
Followup to: The Moral Void
Three people, whom we'll call Xannon, Yancy and Zaire, are separately wandering through the forest; by chance, they happen upon a clearing, meeting each other. Introductions are performed. And then they discover, in the center of the clearing, a delicious blueberry pie.
Xannon: "A pie! What good fortune! But which of us should get it?"
Yancy: "Let us divide it fairly."
Zaire: "I agree; let the pie be distributed fairly. Who could argue against fairness?"
Xannon: "So we are agreed, then. But what is a fair division?"
Yancy: "Eh? Three equal parts, of course!"
Zaire: "Nonsense! A fair distribution is half for me, and a quarter apiece for the two of you."
Yancy: "What? How is that fair?"
Zaire: "I'm hungry, therefore I should be fed; that is fair."
Xannon: "Oh, dear. It seems we have a dispute as to what is fair. For myself, I want to divide the pie the same way as Yancy. But let us resolve this dispute over the meaning of fairness, fairly: that is, giving equal weight to each of our desires. Zaire desires the pie to be divided {1/4, 1/4, 1/2}, and Yancy and I desire the pie to be divided {1/3, 1/3, 1/3}. So the fair compromise is {11/36, 11/36, 14/36}."
Zaire: "What? That's crazy. There's two different opinions as to how fairness works—why should the opinion that happens to be yours, get twice as much weight as the opinion that happens to be mine? Do you think your theory is twice as good? I think my theory is a hundred times as good as yours! So there!"
Yancy: "Craziness indeed. Xannon, I already took Zaire's desires into account in saying that he should get 1/3 of the pie. You can't count the same factor twice. Even if we count fairness as an inherent desire, why should Zaire be rewarded for being selfish? Think about which agents thrive under your system!"
Xannon: "Alas! I was hoping that, even if we could not agree on how to distribute the pie, we could agree on a fair resolution procedure for our dispute, such as averaging our desires together. But even that hope was dashed. Now what are we to do?"
Yancy: "Xannon, you are overcomplicating things. 1/3 apiece. It's not that complicated. A fair distribution is an even split, not a distribution arrived at by a 'fair resolution procedure' that everyone agrees on. What if we'd all been raised in a society that believed that men should get twice as much pie as women? Then we would split the pie unevenly, and even though no one of us disputed the split, it would still be unfair."
Xannon: "What? Where is this 'fairness' stored if not in human minds? Who says that something is unfair if no intelligent agent does so? Not upon the stars or the mountains is 'fairness' written."
Yancy: "So what you're saying is that if you've got a whole society where women are chattel and men sell them like farm animals and it hasn't occurred to anyone that things could be other than they are, that this society is fair, and at the exact moment where someone first realizes it shouldn't have to be that way, the whole society suddenly becomes unfair."
Xannon: "How can a society be unfair without some specific party who claims injury and receives no reparation? If it hasn't occurred to anyone that things could work differently, and no one's asked for things to work differently, then—"
Yancy: "Then the women are still being treated like farm animals and that is unfair. Where's your common sense? Fairness is not agreement, fairness is symmetry."
Zaire: "Is this all working out to my getting half the pie?"
Yancy: "No."
Xannon: "I don't know... maybe as the limit of an infinite sequence of meta-meta-fairnesses..."
Zaire: "I fear I must accord with Yancy on one point, Xannon; your desire for perfect accord among us is misguided. I want half the pie. Yancy wants me to have a third of the pie. This is all there is to the world, and all there ever was. If two monkeys want the same banana, in the end one will have it, and the other will cry morality. Who gets to form the committee to decide the rules that will be used to determine what is 'fair'? Whoever it is, got the banana."
Yancy: "I wanted to give you a third of the pie, and you equate this to seizing the whole thing for myself? Small wonder that you don't want to acknowledge the existence of morality—you don't want to acknowledge that anyone can be so much less of a jerk."
Xannon: "You oversimplify the world, Zaire. Banana-fights occur across thousands and perhaps millions of species, in the animal kingdom. But if this were all there was, Homo sapiens would never have evolved moral intuitions. Why would the human animal evolve to cry morality, if the cry had no effect?"
Zaire: "To make themselves feel better."
Yancy: "Ha! You fail at evolutionary biology."
Xannon: "A murderer accosts a victim, in a dark alley; the murderer desires the victim to die, and the victim desires to live. Is there nothing more to the universe than their conflict? No, because if I happen along, I will side with the victim, and not with the murderer. The victim's plea crosses the gap of persons, to me; it is not locked up inside the victim's own mind. But the murderer cannot obtain my sympathy, nor incite me to help murder. Morality crosses the gap between persons; you might not see it in a conflict between two people, but you would see it in a society."
Yancy: "So you define morality as that which crosses the gap of persons?"
Xannon: "It seems to me that social arguments over disputed goals are how human moral intuitions arose, beyond the simple clash over bananas. So that is how I define the term."
Yancy: "Then I disagree. If someone wants to murder me, and the two of us are alone, then I am still in the right and they are still in the wrong, even if no one else is present."
Zaire: "And the murderer says, 'I am in the right, you are in the wrong'. So what?"
Xannon: "How does your statement that you are in the right, and the murderer is in the wrong, impinge upon the universe—if there is no one else present to be persuaded?"
Yancy: "It licenses me to resist being murdered; which I might not do, if I thought that my desire to avoid being murdered was wrong, and the murderer's desire to kill me was right. I can distinguish between things I merely want, and things that are right—though alas, I do not always live up to my own standards. The murderer is blind to the morality, perhaps, but that doesn't change the morality. And if we were both blind, the morality still would not change."
Xannon: "Blind? What is being seen, what sees it?"
Yancy: "You're trying to treat fairness as... I don't know, something like an array-mapped 2-place function that goes out and eats a list of human minds, and returns a list of what each person thinks is 'fair', and then averages it together. The problem with this isn't just that different people could have different ideas about fairness. It's not just that they could have different ideas about how to combine the results. It's that it leads to infinite recursion outright—passing the recursive buck. You want there to be some level on which everyone agrees, but at least some possible minds will disagree with any statement you make."
Xannon: "Isn't the whole point of fairness to let people agree on a division, instead of fighting over it?"
Yancy: "What is fair is one question, and whether someone else accepts that this is fair is another question. What is fair? That's easy: an equal division of the pie is fair. Anything else won't be fair no matter what kind of pretty arguments you put around it. Even if I gave Zaire a sixth of my pie, that might be a voluntary division but it wouldn't be a fair division. Let fairness be a simple and object-level procedure, instead of this infinite meta-recursion, and the buck will stop immediately."
Zaire: "If the word 'fair' simply means 'equal division' then why not just say 'equal division' instead of this strange additional word, 'fair'? You want the pie divided equally, I want half the pie for myself. That's the whole fact of the matter; this word 'fair' is merely an attempt to get more of the pie for yourself."
Xannon: "If that's the whole fact of the matter, why would anyone talk about 'fairness' in the first place, I wonder?"
Zaire: "Because they all share the same delusion."
Yancy: "A delusion of what? What is it that you are saying people think incorrectly the universe is like?"
Zaire: "I am under no obligation to describe other people's confusions."
Yancy: "If you can't dissolve their confusion, how can you be sure they're confused? But it seems clear enough to me that if the word fair is going to have any meaning at all, it has to finally add up to each of us getting one-third of the pie."
Xannon: "How odd it is to have a procedure of which we are more sure of the result than the procedure itself."
Zaire: "Speak for yourself."
Part of The Metaethics Sequence
Next post: "Moral Complexities"
Previous post: "Created Already In Motion"