Deontology for Consequentialists

46 Alicorn 30 January 2010 05:58PM

Consequentialists see morality through consequence-colored lenses.  I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.

Consequentialism is built around a group of variations on the following basic assumption:

  • The rightness of something depends on what happens subsequently.

It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article.  "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act consequentialism".  I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints.  All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple".  But the bottom line is, to get a consequentialist theory, something that happens after the act you judge is the basis of your judgment.

To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.

Deontology judges an act by things that do not happen after it.  This leaves facts about times prior to, and the time of, the act to determine whether the act is right or wrong.  These may include, but are not limited to:

  • The agent's epistemic state, either actual or ideal (e.g. thinking that some act would have a certain result, or being in a position such that it would be reasonable to think that the act would have that result)
  • The reference class of the act (e.g. it being an act of murder, theft, lying, etc.)
  • Historical facts (e.g. having made a promise, sworn a vow)
  • Counterfactuals (e.g. what would happen if others performed similar acts more frequently than they actually do)
  • Features of the people affected by the act (e.g. moral rights, preferences, relationship to the agent)
  • The agent's intentions (e.g. meaning well or maliciously, or acting deliberately or accidentally)
continue reading »

Intuitive supergoal uncertainty

4 JustinShovelain 04 December 2009 05:21AM

There is a common intuition and feeling that our most fundamental goals may be uncertain in some sense. What causes this intuition? For this topic I need to be able to pick out one’s top-level goals, roughly one’s context-insensitive utility function rather than some task-specific utility function, and I do not want to imply that the top-level goals can be interpreted in the form of a utility function. Following Eliezer’s CFAI paper, I thus choose the word “supergoal” (sorry Eliezer, but I am fond of that old document and its tendency to coin new vocabulary). In what follows, I will naturalistically explore the intuition of supergoal uncertainty.

To posit a model, what goal uncertainty (including supergoal uncertainty as an instance) means is that you have a weighted distribution over a set of possible goals and a mechanism by which that weight may be redistributed. If we take away the distribution of weights, how can we choose actions coherently, how can we compare? If we take away the weight redistribution mechanism, we end up with a single goal whose state utilities may be defined as the weighted sum of the constituent goals’ utilities, and thus the weight redistribution mechanism is necessary for goal uncertainty to be a distinct concept.
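The model above can be sketched concretely. This is an illustrative toy only; the function names, the one-dimensional "state", and the evidence-rescaling rule are my assumptions, not anything specified in the post:

```python
# Goal uncertainty as a weighted distribution over candidate utility
# functions, plus a mechanism for redistributing the weights.

def expected_utility(state, goals):
    """Utility of a state under goal uncertainty: the weighted sum of
    each candidate goal's utility for that state."""
    return sum(weight * utility(state) for weight, utility in goals)

def redistribute(goals, evidence):
    """A placeholder weight-redistribution mechanism: rescale each goal's
    weight by how well it fits some 'evidence' function, then renormalize.
    Without such a mechanism the distribution collapses into one fixed
    composite goal, as the post notes."""
    rescaled = [(w * evidence(u), u) for w, u in goals]
    total = sum(w for w, _ in rescaled)
    return [(w / total, u) for w, u in rescaled]

# Two toy candidate supergoals over a one-dimensional "state":
goals = [
    (0.5, lambda s: s),    # goal A: more is better
    (0.5, lambda s: -s),   # goal B: less is better
]

print(expected_utility(3.0, goals))  # 0.5*3 + 0.5*(-3) = 0.0
```

Note that with fixed weights the two goals simply cancel into a single composite goal; only the `redistribute` step makes the uncertainty do any work.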

continue reading »

The Difference Between Utility and Utility

8 Matt_Simpson 02 December 2009 06:16AM

Recently I argued that the economist's utility function and the ethicist's utility function are not the same.  The nutshell argument is that they are created for different purposes - one is an attempt to describe the actions we actually take and the other is an attempt to summarize our true values (i.e., what we should do).  I just ran across a somewhat older post over at Black Belt Bayesian arguing this very point.  Excerpt:

Economics (of the neoclassical kind) models consumers and other economic actors as such utility maximizers... Utility is not something you can experience. It’s just a mathematical construct used to describe the optimization structure in your behavior...

Consequentialist ethics says an act is right if its consequences are good. Moral behavior here amounts to being a utility maximizer. What’s “utility”? It’s whatever a moral agent is supposed to strive toward. Bentham’s original utilitarianism said utility was pleasure minus pain; nowadays any consequentialist theory tends to be called “utilitarian” if it says you should maximize some measure of welfare, summed over all individuals... Take note: not all utility maximizers are utilitarians.

There’s no necessary connection between these two kinds of utility other than that they use the same math. It’s possible to make up a utilitarian theory where ethical utility is the sum of everyone’s economic utility (calibrated somehow), but this is just one of many possibilities. Anyone trying to reason about one kind of utility through the other is on shaky ground.
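The distinction can be made concrete with a toy sketch. All names and numbers below are invented for illustration; the quoted post specifies no such construction:

```python
# "Economic" utility: a function that merely describes an agent's choices.
# The agent picks whichever option has the higher number; the number itself
# is not experienced by anyone.
def choose(options, economic_utility):
    return max(options, key=economic_utility)

# "Ethical" utility (one utilitarian theory among many): some measure of
# welfare summed over all affected individuals.
def ethical_utility(outcome, individuals):
    return sum(welfare(outcome) for welfare in individuals)

# Same math, different referents. A single agent's preference for apples...
agent_prefers_apples = lambda fruit: {"apple": 2, "pear": 1}[fruit]
print(choose(["apple", "pear"], agent_prefers_apples))  # apple

# ...versus total welfare across two people with opposite tastes.
alice = lambda fruit: {"apple": 2, "pear": 1}[fruit]
bob = lambda fruit: {"apple": 0, "pear": 3}[fruit]
print(ethical_utility("pear", [alice, bob]))  # 1 + 3 = 4
```

Both functions are "utility" in the mathematical sense, yet one summarizes observed behavior and the other scores outcomes morally, which is exactly the shaky ground the excerpt warns about.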

 

Unspeakable Morality

27 Eliezer_Yudkowsky 04 August 2009 05:57AM

It is a general and primary principle of rationality, that we should not believe that which there is insufficient reason to believe; likewise, a principle of social morality that we should not enforce upon our fellows a law which there is insufficient justification to enforce.

Nonetheless, I've always felt a bit nervous about demanding that people be able to explain things in words, because, while I happen to be pretty good at that, most people aren't.

"I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth." —Danielle Egan (journalist)

This experience permanently traumatized Ms. Egan, by the way.  Because years later, at a WTA conference, one of the speakers said that something was true, and Ms. Egan said "What do you mean, 'true'?", and the speaker gave some incorrect answer or other; and afterward I quickly walked over to Ms. Egan and explained the correspondence theory of truth:  "The sentence 'snow is white' is true if and only if snow is white"; if you're using a bucket of pebbles to count sheep then an empty bucket is true if and only if the pastures are empty.  I don't know if this cured her; I suspect that it didn't.  But up until that point, at any rate, it seems Ms. Egan had been so traumatized by this childhood experience that she believed there was no such thing as truth - that because her teacher had demanded a definition in words, and she hadn't been able to give a good definition in words, that no good definition existed.

Of which I usually say:  "There was a time when no one could define gravity in exquisitely rigorous detail, but if you walked off a cliff, you would fall."

On the other hand - it is a general and primary principle of rationality that when you have no justification, it is very important that there be some way of saying "Oops", losing hope, and just giving up already.  (I really should post, at some point, on how the ability to just give up already is one of the primary distinguishing abilities of a rationalist.)  So, really, if you find yourself totally unable to justify something in words, one possibility is that there is no justification.  To ignore this and just casually stroll along, would not be a good thing.

And with moral questions, this problem is doubled and squared. 

continue reading »

Which Parts Are "Me"?

30 Eliezer_Yudkowsky 22 October 2008 06:15PM

Everything I am, is surely my brain; but I don't accept everything my brain does, as "me".

Though I don't like the fact, it does seem in my case to be true, that reflecting upon a passion can diminish it.  It does seem that in taking a step back from myself to look at what my brain is doing, that this introduces a greater emotional distance from the world, than when I am not reflecting.  Though I am extremely cautious when it comes to the moral issue of how to modify human nature, this is one thing that I would change, I think.

The point at which you start thinking that not everything that goes on in your brain is to be identified with "you", is something of a Rubicon, I fear.  And those who prize the intensity of a simple passionate existence, above success in life or a complex layered personality, should not cross that Rubicon.

(Though in this case you should accept advice from rationalists about practical policies for guarding yourself, because this is a dangerous world in which to be unreflective.  For example: if you don't follow the link to the rest of this post, then sign the #$!@ up for cryonics regardless of what your simple passions say on the subject.)

continue reading »

Three Fallacies of Teleology

21 Eliezer_Yudkowsky 25 August 2008 10:27PM

Followup to: Anthropomorphic Optimism

Aristotle distinguished between four senses of the Greek word aition, which in English is translated as "cause", though Wikipedia suggests that a better translation is "maker".  Aristotle's theory of the Four Causes, then, might be better translated as the Four Makers.  These were his four senses of aitia:  The material aition, the formal aition, the efficient aition, and the final aition.

The material aition of a bronze statue is the substance it is made from, bronze.  The formal aition is the substance's form, its statue-shaped-ness.  The efficient aition best translates as the English word "cause"; we would think of the artisan carving the statue, though Aristotle referred to the art of bronze-casting the statue, and regarded the individual artisan as a mere instantiation.

The final aition was the goal, or telos, or purpose of the statue, that for the sake of which the statue exists.

Though Aristotle considered knowledge of all four aitia as necessary, he regarded knowledge of the telos as the knowledge of highest order.  In this, Aristotle followed in the path of Plato, who had earlier written:

Imagine not being able to distinguish the real cause from that without which the cause would not be able to act as a cause.  It is what the majority appear to do, like people groping in the dark; they call it a cause, thus giving it a name that does not belong to it.  That is why one man surrounds the earth with a vortex to make the heavens keep it in place, another makes the air support it like a wide lid.  As for their capacity of being in the best place they could possibly be put, this they do not look for, nor do they believe it to have any divine force...

continue reading »

The Meaning of Right

30 Eliezer_Yudkowsky 29 July 2008 01:28AM

Continuation of:  Changing Your Metaethics, Setting Up Metaethics
Followup to: Does Your Morality Care What You Think?, The Moral Void, Probability is Subjectively Objective, Could Anything Be Right?, The Gift We Give To Tomorrow, Rebelling Within Nature, Where Recursive Justification Hits Bottom, ...

(The culmination of a long series of Overcoming Bias posts; if you start here, I accept no responsibility for any resulting confusion, misunderstanding, or unnecessary angst.)

What is morality?  What does the word "should", mean?  The many pieces are in place:  This question I shall now dissolve.

The key—as it has always been, in my experience so far—is to understand how a certain cognitive algorithm feels from inside.  Standard procedure for righting a wrong question:  If you don't know what right-ness is, then take a step beneath and ask how your brain labels things "right".

It is not the same question—it has no moral aspects to it, being strictly a matter of fact and cognitive science.  But it is an illuminating question.  Once we know how our brain labels things "right", perhaps we shall find it easier, afterward, to ask what is really and truly right.

But with that said—the easiest way to begin investigating that question, will be to jump back up to the level of morality and ask what seems right.  And if that seems like too much recursion, get used to it—the other 90% of the work lies in handling recursion properly.

(Should you find your grasp on meaningfulness wavering, at any time following, check Changing Your Metaethics for the appropriate prophylactic.)

continue reading »

Existential Angst Factory

45 Eliezer_Yudkowsky 19 July 2008 06:55AM

Followup to: The Moral Void

A widespread excuse for avoiding rationality is the widespread belief that it is "rational" to believe life is meaningless, and thus suffer existential angst.  This is one of the secondary reasons why it is worth discussing the nature of morality.  But it's also worth attacking existential angst directly.

I suspect that most existential angst is not really existential.  I think that most of what is labeled "existential angst" comes from trying to solve the wrong problem.

Let's say you're trapped in an unsatisfying relationship, so you're unhappy.  You consider going on a skiing trip, or you actually go on a skiing trip, and you're still unhappy.  You eat some chocolate, but you're still unhappy.  You do some volunteer work at a charity (or better yet, work the same hours professionally and donate the money, thus applying the Law of Comparative Advantage) and you're still unhappy because you're in an unsatisfying relationship.

So you say something like:  "Skiing is meaningless, chocolate is meaningless, charity is meaningless, life is doomed to be an endless stream of woe."  And you blame this on the universe being a mere dance of atoms, empty of meaning.  Not necessarily because of some kind of subconsciously deliberate Freudian substitution to avoid acknowledging your real problem, but because you've stopped hoping that your real problem is solvable.  And so, as a sheer unexplained background fact, you observe that you're always unhappy.

continue reading »

The Bedrock of Fairness

25 Eliezer_Yudkowsky 03 July 2008 06:00AM

Followup to: The Moral Void

Three people, whom we'll call Xannon, Yancy and Zaire, are separately wandering through the forest; by chance, they happen upon a clearing, meeting each other.  Introductions are performed.  And then they discover, in the center of the clearing, a delicious blueberry pie.

Xannon:  "A pie!  What good fortune!  But which of us should get it?"

Yancy:  "Let us divide it fairly."

Zaire:  "I agree; let the pie be distributed fairly.  Who could argue against fairness?"

Xannon:  "So we are agreed, then.  But what is a fair division?"

Yancy:  "Eh?  Three equal parts, of course!"

Zaire:  "Nonsense!  A fair distribution is half for me, and a quarter apiece for the two of you."

Yancy:  "What?  How is that fair?"

Zaire:  "I'm hungry, therefore I should be fed; that is fair."

Xannon:  "Oh, dear.  It seems we have a dispute as to what is fair.  For myself, I want to divide the pie the same way as Yancy.  But let us resolve this dispute over the meaning of fairness, fairly: that is, giving equal weight to each of our desires.  Zaire desires the pie to be divided {1/4, 1/4, 1/2}, and Yancy and I desire the pie to be divided {1/3, 1/3, 1/3}.  So the fair compromise is {11/36, 11/36, 14/36}."
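Xannon's figures can be checked directly. The sketch below just takes the pointwise average of the three desired divisions, which is the "equal weight to each of our desires" rule the dialogue describes:

```python
from fractions import Fraction

# Each person's desired division of the pie, as exact fractions.
desires = [
    [Fraction(1, 3)] * 3,                              # Xannon
    [Fraction(1, 3)] * 3,                              # Yancy
    [Fraction(1, 4), Fraction(1, 4), Fraction(1, 2)],  # Zaire
]

# The "fair compromise": average each person's share across all desires.
compromise = [sum(col) / len(desires) for col in zip(*desires)]
print(compromise)  # [Fraction(11, 36), Fraction(11, 36), Fraction(7, 18)]
# Note 7/18 = 14/36, matching Xannon's {11/36, 11/36, 14/36}.
```

So the arithmetic is right; the dialogue's real question, of course, is whether averaging desires is itself a fair meta-rule.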

continue reading »

The Moral Void

31 Eliezer_Yudkowsky 30 June 2008 08:52AM

Followup to: What Would You Do Without Morality?, Something to Protect

Once, discussing "horrible job interview questions" to ask candidates for a Friendly AI project, I suggested the following:

Would you kill babies if it was inherently the right thing to do?  Yes [] No []

If "no", under what circumstances would you not do the right thing to do?   ___________

If "yes", how inherently right would it have to be, for how many babies?     ___________

continue reading »
