Less Wrong is a community blog devoted to refining the art of human rationality.

Virtue Ethics for Consequentialists

33 Post author: Will_Newsome 04 June 2010 04:08PM

Meta: Influenced by a cool blog post by Kaj, which was influenced by a cool Michael Vassar idea (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.

 

There's been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it's not postmodernism. It's virtue ethics. "What, virtue ethics?! Are you serious?" Yup. I'm so contrarian I think cryonics isn't obvious and that virtue ethics is better than consequentialism. This post will explain why.

When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even better than they are at inventing plausible reasons for actions that turned out unpopular, and it's hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn't have much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...

Moral philosophy was designed for humans, not for rational agents. When you're used to thinking about artificial intelligence, economics, and decision theory, it's easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers; they're bounded agents with little capacity for reflection. Utility functions are great and all, but in the words of Zack M. Davis, "Humans don't have utility functions." Similarly, Kaj warns us: "be extra careful when you try to apply the concept of a utility function to human beings." Back in the day nobody thought smarter-than-human intelligence was possible, and many still don't. Philosophers came up with ways for people to live their lives, have a good time, be respected, and do good things; they weren't even trying to create morals for anyone too far outside the norm of whatever society they inhabited at the time, or whatever society they imagined to be perfect. I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon). Epicurus, Mill, and Bentham were great thinkers and all, but it's not obvious that what they were saying is best practice for individual people, even if their ideas about policy are strictly superior to alternative options. Virtue ethics is good for bounded agents: you don't have to waste memory on what a personalized rulebook says about different kinds of milk, and you don't have to think 15 inferential steps ahead to determine if you should drink skim or whole.
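The "hyperbolic discounters: not anything resembling sane" point has a precise formal sense: under exponential discounting your preference between two fixed rewards never reverses as time passes, while under hyperbolic discounting it can. A minimal sketch (the dollar amounts and discount rates here are arbitrary illustration, not data):

```python
import math

def exp_value(amount, delay, r=0.05):
    """Exponential (time-consistent) discounting: V = A * exp(-r * D)."""
    return amount * math.exp(-r * delay)

def hyp_value(amount, delay, k=0.5):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay)

def prefers_larger(value, extra_delay):
    # $50 arriving in 1 day vs $100 arriving in 6 days,
    # both pushed `extra_delay` days further into the future.
    return value(100, 6 + extra_delay) > value(50, 1 + extra_delay)

# An exponential discounter never reverses: same choice near or far.
assert prefers_larger(exp_value, 0) == prefers_larger(exp_value, 30)

# A hyperbolic discounter prefers the larger reward when both are
# distant, then flips to the smaller one as the near reward looms.
assert prefers_larger(hyp_value, 30) and not prefers_larger(hyp_value, 0)
```

That preference reversal is exactly the pattern that makes a personalized rulebook (or a cultivated virtue) more useful to a human than on-the-fly expected-utility calculation.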

You can be a virtue ethicist whose virtue is doing the consequentialist thing to do (because your deontological morals say that's what is right). Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems. And anyway, they're all actually virtue ethicists: they're trying to do the 'consequentialist' or 'deontologist' things to do, which happen to usually be the same. Alicorn's decided to do her best to reduce existential risk, and I, being a pseudo-consequentialist, have also decided to do my best to reduce existential risk. Virtue ethicists can do these things too, but they can also abuse the consistency effects such actions invariably come with. If you're a virtue ethicist it's easier to say "I'm the type of person who will reply to all of the emails in my inbox and sort them into my GTD system, because organization and conscientiousness are virtues" and use this as a way to motivate yourself. So go ahead and be a virtue ethicist for the consequences (...or a consequentialist because it's deontic). It's not illegal!

Retooled virtue ethics is better for your instrumental rationality. The Happiness Hypothesis critiqued the way Western ethics, in both the deontological tradition started by Immanuel Kant and the consequentialist tradition started by Jeremy Bentham, has become increasingly reason-based:

The philosopher Edmund Pincoffs has argued that consequentialists and deontologists worked together to convince Westerners in the twentieth century that morality is the study of moral quandaries and dilemmas. Where the Greeks focused on the character of a person and asked what kind of person we should each aim to become, modern ethics focuses on actions, asking when a particular decision is right or wrong. Philosophers wrestle with life-and-death dilemmas: Kill one to save five? Allow aborted fetuses to be used as a source of stem cells? [...] This turn from character ethics to quandary ethics has turned moral education away from virtues and towards moral reasoning. If morality is about dilemmas, then moral education is training in problem solving. Children must be taught how to think about moral problems, especially how to overcome their natural egoism and take into their calculations the needs of others.

[...] I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week [...] The second problem with the turn to moral reasoning is that it relies on bad psychology. Many moral education efforts since the 1970s take the rider off the elephant and train him to solve problems on his own. After being exposed to hours of case studies, classroom discussions about moral dilemmas, and videos about people who faced dilemmas and made the right choices, the child learns how (not what) to think. Then class ends, the rider gets back on the elephant, and nothing changes at recess. Trying to make children behave ethically by teaching them to reason well is like trying to make a dog happy by wagging its tail. It gets causality backwards.

To quote Kaj's response to the above:

Reading this chapter, that critique and the description of how people like Benjamin Franklin made it into an explicit project to cultivate their various virtues one at a time, I could feel a very peculiar transformation take place within me. The best way I can describe it is that it felt like a part of my decision-making or world-evaluating machinery separated itself from the rest and settled into a new area of responsibility that I had previously not recognized as a separate one. While I had previously been primarily a consequentialist, that newly-specialized part declared its allegiance to virtue ethics, even though the rest of the machinery remained consequentialist. [...]

What has this meant in practice? Well, I'm not quite sure of the long-term effects yet, but I think that my emotional machinery kind of separated from my general decision-making and planning machinery. Think of "emotional machinery" as a system that takes various sorts of information as input and produces different emotional states as output. Optimally, your emotional machinery should attempt to create emotions that push you towards taking the kinds of actions that are most appropriate given your goals. Previously I was sort of embedded in the world and the emotional system was taking its input from the entire whole: the way I was, the way the world was, and the way that those were intertwined. It was simultaneously trying to optimize for all three, with mixed results.

But now, my self-model was set separate from the world-model, and my emotional machinery started running its evaluations primarily based on the self-model. The main questions became "how could I develop myself", "how could I be more virtuous" and "how could I best act to improve the world". From the last bit, you can see that I haven't lost the consequentialist layer in my decision-making: I am still trying to act in ways that improve the world. But now it's more like my emotional systems are taking input from the consequentialist planning system to figure out what virtues to concentrate on, instead of the consequentialist reasoning being completely intertwined with my emotional systems.

Applying both consequentialist and virtue ethicist layers to the way you actually get things done in the real world seems to me a great idea. It recognizes that most of us don't actually have that much control over what we do. Acknowledging this and dealing with its consequences, and what it says about us, allows us to do the things we want and feel good about it at the same time.

So, if you'd like, try to be a virtue ethicist for a week. If a key of epistemic rationality is having your beliefs pay rent in anticipated experience, then instrumental rationality is about having your actions pay rent in expected utility. Use science! If being a virtue ethicist helps even one person be more the person they want to be, like it did for Kaj, then this post was well worth the time spent.

Comments (178)

Comment author: steven0461 05 June 2010 02:06:23AM 26 points [-]

In both your GTD example and Kaj's posting example, virtue doesn't seem to affect what you think you should do, just how you motivate yourself to do it, so "virtue psychology" might be a more accurate description than "virtue ethics".

Comment author: Tyrrell_McAllister 04 June 2010 06:40:06PM *  12 points [-]

A quick thought that may not stand up to reflection:

Consequentialists should think of virtue ethics as a human-implementable Updateless Decision Theory. Under UDT, your focus is on being an agent whose actions maximize utility over all possibilities, even those that you know now not to be the case, as long as they were considered possible when your source code was written. Hence, in the Counterfactual Mugging, you make a choice that you know will make things worse in the actual world.

Similarly, virtue ethics requires that you focus on making yourself into the kind of agent who would make the right choices in general, even if that means making a choice that you know will make things worse in the actual world.

Edited to reorder clauses for clarity.
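Tyrrell's Counterfactual Mugging point can be sketched numerically. In the standard version, Omega flips a fair coin: on tails it asks you for $100, and on heads it pays you $10,000 only if it predicts you would have paid on tails. Evaluating policies from before the coin lands (the stakes are the usual illustrative numbers, not anything canonical):

```python
def policy_expected_value(pays_when_asked):
    """Expected value, over Omega's fair coin, of committing in
    advance to a policy for the Counterfactual Mugging."""
    heads = 10_000 if pays_when_asked else 0  # Omega rewards predicted payers
    tails = -100 if pays_when_asked else 0    # the payer hands over $100
    return 0.5 * heads + 0.5 * tails

# Over all possibilities, the committed payer does better:
assert policy_expected_value(True) == 4_950
assert policy_expected_value(False) == 0
```

Yet once the coin has actually landed tails, paying is a pure $100 loss in the actual world. That is the sense in which both UDT and virtue ethics ask you to be the kind of agent who pays, rather than to recompute the best action on the spot.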

Comment author: thomblake 04 June 2010 06:47:20PM 1 point [-]

Similarly, virtue ethics requires that you focus on making yourself into the kind of agent who would make the right choices in general, even if that means making a choice in the actual world that you know will make things worse.

I think this may be overstating it, specifically the "even if..." clause. If the 'choice' is being done at the level of consciousness, then you can probably sidestep the worst failures of virtue ethics. And if it's not, there's no reason to expect that lacking good habits of action will perform any better.

Comment author: Tyrrell_McAllister 04 June 2010 06:52:13PM 0 points [-]

I think this may be overstating it, specifically the "even if..." clause. If the 'choice' is being done at the level of consciousness, then you can probably sidestep the worst failures of virtue ethics.

I'm not sure what you mean. Could you give an example of the kind of scenario you're thinking of?

Comment author: thomblake 04 June 2010 06:57:10PM 2 points [-]

Sure. Let's say you're an honest person. So (for instance) if someone asks you what time it is, you're predisposed to tell them the correct time rather than lying. It probably won't even occur to you that it might be funny to lie about the time. And then the Nazis come to the door and ask about the Jews you're hiding in the attic. Of course you've had time to prepare for this situation, and know what you're going to say, and it isn't going to be, "Yes, right through that hidden trap door".

Comment author: Vladimir_M 04 June 2010 11:04:42PM *  1 point [-]

I'm not an expert in traditional and modern virtue ethics, so my reply might be nonstandard. But in this case, I would simply note that the notion of virtue applies to others too -- and the standards of behavior that are virtuous when applied towards decent people are not necessarily virtuous when applied to those who have overstepped certain boundaries.

Thus, for example, hospitality is a virtue, but for those who grossly abuse your hospitality, the virtuous thing to do is to throw them out of your house -- and it's a matter of practical wisdom to decide when this boundary has been overstepped. Similarly, non-aggression is also a virtue when dealing with honest people, but not when you catch a burglar in flagrante. In your example, the Nazis are coming with an extremely aggressive and hostile intent, and thus clearly place themselves beyond the pale of humanity, so that the virtuous thing to do is to oppose them in the most effective manner possible -- which could mean deceiving them, considering that their physical power is overwhelming.

It seems to me that the real problems with virtue ethics are not that it mandates inflexibility in principles leading to crazy results -- as far as I see, it doesn't -- but due to the fact that decisions requiring judgments of practical wisdom can be hard, non-obvious, and controversial. (At what exact point does someone's behavior overstep the boundary to the point where it becomes virtuous to open hostilities in response?)

Comment author: NancyLebovitz 05 June 2010 03:40:44PM 1 point [-]

"Beyond the pale of humanity" is dubious stuff-- there's a big range between defensive lying and torturing prisoners, and quite a few ethicists would say that there are different rules for how you treat people who are directly dangerous to you and for how you treat people who can't defend themselves from you.

Comment author: prase 04 June 2010 07:25:19PM *  0 points [-]

This is the way I thought about it after reading the OP - virtue ethics as time-consistent consequentialism. But maybe I don't understand correctly what it means to be a virtue ethicist. If it is "try to modify your source code¹ to consistently perform the best actions on average", it opposes neither consequentialism nor deontology: "best" may be evaluated using whatever standard.

¹) I dislike the expression but couldn't find a better formulation

Comment author: Vladimir_Nesov 04 June 2010 06:25:52PM 12 points [-]

For consequences of your actions to be good, it's not necessary for you to personally hold the consequences in your conscious attention. Something has to carry out the process of moral evaluation of consequences, but it's not necessary, and as you point out not always and never fully possible, for that something to be you. If you have a good rule, following that rule becomes a new option to choose from; deciding on virtues can be as powerful as deciding on actions.

But looking at virtue ethics as a foundation for decision-making is like looking at the wings of a Boeing 747 as fundamental elements of reality. Virtues are concepts that exist in the mind to optimize thinking about what's moral, not the morality itself. There is only one level to morality, as there is to physics: the bottom level, the whole thing. All the intermediate concepts, aspects of goodness we understand, exist in the mind, not in morality. Morality does not care about our mathematical difficulties. It determines value the inefficient way.

Let us not lose sight of the reductionist nature of morality, even as we take comfort in the small successes of high-level tools we have for working with it. You don't need to believe in the magical goodness of flu vaccines to benefit from them; on the contrary, it helps to understand the real reason why the vaccines work, distinct from the fantasy of magical goodness.

Comment author: Furcas 04 June 2010 11:16:51PM *  9 points [-]

What's a virtue, anyway?

Comment author: Vladimir_M 05 June 2010 12:35:20AM *  31 points [-]

Here's my tentative answer to this question. It's just a dump of some half-baked ideas, but I'd nevertheless be curious to see some comments on them. This should not be read as a definite statement of my positions, but merely as my present direction of thinking on the subject.

Most interactions between humans are too complex to be described with any accuracy using deontological rules or consequentialist/utilitarian spherical-cow models. Neither of these approaches is capable of providing any practical guidelines for human action that wouldn't be trivial, absurd, or just sophistical propaganda for the attitudes that the author already holds for other reasons. (One possible exception is economic interactions, in which spherical-cow models based on utility functions make reasonably accurate predictions, and sometimes even give correct non-trivial guidelines for action.)

However, we can observe that humans interact in practice using an elaborate network of tacit agreements. These can be seen as Schelling points, so that interactions between people run harmoniously as long as these points are recognized and followed, and conflict ensues when there is a failure to recognize and agree on such a point, or someone believes he can profit from an aggressive intrusion beyond some such point. Recognition of these points is a complex matter, determined by everything from genetics to culture to momentary fashion, and they can be more or less stable and of greater or lesser importance (i.e. overstepping some of them is seen as a trivial annoyance, while on the other extreme, overstepping certain others gives the other party a licence to kill). These points include all the more or less formally stated social and legal norms, property claims, and all the countless other more or less important expectations that we believe we reasonably hold against each other.
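For readers who haven't met the term: a Schelling point is an outcome of a coordination game that players converge on without communicating, because it is mutually salient. A toy illustration (the meeting spots and salience numbers are invented for the example):

```python
# Toy coordination game: two players independently pick a meeting
# spot and win only if they match. Every matching pair is an
# equilibrium, so the payoff structure alone can't select one;
# shared salience (the Schelling point) does the selecting.
spots = ["Grand Central", "Times Square", "a random street corner"]

def payoff(a, b):
    return 1 if a == b else 0

# All matching pairs are equally good; all mismatches equally bad:
assert all(payoff(s, s) == 1 for s in spots)
assert all(payoff(a, b) == 0 for a in spots for b in spots if a != b)

# What breaks the tie is common knowledge of salience, not payoffs.
# Model each player as best-responding to a shared salience prior:
salience = {"Grand Central": 0.6, "Times Square": 0.3,
            "a random street corner": 0.1}  # invented numbers
best = max(salience, key=salience.get)
assert best == "Grand Central"  # both players converge here
```

The point of the toy model is that the equilibrium is picked out by something outside the formal game, which is why the "elaborate network of tacit agreements" above can carry so much moral weight despite never being written down.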

So, here is my basic idea: being a virtuous person means recognizing the existing Schelling points correctly, skillfully and prudently drawing and communicating those points whose exact location depends on you, and, once they've been drawn, committing yourself to defend them relentlessly (so that hopefully, nobody will even see overstepping them at your disadvantage as potentially profitable). An ideal virtuous man by this definition, capable of the practical wisdom to make the best possible judgments and determined to respect others' lines and defend his own, would therefore have the greatest practical likelihood of living his life in harmony and having all his business run smoothly, no matter what his station in life.

A society of such virtuous people would also make possible a higher level of voluntary benevolence in the form of friendship, charity, hospitality, mutual aid, etc., since one could count on others not to maliciously exploit a benevolent attempt at lowering one's guard on crucially important lines and basing human relationships on lines that are more relaxed and pleasant, but harder to defend if push comes to shove. For example, it makes sense to be hospitable if you're living among people whom you know to be determined not to take advantage of your hospitality, or to be merciful and forgiving if you can be reasonably sure that people's transgressions are unusual lapses of judgment unlikely to be repeated, rather than due to a persistent malevolent strategy. Thus, in a society populated by virtuous people, it makes sense to apply the label of virtuousness also to characteristics such as charity, friendliness, mercy, hospitality, etc. (but only to the point where one doesn't let oneself be exploited for them!).

This also seems to clarify the trolley problem-like situations, when we observe that actions that involve your own Schelling boundaries are more important to you than others. You may feel sorry for the folks who will die, perhaps to the point where you'd sacrifice yourself to save them (but perhaps not if this leaves your own kids as poor orphans, since your existing network of tacit agreements involves caring for them). However, pushing the fat man means overstepping the most important and terrible of all Schelling boundaries -- that which defines unprovoked deadly aggression against one's person, and whose violation gives the attacked party the licence to kill you in self-defense. Violating this boundary is such an extreme step that it may be seen as far more drastic than passively witnessing multiple deaths of people in a manner that doesn't violate any tacit agreements and expectations. (Note though that this perspective is distinct from pure egoism: the tacit agreements in question include a certain limited level of altruism, like e.g. helping a stranger in an emergency, at least by calling 911.)

You may view all this virtue talk as consequentialism with respect to the immensely complex network of Schelling points between humans, which takes into account higher-level game-theoretical consequences of actions, which are more important than the factors covered by the usual utilitarian spherical-cow models. Yet this system is far too complex to allow for any simple model based on utility functions or anything similar. At most, we can formulate advice aimed at individuals on how to make judgments based on the relations that concern them personally in some way and are within their own sphere of accurate comprehension -- and the best practical advice that can be formulated basically boils down to some form of virtue ethics.

So, basically, that would be my half-baked summary. I'm curious if anyone thinks that this might make some sense.

Comment author: Eneasz 09 June 2010 04:13:24PM 5 points [-]

Not only does it make sense, I think it's the most descriptively-accurate summary of how people in the real world act that I've seen, which makes it a valuable tool for mapping the territory. I'd love to see it as a top-level post, if you could take the time. I don't think you'd even have to add much.

Comment author: torekp 09 June 2010 12:25:51AM *  2 points [-]

It makes plenty of sense to point out that the Schelling points and the associated cooperative customs point to a set of virtues. But it isn't just consequentialists who can make this point. Some varieties of deontology can do so as well. Habermas's discourse ethics is one example. Thomas Scanlon's ethics is another. From the Habermas wiki:

Habermas extracts the following principle of universalization (U), which is the condition every valid norm has to fulfill: (U) All affected can accept the consequences and the side effects that [the norm's] general observance can be anticipated to have for the satisfaction of everyone's interests, and the consequences are preferred to those of known alternative possibilities for regulation. (Habermas, 1991:65)

One can easily understand the "norms" as tacit (or explicit) agreements, existing or proposed. A society reasoning together along those lines would probably look similar in many ways to one reasoning along utilitarian lines, but the root pattern of justification would differ. The utilitarian justification aggregates interests; the deontologist (of Habermas's sort) justification considers each person's interests separately, compatible with like consideration for others.

Comment author: RobinZ 05 June 2010 04:31:54AM 2 points [-]

I have no idea what a Schelling point is, but the rest of it makes enough sense that I don't think I'm missing too much - thanks for the explanation!

Comment author: Vladimir_M 05 June 2010 05:00:42AM *  6 points [-]

I recommend this article by David Friedman on the topic -- if you've never heard of the concept, you'll probably find lots of interesting insight in it:
http://www.daviddfriedman.com/Academic/Property/Property.html

Friedman uses Schelling points in an attempt to explain the origin of the concept of property rights among humans and the associated legal and social norms, but the approach can be generalized in an obvious way to a much wider class of relations between people (basically anything that could hypothetically lead to a conflict, in the broadest possible sense of the term).

Comment author: Will_Newsome 23 January 2012 12:07:25AM 4 points [-]

I'm curious, has anyone accused you of being Steve Rayhawk yet?

Comment author: Clippy 05 June 2010 12:36:27AM 11 points [-]

Production of paperclips.

Comment author: kodos96 05 June 2010 12:44:55AM 6 points [-]

I can't believe I didn't see that coming.

Comment author: MichaelVassar 08 June 2010 07:20:47AM 3 points [-]

Nope. It's halting your simulation and trading utility function content before you cross the inferential equivalent of the Rawlsian 'veil of ignorance' and become unable to engage in timeless trade.

Comment author: Clippy 08 June 2010 08:20:57PM 1 point [-]

No, production of paperclips is better than that.

Are you the same as the person I emailed about donating to SIAI?

Comment author: MichaelVassar 09 June 2010 04:12:14PM 0 points [-]

Yep. I explain a bit more on a nearby thread.

Comment author: khafra 08 June 2010 08:27:24PM 0 points [-]

I like that, it generalizes well--but does it cover virtues that don't fit well under the colloquial label "fairness"?

Comment author: MichaelVassar 09 June 2010 04:00:05PM 1 point [-]

I don't think it does, though I wasn't careful to think about it. Some virtues are things like "production of paperclips" only with part of humaneness like love substituted for paperclips (if you are a human). Others are capabilities like alertness or prudence.

I gave the answer I did because I was expressing our common ground with Clippy by naming a candidate for the virtue which serves as a key to the timeless marketplace where he wishes to do business with us.

Comment author: Jayson_Virissimo 05 June 2010 02:07:06AM *  2 points [-]

In short, it is a disposition to choose actions that are neither excessive nor deficient, but somewhere in between.

Comment author: thomblake 08 June 2010 08:54:02PM *  1 point [-]

What Jayson Virissimo said. The simple definition is, "A virtue is a trait of character that is good for the person who has it." - I feel like that must be a direct quote from somewhere, as I fire off those same words whenever asked that question, but I'm not sure where it might be from (though I'm guessing Richard Volkman).

Many theorists believe that virtues are consistent habits, in the sense that they persist. Weakly, this means that exhibiting a virtue in one circumstance should be usable as evidence that the same agent will exhibit the same virtue in other circumstances. In a stronger version, someone who is (for example) courageous will act as a courageous person would in all circumstances.

Many theorists also believe that virtues represent a mean between extremes, with respect to some value (some would even define them that way, but then the virtues arguably lose some empirical content). So for example, fighting despite being afraid is valuable. The proper disposition towards this is 'courage'. The relevant vice of deficiency is 'cowardice', and the vice of excess is 'brashness'.

Most of the above was advocated by Aristotle, in the Nicomachean Ethics.

Comment author: cousin_it 09 June 2010 02:10:11PM 2 points [-]

"A virtue is a trait of character that is good for the person who has it."

So the ability to steal without getting caught is a virtue?

Comment author: thomblake 09 June 2010 02:37:50PM 2 points [-]

I think Vladimir Nesov's response and khafra's response are correct, but there's more to be said.

Even granting for the moment that 'ability to steal without getting caught' can be called a trait of character, there are empirical claims that the virtue ethicist would make against this.

First, no one actually has that skill - if you steal, eventually you will be caught.

Second, the sort of person who goes around stealing is not the sort of person who can cultivate the social virtues and develop deep, lasting interpersonal relationships, which is an integral component of the good life for humans.

Comment author: Vladimir_Nesov 09 June 2010 02:54:31PM 3 points [-]

First, no one actually has that skill - if you steal, eventually you will be caught.

Not a valid argument against a hypothetical.

Second, the sort of person who goes around stealing is not the sort of person who can cultivate the social virtues and develop deep, lasting interpersonal relationships, which is an integral component of the good life for humans.

Smoking lesion problem? If developing the skill doesn't actually cause other problems, and instead the predisposition to develop the skill is correlated to those problems, you should still develop the skill.

Comment author: thomblake 09 June 2010 03:23:35PM 0 points [-]

Not a valid argument against a hypothetical.

It's not a valid argument against its truth, but it's a valid argument against its relevance. A hypothetical is useless if its antecedent never obtains.

Smoking lesion problem?

Like I said, it's an empirical question. For philosophers, that's usually the end of the inquiry, though it's very nice when someone goes out and does some experiments to figure out which way causality goes.

Comment author: NancyLebovitz 09 June 2010 03:57:48PM 0 points [-]

First, no one actually has that skill - if you steal, eventually you will be caught.

How is it possible to know that with certainty?

Comment author: thomblake 09 June 2010 05:03:46PM 0 points [-]

How is it possible to know that with certainty?

Should I understand this question as "What experimental result would cause you to update the probability of that belief to above a particular threshold"? Because my prior for it is pretty high at this point. Or are you looking for the opposite / falsification criteria?

Comment author: Blueberry 09 June 2010 05:18:35PM 1 point [-]

If you're a good enough driver, there's a decent chance you'll never get in a car crash. If you study stealing and security systems enough, and carefully plan, I don't see why you would be likely to be caught eventually. Why is your prior high?

Comment author: NancyLebovitz 09 June 2010 05:50:32PM *  1 point [-]

Agreed, with the addition that car crashes are public while stealing is covert, so it's harder to know how much stealing is going on.

Comment author: khafra 09 June 2010 02:18:45PM 2 points [-]

I'd call that a skill, rather than a character trait. The closest thing I can think of to a beneficial but non-admirable character trait is high-functioning sociopathy; but that's at least touching the borderline of mental disease, if not clearly crossing it. Perhaps "charming ruthlessness?" But many would consider e.g. Erwin Rommel virtuous in that respect.

Comment author: Vladimir_Nesov 09 June 2010 02:17:45PM *  2 points [-]

So the ability to steal without getting caught is a virtue?

If it's good for the person who decides to steal. The first problem is that logical control makes individual decisions into group decisions, so if social welfare suffers, so does the person, as a result of individual decisions. Thus, deciding to steal might make everyone worse off, because it's the same decision as one made by other people. The second problem is that the act of stealing itself might be terminally undesirable for the person who steals.

Comment author: cousin_it 09 June 2010 03:49:41PM *  0 points [-]

Parent, grandparent and great-grandparent to my comment were all about "virtues" in virtue ethics.

Comment author: Vladimir_Nesov 09 June 2010 07:47:45PM 0 points [-]

I see. So you agree that ability to steal without getting caught is a virtue according to the definition thomblake cited, and see this as a reductio of thomblake's definition, showing that it doesn't capture the notion as it's used in virtue ethics.

My comment was oblivious to your intention, and discussed how much "ability to steal without getting caught" corresponds to thomblake's definition, without relating that to how well either of these concepts fits "virtues" of virtue ethics.

Comment author: cousin_it 09 June 2010 07:48:43PM 0 points [-]

Yes, all correct.

Comment author: thomblake 09 June 2010 08:03:59PM 0 points [-]

How do you think that works as a reductio? What is it about your example of a putative virtue that makes it fit my definition, but not the 'virtues' of virtue ethics? (is it simply the 'stronger' notions of virtue I offered in the same comment?)

Comment author: Clippy 08 June 2010 08:57:43PM 1 point [-]

But how can there be a vice of excess for making paperclips???

Comment author: thomblake 08 June 2010 09:29:27PM 2 points [-]

But how can there be a vice of excess for making paperclips?

It depends on how good you are at utility-maximization. If you're bad at it, like humans, then you might need heuristics like virtues to avoid simple failure modes.

An obvious failure mode for Clippys is to have excess concern for making paperclips, which uses up resources that could be used to secure larger-scale paperclip manufacturing capabilities.

Thus you must have the appropriate concern for actually making paperclips, balanced against concerns for future paperclips, trade with other powerful intelligent life forms, optimization arms-races, and so forth.

Comment author: Clippy 08 June 2010 10:11:28PM 1 point [-]

Good point! But that would only be an excess concern for direct paperclip production. That doesn't describe a vice of excess for "making paperclips, accounting for all impediments to making paperclips", such as the impediments you list above.

In any case, what's the word for the vice you described?

Comment author: thomblake 09 June 2010 01:47:46PM *  2 points [-]

Good point! But that would only be an excess concern for direct paperclip production. That doesn't describe a vice of excess for "making paperclips, accounting for all impediments to making paperclips", such as the impediments you list above.

Indeed, Aristotle would call that generalized production of paperclips "the greatest good", that towards which all other goods aim, which he called eudaimonia.

Well, that might be a liberal reading of Aristotle.

Comment author: Jack 09 June 2010 02:03:18PM 2 points [-]

Aristotle actually makes a lot more sense to a paper clip maximizer, the telos being so well defined and all. The question is, how would you explain Sartre to Clippy? "But obviously essence precedes existence!"

Comment author: Mass_Driver 08 June 2010 10:19:05PM 0 points [-]

Clippy, for you, the direct production of paper clips is like consumption for a human. So...

  • Too little direct paper clip production: greed
  • Appropriate direct production: continence/prudence
  • Too much direct paper clip production: gluttony
Comment author: Clippy 08 June 2010 10:23:42PM 2 points [-]

That doesn't seem analogous. I mean, I eventually want to be at the point where I will only be doing direct paperclip production, for most of the mass in the universe. In contrast, humans do not want to eventually be at the point where they consume all of the universe.

Clip-gluttony, then, diverges from gluttony in a significant way.

Comment author: Mass_Driver 08 June 2010 10:51:47PM 0 points [-]

Clip-gluttony, then, diverges from gluttony in a significant way.

I'm not sure I agree, Clippy. Perhaps my jargon was unclear.

  • By economic production, I mean taking a unit of inert matter and converting it into a source of potential utility.
  • By economic consumption, I mean taking a unit of potential utility and converting it into a unit of actual utility.
  • By physical consumption, I mean taking a unit of matter, inert or otherwise, and disrupting its pattern in order to arrange it in some other pattern.

You're right, Clippy, to point out that economic consumption and physical consumption amount to much the same thing for you. To make a paper clip, you have to both convert potential utility to actual utility, and disrupt the pattern of non-paper-clip-shaped matter. And, granted, humans don't necessarily share that trait -- they might be able to economically consume a human good like leisure or company while physically consuming very little in the way of material resources.

Still, even though humans don't want to physically consume the entire universe, like you, we probably do want to reach a point where we are only converting potential utility into actual utility, and doing little or no economic production. We want to reach a state where the existing sources of sustainable human economic production are either (1) maximized, (2) infinite, or (3) sufficient to meet all the demand for human economic consumption. Don't you want something very similar to that, substituting the word "clip" for the word "human"?

Thus, for both you and for us, gluttony means failing to maximize long-term actual utility by economically consuming too much too soon.
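The intertemporal trade-off described here (consuming too much too soon versus building capacity for later consumption) can be put in a toy calculation. This is only an illustrative sketch; the growth rate, horizon, and function names are invented for the example:

```python
# Toy model of "clip-gluttony" as premature consumption.
# A stock of resources can be consumed now (1 unit of resources ->
# 1 clip) or invested to grow capacity for later consumption.
# The growth rate and horizon are invented for illustration.

def total_clips(resources: float, invest_rounds: int,
                growth: float = 2.0, horizon: int = 10) -> float:
    """Clips produced if we invest for `invest_rounds` rounds,
    then consume everything before the horizon."""
    if invest_rounds >= horizon:
        return 0.0  # never consumed anything: the opposite failure mode
    return resources * growth ** invest_rounds

# Consuming everything immediately (gluttony in Mass_Driver's sense):
greedy = total_clips(100, invest_rounds=0)    # 100 clips

# Deferring consumption to build capacity first:
patient = total_clips(100, invest_rounds=5)   # 3200 clips

assert patient > greedy
```

The point of the sketch is just that "failing to maximize long-term actual utility by economically consuming too much too soon" is a quantitative claim about where on this curve you stop investing, not a claim that consumption itself is bad.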

Comment author: Clippy 09 June 2010 12:32:08AM 3 points [-]

Okay, if you want to generalize the concept of gluttony broadly enough that it has an analog for Clippys, then the definition you have chosen suffices for this purpose, and I can recognize that as being a vice, for two reasons:

a) It is certainly undesirable to merely make paperclips directly without concern for how many more paperclips could be made, over the long term, by doing something else; and

b) I do often feel "temptation" to do such behavior, like bending metal wires when machines could do a better job, just as humans have "temptations" toward vices.

Your argument is accepted.

Comment author: Blueberry 09 June 2010 03:11:43AM 0 points [-]

Clippy, how do you overcome this kind of temptation? A human analogy might be refusing to push the fat man, even when it saves more lives, but not everyone considers that a vice.

Comment author: simplicio 16 June 2010 04:01:44PM 8 points [-]

I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week...

I agree very much with this. I like consequentialism for dealing with the high-stakes stuff like trolley scenarios, but humdrum everyday ethics involves scenarios more like:

"Should I have said something when my boss subtly put down Alice just now?"

"Should I cut this guy off? I need to get a move on, I'm late for class."

"This old lady can barely stand while the bus is moving, but nobody is getting up. I'm already standing, but should I say something to this drunk man who's slouching across two seats? Or is it not worth the risk of escalating him?"

"This company is asking me for an estimate on some work, but there is significant peripheral work that will have to be done afterward, which they don't seem to realize. If I am hired, I can perform the requested work, then charge high force-account rates for the extra work (as per our contract) and make a killing. But it could hurt their business severely. Should I tell them about their mistake?"

It's not that these can't be analyzed via consequentialism, it's that they're much more amenable to virtue ethical thought.

Comment author: [deleted] 04 June 2010 09:09:03PM 8 points [-]

I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon).

Do you think he had pain asymbolia from birth or developed it over the course of his life? Also, what do you think is the importance of this?

I've been practicing vipassana meditation daily for about 3 years and over this time period I think I've developed pain asymbolia to some degree. I've felt pain asymbolia was just one aspect of a more extensive change in the nature of mental reactions to mental phenomena.

Comment author: Kevin 04 June 2010 09:29:30PM 10 points [-]

There is definitely room on LW for a top-level post on Vipassana.

Comment author: ABranco 07 June 2010 07:51:53AM *  4 points [-]

I've practiced vipassana and can relate to the pain asymbolia thing, and do believe that more advanced vipassana practitioners develop a very high level of it.

Suffering seems to be the consequence of a conflict between two systems: one trying to protect the map ("Oh no! I don't want a worldview that includes a burn on my hand, I don't like that, please go away!") and the other, the territory (the body showing you that there's something wrong and you should pay attention). Consequence: suffering.

Possible solution: just observe the pain for what it is, without trying to conceptualize it. Once you put your attention on it this way, the sensation stays, but there's no suffering.

Of course, you get better at this after the thousandth time you hear Goenka say: "It can be a tickling sensation. It can be a chicken flying sensation. It can be an 'I think I'm dying sensation'—just observe, just observe...". ;)

Comment author: Will_Newsome 05 June 2010 05:15:39AM *  3 points [-]

Hm, from the little knowledge I have it seems developing the asymbolia is plausible. Please write a post on your experiences? I come from a Buddhist humanist background and I think there are some instrumental rationality techniques in that tradition that would be great for people here.

Comment author: Blueberry 04 June 2010 09:25:20PM 1 point [-]

I've felt pain asymbolia was just one aspect of a more extensive change in the nature of mental reactions to mental phenomena.

I would love to hear more about this. I'm extremely skeptical that meditation or prayer can influence the mind to that extent, but I'm very curious.

Comment author: PeterS 04 June 2010 10:37:16PM *  4 points [-]

I'm extremely skeptical that meditation or prayer can influence the mind to that extent, but I'm very curious.

I am too. On the other hand, monks have immolated themselves, withstood torture etc., over the ages without appearing to suffer anywhere near on the order of what such an experience seems to entail. This man for instance even maintained the lotus position for the duration of the event, and also allegedly remained silent and motionless as well. Counter-examples exist in which self-immolators either clearly died horribly or immediately sought to extinguish themselves, but still...

Comment author: nhamann 05 June 2010 05:55:03AM 1 point [-]

This appears to be a video of the incident, and he appears to be entirely silent and motionless. I'd say the grandparent poster's skepticism is pretty much shot here.

Comment author: JoshuaZ 05 June 2010 03:00:01PM *  4 points [-]

Not necessarily, we don't know when in the process he died. Also, he could have had extreme self-control even as he experienced pain, or he could be someone who naturally already had a very high amount of asymbolia. One might speculate that in a Buddhist culture people with already high levels of pain asymbolia or high pain tolerance might be more likely to become Buddhist monks or to become successful monks since it will seem to them (and to those around them) that they have progressed farther along the Eight-Fold path. All of that said, I agree that this evidence supports the notion that pain asymbolia can come from mental exercises.

Comment author: Blueberry 05 June 2010 04:44:04PM 2 points [-]

I would think that someone with natural pain asymbolia could tell the difference, and notice that they had it even before they started meditation techniques. I wonder if Buddhist monasteries do some sort of test to screen out asymbolia, or check someone's starting level. This seems analogous to the problem of Christians confusing schizophrenia with talking to a god, and needing to screen out people with mental disorders from monasteries.

Comment author: MichaelVassar 08 June 2010 07:26:54AM 2 points [-]

Except that natural pain asymbolia seems to be much rarer than schizophrenia. Hmm. It looks to me like artificial pain asymbolia might be, in practice if not in theory, an effective cure for natural schizophrenia. Destroy the motivations behind delusions and you won't have them even if you have an atypically strong propensity to.

Comment author: NancyLebovitz 08 June 2010 09:40:20AM *  4 points [-]

I've heard that sitting meditation isn't safe for schizophrenics (details about risks of meditation), but yoga is.

Comment author: Douglas_Knight 08 June 2010 11:46:47PM 0 points [-]

Maybe I'm reading too much into the subtleties of your phrasing, but I read those sources as contradicting each other, not as allowing fine deduction.

Comment author: NancyLebovitz 09 June 2010 08:54:45AM 0 points [-]

I'm not sure what you mean. "Fine deduction"?

In any case, one problem with comparing the two articles is that much of the risk from meditation seems to be at extended retreats, while the pro-yoga article seems to be about ordinary amounts of practice.

Comment author: Nisan 04 June 2010 06:31:12PM 6 points [-]

One caveat: One should, of course, refrain from using virtue ethics to evaluate others' choices. It's best to use consequentialism for that purpose.

Comment author: thomblake 04 June 2010 06:34:28PM 5 points [-]

Indeed. It's common amongst virtue ethicists to discourage finger-wagging, and emphasize that ethics is about "what I should do".

Comment author: timtyler 04 June 2010 09:18:59PM *  2 points [-]

That doesn't seem biologically realistic. In practice, ethical systems are often about manipulating others not to take actions that some group regards as undesirable.

Comment author: Kaj_Sotala 05 June 2010 01:18:05AM 0 points [-]

I don't think biologically realistic is the expression you were looking for.

But ethical systems can be for manipulating others, or for manipulating yourself. In the case of virtue ethics, it's mainly for yourself.

Comment author: timtyler 05 June 2010 07:26:08AM *  1 point [-]

Sure it was. My perspective would be a bit different: all human moral systems have a hefty component of manipulation and punishment. Virtue ethics does so, if anything, more than most, because punishment is often aimed at preventing reoffense (either by acting as a deterrent or by using incarceration), and so punishers are often unusually interested in the offending agent's dispositions, despite the difficulty of extracting them.

Comment author: PeterS 04 June 2010 08:08:16PM 1 point [-]

ethics is about "what I should do".

It's interesting to distinguish between ethics and morality in this manner: ethics is for the individual's benefit, as opposed to morality, which is for the benefit of the group as a whole. This is why people speak of "medical ethics" or "journalistic ethics", as opposed to "medical morality" and "journalistic morality". Morality is considered a kind of constant normative prescription, whereas ethics is sensitive to subjective dispositions and thus can vary between professions, individuals, etc.

Comment author: Blueberry 04 June 2010 08:26:47PM 1 point [-]

Which is why people speak of "medical ethics" or "journalistic ethics", as opposed to "medical morality" and "journalistic morality".

Actually, that's a different use of the word ethics: the rules of conduct for a group or profession. You can meaningfully say that following the rules of medical ethics is unethical and not to anyone's benefit.

Comment author: PeterS 04 June 2010 08:40:49PM 0 points [-]

You can meaningfully say that following the rules of medical ethics is unethical and not to anyone's benefit.

Can you give an example?

Comment author: Blueberry 04 June 2010 08:45:06PM 0 points [-]

An example of what? My point was that that sentence is not a contradiction, because "ethics" in that particular definition just means following established rules of conduct, which does not necessarily coincide with the individual's benefit or the group's benefit.

Comment author: PeterS 04 June 2010 09:02:26PM 0 points [-]

An example of what?

A rule in medical ethics which is not intended to protect/benefit either the practitioner himself or the purpose of his livelihood.

that particular definition just means following established rules of conduct

Doctors established them in order to preserve the legitimacy of their profession. That's my understanding, in any case.

Comment author: mattnewport 04 June 2010 09:16:13PM *  2 points [-]

Doctors established them in order to preserve the legitimacy of their profession. That's my understanding, in any case.

In some cases it was to enforce a cartel (emphasis mine):

To hold him who has taught me this art as equal to my parents and to live my life in partnership with him, and if he is in need of money to give him a share of mine, and to regard his offspring as equal to my brothers in male lineage and to teach them this art–if they desire to learn it–without fee and covenant; to give a share of precepts and oral instruction and all the other learning to my sons and to the sons of him who has instructed me and to pupils who have signed the covenant and have taken the oath according to medical law, but to no one else.
...

I will not use the knife, not even on sufferers from stone, but will withdraw in favor of such men as are engaged in this work.

Comment author: PeterS 04 June 2010 09:41:33PM *  0 points [-]

Wow... hadn't read the original, interesting. Still, that is the Oath as it was 2k years ago, and as such it is no longer part of established medical ethics. I think it's plausible that in fact the abandonment of that section might have been necessary to preserve the profession's legitimacy! As well as nixing the part where the Oath is consecrated by Apollo, etc.

Comment author: Blueberry 04 June 2010 09:09:38PM *  0 points [-]

Oh, sorry, I wasn't clear. I didn't mean that such a rule existed, just that if one did exist, it would be ethical (in the sense of being a rule of professional conduct) and unethical (in a different sense of the word 'ethical') at the same time. Contrast the second definition on this page with the others.

Doctors established them in order to preserve the legitimacy of their profession. That's my understanding, in any case.

Well, many professions have established such rules, and presumably, they did so to make their professions more legitimate, as well as to give their members a guide to behavior their committees considered better.

Comment author: PeterS 04 June 2010 09:31:16PM *  0 points [-]

Oh, sorry, I wasn't clear.

Maybe I wasn't either... are we actually disagreeing here? Heh.

it would be ethical (in the sense of being a rule of professional conduct) and unethical (in a different sense of the word 'ethical') at the same time. . . [link to some definitions]

I know the word is used in the sense of definitions 1 and 3. What I'm saying is that I think it's more interesting to forget the moral usage altogether, and just stick with saying that ethics is #2, because when you think about it they are very distinct concepts.

Comment author: Blueberry 04 June 2010 09:41:00PM 1 point [-]

It's worth teasing out a few different definitions. There are at least four distinct concepts:

  • Rules of professional conduct, which do not necessarily relate to doing the right thing or anyone's benefit at all

  • A normative prescription

  • Rules for the individual's benefit

  • Rules for the group's benefit

Comment author: Nisan 04 June 2010 07:05:44PM 0 points [-]

Oh, good.

Comment author: LauraABJ 04 June 2010 04:31:53PM 14 points [-]

I agree that these virtue ethics may help some people with their instrumental rationality. In general I have noticed a trend at lesswrong in which popular modes of thinking are first shunned as being irrational and not based on truth, only to be readopted later as being more functional for achieving one's stated goals. I think this process is important, because it allows one to rationally evaluate which 'irrational' models lead to the best outcome.

Comment author: fburnaby 04 June 2010 06:42:56PM 3 points [-]

This also fits my (non-LW) experience very well.

There's that catchy saying: "evolution is smarter than you are". I think it probably also extends somewhat to cultural evolution. Given that our behaviour is strongly influenced by these, I think we should expect to 'rediscover' much of our own biases and intuitions as useful heuristics for increasing instrumental rationality under some fairly familiar-looking utility function.

Comment author: thomblake 04 June 2010 06:53:34PM 2 points [-]

Given that our behaviour is strongly influenced by these, I think we should expect to 'rediscover' much of our own biases and intuitions as useful heuristics for increasing instrumental rationality under some fairly familiar-looking utility function.

Sadly, there's good reason to think that many of these familiar heuristics and biases were very good for acting optimally in tribes on the savanna during a particular period of time, and it's likely that they'll lead us into more trouble the further we go from that environment.

Comment author: fburnaby 04 June 2010 07:51:45PM *  2 points [-]

You are right. I was wrong, or at least far too sloppy. I agree that we should not presume that any given mismatch between our rational evaluation and a more 'folksy' one can be attributed to a problem in our map. Rationality is interesting precisely because it does better than my intuition in situations that my ancestors didn't often encounter.

But the point I'm trying and so far failing to get at is that for the purposes of instrumental rationality, we are equipped with some interesting information-processing gear. Certainly, letting it run amok won't benefit me, but rationally exploiting my intuitions where appropriate is kind of a cool mind-hack. Will_Newsome's post, as I understood it, does a good job of making this point. He says "Moral philosophy was designed for humans, not for rational agents," and suggests we should exploit that where appropriate.

The post resonated with how I try to do science, for example. I adopt a very naive form of scientific realism when I'm learning new scientific theories. I take the observations and proposed explanatory models to be objective truths, picturing them in my mind's eye. There's something about that which is just psychologically easier. The skepticism and clearer epistemological thinking can be switched on later, once I've got my head wrapped around the idea.

Comment author: gwern 06 June 2010 09:29:48PM 1 point [-]

As one of the rationalist quote threads said,

"To become properly acquainted with a truth, we must first have disbelieved it, and disputed against it."

Comment author: RobinZ 07 June 2010 06:58:50PM 0 points [-]

Which one? I can't find it, now.

Comment author: gwern 07 June 2010 09:13:58PM 0 points [-]

Hm, you know what? I think I might've gotten that Novalis quote just from browsing Wikiquotes. Although it certainly does seem like something I would've picked up from the quote threads.

Comment author: billswift 04 June 2010 07:01:26PM *  5 points [-]

I am a virtue ethicist for consequentialist reasons. While good results (consequences) are the end of my ethics, the real world is too complex for a real-time evaluation of the likely results of even relatively simple decisions. So you use virtues (my definition is slightly non-standard): rules that are more likely than not to result in better outcomes. This is partially derived from the definition of morality in Harry Browne's How I Found Freedom in an Unfree World, which, whether or not you agree with it, raises lots of interesting points.

Comment author: kodos96 04 June 2010 08:49:43PM *  1 point [-]

While good results (consequences) are the end of my ethics, the real world is too complex

I've been thinking along these lines lately myself, and I think the classic 'push a fat man in front of the train' thought experiment is a good example of it. In thought-experiment-land, it's stipulated that pushing the fat man would stop the train and save lives... but in the real world you don't know that with any certainty. So if you make the consequentialist decision to push him, but it doesn't stop the train, you've killed one more person than would otherwise have died -- not because your moral philosophy was wrong, but because your mental calculation of the physics of stopping a train was wrong.

If, on the other hand, you make your moral decision on the basis of virtue, then so long as your virtues are well-calibrated heuristics for real-world consequences, you end up making, on average, correct decisions (meaning decisions leading to good consequences) without needing to get the physics (or whatever) right in individual instances. In this case, the heuristic/virtue in question would be "It's wrong to kill innocent people", leading you to NOT push the fat man, which I believe would be the correct decision in real life.
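The role of uncertainty here can be made concrete with a toy expected-value calculation. The probabilities below are invented purely for illustration:

```python
# Toy model of the fat-man push under uncertainty.
# Five people are on the track; all numbers are made up.

def expected_deaths(push: bool, p_stop: float) -> float:
    """Expected deaths given whether you push, and your
    (uncertain) estimate p_stop that pushing stops the train."""
    if not push:
        return 5.0
    # The fat man dies either way; the five die if the train
    # isn't stopped.
    return 1.0 + (1.0 - p_stop) * 5.0

# In thought-experiment-land, p_stop = 1 and pushing looks better:
assert expected_deaths(True, 1.0) < expected_deaths(False, 1.0)

# The break-even point is p_stop = 0.2; below it, the same
# calculation says not to push:
assert expected_deaths(True, 0.1) > expected_deaths(False, 0.1)
```

So the disagreement between the stipulated thought experiment and the real-world judgment can come down entirely to where your honest estimate of p_stop sits, which is a physics question, not a moral one.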

Comment author: Alexandros 04 June 2010 07:07:17PM 1 point [-]

So your definition of virtue is essentially 'good-consequence heuristic'?

I agree with the sentiment by the way.

Comment author: thomblake 04 June 2010 06:09:54PM 5 points [-]

Darn... beat me to it. Good job. I'll still totally write a post about virtue ethics when I'm done with my dissertation though.

You skipped some of the important criticisms here...

  1. Yes, it is important to have some framework for action other than simple consequentialism, since we're bounded agents and are working against a lot of in-built biases. But what's the evidence that virtue ethics is the best thing we've got for that? Philosophers are okay with taking Aristotle's word for it, but we shouldn't, even if he was fairly accurate when it came to most things.

  2. Virtue ethics gets a lot of its strength from an assumption about human psychology that can be empirically verified. The assumption is that the things we call 'virtues' are strong habits of action, such that a person who is 'honest' (possessing the virtue of 'honesty') will be honest in all situations. However, there is some evidence that this is not true, that people's actions can vary significantly from their apparent 'virtues' based on the situation.

That said, my money's on virtue ethics, and I think there's a lot to be said for returning to the conception of ethics as encompassing all of our actions, not just weird situations with a lisp token called 'moral' attached. As I've noted before, I initially resisted the 'planning model of rationality' often invoked around here because it's infeasible for humans to use such a model to perform millions of ordinary, everyday tasks.

But it's entirely possible to use expected utility calculations when you have time, and well-cultivated habits the rest of the time, and I think it's obvious that they both have their place.
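The hybrid strategy in the preceding paragraph can be sketched as a simple fallback procedure. This is only an illustrative sketch; the situation names, habit table, and time budget are all hypothetical:

```python
import time

# Sketch of a hybrid agent: deliberate with expected utility when
# there's time, fall back on cultivated habits otherwise.
# All names and the time budget are hypothetical.

HABITS = {  # well-cultivated dispositions, keyed by situation type
    "asked_a_question": "answer honestly",
    "found_a_wallet": "return it",
}

def choose_action(situation, options, utility, time_budget_s=0.1):
    start = time.monotonic()
    best, best_u = None, float("-inf")
    for act in options:
        if time.monotonic() - start > time_budget_s:
            # Out of time: fall back on the habitual response.
            return HABITS.get(situation, options[0])
        u = utility(act)
        if u > best_u:
            best, best_u = act, u
    return best

# With a cheap utility function there is time to deliberate:
act = choose_action("found_a_wallet", ["return it", "keep it"],
                    {"return it": 1.0, "keep it": -5.0}.get)
assert act == "return it"
```

The design choice worth noting is that the habit table is the degenerate, constant-time branch of the same decision procedure, which matches the point that both deliberation and habit "have their place" rather than being rival ethical theories.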

Comment author: Will_Newsome 05 June 2010 05:21:19AM 1 point [-]

I don't think this post is going to get promoted, so there wouldn't be much apparent overlap to most Less Wrong readers, and I would very much like to see your take. (Aren't you a philosophy grad? I'm just a high school dropout with next to no knowledge of philosophy. Our approaches are very different.)

Comment author: Vladimir_M 04 June 2010 07:34:38PM *  34 points [-]

Will_Newsome:

Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems.

More precisely, they do disagree about the same practically relevant ethical questions that provoke controversy among common folks too, especially the politically and ideologically charged ones -- but their positions are only loosely correlated with their ethical theories, and instead stem from the same gut feelings and signaling games as everybody else's. This seems to me like a pretty damning fact about the way this whole area of intellectual work is conducted in practice.

Comment author: Mass_Driver 05 June 2010 07:18:16PM 14 points [-]

Maybe, but be very careful not to jump from

a pretty damning fact about the way this whole area of intellectual work is conducted in practice.

to

therefore there is no sense in individual people whose rationality is above-average attempting, in good faith and by way of experiment, to apply some subset of this intellectual work to their actual lives,

which I think is a conclusion that some people might inadvertently draw from your comment.

Comment author: RichardChappell 05 June 2010 09:06:29PM 15 points [-]

Isn't this just Indirect Consequentialism?

It's worth noting that pretty much every consequentialist since J.S. Mill has stressed the importance of inculcating generally-reliable dispositions / character traits, rather than attempting to explicitly make utility calculations in everyday life. It's certainly a good recommendation, but it seems misleading to characterize this as in any way at odds with the consequentialist tradition.

Comment deleted 06 June 2010 08:27:25PM [-]
Comment author: Mass_Driver 09 June 2010 03:18:16AM 5 points [-]

This is a useful dilemma. What are some of the possible motivators for refusing to become a gangster?

  • You don't really care about saving the world; the only consequence that actually matters to you is being a nice person.

  • You don't trust your conclusion that Operation: Gangsta will save the world; you place so much heuristic faith in virtues that you actually expect any calculation that outputs a recommendation to become a gangster to be fatally flawed.

  • You don't trust your values not to evolve away from saving the world if you become a gangster; it might be impossible or extremely risky to save the world by thugging out because being a thug makes you care less about saving the world; you might have a career of evil and then just spend the proceeds on casinos, hitmen, and mansions.

Comment author: SilasBarta 11 June 2010 03:30:49PM 3 points [-]

The second and the third are the most convincing reasons, but EY already explained how those follow from using deontology rather than virtue ethics as a heuristic for handling the fact that you are a consequentialist running on corrupt hardware. This calls into question how much insight Will_Newsome has provided with this article.

His point in that article, if you'll recall, is that deontology is consequentialism, just one meta-level up and with the knowledge that your hardware distorts your moral cognition in predictable ways.

Comment author: Jack 09 June 2010 04:29:02AM *  0 points [-]

The problem is becoming a gangster strikes me, just on pragmatic grounds, as a very bad way to fund saving the world so all these motivations are hard to evaluate.

Comment author: Mass_Driver 09 June 2010 04:52:20AM *  5 points [-]

Sure, but try to cope with the dilemma as best you can. If you can think of a better example, great! If not, try to imagine a situation where being a gangster would be pragmatic. Maybe you're the godfather's favorite child, recently returned from the military and otherwise unskilled. Maybe you live in a dome on a colony planet that is essentially one big corrupt city, and ordinary entrepreneurship doesn't pay off properly. Maybe you're a member of a despised or even outlawed ethnicity in medieval times, and no one will sit still to listen to your brilliant ideas about how to build better water mills and eradicate plague unless you first establish yourself as a powerful and wealthy fringe figure.

In general, when trying to evaluate an argument that you're initially inclined to disagree with, you should try to place yourself in The Least Convenient Possible World for refuting that argument. That way, if you still manage to refute the argument, you'll at least have learned something. If you stop thinking when the ordinary world doesn't seem to validate a hypothesis that you didn't believe in to begin with, you don't really learn anything.

Comment author: Will_Newsome 09 June 2010 05:25:20AM 0 points [-]

I would do what sounded like the consequentialist thing to do and become a gangster. Not only would I be saving the world but I'd also be pretty badass if I was doing it right. Rationalists should win when possible and what not. Consequentialism-ism is the key Virtue.

Comment author: Blueberry 09 June 2010 04:23:22PM 0 points [-]

Being badass is a close second.

Comment author: Eneasz 09 June 2010 03:56:57PM 0 points [-]

There isn't much of a dilemma if you assume there are some states worse than death. Eternal torture is less preferable than non-existence. A malicious world of pain and vice is less preferable than a non-existent world. By becoming a malicious, vice-filled person you are moving the world in the direction of being worse than non-existent, and thus are defeating your stated goal. You are doing more to destroy the world than to save it.

Comment deleted 09 June 2010 09:41:21PM [-]
Comment author: Eneasz 09 June 2010 10:14:32PM 1 point [-]

The least convenient possible world is one with superhumanly intelligent AIs that can have complete confidence in their source code, and predict with complete confidence that these means (thuggishness) will in fact lead to those ends (saving the world).

However in that world the world has already been saved (or destroyed) and so this is not relevant. In any relevant world the actor who is resorting to thuggishness to save the world is a human running on hostile hardware, and would be stupid not to take that into consideration.

Comment deleted 10 June 2010 11:55:40AM [-]
Comment author: Eneasz 10 June 2010 03:15:58PM 1 point [-]

I consider the "P" in LCPW to be important. If the agents in question are post-human then it's too late to worry about saving the world. If you still have to save the world, then standard human failure modes do apply.

Comment author: Zack_M_Davis 09 January 2011 12:18:36AM 3 points [-]

but in the words of Zack M. Davis, "Humans don't have utility functions."

The sentiment (I can't say belief; humans don't have beliefs) is sufficiently common and the words are sufficiently generic that it seems odd to quote me specifically.

Comment author: taw 04 June 2010 06:23:38PM 9 points [-]

Consequences of non-consequentialism are disastrous. Just look at charity - instead of trying to get the most good per buck, people donate because it "makes them a better person" or "is the right thing to do" - essentially throwing all that away.

If we got our act together, and did the most basic consequentialist thing of establishing monetary value per death and suffering prevented, the world would immediately become a far less sucky place to live than it is now.

This world is so filled with low-hanging fruit that we're not taking, purely because of backwards morality, that it's not even funny.

Comment author: neq1 04 June 2010 06:25:05PM 5 points [-]

But: "You can be a virtue ethicist whose virtue is to do the consequentialist thing to do"

Comment author: taw 04 June 2010 06:44:04PM 0 points [-]

You are committing fundamental attribution error if you think people are coherently "consequentialist" or coherently "not consequentialist", just like it's FAE to think people are coherently "honest" / "not honest" etc. All this is situational, and it would be good to push everyone into more consequentialism in contexts where it matters most - like charity and public policy.

It matters less if people are consequentialist when dealing with their pets or deciding how to redecorate their houses, so there's less point focusing on those. And there's zero evidence that spill between different areas where you can be "consequentialist" would be even large enough to bother, let alone basing ethics on that.

Comment author: thomblake 04 June 2010 06:49:24PM *  4 points [-]

You are committing fundamental attribution error if you think people are coherently "consequentialist" or coherently "not consequentialist", just like it's FAE to think people are coherently "honest" / "not honest" etc.

This is false.

The FAE is to attribute someone's actions to a trait of character when they are actually caused by situational factors. This does not imply that it's always an error to posit traits of character.

ETA: it still might be the case that there are no consistent habits of action, in which case it would always be a case of the FAE to attribute actions to habits, but I think the burden of proof is on you for denying habits.

Comment author: Kaj_Sotala 04 June 2010 06:52:47PM *  2 points [-]

That's why I wouldn't suggest that anyone switch entirely over to virtue ethics, but rather that they have a virtue-ethical layer inside a generally consequentialist framework, in such a way that their virtues are always grounded in consequentialism.

Comment author: pjeby 04 June 2010 06:29:39PM -1 points [-]

Instead of trying to get the most good per buck, people donate because it "makes them a better person" or "is the right thing to do" - essentially throwing all that away.

Er, by your values, maybe. They could just as easily argue that good-per-buck reasoning reduces the amount of love and charity in everyone's life, making the world an experientially poorer place, and that there's more to life than practical consequences.

Comment author: thomblake 04 June 2010 06:37:25PM 2 points [-]

there's more to life than practical consequences.

I think you'd need to be specific about your definitions for 'practical' and 'consequences' to argue for that. I think in hereabouts parlance, you're saying something like "Your utility function might put a higher value on 'love' and 'charity' than on strangers' lives". Which would be a harder bullet to bite.

Comment author: pjeby 04 June 2010 09:01:28PM -1 points [-]

I think you'd need to be specific about your definitions for 'practical' and 'consequences' to argue for that.

I was saying that "they could just as easily argue" -- i.e., I was using the terms that those people would use.

Comment author: ata 06 June 2010 08:26:45AM 0 points [-]

They could just as easily argue that good-per-buck reasoning reduces the amount of love and charity in everyone's life, making the world an experientially poorer place

But that is an appeal to practical consequences.

Comment author: CronoDAS 04 June 2010 09:20:08PM 2 points [-]

Alonzo Fyfe describes his "desire utilitarianism" as a type of virtue ethics.

Comment author: badger 04 June 2010 08:24:39PM 2 points [-]

I also came to virtue ethics via The Happiness Hypothesis, and I read the quoted passage a little differently. I understand the post as saying virtue ethics can be a useful implementation of consequentialism for bounded agents by giving them high level summaries of what they should do. The passage, however, is arguing this focus on actions is misguided, and I agree.

As others have helpfully reiterated, virtues can't be foundational, just like the rules of rule utilitarianism aren't worth following for their own sake. A computationally bounded agent might not know exactly what it should do, so it follows a rule to approximate the unconstrained ideal.

Knowledge and computational constraints are well-acknowledged, but virtue ethics extends beyond that to address constraints in general. The focus on character is about building the capacity to follow through on the proper actions. Someone might be too scared, too weak-willed, or too apathetic to do the right thing, even if they know what to do. Becoming virtuous is an investment in moral capital, making the person more capable of taking the right action in the future.

Comment author: Kaj_Sotala 04 June 2010 09:48:10PM 0 points [-]

The focus on character is about building the capacity to follow through on the proper actions. Someone might be too scared, too weak-willed, or too apathetic to do the right thing, even if they know what to do. Becoming virtuous is an investment in moral capital, making the person more capable of taking the right action in the future.

I take it that you are talking about "training the elephant"*? If you took that to be one of the main points in virtue ethics as argued by The Happiness Hypothesis, then I agree. One of the biggest effects in my shift towards virtue ethics has been that I've begun constantly evaluating all my actions (and thoughts!) in light of virtue and self-improvement, instead of only having ethics come into play in relatively rare situations. I think this may have been a bit clearer in the original post that Will linked to.

(*: For those who haven't read The Happiness Hypothesis:

One of the points the book makes is that we're divided beings: to use the book's metaphor, there is an elephant and there is the rider. The rider is the conscious self, while the elephant consists of all the low-level, unconscious processes. Unconscious processes actually carry out most of what we do; the rider trains them and tells them what they should be doing. Think of e.g. walking or typing on the computer, where you don't explicitly think about every footstep or every press of a button, but instead just decide to walk somewhere or type something. Readers familiar with PJ Eby will recognize this to be the same as his Multiple Self philosophy.)

From my original post:

So far, I'm not sure of the permanence of this effect. I've previously had feelings of major personal change that sooner or later ended up fading (several of them which are chronicled in this LJ). The rider may get what feels like a major revelation, but the elephant is still running the show, and it needs to be trained over an extended period for there to be any lasting change. So since yesterday, I've been doing my best to keep watch over my thoughts and practice detachment from world-states.

I have the questionable luck of having an easy way of practicing this: I have rashes that frequently make my skin itch. On a couple of occasions, I've tried meditation and the practice of simply passively observing any thoughts and feelings that come to mind until they go away on their own. I began applying that technique to the feeling of itchy skin, and it felt like I was able to ignore the feeling for longer. During the night, I woke up to the feeling of an itch, and on previous nights when that happened I'd been forced to either scratch my skin half to death or get up and apply several layers of moisturizer on it. This time around, even though I did end up scratching it a bit, I was eventually able to fall back to sleep without doing either of those. Also, I believe I was able to some degree detach myself from the feeling of discomfort that I got while I was jogging this morning and getting physically tired. (Not completely, mind you, but to some degree.)

On the less physical front, I've been trying to keep an eye on my thoughts and modify them whenever they didn't really suit the new scheme I'm trying to run. For instance, I noticed that one of my motivations for writing this post was to win the approval of other people who might be interested in this kind of thing or who might admire my skill in introspection or detachment. When I noticed that thought pattern, I attempted to modify it to become more rooted in personal virtue: I am writing this post in order to gain better insight into my transformation, to provide useful or interesting data for others, and so forth. Both introspective insight and voluntarily contributing to humanity's shared reserves of information are virtuous by themselves. I do not need to involve into it the "people's evaluation of me" part, which belongs to my model of the external world and to my model of myself.

Comment author: prase 04 June 2010 07:27:30PM 2 points [-]

Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems.

So, does the virtue ethicist push the fat man from the bridge?

Comment author: Jack 06 June 2010 03:42:40PM 3 points [-]

The thought experiment was designed to exhibit the different implications of a deontological theory that says murder is always wrong and utilitarianism. It is set up to make it really hard for a consequentialist to not push the guy and really easy for a deontologist not to push the guy. It wasn't invented to aid our thinking about virtue ethics and doesn't try to demand a particular answer from virtue ethics. Aristotle's virtues don't map to the situation well and one could invent a virtue that would recommend either course of action.

The relevant thought experiment for the virtue ethicist is something like the mad bodhisattva: if you could exhibit every vice and thus make yourself miserable, but your misery would guide hundreds onto the virtuous path (thereby maximizing utility), would that be the right thing to do?

Comment author: thomblake 04 June 2010 07:39:15PM 1 point [-]

The virtue ethicist endeavors to be the sort of person who doesn't go around pushing fat men from bridges, and so recognizes it as a terrible, tragic situation.

It's important when thinking about that thought experiment to picture yourself running up to the stranger, shoulder-checking him, wrapping your arms around him, feeling the fabric of his shirt press against your face and smelling his sweat. And then listen to him scream and feel his blood and brains get splattered all over your clothing.

The virtue ethicist, like most people, probably freezes and watches the whole thing unfold, or panics, or futilely tries to get the folks off the tracks before the trolley hits them. Do you expect an actual consequentialist human to do better?

As for the right thing to do, it's probably to have better procedures for stopping people from being in the way of trolleys.

Comment author: Vladimir_M 04 June 2010 08:14:29PM *  1 point [-]

thomblake:

Do you expect an actual consequentialist human to do better?

Another interesting question is how all these consequentialists who insist that pushing the fat man is the right thing to do would react if they met someone who has actually followed their injunctions in practice. It seems to me that as soon as they're out of the armchair, people's inner virtue ethicist takes over, no matter how much their philosophy attempts to deny the relevance of his voice!

Comment author: Blueberry 04 June 2010 08:55:40PM 1 point [-]

A real-world example would be a mountain climber who cut the rope that his partner was attached to, because if he didn't, both people would have fallen and died. If I met a mountain climber who did that, I wouldn't react negatively, any more than I would to someone who killed in self-defense.

Comment author: Vladimir_M 05 June 2010 02:46:19AM *  0 points [-]

That's not a very good analogy. One could argue that by engaging in a mountain-climbing expedition, you voluntarily accept certain extraordinary risks, and the partner merely got unlucky with his own share of that risk. Whereas one of the essential premises in the fat man/trolley problem is that the fat man is a neutral passerby, completely innocent of the whole mess.

So, the real question is if you'd be so favorably inclined towards a mountain climber who, in order to save multiple lives, killed a completely unrelated random individual who was not at all entangled with their trouble.

Comment author: Blueberry 05 June 2010 05:39:32PM 1 point [-]

That's a good point. What about the following scenario: some crazy philosopher holds A and B at gunpoint and forces them to go mountain climbing. They do, and A starts to slip. B realizes he has to cut the rope or he'll fall also. In this case, A didn't voluntarily accept any risk. I'd still be favorably inclined to B.

Comment author: Vladimir_M 06 June 2010 07:54:27AM *  0 points [-]

Hm... according to my intuitions, this example features another important premise that is lacking in the original fat man/trolley problem -- namely, a culprit who willingly and maliciously brought about the problematic situation. Going by my intuitive feeling, it turns out that in such scenarios, I'm much more inclined to look favorably at hard-headed consequentialist decisions by people caught in the mess against their will, apparently because I tend to place all the blame on the main culprit.

Note that this is just an impromptu report of my introspection, not an attempt at a coherent discussion of the issue. I'll definitely need to think about this a bit more.

Comment author: NancyLebovitz 06 June 2010 10:58:22AM *  3 points [-]

This is reminding me of some long discussions of "The Cold Equations", a short story which is an effort to set up a situation where an ideally sympathetic person (pretty young woman with pleasant personality) has to be killed for utilitarian reasons.

The consensus (after decades of poking at the story) is that it may not be possible to rig the story to get the emotional effect and have it make rational sense.

I'm not absolutely certain about this-- what if the girl had been the first stowaway rather than the nth, so that there wasn't as good a reason to know that it shouldn't be so easy for stowaways to get on ships?

Comment author: Alicorn 06 June 2010 05:53:05PM 1 point [-]

If I remember correctly, she still would have died even if she hadn't been jettisoned - the ship would have crashed and she would hardly walk away from that. That makes her unsalvageable. In standard trolley problems I don't switch tracks, but if there were a way to switch the track so the train killed only one of the same five people it would already have killed, that person is unsalvageable and can be singled out to save the salvageable.

Comment author: NancyLebovitz 06 June 2010 06:02:48PM 0 points [-]

You're right.

Comment author: CronoDAS 06 June 2010 07:06:12PM 0 points [-]

The SciFi Channel usually does a pretty poor job at making original movies, but their adaptation of "The Cold Equations" was pretty good, covering most of the problems with the original story. The pilot and the girl frantically look around for excess mass to jettison, and find some, but it's not enough. The issue of what measures were taken to stop people from stowing away simply weren't discussed; she's there, and they have to deal with it. And at the last minute, the pilot does offer to sacrifice himself to save the girl, but she refuses to let him.

Comment author: prase 04 June 2010 08:40:11PM 0 points [-]

An ideal consequentialist would push the fat man in the standard trolley scenario. I was asking whether an ideal virtue ethicist would. It doesn't matter (for me, now) that actual (if that means average) people, moral philosophers included, don't always follow their principles. Nor does it matter whether they recognise the situation as tragic and feel uneasy with all the blood and screams. I ask what the right thing to do is under virtue ethics, when there are no available procedures better than pushing the fat man. And I find your answer a bit ambiguous.

(Disclaimer: My interest is purely theoretical. I don't hold any definite position on what's right in trolley scenario, and I would almost certainly not push the fat man, although I can imagine killing him in some less personal way.)

Comment author: Blueberry 04 June 2010 08:51:27PM 3 points [-]

Ideal consequentialist would push the fat man in the standard trolley scenario. I was asking whether an ideal virtue ethicist does

You are confusing ethics and metaethics. Consequentialists, deontologists, and virtue ethicists all might or might not push the fat man, but they would all analyze the problem differently.

It's not true that all possible consequentialists would push the fat man. A consequentialist might decide that one pushed death would be a worse consequence than X train deaths. Consequentialists don't necessarily count the number of deaths and choose the smaller number; they just choose the option that leads to the best consequence.

Comment author: Jack 06 June 2010 03:13:44PM 4 points [-]

This criticism is exactly right except that both the form question (rules, consequences or character traits) and the content question (pleasure, preference, the Categorical Imperative, Aristotle's list, etc.) are part of normative ethics (what I assume you mean by 'ethics'). Metaethical questions are things like "What are we doing when we use normative language?" and "Are there moral truths?"

Comment author: Blueberry 06 June 2010 06:03:11PM 0 points [-]

Thanks for the correction: I didn't realize that. Are there better terms for expressing the difference between form and content in ethics?

Comment author: Jack 06 June 2010 09:40:57PM 0 points [-]

Not that I know of, I'm afraid. In fact, I may have invented the form and content language.

Comment author: prase 04 June 2010 11:23:13PM *  0 points [-]

OK, I should have said "typical consequentialist". Of course a consequentialist may value the life of the fat man more than the sum of the lives of the people on the track, or find other consequences of pushing him down bad enough to refrain from it, or completely ignore humans and care about paperclips. I am not confusing ethics and metaethics, but rather assuming we are speaking about consequentialists with typical human values, for whom death is wrong and more deaths are more wrong, ceteris paribus. For such a consequentialist there may always be some critical number of people on the track whose collective death would be worse than all the consequences of pushing the fat man. On the other hand, deontologists typically hold that killing an innocent person is bad, and should, at least in theory, not push the man even if the survival of all mankind were at stake. At least this is how I understand the difference between consequentialism and deontology.

Speaking about all possible consequentialists is tricky. Any moral decision algorithm can be classified as consequentialist when we try hard enough. I want to get an idea about what is the main difference between consequentialism and virtue ethics, given typical human values. The OP has said that they are the same except in bizarre situations like the trolley problem. So what is the difference in the trolley problem?

(If there is a consequentialist who disagrees with me and would not push the man even if it could save five billion lives, let me know, ideally with some justification.)

Comment author: mattnewport 04 June 2010 11:43:25PM 4 points [-]

assuming we are speaking about consequentialists with typical human values, for whom death is wrong and more deaths are more wrong, ceteris paribus.

I would question whether these are typical human values. People generally think the deaths of some people are more wrong than the deaths of other people. Most people do not value all human life equally. For typical humans ceteris almost never is paribus when it comes to choosing who lives and who dies.

Comment author: Vladimir_M 05 June 2010 03:01:15AM *  8 points [-]

ceteris almost never is paribus

At the risk of getting downvoted for nitpicking, I must point out that if you really insist on using Latin like this, the correct way to say it is: cetera almost never are pares.

Sorry, but the sight of butchered Latin really hurts my eyes.

Comment author: Alicorn 05 June 2010 03:03:20AM 3 points [-]

I had a teacher once who liked to say "ceteris ain't paribus". Is that better or worse?

Comment author: Vladimir_M 05 June 2010 03:22:10AM *  2 points [-]

That's actually a matter where some interesting linguistic judgment might be in order.

The "ain't" part is grammatical in some dialects of English, though, as far as I know, not in any form of standard English that is officially recognized anywhere. But the wrong cases for cetera and pares are not grammatical in any form of Latin that has ever been spoken or written anywhere.

On the whole, I'd say that "ain't" is less bad, since in the dialects in which it is grammatical, it has the same form for both singular and plural. Therefore, at least it respects the number agreement with the Latin plural cetera, whereas "is" commits an additional offense by violating that agreement.

Comment author: Blueberry 05 June 2010 08:36:18AM 5 points [-]

I sympathize with this logic, but I don't completely agree. Languages frequently take words from other languages and regularize them, and when this occurs, they are no longer inflected the way they were in the original language. When we use Latin phrases in English often enough, they become part of the English language. 'Ceteris' and 'paribus' are in the ablative case because they were taken from a particular Latin expression, so it's reasonable to keep them in that case when using the words in that context, even though they're not being used in exactly the same way.

Comment author: NancyLebovitz 05 June 2010 03:52:23PM 2 points [-]

I think of "ain't" as either standard in some dialects, or as a tool for emphasis in standard English (usually spoken rather than written).

It seems reasonable that if you're using informal English for emphasis, then it's stylistically consistent to use the sort of colloquial mangled Latin that an English speaker who doesn't know Latin would use.

Comment author: arundelo 05 June 2010 01:12:23PM 1 point [-]

Wikipedia:

The word ain't can be used in both speech and writing to catch attention and to give emphasis, as in "You ain't seen nothing yet," or "If it ain't broke, don't fix it." Merriam-Webster's Collegiate Dictionary gives an example from film critic Richard Schickel: "the wackiness of movies, once so deliciously amusing, ain't funny anymore."

(Which is exactly how it's used in "ceteris ain't paribus". See also this post by Geoff Nunberg.)

Comment author: mattnewport 05 June 2010 03:08:14AM 0 points [-]

Apologies, the only Latin I remember from school is Caecilius est in horto. I actually spent several minutes with Google trying to figure out what it should be but there appears to be a shortage of online Latin translation services. Gap in the market?

Comment author: Vladimir_M 05 June 2010 03:41:10AM *  2 points [-]

One problem is that such a service is in much less demand compared to the living languages currently supported by translation programs. However, another major difficulty is that Latin is a far more synthetic language than English, and its inflectional suffixes often carry as much information as multiple-word clauses in English. For example, the mentioned ceteris paribus packs the entire English phrase "with everything else being the same" into just two words. Similarly, the last word in quod erat demonstrandum (a.k.a. "QED") packs the last four words of the English "that which was supposed to be demonstrated" into one. This makes it much harder to come up with satisfactory translation heuristics compared to more analytic languages, especially considering the extreme freedom of word order in Latin.

Similar difficulties, of course, exist in automatic translation of English to other highly synthetic languages, like e.g. the Slavic ones.

Comment author: prase 05 June 2010 12:14:32AM 0 points [-]

I am clearly unable to express myself clearly today.

I haven't said that it's typical to value all life equally. I tried to say that a set X of x deaths is typically worse than a set Y of y deaths if x > y. This almost always holds when Y is a subset of X (that was the intended meaning of ceteris paribus), but if x >> y, it often holds even when the sets are disjoint.

Also, the context of the trolley scenario is that the fat man isn't your relative or friend; he's a random stranger, fully comparable with those on the track.

Comment author: badger 04 June 2010 08:39:48PM 0 points [-]

Virtues don't add much to discussion about what you should or shouldn't do. Instead, I think they are useful in talking about what kind of person you should be, i.e. someone courageous enough to push the man iff that's the right action to take.

Comment author: NancyLebovitz 04 June 2010 08:04:18PM 1 point [-]

Any suggestions about evaluating virtues?

Comment author: thomblake 04 June 2010 06:43:28PM 1 point [-]

Another thing that might be relevant... many virtue ethicists (notably Richard Volkman) will claim not to have a theory of right action at all. A mistaken view of virtue ethics (which I find myself uncarefully uttering sometimes) insists that "One should always act so as to cultivate virtue" or something like that. But any decent justification of virtue will be in consequentialist terms - a virtue is a trait of character that is good for the one who has it.

Comment author: xamdam 10 June 2010 04:51:30PM 0 points [-]

Here is a video from James March, a Stanford psychology/decision making researcher on some psychological implications of consequentialism

http://www.youtube.com/watch?v=bztgYMoTEjM

Comment author: AlexMennen 07 June 2010 01:09:27AM *  0 points [-]

I'm a little confused here. Are you saying that Virtue ethics = consequentialism + TDT? I always figured consequentialists were allowed to use TDT. Or are you saying that virtue ethics, deontology, and consequentialism are all equivalent, but that virtue ethics is the best way for humans to interpret ethics? If so, I still do not see why. Consequentialism seems nice and simple to me. Or is it something else?

it gets easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers, they're bounded agents with little capacity for reflection.

This is false. We are hyperbolic discounters, but there is no rule stating that we must allocate the same potential utility for every possible time period.

Comment author: Nick_Tarleton 09 June 2010 07:18:43PM 3 points [-]

Hyperbolic discounting is insane because it's dynamically inconsistent (the way humans do it; you could have a dynamically consistent hyperbolic discount rate from a non-indexically-defined zero time, but that's not what's usually meant), not because it's discounting.
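(Editorial aside: the preference reversal Nick describes is easy to see numerically. The following is a minimal sketch; the discount parameters and reward values are arbitrary, chosen only to make the reversal visible.)

```python
# Hyperbolic discounting values a reward r arriving after delay t at
# r / (1 + k*t); exponential discounting values it at r * d**t.
def hyperbolic(reward, delay, k=1.0):
    return reward / (1.0 + k * delay)

def exponential(reward, delay, d=0.9):
    return reward * d ** delay

# Choice: $50 on day 10 vs. $100 on day 15.
# Evaluated today (delays 10 and 15), the hyperbolic discounter
# prefers the larger-later reward...
assert hyperbolic(100, 15) > hyperbolic(50, 10)   # 6.25 > ~4.55

# ...but evaluated on day 9 (delays 1 and 6), the same agent's
# preference reverses toward the smaller-sooner reward:
assert hyperbolic(50, 1) > hyperbolic(100, 6)     # 25.0 > ~14.29

# An exponential discounter is dynamically consistent: the ratio of
# the two discounted values depends only on the gap between the
# rewards, so the preference never flips as the dates approach.
assert exponential(100, 15) > exponential(50, 10)
assert exponential(100, 6) > exponential(50, 1)
```

This is the sense in which the hyperbolic shape itself, not discounting per se, produces the inconsistency: holding the two rewards fixed, merely moving the evaluation date changes which one the hyperbolic agent picks.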

Comment author: PhilGoetz 09 June 2010 07:16:27PM 1 point [-]

I think he's saying a cached set of ethical judgements is a virtue ethics. This could apply equally well with 'deontological' substituted everywhere for 'virtue'.