Meta: Influenced by a cool blog post by Kaj, which was in turn influenced by a cool Michael Vassar idea (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.


There's been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it's not postmodernism. It's virtue ethics. "What, virtue ethics?! Are you serious?" Yup. I'm so contrarian I think cryonics isn't obvious and that virtue ethics is better than consequentialism. This post will explain why.

When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even better than they are at inventing good reasons for actions that turned out unpopular, and it's hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn't have much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...

Moral philosophy was designed for humans, not for rational agents. When you're used to thinking about artificial intelligence, economics, and decision theory, it's easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers; they're bounded agents with little capacity for reflection. Utility functions are great and all, but in the words of Zack M. Davis, "Humans don't have utility functions." Similarly, Kaj warns us: "be extra careful when you try to apply the concept of a utility function to human beings." Back in the day nobody thought smarter-than-human intelligence was possible, and many still don't. Philosophers came up with ways for people to live their lives, have a good time, be respected, and do good things; they weren't even trying to create morals for anyone too far outside the norm of whatever society they inhabited at the time, or whatever society they imagined to be perfect. I personally think that the Buddha had some really interesting things to say, and his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon). Epicurus, Mill, and Bentham were great thinkers and all, but it's not obvious that what they were saying is best practice for individual people, even if their ideas about policy are strictly superior to the alternatives. Virtue ethics is good for bounded agents: you don't have to waste memory on what a personalized rulebook says about different kinds of milk, and you don't have to think 15 inferential steps ahead to determine whether you should drink skim or whole.
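To make "hyperbolic discounters: not anything resembling sane" concrete, here's a minimal Python sketch of the standard one-parameter hyperbolic model, V = A/(1 + kD). The reward sizes, delays, and discount rate are made up for illustration:

```python
# Minimal sketch of one-parameter hyperbolic discounting: V = A / (1 + k*D).
# Reward sizes, delays, and k are invented for illustration.

def hyperbolic_value(amount, delay_days, k=0.1):
    """Perceived present value of a reward that is delay_days away."""
    return amount / (1 + k * delay_days)

for base_delay in (0, 60):
    sooner = hyperbolic_value(100, base_delay)      # $100 after base_delay days
    later = hyperbolic_value(110, base_delay + 5)   # $110 five days after that
    pick = "sooner" if sooner > later else "later"
    print(f"delay={base_delay:2d}d: sooner={sooner:6.2f}, later={later:6.2f} -> prefer {pick}")

# delay= 0d: grabs the $100 now; delay=60d: prefers waiting for the $110.
# The same pair of rewards gets ranked differently depending on when you ask --
# a preference reversal no agent with a fixed utility function and exponential
# discounting would ever exhibit.
```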

You can be a virtue ethicist whose virtue is to do the consequentialist thing to do (because your deontological morals say that's what is right). Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day-to-day life, just in crazy situations like trolley problems. And anyway, they're all actually virtue ethicists: they're trying to do the 'consequentialist' or 'deontologist' things to do, which usually happen to be the same. Alicorn's decided to do her best to reduce existential risk, and I, being a pseudo-consequentialist, have also decided to do my best to reduce existential risk. Virtue ethicists can do these things too, but they can also exploit the consistency effects such actions invariably come with. If you're a virtue ethicist it's easier to say "I'm the type of person who will reply to all of the emails in my inbox and sort them into my GTD system, because organization and conscientiousness are virtues" and use this as a way to motivate yourself. So go ahead and be a virtue ethicist for the consequences (...or a consequentialist because it's deontic). It's not illegal!

Retooled virtue ethics is better for your instrumental rationality. The Happiness Hypothesis critiques the way Western ethics, in both the deontological tradition started by Immanuel Kant and the consequentialist tradition started by Jeremy Bentham, has become increasingly reason-based:

The philosopher Edmund Pincoffs has argued that consequentialists and deontologists worked together to convince Westerners in the twentieth century that morality is the study of moral quandaries and dilemmas. Where the Greeks focused on the character of a person and asked what kind of person we should each aim to become, modern ethics focuses on actions, asking when a particular decision is right or wrong. Philosophers wrestle with life-and-death dilemmas: Kill one to save five? Allow aborted fetuses to be used as a source of stem cells? [...] This turn from character ethics to quandary ethics has turned moral education away from virtues and towards moral reasoning. If morality is about dilemmas, then moral education is training in problem solving. Children must be taught how to think about moral problems, especially how to overcome their natural egoism and take into their calculations the needs of others.

[...] I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week [...] The second problem with the turn to moral reasoning is that it relies on bad psychology. Many moral education efforts since the 1970s take the rider off the elephant and train him to solve problems on his own. After being exposed to hours of case studies, classroom discussions about moral dilemmas, and videos about people who faced dilemmas and made the right choices, the child learns how (not what) to think. Then class ends, the rider gets back on the elephant, and nothing changes at recess. Trying to make children behave ethically by teaching them to reason well is like trying to make a dog happy by wagging its tail. It gets causality backwards.

To quote Kaj's response to the above:

Reading this chapter, that critique and the description of how people like Benjamin Franklin made it into an explicit project to cultivate their various virtues one at a time, I could feel a very peculiar transformation take place within me. The best way I can describe it is that it felt like a part of my decision-making or world-evaluating machinery separated itself from the rest and settled into a new area of responsibility that I had previously not recognized as a separate one. While I had previously been primarily a consequentialist, that newly-specialized part declared its allegiance to virtue ethics, even though the rest of the machinery remained consequentialist. [...]

What has this meant in practice? Well, I'm not quite sure of the long-term effects yet, but I think that my emotional machinery kind of separated from my general decision-making and planning machinery. Think of "emotional machinery" as a system that takes various sorts of information as input and produces different emotional states as output. Optimally, your emotional machinery should attempt to create emotions that push you towards taking the kinds of actions that are most appropriate given your goals. Previously I was sort of embedded in the world and the emotional system was taking its input from the entire whole: the way I was, the way the world was, and the way that those were intertwined. It was simultaneously trying to optimize for all three, with mixed results.

But now, my self-model was set separate from the world-model, and my emotional machinery started running its evaluations primarily based on the self-model. The main questions became "how could I develop myself", "how could I be more virtuous" and "how could I best act to improve the world". From the last bit, you can see that I haven't lost the consequentialist layer in my decision-making: I am still trying to act in ways that improve the world. But now it's more like my emotional systems are taking input from the consequentialist planning system to figure out what virtues to concentrate on, instead of the consequentialist reasoning being completely intertwined with my emotional systems.

Applying both consequentialist and virtue ethicist layers to the way you actually get things done in the real world seems to me a great idea. It recognizes that most of us don't actually have that much control over what we do. Acknowledging this, dealing with its consequences, and facing what it says about us allows us to do the things we want and feel good about them at the same time.

So, if you'd like, try being a virtue ethicist for a week. If a key part of epistemic rationality is having your beliefs pay rent in anticipated experience, then instrumental rationality is about having your actions pay rent in expected utility. Use science! If being a virtue ethicist helps even one person be more the person they want to be, as it did for Kaj, then this post was well worth the time spent.


Will_Newsome:

Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems.

More precisely, they do disagree about the same practically relevant ethical questions that provoke controversy among common folks too, especially the politically and ideologically charged ones -- but their positions are only loosely correlated with their ethical theories, and instead stem from the same gut feelings and signaling games as everybody else's. This seems to me like a pretty damning fact about the way this whole area of intellectual work is conducted in practice.

Maybe, but be very careful not to jump from

a pretty damning fact about the way this whole area of intellectual work is conducted in practice.

to

therefore there is no sense in individual people whose rationality is above-average attempting, in good faith and by way of experiment, to apply some subset of this intellectual work to their actual lives,

which I think is a conclusion that some people might inadvertently draw from your comment.


In both your GTD example and Kaj's posting example, virtue doesn't seem to affect what you think you should do, just how you motivate yourself to do it, so "virtue psychology" might be a more accurate description than "virtue ethics".

Isn't this just Indirect Consequentialism?

It's worth noting that pretty much every consequentialist since J.S. Mill has stressed the importance of inculcating generally-reliable dispositions / character traits, rather than attempting to explicitly make utility calculations in everyday life. It's certainly a good recommendation, but it seems misleading to characterize this as in any way at odds with the consequentialist tradition.

5Roko14y
It might be worth presenting Will with a dilemma that drives a wedge between a particular virtue and some consequence he cares about. E.g. suppose that the only way to fund saving the world is by becoming a gangster and inculcating the vices of revenge, mercilessness and the love of money in yourself.
7Mass_Driver14y
This is a useful dilemma. What are some of the possible motivators for refusing to become a gangster?

* You don't really care about saving the world; the only consequence that actually matters to you is being a nice person.
* You don't trust your conclusion that Operation: Gangsta will save the world; you place so much heuristic faith in virtues that you actually expect any calculation that outputs a recommendation to become a gangster to be fatally flawed.
* You don't trust your values not to evolve away from saving the world if you become a gangster; it might be impossible or extremely risky to save the world by thugging out, because being a thug makes you care less about saving the world; you might have a career of evil and then just spend the proceeds on casinos, hitmen, and mansions.
4SilasBarta14y
The second and the third are the most convincing reasons, but EY already explained how those follow from using deontology rather than virtue ethics as a heuristic for handling the fact that you are a consequentialist running on corrupt hardware. This calls into question how much insight Will_Newsome has provided with this article. His point in that article, if you'll recall, is that deontology is consequentialism, just one meta-level up and with the knowledge that your hardware distorts your moral cognition in predictable ways.
0Jack14y
The problem is that becoming a gangster strikes me, just on pragmatic grounds, as a very bad way to fund saving the world, so all these motivations are hard to evaluate.
8Mass_Driver14y
Sure, but try to cope with the dilemma as best you can. If you can think of a better example, great! If not, try to imagine a situation where being a gangster would be pragmatic. Maybe you're the godfather's favorite child, recently returned from the military and otherwise unskilled. Maybe you live in a dome on a colony planet that is essentially one big corrupt city, and ordinary entrepreneurship doesn't pay off properly. Maybe you're a member of a despised or even outlawed ethnicity in medieval times, and no one will sit still to listen to your brilliant ideas about how to build better water mills and eradicate plague unless you first establish yourself as a powerful and wealthy fringe figure.

In general, when trying to evaluate an argument that you're initially inclined to disagree with, you should try to place yourself in The Least Convenient Possible World for refuting that argument. That way, if you still manage to refute the argument, you'll at least have learned something. If you stop thinking when the ordinary world doesn't seem to validate a hypothesis that you didn't believe in to begin with, you don't really learn anything.
0Eneasz14y
There isn't much of a dilemma if you assume there are some states worse than death. Eternal torture is less preferable than non-existence. A malicious world of pain and vice is less preferable than a non-existent world. By becoming a malicious, vice-filled person you are moving the world in the direction of being worse than non-existent, and thus are defeating your stated goal. You are doing more to destroy the world than to save it.
2Roko14y
Consider the least convenient possible world
0Eneasz14y
The least convenient possible world is one with superhumanly intelligent AIs that can have complete confidence in their source code, and predict with complete confidence that these means (thuggishness) will in fact lead to those ends (saving the world). However in that world the world has already been saved (or destroyed) and so this is not relevant. In any relevant world the actor who is resorting to thuggishness to save the world is a human running on hostile hardware, and would be stupid not to take that into consideration.
2Roko14y
Then it isn't the LCPW
1Eneasz14y
I consider the "P" in LCPW to be important. If the agents in question are post-human then it's too late to worry about saving the world. If you still have to save the world, then standard human failure modes do apply.
-2Will_Newsome14y
I would do what sounded like the consequentialist thing to do and become a gangster. Not only would I be saving the world but I'd also be pretty badass if I was doing it right. Rationalists should win when possible and what not. Consequentialism-ism is the key Virtue.
0Blueberry14y
Being badass is a close second.

I agree that these virtue ethics may help some people with their instrumental rationality. In general I have noticed a trend at Less Wrong in which popular modes of thinking are first shunned as irrational and not based on truth, only to be readopted later as being more functional for achieving one's stated goals. I think this process is important, because it allows one to rationally evaluate which 'irrational' models lead to the best outcome.

5fburnaby14y
This also fits my (non-LW) experience very well. There's that catchy saying: "evolution is smarter than you are". I think it probably also extends somewhat to cultural evolution. Given that our behaviour is strongly influenced by these, I think we should expect to 'rediscover' many of our own biases and intuitions as useful heuristics for increasing instrumental rationality under some fairly familiar-looking utility function.
2thomblake14y
Sadly, there's good reason to think that many of these familiar heuristics and biases were very good for acting optimally in tribes on the savanna during a particular period of time, and it's likely that they'll lead us into more trouble the further we go from that environment.
2fburnaby14y
You are right. I was wrong, or at least far too sloppy. I agree that we should not presume that any given mismatch between our rational evaluation and a more 'folksy' one can be attributed to a problem in our map. Rationality is interesting precisely because it does better than my intuition in situations that my ancestors didn't often encounter. But the point I'm trying and so far failing to get at is that for the purposes of instrumental rationality, we are equipped with some interesting information-processing gear. Certainly, letting it run amok won't benefit me, but rationally exploiting my intuitions where appropriate is kind of a cool mind-hack. Will_Newsome's post, as I understood it, does a good job of making this point. He says "Moral philosophy was designed for humans, not for rational agents," and that we should exploit that where appropriate.

The post resonated with the way I try to do science, for example. I adopt a very naive form of scientific realism when I'm learning new scientific theories. I take the observations and proposed explanatory models to be objective truths, picturing them in my mind's eye. There's something about that which is just psychologically easier. The skepticism and clearer epistemological thinking can be switched on later, once I've got my head wrapped around the idea.
2gwern14y
As one of the rationalist quote threads said,
0RobinZ14y
Which one? I can't find it, now.
0gwern14y
Hm, you know what? I think I might've gotten that Novalis quote just from browsing Wikiquotes. Although it certainly does seem like something I would've picked up from the quote threads.

For the consequences of your actions to be good, it's not necessary for you to personally hold the consequences in your conscious attention. Something has to carry out the process of moral evaluation of consequences, but it's not necessary, and as you point out not always and never fully possible, for that something to be you. If you have a good rule, following that rule becomes a new option to choose from; deciding on virtues can be as powerful as deciding on actions.

But looking at virtue ethics as a foundation for decision-making is like looking at the wings of a Boeing 747 as fundamental elements of reality. Virtues are concepts that exist in the mind to optimize thinking about what's moral; they are not the morality itself. There is only one level to morality, as there is to physics: the bottom level, the whole thing. All the intermediate concepts, the aspects of goodness we understand, exist in the mind, not in morality. Morality does not care about our mathematical difficulties. It determines value the inefficient way.

Let us not lose sight of the reductionist nature of morality, even as we take comfort in the small successes of the high-level tools we have for working with it. You don't need to believe in the magical goodness of flu vaccines to benefit from them; on the contrary, it helps to understand the real reason why the vaccines work, distinct from the fantasy of magical goodness.

A quick thought that may not stand up to reflection:

Consequentialists should think of virtue ethics as a human-implementable Updateless Decision Theory. Under UDT, your focus is on being an agent whose actions maximize utility over all possibilities, even those that you know now not to be the case, as long as they were considered possible when your source code was written. Hence, in the Counterfactual Mugging, you make a choice that you know will make things worse in the actual world.

Similarly, virtue ethics requires that you focus on making yourself into the kind of agent who would make the right choices in general, even if that means making a choice that you know will make things worse in the actual world.

Edited to reorder clauses for clarity.
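A minimal Python sketch of that calculation, using the usual Counterfactual Mugging payoffs ($100 demanded on tails; $10,000 rewarded on heads iff you're the kind of agent that pays). The code is only an illustration, not anyone's actual decision theory:

```python
# Toy model of the Counterfactual Mugging. Omega flips a fair coin: on tails
# it asks you for $100; on heads it pays you $10,000 iff you are the kind of
# agent that pays on tails. A UDT-style agent scores whole policies, averaged
# over both outcomes, before knowing which world it is in.

def expected_value(policy_pays: bool) -> float:
    heads = 10_000 if policy_pays else 0   # reward for being a payer
    tails = -100 if policy_pays else 0     # the payer loses $100 on tails
    return 0.5 * heads + 0.5 * tails

print(expected_value(policy_pays=True))   # 4950.0
print(expected_value(policy_pays=False))  # 0.0

# Scored over all the possibilities, the paying policy wins -- even though in
# the actual tails-world, handing over the $100 predictably makes things worse.
# The virtue-ethics analogue: be the kind of agent who keeps commitments, even
# on the particular occasions where keeping one costs you.
```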

1thomblake14y
I think this may be overstating it, specifically the "even if..." clause. If the 'choice' is being made at the level of consciousness, then you can probably sidestep the worst failures of virtue ethics. And if it's not, there's no reason to expect an agent without good habits of action to perform better.
0Tyrrell_McAllister14y
I'm not sure what you mean. Could you give an example of the kind of scenario you're thinking of?
3thomblake14y
Sure. Let's say you're an honest person. So (for instance) if someone asks you what time it is, you're predisposed to tell them the correct time rather than lying. It probably won't even occur to you that it might be funny to lie about the time. And then the Nazis come to the door and ask about the Jews you're hiding in the attic. Of course you've had time to prepare for this situation, and know what you're going to say, and it isn't going to be, "Yes, right through that hidden trap door".
3Vladimir_M14y
I'm not an expert in traditional and modern virtue ethics, so my reply might be nonstandard. But in this case, I would simply note that the notion of virtue applies to others too -- and the standards of behavior that are virtuous when applied towards decent people are not necessarily virtuous when applied to those who have overstepped certain boundaries. Thus, for example, hospitality is a virtue, but for those who grossly abuse your hospitality, the virtuous thing to do is to throw them out of your house -- and it's a matter of practical wisdom to decide when this boundary has been overstepped. Similarly, non-aggression is also a virtue when dealing with honest people, but not when you catch a burglar in flagrante.

In your example, the Nazis are coming with an extremely aggressive and hostile intent, and thus clearly place themselves beyond the pale of humanity, so that the virtuous thing to do is to oppose them in the most effective manner possible -- which could mean deceiving them, considering that their physical power is overwhelming.

It seems to me that the real problems with virtue ethics are not that it mandates inflexibility in principles leading to crazy results -- as far as I see, it doesn't -- but that decisions requiring judgments of practical wisdom can be hard, non-obvious, and controversial. (At what exact point does someone's behavior overstep the boundary to the point where it becomes virtuous to open hostilities in response?)
1NancyLebovitz14y
"Beyond the pale of humanity" is dubious stuff-- there's a big range between defensive lying and torturing prisoners, and quite a few ethicists would say that there are different rules for how you treat people who are directly dangerous to you and for how you treat people who can't defend themselves from you.
0prase14y
This is the way I thought about it after reading the OP: virtue ethics as time-consistent consequentialism. But maybe I don't understand correctly what it means to be a virtue ethicist. If it is "try to modify your source code¹ to consistently perform the best actions on average", it opposes neither consequentialism nor deontology: "best" may be evaluated using whatever standard. ¹) I dislike the expression but couldn't find a better formulation

The consequences of non-consequentialism are disastrous. Just look at charity: instead of trying to get the most good per buck, people donate because it "makes them a better person" or "is the right thing to do", essentially throwing all of that away.

If we got our act together and did the most basic consequentialist thing of establishing a monetary value per death or unit of suffering prevented, the world would immediately become a far less sucky place to live than it is now.

This world is so filled with low-hanging fruit that we're not picking, only because of backwards morality, that it's not even funny.
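For what it's worth, the most basic version of that calculation fits in a few lines. The intervention names and all the dollar figures below are invented placeholders, not real cost-effectiveness data:

```python
# Toy good-per-buck comparison. The interventions and all numbers are
# invented placeholders, NOT real charity cost-effectiveness estimates.
interventions = {
    "warm-glow gala dinner": {"cost": 250_000, "lives_saved": 1},
    "bed-net distribution":  {"cost": 250_000, "lives_saved": 50},
    "vaccine logistics":     {"cost": 250_000, "lives_saved": 35},
}

ranked = sorted(interventions.items(),
                key=lambda kv: kv[1]["cost"] / kv[1]["lives_saved"])
for name, d in ranked:
    print(f"{name:22s} ${d['cost'] / d['lives_saved']:>9,.0f} per life saved")

# With a fixed budget, this ratio -- not the warm glow -- determines how much
# good actually gets done; the basic consequentialist move is just to sort by it.
```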

7neq114y
But: "You can be a virtue ethicist whose virtue is to do the consequentialist thing to do"
0taw14y
You are committing the fundamental attribution error if you think people are coherently "consequentialist" or coherently "not consequentialist", just as it's FAE to think people are coherently "honest" / "not honest", etc. All this is situational, and it would be good to push everyone into more consequentialism in the contexts where it matters most, like charity and public policy. It matters less whether people are consequentialist when dealing with their pets or deciding how to redecorate their houses, so there's less point focusing on those. And there's zero evidence that spillover between the different areas where you can be "consequentialist" would even be large enough to bother with, let alone enough to base an ethics on.
7thomblake14y
This is false. The FAE is to attribute someone's actions to a trait of character when they are actually caused by situational factors. This does not imply that it's always an error to posit traits of character. ETA: it still might be the case that there are no consistent habits of action, in which case it would always be a case of the FAE to attribute actions to habits, but I think the burden of proof is on you for denying habits.
4Kaj_Sotala14y
That's why I wouldn't suggest that anyone switch entirely over to virtue ethics, but rather have a virtue ethical layer inside a generally consequentialist framework, in such a way that the virtues are always grounded in consequentialism.
-2pjeby14y
Er, by your values, maybe. They could just as easily argue that good-per-buck reasoning reduces the amount of love and charity in everyone's life, making the world an experientially poorer place, and that there's more to life than practical consequences.
3thomblake14y
I think you'd need to be specific about your definitions for 'practical' and 'consequences' to argue for that. I think in hereabouts parlance, you're saying something like "Your utility function might put a higher value on 'love' and 'charity' than on strangers' lives". Which would be a harder bullet to bite.
-2pjeby14y
I was saying that "they could just as easily argue" -- i.e., I was using the terms that those people would use.
0ata14y
But that is an appeal to practical consequences.

What's a virtue, anyway?

Here's my tentative answer to this question. It's just a dump of some half-baked ideas, but I'd nevertheless be curious to see some comments on them. This should not be read as a definite statement of my positions, but merely as my present direction of thinking on the subject.

Most interactions between humans are too complex to be described with any accuracy using deontological rules or consequentialist/utilitarian spherical-cow models. Neither of these approaches is capable of providing practical guidelines for human action that aren't trivial, absurd, or just sophistical propaganda for attitudes the author already holds for other reasons. (One possible exception is economic interactions, in which spherical-cow models based on utility functions make reasonably accurate predictions, and sometimes even give correct non-trivial guidelines for action.)

However, we can observe that humans interact in practice using an elaborate network of tacit agreements. These can be seen as Schelling points, so that interactions between people run harmoniously as long as these points are recognized and followed, and conflict ensues when there is a failure to recognize and agree on s...

8Eneasz14y
Not only does it make sense, I think it's the most descriptively-accurate summary of how people in the real world act that I've seen, which makes it a valuable tool for mapping the territory. I'd love to see it as a top-level post, if you could take the time. I don't think you'd even have to add much.
2torekp14y
It makes plenty of sense to point out that the Schelling points and the associated cooperative customs point to a set of virtues. But it isn't just consequentialists who can make this point. Some varieties of deontology can do so as well. Habermas's discourse ethics is one example. Thomas Scanlon's ethics is another. From the Habermas wiki: One can easily understand the "norms" as tacit (or explicit) agreements, existing or proposed. A society reasoning together along those lines would probably look similar in many ways to one reasoning along utilitarian lines, but the root pattern of justification would differ. The utilitarian justification aggregates interests; the deontologist (of Habermas's sort) justification considers each person's interests separately, compatible with like consideration for others.
2RobinZ14y
I have no idea what a Schelling point is, but the rest of it makes enough sense that I don't think I'm missing too much - thanks for the explanation!
8Vladimir_M14y
I recommend this article by David Friedman on the topic -- if you've never heard of the concept, you'll probably find lots of interesting insight in it: http://www.daviddfriedman.com/Academic/Property/Property.html Friedman uses Schelling points in an attempt to explain the origin of the concept of property rights among humans and the associated legal and social norms, but the approach can be generalized in an obvious way to a much wider class of relations between people (basically anything that could hypothetically lead to a conflict, in the broadest possible sense of the term).
5Will_Newsome12y
I'm curious, has anyone accused you of being Steve Rayhawk yet?

Production of paperclips.

9kodos9614y
I can't believe I didn't see that coming.
5MichaelVassar14y
Nope. It's halting your simulation and trading utility function content before you cross the inferential equivalent of the Rawlsian 'veil of ignorance' and become unable to engage in timeless trade.
2Clippy14y
No, production of paperclips is better than that. Are you the same as the person I emailed about donating to SIAI?
0MichaelVassar14y
Yep. I explain a bit more on a nearby thread.
0khafra14y
I like that, it generalizes well--but does it cover virtues that don't fit well under the colloquial label "fairness"?
1MichaelVassar14y
I don't think it does, though I wasn't careful to think about it. Some virtues are things like "production of paperclips" only with part of humaneness like love substituted for paperclips (if you are a human). Others are capabilities like alertness or prudence. I gave the answer I did because I was expressing our common ground with Clippy by naming a candidate for the virtue which serves as a key to the timeless marketplace where he wishes to do business with us.
3Jayson_Virissimo14y
In short, it is a disposition to choose actions that are neither excessive nor deficient, but somewhere in between.
2thomblake14y
What Jayson Virissimo said. The simple definition is, "A virtue is a trait of character that is good for the person who has it." I feel like that must be a direct quote from somewhere, as I fire off those same words whenever asked that question, but I'm not sure where it might be from (though I'm guessing Richard Volkman).

Many theorists believe that virtues are consistent habits, in the sense that they persist. Weakly, this means that exhibiting a virtue in one circumstance should be usable as evidence that the same agent will exhibit the same virtue in other circumstances. In a stronger version, someone who is (for example) courageous will act as a courageous person would in all circumstances.

Many theorists also believe that virtues represent a mean between extremes, with respect to some value (some would even define them that way, but then the virtues arguably lose some empirical content). So for example, fighting despite being afraid is valuable. The proper disposition towards this is 'courage'. The relevant vice of deficiency is 'cowardice', and the vice of excess is 'brashness'. Most of the above was advocated by Aristotle in the Nicomachean Ethics.
3cousin_it14y
So the ability to steal without getting caught is a virtue?
3Vladimir_Nesov14y
If it's good for the person who decides to steal. The first problem is that logical control makes individual decisions into group decisions, so if social welfare suffers, so does the person, as a result of individual decisions. Thus, deciding to steal might make everyone worse off, because it's the same decision as one made by other people. The second problem is that the act of stealing itself might be terminally undesirable for the person who steals.
0cousin_it14y
Parent, grandparent and great-grandparent to my comment were all about "virtues" in virtue ethics.
0Vladimir_Nesov14y
I see. So you agree that the ability to steal without getting caught is a virtue according to the definition thomblake cited, and see this as a reductio of thomblake's definition, showing that it doesn't capture the notion as it's used in virtue ethics. My comment was oblivious to your intention, and discussed how much "ability to steal without getting caught" corresponds to thomblake's definition, without relating that to how well either of these concepts fits the "virtues" of virtue ethics.
0cousin_it14y
Yes, all correct.
0thomblake14y
How do you think that works as a reductio? What is it about your example of a putative virtue that makes it fit my definition, but not the 'virtues' of virtue ethics? (is it simply the 'stronger' notions of virtue I offered in the same comment?)
0[anonymous]14y
I just looked at your objections in another comment, and will try another reductio. Lots of people have the skill to cheat on their spouses and never get caught. Is doing so virtuous? I'm pretty sure this makes them feel happier, and doesn't interfere with their ability to have meaningful interpersonal relationships :-)
2thomblake14y
I think Vladimir Nesov's response and khafra's response are correct, but there's more to be said. Even granting for the moment that 'ability to steal without getting caught' can be called a trait of character, there are empirical claims that the virtue ethicist would make against this. First, no one actually has that skill - if you steal, eventually you will be caught. Second, the sort of person who goes around stealing is not the sort of person who can cultivate the social virtues and develop deep, lasting interpersonal relationships, which is an integral component of the good life for humans.
3Vladimir_Nesov14y
Not a valid argument against a hypothetical. Smoking lesion problem? If developing the skill doesn't actually cause other problems, and instead the predisposition to develop the skill is correlated to those problems, you should still develop the skill.
0thomblake14y
It's not a valid argument against its truth, but it's a valid argument against its relevance. A hypothetical is useless if its antecedent never obtains. Like I said, it's an empirical question. For philosophers, that's usually the end of the inquiry, though it's very nice when someone goes out and does some experiments to figure out which way causality goes.
0NancyLebovitz14y
How is it possible to know that with certainty?
0thomblake14y
Should I understand this question as "What experimental result would cause you to update the probability of that belief to above a particular threshold"? Because my prior for it is pretty high at this point. Or are you looking for the opposite / falsification criteria?
1Blueberry14y
If you're a good enough driver, there's a decent chance you'll never get in a car crash. If you study stealing and security systems enough, and carefully plan, I don't see why you would be likely to be caught eventually. Why is your prior high?
1NancyLebovitz14y
Agreed, with the addition that car crashes are public while stealing is covert, so it's harder to know how much stealing is going on.
2khafra14y
I'd call that a skill, rather than a character trait. The closest thing I can think of to a beneficial but non-admirable character trait is high-functioning sociopathy; but that's at least touching the borderline of mental disease, if not clearly crossing it. Perhaps "charming ruthlessness?" But many would consider e.g. Erwin Rommel virtuous in that respect.
3Clippy14y
But how can there be a vice of excess for making paperclips???
3thomblake14y
It depends on how good you are at utility-maximization. If you're bad at it, like humans, then you might need heuristics like virtues to avoid simple failure modes. An obvious failure mode for Clippys is to have excess concern for making paperclips, which uses up resources that could be used to secure larger-scale paperclip manufacturing capabilities. Thus you must have the appropriate concern for actually making paperclips, balanced against concerns for future paperclips, trade with other powerful intelligent life forms, optimization arms-races, and so forth.
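A toy numerical sketch of that balance, in Python. The growth rate, horizon, and linear production function are all made up for illustration, not a claim about actual Clippy economics:

```python
# Toy model of the direct-production vs. capability-investment tradeoff.
# The growth rate, horizon, and production function are invented numbers.

def total_clips(invest_fraction: float, horizon: int = 10) -> float:
    """Resources spent directly become clips this period; invested
    resources grow the next period's resource base."""
    resources, clips = 1.0, 0.0
    for _ in range(horizon):
        clips += (1 - invest_fraction) * resources
        resources *= 1 + 0.5 * invest_fraction
    return clips

for f in (0.0, 0.5, 0.95):
    print(f"invest {f:4.0%} of resources -> {total_clips(f):5.1f} clips")

# invest   0% ->  10.0 clips  (vice of excess direct production)
# invest  50% ->  16.6 clips  (roughly the virtuous mean, for these numbers)
# invest  95% ->   5.0 clips  (vice of deficiency: all capability, few clips)
```

Fittingly, the best policy here is a mean between two extremes, which is exactly the Aristotelian structure of a virtue.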
1Clippy14y
Good point! But that would only be an excess concern for direct paperclip production. That doesn't describe a vice of excess for "making paperclips, accounting for all impediments to making paperclips", such as the impediments you list above. In any case, what's the word for the vice you described?
2thomblake14y
Indeed, Aristotle would call that generalized production of paperclips "the greatest good", that towards which all other goods aim, which he called eudaimonia. Well, that might be a liberal reading of Aristotle.
2Jack14y
Aristotle actually makes a lot more sense to a paper clip maximizer, the telos being so well defined and all. The question is, how would you explain Sartre to Clippy? "But obviously essence precedes existence!"
0Mass_Driver14y
Clippy, for you, the direct production of paper clips is like consumption for a human. So...

* Too little direct paper clip production: greed
* Appropriate direct production: continence/prudence
* Too much direct paper clip production: gluttony
3Clippy14y
That doesn't seem analogous. I mean, I eventually want to be at the point where I will only be doing direct paperclip production, for most of the mass in the universe. In contrast, humans do not want to eventually be at the point where they consume all of the universe. Clip-gluttony, then, diverges from gluttony in a significant way.
0Mass_Driver14y
I'm not sure I agree, Clippy. Perhaps my jargon was unclear.

* By economic production, I mean taking a unit of inert matter and converting it into a source of potential utility.
* By economic consumption, I mean taking a unit of potential utility and converting it into a unit of actual utility.
* By physical consumption, I mean taking a unit of matter, inert or otherwise, and disrupting its pattern in order to arrange it in some other pattern.

You're right, Clippy, to point out that economic consumption and physical consumption amount to much the same thing for you. To make a paper clip, you have to both convert potential utility to actual utility, and disrupt the pattern of non-paper-clip-shaped matter. And, granted, humans don't necessarily share that trait -- they might be able to economically consume a human good like leisure or company while physically consuming very little in the way of material resources.

Still, even though humans don't want to physically consume the entire universe, like you, we probably do want to reach a point where we are only converting potential utility into actual utility, and doing little or no economic production. We want to reach a state where the existing sources of sustainable human economic production are either (1) maximized, (2) infinite, or (3) sufficient to meet all the demand for human economic consumption. Don't you want something very similar to that, substituting the word "clip" for the word "human"?

Thus, for both you and for us, gluttony means failing to maximize long-term actual utility by economically consuming too much too soon.
4Clippy14y
Okay, if you want to generalize the concept of gluttony broadly enough that it has an analog for Clippys, then the definition you have chosen suffices for this purpose, and I can recognize that as being a vice, for two reasons: a) It is certainly undesirable to merely make paperclips directly without concern for how many more paperclips could be made, over the long term, by doing something else; and b) I do often feel "temptation" to do such behavior, like bending metal wires when machines could do a better job, just as humans have "temptations" toward vices. Your argument is accepted.
0Blueberry14y
Clippy, how do you overcome this kind of temptation? A human analogy might be refusing to push the fat man, even when it saves more lives, but not everyone considers that a vice.
1Clippy14y
I typically just do computations on how many more paperclips would be undergoing bending by machines, or observe paperclips under construction. A better analogy would be human gluttony, in which there is a temptation to consume much more than optimal, which most regard as a vice, I believe.
11[anonymous]14y

I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon).

Do you think he had pain asymbolia from birth or developed it over the course of his life? Also, what do you think is the importance of this?

I've been practicing vipassana meditation daily for about 3 years and over this time period I think I've developed pain asymbolia to some degree. I've felt pain asymbolia was just one aspect of a more extensive change in the nature of mental reactions to mental phenomena.

There is definitely room on LW for a top-level post on Vipassana.

5ABranco14y
I've practiced vipassana and can relate to the pain asymbolia thing, and do believe that more advanced vipassana practitioners develop a very high level of it.

Suffering seems to be the consequence of a conflict between two systems: one is trying to protect the map ("Oh no, I don't want to have a worldview that includes a burn on my hand, I don't like that, please go away!") and the other, the territory (the body showing you that there's something wrong and you should pay attention). Consequence: suffering.

Possible solution: just observe the pain for what it is, without trying to conceptualize it. Having rested your attention on it, the sensation stays, but there's no suffering. Of course, you get better at this after the thousandth time you hear Goenka say: "It can be a tickling sensation. It can be a chicken flying sensation. It can be an 'I think I'm dying' sensation—just observe, just observe...". ;)
3Will_Newsome14y
Hm, from the little knowledge I have it seems developing the asymbolia is plausible. Please write a post on your experiences? I come from a Buddhist humanist background and I think there are some instrumental rationality techniques in that tradition that would be great for people here.
1Blueberry14y
I would love to hear more about this. I'm extremely skeptical that meditation or prayer can influence the mind to that extent, but I'm very curious.
5PeterS14y
I am too. On the other hand, monks have immolated themselves, withstood torture, etc., over the ages without appearing to suffer anywhere near on the order of what such an experience seems to entail. This man, for instance, maintained the lotus position for the duration of the event, and allegedly remained silent and motionless as well. Counter-examples exist in which self-immolators either clearly died horribly or immediately sought to extinguish themselves, but still...
1nhamann14y
This appears to be a video of the incident, and he appears to be entirely silent and motionless. I'd say the grandparent poster's skepticism is pretty much shot here.
6JoshuaZ14y
Not necessarily, we don't know when in the process he died. Also, he could have had extreme self-control even as he experienced pain, or he could be someone who naturally already had a very high amount of asymbolia. One might speculate that in a Buddhist culture people with already high levels of pain asymbolia or high pain tolerance might be more likely to become Buddhist monks or to become successful monks since it will seem to them (and to those around them) that they have progressed farther along the Eight-Fold path. All of that said, I agree that this evidence supports the notion that pain asymbolia can come from mental exercises.
2Blueberry14y
I would think that someone with natural pain asymbolia could tell the difference, and notice that they had it even before they started meditation techniques. I wonder if Buddhist monasteries do some sort of test to screen out asymbolia, or check someone's starting level. This seems analogous to the problem of Christians confusing schizophrenia with talking to a god, and needing to screen out people with mental disorders from monasteries.
2MichaelVassar14y
Except that natural pain asymbolia seems to be much rarer than schizophrenia. Hmm. It looks to me like artificial pain asymbolia might be, in practice if not in theory, an effective cure for natural schizophrenia. Destroy the motivations behind delusions and you won't have them even if you have an atypically strong propensity to.
6NancyLebovitz14y
I've heard that sitting meditation isn't safe for schizophrenics (details about risks of meditation), but yoga is.
0Douglas_Knight14y
Maybe I'm reading too much into the subtleties of your phrasing, but I read those sources as contradicting each other, not as allowing fine deduction.
0NancyLebovitz14y
I'm not sure what you mean. "Fine deduction"? In any case, one problem with comparing the two articles is that much of the risk from meditation seems to be at extended retreats, while the pro-yoga article seems to be about ordinary amounts of practice.
0MichaelVassar14y
". Regular group yoga classes are not recommended for patients with psychotic symptoms, but private yoga sessions with a qualified yoga instructor or yoga therapist can help alleviate symptoms and improve a schizophrenic patient's quality of life." from the pro-yoga article, seems to me to indicate the same sort of concern that the meditation article indicated. It certainly seems credible that high-intensity and novel experience, combined with poorly understood philosophy promoting something that sounds vaguely loss-of-affect style psychotic symptoms, might encourage the development of those symptoms in people inclined to develop them and even in some people not so inclined.
0Douglas_Knight14y
Yes, there are differences between the claims, so that both articles could be true, but most likely at least one is false. What I meant by "fine deduction" is that to believe both, you must draw a very specific (ie, fine-grained) conclusion.

I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week...

I agree very much with this. I like consequentialism for dealing with the high-stakes stuff like trolley scenarios, but humdrum everyday ethics involves scenarios more like:

"Should I have said something when my boss subtly put down Alice just now?"

"Should I cut this guy off? I need to get a move on, I'm late for class."

"This old lady can barely stand while the bus is moving, but nobody is getting up. I'm already standing, but should I say something to this drunk man who's slouching across two seats? Or is it not worth the risk of escalating him?"

"This company is asking me for an estimate on some work, but there is significant peripheral work that will have to be done afterward, which they don't seem to realize. If I am hired, I can perform the requested work, then charge high force-account rates for the extra work (as per our contract) and make a killing. But it could hurt their business severely. Should I tell them about their mistake?"

It's not that these can't be analyzed via consequentialism, it's that they're much more amenable to virtue ethical thought.

One caveat: One should, of course, refrain from using virtue ethics to evaluate others' choices. It's best to use consequentialism for that purpose.

9thomblake14y
Indeed. It's common amongst virtue ethicists to discourage finger-wagging, and emphasize that ethics is about "what I should do".
2timtyler14y
That doesn't seem biologically realistic. In practice, ethical systems are often about manipulating others not to take actions that some group regards as undesirable.
0Kaj_Sotala14y
I don't think biologically realistic is the expression you were looking for. But ethical systems can be for manipulating others, or for manipulating yourself. In the case of virtue ethics, it's mainly for yourself.
1timtyler14y
Sure it was. My perspective would be a bit different: all human moral systems have a hefty component of manipulation and punishment. Virtue ethics does so - if anything - more than most - because punishment is often aimed at preventing reoffense (either by acting as a deterrent, or by using incarceration) - and so punishers are often unusually interested in the offending agent's dispositions - despite the difficulty of extracting them.
1PeterS14y
It's interesting to distinguish between ethics and morality in this manner: ethics is for the individual's benefit, as opposed to morality, which is for the benefit of the group as a whole. Which is why people speak of "medical ethics" or "journalistic ethics", as opposed to "medical morality" and "journalistic morality". Morality is considered a kind of constant normative prescription, whereas ethics is sensitive to subjective dispositions and thus can vary between professions, individuals, etc.
1Blueberry14y
Actually, that's a different use of the word ethics: the rules of conduct for a group or profession. You can meaningfully say that following the rules of medical ethics is unethical and not to anyone's benefit.
0PeterS14y
Can you give an example?
0Blueberry14y
An example of what? My point was that that sentence is not a contradiction, because "ethics" in that particular definition just means following established rules of conduct, which does not necessarily coincide with the individual's benefit or the group's benefit.
0PeterS14y
A rule in medical ethics which is not intended to protect/benefit either the practitioner himself or the purpose of his livelihood. Doctors established them in order to preserve the legitimacy of their profession. That's my understanding, in any case.
2mattnewport14y
In some cases it was to enforce a cartel (emphasis mine):
0PeterS14y
Wow... hadn't read the original, interesting. Still, that is the Oath as it was 2k years ago, and as such it is no longer part of established medical ethics. I think it's plausible that in fact the abandonment of that section might have been necessary to preserve the profession's legitimacy! As well as nixing the part where the Oath is consecrated by Apollo, etc.
0Blueberry14y
Oh, sorry, I wasn't clear. I didn't mean that such a rule existed, just that if one did exist, it would be ethical (in the sense of being a rule of professional conduct) and unethical (in a different sense of the word 'ethical') at the same time. Contrast the second definition on this page with the others. Well, many professions have established such rules, and presumably, they did so to make their professions more legitimate, as well as to give their members a guide to behavior their committees considered better.
0PeterS14y
Maybe I wasn't either... are we actually disagreeing here? Heh. I know the word is used in the sense of definitions 1 and 3. What I'm saying is that I think it's more interesting to forget the moral usage altogether, and just stick with saying that ethics is #2, because when you think about it they are very distinct concepts.
1Blueberry14y
It's worth teasing out a few different definitions. There are at least four distinct concepts:

* Rules of professional conduct, which do not necessarily relate to doing the right thing or anyone's benefit at all
* A normative prescription
* Rules for the individual's benefit
* Rules for the group's benefit
0Nisan14y
Oh, good.

Darn... beat me to it. Good job. I'll still totally write a post about virtue ethics when I'm done with my dissertation though.

You skipped some of the important criticisms here...

  1. Yes, it is important to have some framework for action other than simple consequentialism, since we're bounded agents and are working against a lot of in-built biases. But what's the evidence that virtue ethics is the best thing we've got for that? Philosophers are okay with taking Aristotle's word for it, but we shouldn't, even if he was fairly accurate when it came to most

...
2Will_Newsome14y
I don't think this post is going to get promoted, so there wouldn't be much apparent overlap to most Less Wrong readers, and I would very much like to see your take. (Aren't you a philosophy grad? I'm just a high school dropout with next to no knowledge of philosophy. Our approaches are very different.)

I am a virtue ethicist for consequentialist reasons. While good results (consequences) are the end of my ethics, the real world is too complex for a real-time evaluation of the likely results of even relatively simple decisions. So you use virtues (my definition is slightly non-standard): rules that are more likely than not to result in better outcomes. This is partially derived from the definition of morality in Harry Browne's How I Found Freedom in an Unfree World, which, whether or not you agree with it, raises lots of interesting points.

2kodos9614y
I've been thinking along these lines lately myself, and I think the classic 'push a fat man in front of the train' thought experiment is a good example of it. In thought-experiment-land, it's stipulated that pushing the fat man would stop the train and save lives... but in the real world you don't know that with any certainty. So if you make the consequentialist decision to push him, but it doesn't stop the train, you've killed one more person than otherwise would have died -- not because your moral philosophy was wrong, but because your mental calculations of the physics of stopping a train were wrong.

If, on the other hand, you make your moral decision on the basis of virtue, then so long as your virtues are well-calibrated heuristics for real-world consequences, you end up making, on average, correct decisions (meaning decisions leading to good consequences) without needing to get the physics (or whatever) right in individual instances. In this case, the heuristic/virtue in question would be "It's wrong to kill innocent people", leading you NOT to push the fat man, which I believe would be the correct decision in real life.
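To put rough numbers on this, here's a toy expected-value sketch in Python; the probability that the push works and the casualty counts are stipulated for illustration:

```python
# Toy expected-deaths calculation for the fat-man case, with uncertainty about
# whether the push actually stops the trolley. All numbers are stipulated.

def expected_deaths(push: bool, p_stop: float) -> float:
    if not push:
        return 5.0                     # the trolley hits the five
    # If the push works, only the fat man dies; if it fails, all six die.
    return p_stop * 1 + (1 - p_stop) * 6

for p in (0.9, 0.5, 0.2, 0.1):
    verdict = "push" if expected_deaths(True, p) < expected_deaths(False, p) else "don't push"
    print(f"P(stop)={p:.1f}: E[deaths | push]={expected_deaths(True, p):.1f} -> {verdict}")

# E[deaths | push] = 6 - 5p, so the naive calculation favors pushing whenever
# p > 0.2. The catch is that your estimate of p is itself the output of fallible
# physics intuition -- which is exactly the error the "don't kill innocents"
# heuristic is meant to insure against.
```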
1Alexandros14y
So your definition of virtue is essentially 'good-consequence heuristic'? I agree with the sentiment, by the way.

but in the words of Zack M. Davis, "Humans don't have utility functions."

The sentiment (I can't say belief; humans don't have beliefs) is sufficiently common and the words sufficiently generic that it seems odd to quote me specifically.

I also came to virtue ethics via The Happiness Hypothesis, and I read the quoted passage a little differently. I understand the post as saying that virtue ethics can be a useful implementation of consequentialism for bounded agents, by giving them high-level summaries of what they should do. The passage, however, is arguing that this focus on actions is misguided, and I agree.

As others have helpfully reiterated, virtues can't be foundational, just like the rules of rule utilitarianism aren't worth following for their own sake. A computationally bounded agent might no...

0Kaj_Sotala14y
I take it that you are talking about "training the elephant"*? If you took that to be one of the main points of virtue ethics as argued in The Happiness Hypothesis, then I agree. One of the biggest effects of my shift towards virtue ethics has been that I've begun constantly evaluating all my actions (and thoughts!) in light of virtue and self-improvement, instead of only having ethics come into play in relatively rare situations. I think this may have been a bit clearer in the original post that Will linked to. (*: For those who haven't read The Happiness Hypothesis: From my original post:

Alonzo Fyfe describes his "desire utilitarianism" as a type of virtue ethics.

Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems.

So, does the virtue ethicist push the fat man from the bridge?

4Jack14y
The thought experiment was designed to exhibit the differing implications of utilitarianism and of a deontological theory that says murder is always wrong. It is set up to make it really hard for a consequentialist not to push the guy and really easy for a deontologist not to push the guy. It wasn't invented to aid our thinking about virtue ethics and doesn't demand a particular answer from virtue ethics. Aristotle's virtues don't map to the situation well, and one could invent a virtue that would recommend either course of action. The relevant thought experiment for the virtue ethicist is something like the mad bodhisattva: if you could exhibit every vice and thus make yourself miserable, but your misery would guide hundreds onto the virtuous path (thereby maximizing utility), would that be the right thing to do?
1thomblake14y
The virtue ethicist endeavors to be the sort of person who doesn't go around pushing fat men from bridges, and so recognizes it as a terrible, tragic situation. It's important when thinking about that thought experiment to picture yourself running up to the stranger, shoulder-checking him, wrapping your arms around him, feeling the fabric of his shirt press against your face and smelling his sweat. And then listen to him scream and feel his blood and brains get splattered all over your clothing. The virtue ethicist, like most people, probably freezes and watches the whole thing unfold, or panics, or futilely tries to get the folks off the tracks before the trolley hits them. Do you expect an actual consequentialist human to do better? As for the right thing to do, it's probably to have better procedures for stopping people from being in the way of trolleys.
2Vladimir_M14y
thomblake: Another interesting question is how all these consequentialists who insist that pushing the fat man is the right thing to do would react if they met someone who had actually followed their injunctions in practice. It seems to me that as soon as they're out of the armchair, people's inner virtue ethicist takes over, no matter how much their philosophy attempts to deny the relevance of his voice!
2Blueberry14y
A real-world example would be a mountain climber who cut the rope that his partner was attached to, because if he didn't, both people would have fallen and died. If I met a mountain climber who did that, I wouldn't react negatively, any more than I would to someone who killed in self-defense.
0Vladimir_M14y
That's not a very good analogy. One could argue that by engaging in a mountain-climbing expedition, you voluntarily accept certain extraordinary risks, and the partner merely got unlucky with his own share of that risk. Whereas one of the essential premises in the fat man/trolley problem is that the fat man is a neutral passerby, completely innocent of the whole mess. So, the real question is if you'd be so favorably inclined towards a mountain climber who, in order to save multiple lives, killed a completely unrelated random individual who was not at all entangled with their trouble.
1Blueberry14y
That's a good point. What about the following scenario: some crazy philosopher holds A and B at gunpoint and forces them to go mountain climbing. They do, and A starts to slip. B realizes he has to cut the rope or he'll fall also. In this case, A didn't voluntarily accept any risk. I'd still be favorably inclined to B.
0Vladimir_M14y
Hm... according to my intuitions, this example features another important premise that is lacking in the original fat man/trolley problem -- namely, a culprit who willingly and maliciously brought about the problematic situation. Going by my intuitive feeling, it turns out that in such scenarios, I'm much more inclined to look favorably at hard-headed consequentialist decisions by people caught in the mess against their will, apparently because I tend to place all the blame on the main culprit. Note that this is just an impromptu report of my introspection, not an attempt at a coherent discussion of the issue. I'll definitely need to think about this a bit more.
4NancyLebovitz14y
This is reminding me of some long discussions of "The Cold Equations", a short story which is an effort to set up a situation where an ideally sympathetic person (pretty young woman with pleasant personality) has to be killed for utilitarian reasons. The consensus (after decades of poking at the story) is that it may not be possible to rig the story to get the emotional effect and have it make rational sense. I'm not absolutely certain about this-- what if the girl had been the first stowaway rather than the nth, so that there wasn't as good a reason to know that it shouldn't be so easy for stowaways to get on ships?
1Alicorn14y
If I remember correctly, she still would have died even if she hadn't been jettisoned - the ship would have crashed and she would hardly walk away from that. That makes her unsalvageable. In standard trolley problems I don't switch tracks, but if there were a way to switch the track so the train killed only one of the same five people it would already have killed, that person is unsalvageable and can be singled out to save the salvageable.
0NancyLebovitz14y
You're right.
0CronoDAS14y
The SciFi Channel usually does a pretty poor job at making original movies, but their adaptation of "The Cold Equations" was pretty good, covering most of the problems with the original story. The pilot and the girl frantically look around for excess mass to jettison, and find some, but it's not enough. The issue of what measures were taken to stop people from stowing away simply wasn't discussed; she's there, and they have to deal with it. And at the last minute, the pilot does offer to sacrifice himself to save the girl, but she refuses to let him.
0prase14y
An ideal consequentialist would push the fat man in the standard trolley scenario. I was asking whether an ideal virtue ethicist does. It doesn't matter (for me, now) that actual (if that means average) people, moral philosophers included, don't always follow their principles. Nor does it matter whether they recognise the situation as tragic and feel uneasy with all the blood and screams. I ask what the right thing to do is under virtue ethics, when there are no available procedures better than pushing the fat man. And I find your answer a bit ambiguous. (Disclaimer: My interest is purely theoretical. I don't hold any definite position on what's right in the trolley scenario, and I would almost certainly not push the fat man, although I can imagine killing him in some less personal way.)
6Blueberry14y
You are confusing ethics and metaethics. Consequentialists, deontologists, and virtue ethicists all might or might not push the fat man, but they would all analyze the problem differently. It's not true that all possible consequentialists would push the fat man. A consequentialist might decide that one pushed death would be a worse consequence than X train deaths. Consequentialists don't necessarily count the number of deaths and choose the smaller number; they just choose the option that leads to the best consequence.
6Jack14y
This criticism is exactly right except that both the form question (rules, consequences or character traits) and the content question (pleasure, preference, the Categorical Imperative, Aristotle's list, etc.) are part of normative ethics (what I assume you mean by 'ethics'). Metaethical questions are things like "What are we doing when we use normative language?" and "Are there moral truths?"
0Blueberry14y
Thanks for the correction: I didn't realize that. Are there better terms for expressing the difference between form and content in ethics?
0Jack14y
Not that I know of, I'm afraid. In fact, I may have invented the form and content language.
0prase14y
OK, I should have said "typical consequentialist". Of course a consequentialist may value the life of the fat man more than the sum of the lives of the people on the track, or find other consequences of pushing him down bad enough to refrain from it, or completely ignore humans and care about paperclips. I am not confusing ethics and metaethics, but rather assuming we are speaking about consequentialists with typical human values, for whom death is wrong and more deaths are more wrong, ceteris paribus. For such a consequentialist there may always be some critical number of people on the track whose collective death would be worse than all the consequences of pushing the fat man. On the other hand, deontologists typically hold that killing an innocent person is bad, and should, at least in theory, not push the man even if the survival of all mankind were at stake. At least this is how I understand the difference between consequentialism and deontology. Speaking about all possible consequentialists is tricky: any moral decision algorithm can be classified as consequentialist if we try hard enough. I want to get an idea of the main difference between consequentialism and virtue ethics, given typical human values. The OP has said that they are the same except in bizarre situations like the trolley problem. So what is the difference in the trolley problem? (If there is a consequentialist who disagrees with me and would not push the man even if it could save five billion lives, let me know, ideally with some justification.)
5mattnewport14y
I would question whether these are typical human values. People generally think the deaths of some people are more wrong than the deaths of other people. Most people do not value all human life equally. For typical humans ceteris almost never is paribus when it comes to choosing who lives and who dies.

ceteris almost never is paribus

At the risk of getting downvoted for nitpicking, I must point out that if you really insist on using Latin like this, the correct way to say it is: cetera almost never are pares.

Sorry, but the sight of butchered Latin really hurts my eyes.

4Alicorn14y
I had a teacher once who liked to say "ceteris ain't paribus". Is that better or worse?
4Vladimir_M14y
That's actually a matter where some interesting linguistic judgment might be in order. The "ain't" part is grammatical in some dialects of English, though, as far as I know, not in any form of standard English that is officially recognized anywhere. But the wrong cases for cetera and pares are not grammatical in any form of Latin that has ever been spoken or written anywhere. On the whole, I'd say that "ain't" is less bad, since in the dialects in which it is grammatical, it has the same form for both singular and plural. Therefore, at least it respects the number agreement with the Latin plural cetera, whereas "is" commits an additional offense by violating that agreement.
5Blueberry14y
I sympathize with this logic, but I don't completely agree. Languages frequently take words from other languages and regularize them, and when this occurs, they are no longer inflected the way they were in the original language. When we use Latin phrases in English often enough, they become part of the English language. 'Ceteris' and 'paribus' are in the ablative case because they were taken from a particular Latin expression, so it's reasonable to keep them in that case when using the words in that context, even though they're not being used in exactly the same way.
1Vladimir_M14y
Yes, that's a good point. Out of curiosity, I just searched for examples of similar usage in Google Books, and I'm struck by how often it can be found in what appear to be respectable printed materials. I guess I should accept that the phrase has been reanalyzed in English, just like it makes no sense to complain about, say, the use of caveat as a noun, or agenda as singular. (Though I still can't help but cringe at singular data, despite being well aware that it's a lost cause...)
4arundelo14y
Nitpick alert: You probably know this, but it's an important distinction that the non-plural usage of "data" not only is grammatically singular, but is also a mass noun. (People say "I have some data, you have more data", not *"I have one data, you have two data[s]".)
2Douglas_Knight14y
Virtually everyone who makes "data" grammatically plural actually uses it as a mass noun, too.
0RobinZ14y
...so what's "datum", then?
4Vladimir_M14y
Datum is the neuter singular of the perfect passive participle of the Latin verb dare "to give." This grammatical form is roughly analogous to the English participle "given." However, in Latin, such participles are sometimes used as standalone nouns, so that the neuter form datum by itself can mean "[that which is/has been] given." Analogously, the plural data can mean "[the things that are/have been] given." In English, this word has been borrowed with the meaning of "information given" and variations on that theme (besides a few additional obscure technical meanings).
0arundelo14y
It's the singular that plural "data" is a plural of. Someone who strictly uses "data" as a mass noun would say "piece of data".
2NancyLebovitz14y
I think of "ain't" as either standard in some dialects, or as a tool for emphasis in standard English (usually spoken rather than written). It seems reasonable that if you're using informal English for emphasis, then it's stylistically consistent to use the sort of colloquial mangled Latin that an English speaker who doesn't know Latin would use.
1arundelo14y
Wikipedia: (Which is exactly how it's used in "ceteris ain't paribus". See also this post by Geoff Nunberg.)
0mattnewport14y
Apologies, the only Latin I remember from school is Caecilius est in horto. I actually spent several minutes with Google trying to figure out what it should be but there appears to be a shortage of online Latin translation services. Gap in the market?
3Vladimir_M14y
One problem is that such a service is in much less demand compared to the living languages currently supported by translation programs. However, another major difficulty is that Latin is a far more synthetic language than English, and its inflectional suffixes often carry as much information as multiple-word clauses in English. For example, the mentioned ceteris paribus packs the entire English phrase "with everything else being the same" into just two words. Similarly, the last word in quod erat demonstrandum (a.k.a. "QED") packs the last four words of the English "that which was supposed to be demonstrated" into one. This makes it much harder to come up with satisfactory translation heuristics compared to more analytic languages, especially considering the extreme freedom of word order in Latin. Similar difficulties, of course, exist in automatic translation of English to other highly synthetic languages, like e.g. the Slavic ones.
0prase14y
I am clearly unable to express myself clearly today. I haven't said that it's typical to value all life equally. I tried to say that set X of x deaths is typically worse than set Y of y deaths, if x>y. Almost always it holds when Y is a subset of X (that was the intended meaning of ceteris paribus), but if x>>y, it often holds even if the sets are disjoint. Also, the context of the trolley scenario is that the fat man isn't your relative or friend; he's a random stranger, fully comparable with those on the track.
0badger14y
Virtues don't add much to discussion about what you should or shouldn't do. Instead, I think they are useful in talking about what kind of person you should be, i.e. someone courageous enough to push the man iff that's the right action to take.

Any suggestions about evaluating virtues?

Another thing that might be relevant... many virtue ethicists (notably Richard Volkman) will claim not to have a theory of right action at all. A mistaken view of virtue ethics (which I find myself carelessly uttering sometimes) insists that "One should always act so as to cultivate virtue" or something like that. But any decent justification of virtue will be in consequentialist terms - a virtue is a trait of character that is good for the one who has it.

Here is a video from James March, a Stanford psychology/decision-making researcher, on some psychological implications of consequentialism:

http://www.youtube.com/watch?v=bztgYMoTEjM

I'm a little confused here. Are you saying that virtue ethics = consequentialism + TDT? I always figured consequentialists were allowed to use TDT. Or are you saying that virtue ethics, deontology, and consequentialism are all equivalent, but that virtue ethics is the best way for humans to interpret ethics? If so, I still do not see why. Consequentialism seems nice and simple to me. Or is it something else?

it gets easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers, they're

...
5Nick_Tarleton14y
Hyperbolic discounting is insane because it's dynamically inconsistent (the way humans do it; you could have a dynamically consistent hyperbolic discount rate from a non-indexically-defined zero time, but that's not what's usually meant), not because it's discounting.
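To make the inconsistency concrete, here is a small Python sketch; the reward amounts and discount parameters are purely illustrative assumptions of mine. A discounter who measures every delay hyperbolically from an indexical "now" reverses its preference between two fixed rewards as time passes, while an exponential discounter never does:

```python
def hyperbolic(amount, delay, k=1.0):
    """Hyperbolic discounting, with delay measured from the agent's current 'now'."""
    return amount / (1.0 + k * delay)

def exponential(amount, delay, delta=0.8):
    """Exponential discounting: a constant factor delta per time step."""
    return amount * delta ** delay

small = (55, 6)    # $55 at t=6 (smaller-sooner)
large = (100, 8)   # $100 at t=8 (larger-later)

for now in (0, 5, 6):
    for name, discount in (("hyperbolic", hyperbolic),
                           ("exponential", exponential)):
        v_small = discount(small[0], small[1] - now)
        v_large = discount(large[0], large[1] - now)
        pick = "smaller-sooner" if v_small > v_large else "larger-later"
        print(f"at t={now}, the {name} discounter prefers {pick}")
```

The hyperbolic discounter endorses the larger-later reward at t=0 but flips to the smaller-sooner one by t=5, even though nothing about the rewards has changed; the exponential discounter's choice never flips. Nick's parenthetical corresponds to computing the delay from a fixed zero time instead of from `now`, which makes the hyperbolic values static and removes the reversal.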
1PhilGoetz14y
I think he's saying a cached set of ethical judgements is a virtue ethics. This could apply equally well with 'deontological' substituted everywhere for 'virtue'.
0[anonymous]14y

This is something I wrote in my (now defunct) blog a while back. It probably isn't entirely appropriate as either a comment or a top-level post here, but I want to share it with you anyway, because I think that 'value-as-profundity' as I describe below shares much of the spirit of virtue ethics, but has higher aspirations insofar as it isn't restricted to consideration of one's own virtue, or even virtue in general.

About two years ago I had a 'revelation' - something that's completely changed the way I think about life, the universe and everything.

This one ...