Will_Newsome:
Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems.
More precisely, they do disagree about the same practically relevant ethical questions that provoke controversy among common folks too, especially the politically and ideologically charged ones -- but their positions are only loosely correlated with their ethical theories, and instead stem from the same gut feelings and signaling games as everybody else's. This seems to me like a pretty damning fact about the way this whole area of intellectual work is conducted in practice.
Maybe, but be very careful not to jump from
a pretty damning fact about the way this whole area of intellectual work is conducted in practice.
to
therefore there is no sense in individual people whose rationality is above-average attempting, in good faith and by way of experiment, to apply some subset of this intellectual work to their actual lives,
which I think is a conclusion that some people might inadvertently draw from your comment.
In both your GTD example and Kaj's posting example, virtue doesn't seem to affect what you think you should do, just how you motivate yourself to do it, so "virtue psychology" might be a more accurate description than "virtue ethics".
Isn't this just Indirect Consequentialism?
It's worth noting that pretty much every consequentialist since J.S. Mill has stressed the importance of inculcating generally-reliable dispositions / character traits, rather than attempting to explicitly make utility calculations in everyday life. It's certainly a good recommendation, but it seems misleading to characterize this as in any way at odds with the consequentialist tradition.
I agree that virtue ethics may help some people with their instrumental rationality. In general I have noticed a trend on LessWrong in which popular modes of thinking are first shunned as irrational and not based on truth, only to be readopted later as being more functional for achieving one's stated goals. I think this process is important, because it allows one to rationally evaluate which 'irrational' models lead to the best outcomes.
For the consequences of your actions to be good, it's not necessary for you to personally hold the consequences in your conscious attention. Something has to carry out the process of morally evaluating consequences, but it's not necessary (and, as you point out, not always possible and never fully so) for that something to be you. If you have a good rule, following that rule becomes a new option to choose from; deciding on virtues can be as powerful as deciding on actions.
But looking at virtue ethics as a foundation for decision-making is like looking at the wings of a Boeing 747 as fundamental elements of reality. Virtues are concepts that exist in the mind to optimize thinking about what's moral, not the morality itself. There is only one level to morality, just as there is to physics: the bottom level, the whole thing. All the intermediate concepts, the aspects of goodness we understand, exist in the mind, not in morality. Morality does not care about our mathematical difficulties. It determines value the inefficient way.
Let us not lose sight of the reductionist nature of morality, even as we take comfort in the small successes of the high-level tools we have for working with it. You don't need to believe in the magical goodness of flu vaccines to benefit from them; on the contrary, it helps to understand the real reason why vaccines work, as distinct from the fantasy of magical goodness.
A quick thought that may not stand up to reflection:
Consequentialists should think of virtue ethics as a human-implementable Updateless Decision Theory. Under UDT, your focus is on being an agent whose actions maximize utility over all possibilities, even those that you know now not to be the case, as long as they were considered possible when your source code was written. Hence, in the Counterfactual Mugging, you make a choice that you know will make things worse in the actual world.
Similarly, virtue ethics requires that you focus on making yourself into the kind of agent who would make the right choices in general, even if that means making a choice that you know will make things worse in the actual world.
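The Counterfactual Mugging arithmetic behind this comparison can be sketched in a few lines. The stakes below ($100 asked, $10,000 counterfactually offered on a fair coin) are the standard illustrative numbers from the thought experiment, not figures from the comment above:

```python
# Counterfactual Mugging: Omega flips a fair coin.
# Tails: Omega asks you for $100, and you gain nothing by paying.
# Heads: Omega gives you $10,000 -- but only if you are the kind of
#        agent who would have paid on tails.

P_HEADS = 0.5

def expected_value(pays_when_asked: bool) -> float:
    """Expected value, evaluated before the coin flip, of *being*
    an agent with the given disposition."""
    tails_outcome = -100 if pays_when_asked else 0
    heads_outcome = 10_000 if pays_when_asked else 0
    return P_HEADS * heads_outcome + (1 - P_HEADS) * tails_outcome

# The committed agent loses in the actual tails-world, but the
# disposition itself is worth more in expectation -- which is the
# sense in which virtue ethics focuses on the agent, not the act.
print(expected_value(True))   # 4950.0
print(expected_value(False))  # 0.0
```

The point of the sketch is that the comparison is between dispositions evaluated before the fact, not between actions evaluated after it.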
Edited to reorder clauses for clarity.
The consequences of non-consequentialism are disastrous. Just look at charity: instead of trying to get the most good per buck, people donate because it "makes them a better person" or "is the right thing to do", essentially throwing all that good away.
If we got our act together and did the most basic consequentialist thing of establishing a monetary value per death and per unit of suffering prevented, the world would immediately become a far less sucky place to live than it is now.
This world is so filled with low-hanging fruit we're not picking, purely because of backwards morality, that it's not even funny.
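The "good-per-buck" comparison the comment calls for is just division, but it's worth seeing how large the gaps can be. All the intervention names and dollar figures below are made up for illustration, not real charity data:

```python
# Hypothetical (made-up) cost-effectiveness figures, in dollars per
# life-equivalent saved, illustrating the good-per-buck comparison.
interventions = {
    "prestige gala fundraiser": 250_000,
    "local food drive": 40_000,
    "insecticide-treated bednets": 5_000,
}

budget = 1_000_000  # dollars to allocate

# Rank interventions from cheapest to most expensive per life saved.
for name, cost_per_life in sorted(interventions.items(),
                                  key=lambda kv: kv[1]):
    lives = budget / cost_per_life
    print(f"{name}: ~{lives:.0f} lives-equivalent per ${budget:,}")
```

Even with invented numbers, the structure of the argument is visible: when cost-effectiveness varies by a factor of fifty, choosing by warm glow rather than by arithmetic throws away most of the attainable good.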
Here's my tentative answer to this question. It's just a dump of some half-baked ideas, but I'd nevertheless be curious to see some comments on them. This should not be read as a definite statement of my positions, but merely as my present direction of thinking on the subject.
Most interactions between humans are too complex to be described with any accuracy using deontological rules or consequentialist/utilitarian spherical-cow models. Neither of these approaches is capable of providing any practical guidelines for human action that wouldn't be trivial, absurd, or just sophistical propaganda for the attitudes that the author already holds for other reasons. (One possible exception is economic interactions, in which spherical-cow models based on utility functions make reasonably accurate predictions, and sometimes even give correct non-trivial guidelines for action.)
However, we can observe that humans interact in practice using an elaborate network of tacit agreements. These can be seen as Schelling points, so that interactions between people run harmoniously as long as these points are recognized and followed, and conflict ensues when there is a failure to recognize and agree on s...
I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon).
Do you think he had pain asymbolia from birth or developed it over the course of his life? Also, what do you think is the importance of this?
I've been practicing vipassana meditation daily for about 3 years, and over this period I think I've developed pain asymbolia to some degree. I've felt that pain asymbolia is just one aspect of a more extensive change in the nature of mental reactions to mental phenomena.
I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week...
I agree very much with this. I like consequentialism for dealing with the high-stakes stuff like trolley scenarios, but humdrum everyday ethics involves scenarios more like:
"Should I have said something when my boss subtly put down Alice just now?"
"Should I cut this guy off? I need to get a move on, I'm late for class."
"This old lady can barely stand while the bus is moving, but nobody is getting up. I'm already standing, but should I say something to this drunk man who's slouching across two seats? Or is it not worth the risk of escalating him?"
"This company is asking me for an estimate on some work, but there is significant peripheral work that will have to be done afterward, which they don't seem to realize. If I am hired, I can perform the requested work, then charge high force-account rates for the extra work (as per our contract) and make a killing. But it could hurt their business severely. Should I tell them about their mistake?"
It's not that these can't be analyzed via consequentialism, it's that they're much more amenable to virtue ethical thought.
One caveat: One should, of course, refrain from using virtue ethics to evaluate others' choices. It's best to use consequentialism for that purpose.
Darn... beat me to it. Good job. I'll still totally write a post about virtue ethics when I'm done with my dissertation though.
You skipped some of the important criticisms here...
Yes, it is important to have some framework for action other than simple consequentialism, since we're bounded agents and are working against a lot of in-built biases. But what's the evidence that virtue ethics is the best thing we've got for that? Philosophers are okay with taking Aristotle's word for it, but we shouldn't, even if he was fairly accurate when it came to most
I am a virtue ethicist for consequentialist reasons. While good results (consequences) are the end of my ethics, the real world is too complex for a real-time evaluation of the likely results of even relatively simple decisions. So you use virtues (my definition is slightly non-standard): rules that are more likely than not to result in better outcomes. This is partially derived from the definition of morality in Harry Browne's How I Found Freedom in an Unfree World, which, whether or not you agree with it, raises lots of interesting points.
but in the words of Zack M. Davis, "Humans don't have utility functions."
The sentiment (I can't say belief; humans don't have beliefs) is sufficiently common and the words are sufficiently generic such that it seems odd to quote me specifically.
I also came to virtue ethics via The Happiness Hypothesis, and I read the quoted passage a little differently. I understand the post as saying virtue ethics can be a useful implementation of consequentialism for bounded agents by giving them high level summaries of what they should do. The passage, however, is arguing this focus on actions is misguided, and I agree.
As others have helpfully reiterated, virtues can't be foundational, just like the rules of rule utilitarianism aren't worth following for their own sake. A computationally bounded agent might no...
Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems.
So, does the virtue ethicist push the fat man from the bridge?
ceteris almost never is paribus
At the risk of getting downvoted for nitpicking, I must point out that if you really insist on using Latin like this, the correct way to say it is: cetera almost never are pares.
Sorry, but the sight of butchered Latin really hurts my eyes.
Another thing that might be relevant... many virtue ethicists (notably Richard Volkman) will claim not to have a theory of right action at all. A mistaken view of virtue ethics (which I find myself uncarefully uttering sometimes) insists that "One should always act so as to cultivate virtue" or something like that. But any decent justification of virtue will be in consequentialist terms - a virtue is a trait of character that is good for the one who has it.
Here is a video from James March, a Stanford psychology/decision-making researcher, on some psychological implications of consequentialism.
I'm a little confused here. Are you saying that Virtue ethics = consequentialism + TDT? I always figured consequentialists were allowed to use TDT. Or are you saying that virtue ethics, deontology, and consequentialism are all equivalent, but that virtue ethics is the best way for humans to interpret ethics? If so, I still do not see why. Consequentialism seems nice and simple to me. Or is it something else?
...it gets easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers, they'r
This is something I wrote in my (now defunct) blog a while back. It probably isn't entirely appropriate as either a comment or a top level post here but I want to share it with you anyway, because I think that 'value-as-profundity' as I describe below shares much of the spirit of virtue ethics, but has higher aspirations insofar as it isn't restricted to consideration of one's own virtue, or even virtue in general.
About two years ago I had a 'revelation' - something that's completely changed the way I think about life, the universe and everything.
This one ...
Meta: Influenced by a cool blog post by Kaj, which was influenced by a cool Michael Vassar idea (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.
There's been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it's not postmodernism. It's virtue ethics. "What, virtue ethics?! Are you serious?" Yup. I'm so contrarian I think cryonics isn't obvious and that virtue ethics is better than consequentialism. This post will explain why.
When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even more so than at finding good reasons that they could have done things that turned out unpopular, and it's hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn't have too much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...
Moral philosophy was designed for humans, not for rational agents. When you're used to thinking about artificial intelligence, economics, and decision theory, it gets easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers, they're bounded agents with little capacity for reflection. Utility functions are great and all, but in the words of Zack M. Davis, "Humans don't have utility functions." Similarly, Kaj warns us: "be extra careful when you try to apply the concept of a utility function to human beings." Back in the day nobody thought smarter-than-human intelligence was possible, and many still don't. Philosophers came up with ways for people to live their lives, have a good time, be respected, and do good things; they weren't even trying to create morals for anyone too far outside the norm of whatever society they inhabited at the time, or whatever society they imagined to be perfect. I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon). Epicurus, Mill, and Bentham were great thinkers and all, but it's not obvious that what they were saying is best practice for individual people, even if their ideas about policy are strictly superior to alternative options. Virtue ethics is good for bounded agents: you don't have to waste memory on what a personalized rulebook says about different kinds of milk, and you don't have to think 15 inferential steps ahead to determine if you should drink skim or whole.
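The claim that hyperbolic discounters are "not anything resembling sane" has a concrete signature: preference reversal, which an exponential discounter never exhibits. A small sketch (the discount parameters and dollar amounts are illustrative assumptions, not from the post):

```python
# A hyperbolic discounter values a reward as amount / (1 + k * delay),
# an exponential discounter as amount * daily_factor ** delay.

def hyperbolic(amount: float, delay_days: float, k: float = 1.0) -> float:
    return amount / (1 + k * delay_days)

def exponential(amount: float, delay_days: float,
                daily: float = 0.95) -> float:
    return amount * daily ** delay_days

# Choice: $100 after d days, versus $110 after d + 1 days.
for d in (0, 30):
    hyp_grabs_sooner = hyperbolic(100, d) > hyperbolic(110, d + 1)
    exp_grabs_sooner = exponential(100, d) > exponential(110, d + 1)
    print(f"delay {d:2d}d: hyperbolic grabs smaller-sooner: "
          f"{hyp_grabs_sooner}, exponential grabs smaller-sooner: "
          f"{exp_grabs_sooner}")
```

With these parameters the hyperbolic agent grabs the $100 when it is immediate but waits for the $110 when both rewards are a month away, despite the two choices being the same trade-off shifted in time; the exponential agent chooses consistently in both cases. That inconsistency is exactly the kind of bug that rules and ingrained dispositions can patch for bounded agents.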
You can be a virtue ethicist whose virtue is to do the consequentialist thing to do (because your deontological morals say that's what is right). Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems. And anyway, they're all actually virtue ethicists: they're trying to do the 'consequentialist' or 'deontologist' things to do, which happen to usually be the same. Alicorn's decided to do her best to reduce existential risk, and I, being a pseudo-consequentialist, have also decided to do my best to reduce existential risk. Virtue ethicists can do these things too, but they can also abuse the consistency effects such actions invariably come with. If you're a virtue ethicist it's easier to say "I'm the type of person who will reply to all of the emails in my inbox and sort them into my GTD system, because organization and conscientiousness are virtues" and use this as a way to motivate yourself. So go ahead and be a virtue ethicist for the consequences (...or a consequentialist because it's deontic). It's not illegal!
Retooled virtue ethics is better for your instrumental rationality. The Happiness Hypothesis critiqued the way Western ethics, in both the deontological tradition started by Immanuel Kant and the consequentialist tradition started by Jeremy Bentham, has become increasingly reason-based:
To quote Kaj's response to the above:
Applying both consequentialist and virtue ethicist layers to the way you actually get things done in the real world seems to me a great idea. It recognizes that most of us don't actually have that much control over what we do. Acknowledging this and dealing with its consequences, and what it says about us, allows us to do the things we want and feel good about it at the same time.
So, if you'd like, try being a virtue ethicist for a week. If a key to epistemic rationality is having your beliefs pay rent in anticipated experience, then instrumental rationality is about having your actions pay rent in expected utility. Use science! If being a virtue ethicist helps even one person be more the person they want to be, like it did for Kaj, then this post was well worth the time spent.