Meta: Influenced by a cool blog post by Kaj, which was in turn influenced by Michael Vassar (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.
There's been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it's not postmodernism. It's virtue ethics. "What, virtue ethics?! Are you serious?" Yup. I'm so contrarian I think cryonics isn't obvious and that virtue ethics is better than consequentialism. This post will explain why.
When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even more so than at finding good reasons for having done things that turned out unpopular, and it's hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn't have too much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...
Moral philosophy was designed for humans, not for rational agents. When you're used to thinking about artificial intelligence, economics, and decision theory, it's easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers; they're bounded agents with little capacity for reflection. Utility functions are great and all, but in the words of Zack M. Davis, "Humans don't have utility functions." Similarly, Kaj warns us: "be extra careful when you try to apply the concept of a utility function to human beings." Back in the day nobody thought smarter-than-human intelligence was possible, and many still don't. Philosophers came up with ways for people to live their lives, have a good time, be respected, and do good things; they weren't even trying to create morals for anyone too far outside the norm of whatever society they inhabited at the time, or whatever society they imagined to be perfect. I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon). Epicurus, Mill, and Bentham were great thinkers and all, but it's not obvious that what they were saying is best practice for individual people, even if their ideas about policy are strictly superior to alternative options. Virtue ethics is good for bounded agents: you don't have to waste memory on what a personalized rulebook says about different kinds of milk, and you don't have to think 15 inferential steps ahead to determine if you should drink skim or whole.
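(If "hyperbolic discounter" is unfamiliar, here's a toy Python sketch of what it cashes out to. This is my own illustration, nothing from the literature, and the discount constants k and r are arbitrary; the point is just that a hyperbolic discounter's preferences reverse as both options recede into the future, which is exactly the kind of inconsistency an expected utility maximizer never exhibits.)

```python
# A toy sketch, not from the post: "hyperbolic discounter" cashes out to
# preference reversal. The discount constants (k, r) are arbitrary choices
# made only for this illustration.

def hyperbolic_value(amount, delay_days, k=0.5):
    """Hyperbolic discounting: value = amount / (1 + k * delay)."""
    return amount / (1 + k * delay_days)

def exponential_value(amount, delay_days, r=0.05):
    """Exponential discounting, the time-consistent kind."""
    return amount * (1 - r) ** delay_days

def prefers(value_fn, base_delay):
    """$100 after base_delay days, or $110 one day later?"""
    small = value_fn(100, base_delay)
    large = value_fn(110, base_delay + 1)
    return "small-sooner" if small > large else "large-later"

# The hyperbolic discounter grabs $100 today over $110 tomorrow, yet
# picks $110 once both options are a month out -- a preference reversal:
print(prefers(hyperbolic_value, 0))    # small-sooner
print(prefers(hyperbolic_value, 30))   # large-later

# The exponential discounter never reverses:
print(prefers(exponential_value, 0))   # large-later
print(prefers(exponential_value, 30))  # large-later
```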
You can be a virtue ethicist whose virtue is to do the consequentialist thing to do (because your deontological morals say that's what is right). Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day-to-day life, just in crazy situations like trolley problems. And anyway, they're all actually virtue ethicists: they're trying to do the 'consequentialist' or 'deontologist' things to do, which happen to usually be the same. Alicorn's decided to do her best to reduce existential risk, and I, being a pseudo-consequentialist, have also decided to do my best to reduce existential risk. Virtue ethicists can do these things too, but they can also abuse the consistency effects such actions invariably come with. If you're a virtue ethicist it's easier to say "I'm the type of person who will reply to all of the emails in my inbox and sort them into my GTD system, because organization and conscientiousness are virtues" and use this as a way to motivate yourself. So go ahead and be a virtue ethicist for the consequences (...or a consequentialist because it's deontic). It's not illegal!
Retooled virtue ethics is better for your instrumental rationality. The Happiness Hypothesis critiqued the way Western ethics, in both the deontological tradition started by Immanuel Kant and the consequentialist tradition started by Jeremy Bentham, has become increasingly reason-based:
The philosopher Edmund Pincoffs has argued that consequentialists and deontologists worked together to convince Westerners in the twentieth century that morality is the study of moral quandaries and dilemmas. Where the Greeks focused on the character of a person and asked what kind of person we should each aim to become, modern ethics focuses on actions, asking when a particular decision is right or wrong. Philosophers wrestle with life-and-death dilemmas: Kill one to save five? Allow aborted fetuses to be used as a source of stem cells? [...] This turn from character ethics to quandary ethics has turned moral education away from virtues and towards moral reasoning. If morality is about dilemmas, then moral education is training in problem solving. Children must be taught how to think about moral problems, especially how to overcome their natural egoism and take into their calculations the needs of others.
[...] I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week [...] The second problem with the turn to moral reasoning is that it relies on bad psychology. Many moral education efforts since the 1970s take the rider off the elephant and train him to solve problems on his own. After being exposed to hours of case studies, classroom discussions about moral dilemmas, and videos about people who faced dilemmas and made the right choices, the child learns how (not what) to think. Then class ends, the rider gets back on the elephant, and nothing changes at recess. Trying to make children behave ethically by teaching them to reason well is like trying to make a dog happy by wagging its tail. It gets causality backwards.
To quote Kaj's response to the above:
Reading this chapter, that critique and the description of how people like Benjamin Franklin made it into an explicit project to cultivate their various virtues one at a time, I could feel a very peculiar transformation take place within me. The best way I can describe it is that it felt like a part of my decision-making or world-evaluating machinery separated itself from the rest and settled into a new area of responsibility that I had previously not recognized as a separate one. While I had previously been primarily a consequentialist, that newly-specialized part declared its allegiance to virtue ethics, even though the rest of the machinery remained consequentialist. [...]
What has this meant in practice? Well, I'm not quite sure of the long-term effects yet, but I think that my emotional machinery kind of separated from my general decision-making and planning machinery. Think of "emotional machinery" as a system that takes various sorts of information as input and produces different emotional states as output. Optimally, your emotional machinery should attempt to create emotions that push you towards taking the kinds of actions that are most appropriate given your goals. Previously I was sort of embedded in the world and the emotional system was taking its input from the entire whole: the way I was, the way the world was, and the way that those were intertwined. It was simultaneously trying to optimize for all three, with mixed results.
But now, my self-model was set separate from the world-model, and my emotional machinery started running its evaluations primarily based on the self-model. The main questions became "how could I develop myself", "how could I be more virtuous" and "how could I best act to improve the world". From the last bit, you can see that I haven't lost the consequentialist layer in my decision-making: I am still trying to act in ways that improve the world. But now it's more like my emotional systems are taking input from the consequentialist planning system to figure out what virtues to concentrate on, instead of the consequentialist reasoning being completely intertwined with my emotional systems.
Applying both consequentialist and virtue ethicist layers to the way you actually get things done in the real world seems to me a great idea. It recognizes that most of us don't actually have that much control over what we do. Acknowledging this and dealing with its consequences, and what it says about us, allows us to do the things we want and feel good about them at the same time.
So, if you'd like, try to be a virtue ethicist for a week. If a key of epistemic rationality is having your beliefs pay rent in anticipated experience, then instrumental rationality is about having your actions pay rent in expected utility. Use science! If being a virtue ethicist helps even one person be more the person they want to be, like it did for Kaj, then this post was well worth the time spent.
Here's my tentative answer to this question. It's just a dump of some half-baked ideas, but I'd nevertheless be curious to see some comments on them. This should not be read as a definite statement of my positions, but merely as my present direction of thinking on the subject.
Most interactions between humans are too complex to be described with any accuracy using deontological rules or consequentialist/utilitarian spherical-cow models. Neither of these approaches is capable of providing any practical guidelines for human action that wouldn't be trivial, absurd, or just sophistical propaganda for the attitudes that the author already holds for other reasons. (One possible exception is economic interactions, in which spherical-cow models based on utility functions make reasonably accurate predictions, and sometimes even give correct non-trivial guidelines for action.)
However, we can observe that humans interact in practice using an elaborate network of tacit agreements. These can be seen as Schelling points, so that interactions between people run harmoniously as long as these points are recognized and followed, and conflict ensues when there is a failure to recognize and agree on such a point, or someone believes he can profit from an aggressive intrusion beyond some such point. Recognition of these points is a complex matter, determined by everything from genetics to culture to momentary fashion, and they can be more or less stable and of greater or lesser importance (i.e. overstepping some of them is seen as a trivial annoyance, while on the other extreme, overstepping certain others gives the other party a licence to kill). These points include all the more or less formally stated social and legal norms, property claims, and all the countless other more or less important expectations that we believe we reasonably hold against each other.
So, here is my basic idea: being a virtuous person means recognizing the existing Schelling points correctly, skillfully and prudently drawing and communicating those points whose exact location depends on you -- and once they've been drawn, committing yourself to defend them relentlessly (so that hopefully, nobody will even see overstepping them at your disadvantage as potentially profitable). An ideal virtuous man by this definition, capable of practical wisdom to make the best possible judgments and determined to respect others' lines and defend his own, would therefore have the greatest practical likelihood of living his life in harmony and having all his business run smoothly, no matter what his station in life.
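To make the game-theoretic flavor of this concrete, here is a toy Python sketch. It is my own illustration, not part of the idea itself: the "virtuous" strategy below is essentially grim trigger from iterated game theory, and the payoff numbers are arbitrary.

```python
# A toy sketch of my own, drastically simplified: the "virtuous" strategy
# is just grim trigger from iterated game theory, and the payoffs are
# arbitrary numbers. It only illustrates the qualitative point above:
# against a committed defender of a line, overstepping it doesn't pay.

RESPECT, OVERSTEP = "respect", "overstep"

def virtuous(own_history, their_history):
    """Respect others' lines, but defend your own relentlessly once crossed."""
    return OVERSTEP if OVERSTEP in their_history else RESPECT

def aggressor(own_history, their_history):
    """Always tries to profit by intruding beyond the line."""
    return OVERSTEP

PAYOFFS = {  # (my move, their move) -> my payoff per round
    (RESPECT,  RESPECT):  3,  # harmonious interaction
    (RESPECT,  OVERSTEP): 0,  # exploited
    (OVERSTEP, RESPECT):  5,  # successful intrusion
    (OVERSTEP, OVERSTEP): 1,  # open conflict
}

def play(a, b, rounds=20):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = a(history_a, history_b)
        move_b = b(history_b, history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(virtuous, virtuous))   # (60, 60): harmony pays best
print(play(virtuous, aggressor))  # (19, 24): one profitable intrusion,
                                  # then both sides sink into conflict
```

Of course, as noted below, the real network of tacit agreements is far too complex for anything like this; the sketch only shows why relentless defense of a line can make intrusion unprofitable compared to mutual respect.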
A society of such virtuous people would also make possible a higher level of voluntary benevolence in the form of friendship, charity, hospitality, mutual aid, etc., since one could count on others not to maliciously exploit a benevolent attempt at lowering one's guard on crucially important lines and basing human relationships on lines that are more relaxed and pleasant, but harder to defend if push comes to shove. For example, it makes sense to be hospitable if you're living among people whom you know to be determined not to take advantage of your hospitality, or to be merciful and forgiving if you can be reasonably sure that people's transgressions are unusual lapses of judgment unlikely to be repeated, rather than due to a persistent malevolent strategy. Thus, in a society populated by virtuous people, it makes sense to apply the label of virtuousness also to characteristics such as charity, friendliness, mercy, hospitality, etc. (but only to the point where one doesn't let oneself be exploited for them!).
This also seems to clarify trolley-problem-like situations, when we observe that actions in which you yourself overstep a Schelling boundary weigh more heavily than outcomes you merely witness. You may feel sorry for the folks who will die, perhaps to the point where you'd sacrifice yourself to save them (but perhaps not if this leaves your own kids as poor orphans, since your existing network of tacit agreements involves caring for them). However, pushing the fat man means overstepping the most important and terrible of all Schelling boundaries -- that which defines unprovoked deadly aggression against one's person, and whose violation gives the attacked party the licence to kill you in self-defense. Violating this boundary is such an extreme step that it may be seen as far more drastic than passively witnessing multiple deaths of people in a manner that doesn't violate any tacit agreements and expectations. (Note though that this perspective is distinct from pure egoism: the tacit agreements in question include a certain limited level of altruism, e.g. helping a stranger in an emergency, at least by calling 911.)
You may view all this virtue talk as consequentialism with respect to the immensely complex network of Schelling points between humans: a consequentialism that takes into account the higher-level game-theoretical consequences of actions, which matter more than the factors covered by the usual utilitarian spherical-cow models. Yet this system is far too complex to allow for any simple model based on utility functions or anything similar. At most, we can formulate advice aimed at individuals on how to make judgments based on the relations that concern them personally in some way and are within their own sphere of accurate comprehension -- and the best practical advice that can be formulated basically boils down to some form of virtue ethics.
So, basically, that would be my half-baked summary. I'm curious if anyone thinks that this might make some sense.
Not only does it make sense, I think it's the most descriptively accurate summary of how people in the real world act that I've seen, which makes it a valuable tool for mapping the territory. I'd love to see it as a top-level post, if you could take the time. I don't think you'd even have to add much.