I think of virtue ethics as reflecting a timeless decision theory. For example, if you use TDT to make decisions, you don't just want to decide to one-box because you heard it was cool, you want to be the kind of person who one-boxes. You are honorable if you are the kind of person who can make credible commitments. You are cooperative if you are the kind of person who cooperates in prisoner's dilemmas.
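A toy sketch of what I mean, in Python (the perfect-predictor assumption and the dollar amounts are just the standard illustrative ones, and the function is something I made up for this comment, not anything from the TDT paper): the agent whose standing disposition is to one-box walks away richer than the one whose disposition is to two-box.

```python
# Toy Newcomb's problem: the predictor reads your *disposition* ("the kind of
# person you are"), not your in-the-moment choice. Payoffs and the
# perfect-predictor assumption are illustrative only.

def newcomb_payoff(disposition):
    """disposition is 'one-box' or 'two-box'."""
    predictor_expects_one_boxing = (disposition == 'one-box')  # perfect predictor
    opaque_box = 1_000_000 if predictor_expects_one_boxing else 0
    transparent_box = 1_000
    return opaque_box if disposition == 'one-box' else opaque_box + transparent_box

print(newcomb_payoff('one-box'))   # 1000000
print(newcomb_payoff('two-box'))   # 1000
```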
I think of virtue ethics as reflecting a timeless decision theory. For example, if you use TDT to make decisions, you don't just want to decide to one-box because you heard it was cool, you want to be the kind of person who one-boxes.
Would that mean that you want to actually precommit to making certain choices in certain situations, or that you merely want to be predicted to do so, or are those two indistinguishable?
EDIT: I know this is rude, but whoever's downvoting, since this was my original blog post, would you mind explaining precisely how stupid or ignorant I've been? I'd really prefer higher-information signals that can point me towards where/how to learn things.
Actually precommitting corresponds to TDT/virtue ethics, and merely trying to be predicted to do so would correspond to something else (technically TDT is in this class, but it is a much larger class than just that).
Actually precommitting corresponds to TDT/virtue ethics, and merely trying to be predicted to do so would correspond to something else (technically TDT is in this class, but it is a much larger class than just that).
This is a really good point and might need to be repeated more often. This may be a source of some confusion about what TDT does.
I try to look at it this way:
When I have time to process, my ultimate morality is purely consequentialist in nature.
Since I know that my information is always imperfect and my processing power is always limited, I use virtue ethics to make my in-the-moment decisions.
Then, later, when I have more time and information, I use consequentialist reasoning to update my virtue ethics heuristics.
For me, "virtues" are only virtuous inasmuch as they have positive consequential utility, and striving for "virtues" is preferable to direct consequential computation because for limited human minds, virtue ethics tends to produce better net consequences.
You can pull the same stunt with Deontology - treat it as pre-packaged rules to follow when you don't have the time in the moment to do the relevant consequential calculations. Say, you know that lying is much worse for you than your in-the-moment decision making complex says it is. You can reliably do better by implementing a rule of "don't lie", so you do, and then you follow it.
Perhaps this is a problem with my understanding of Deontology, but it seems like Deontological ethics are not as robust under update as Virtue ethics. I.e., I can start with "don't lie", but then discover semi-reliable conditions under which lying IS preferable, so I update to "don't lie unless a life is at serious risk", which now has a pointer snaking out from the "don't lie" rule to the "life" and "serious risk" definitions. The next time I update something that affects the "serious risk" definition, I have to trace down all those dependencies and re-verify coherence.
Virtue ethics has the advantage of performing its coherence checks mostly subconsciously/instinctively, since it ties into behaviors that have been evolutionarily advantageous to our ancestors. Deontology, with its necessity for strict rule-adherence and logical rigour, has many of the failure modes of consequentialism without any of its direct benefits.
I concur. Well said. I'm going to steal this concept: purely consequentialist given sufficient time and information, but merely virtuous in a pinch. Because in a pinch, virtue ethics is consequentially superior... ethically. Ha.
Also, I assume you'll forgive me for stealing your ethical heuristics? :)
I wouldn't share it if I didn't approve of others utilizing it. :) If you find any handy moral heuristics yourself, be sure to pass them forward.
Your description of deontological ethics sounds closer to rule consequentialism, which is a different concept. Deontology means that following certain rules is good in and of itself, not because they lead to better decisionmaking (in terms of promoting some other good) in situations of uncertainty.
It sounds more like act utilitarianism to me. Rule utilitarianism is when you notice that lying usually has bad consequences, and therefore decide not to lie even when lying has good consequences. Coming up with heuristics like "don't lie, unless you have a really good reason" or even "don't lie, even if you think you have a really good reason" is still something you do with the sole intent of improving the consequences. It is therefore act utilitarianism.
Ehh, I think that's pretty much what rule util means, though I'm not that familiar with the nuances of the definition so take my opinion with a grain of salt. Rule util posits that we follow those rules with the intent of promoting the good; that's why it's called rule utilitarianism.
I'm pretty sure the first time I read this, it specifically stated that using rules of thumb is not the same as rule utilitarianism. As it is, it's less clear, but I'm pretty sure it's still saying that they are two different ideals, rather than just different strategies.
Just to see if I'm following correctly:
If I want to follow the rule "optimize actions for some utility function X", rule consequentialism says I do this because of the result of the utility function X: my terminal value is X()¹, and following the aforementioned rule trivially counts as better decision-making with respect to it.
On the other hand, deontology says that I'm following that rule because X itself is good, regardless of whether I value X() or not. This may be because that is simply how human brains are programmed and that is what they do, or by some philosophically-vague decree from higher powers, or something else, but the key point being that X() is completely irrelevant?
1) Programmer slang. If I say my value is "X", that means I value the function, but if I say X(), that means I value the output of the function.
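(If it helps, the whole distinction in that notation is just this, with X a stand-in rule I invented for the example:)

```python
# Purely to illustrate the X vs X() notation from the footnote; X is a made-up
# stand-in rule, not a real moral theory.

def X():
    return "the state of the world that results from following the rule"

valued_by_deontology = X          # the rule itself is what's good
valued_by_consequentialism = X()  # only the rule's output matters
```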
I think that's accurate, though maybe not, because the programming jargon is unnecessarily obfuscating. The basic point is that following the rule is good in and of itself. You shouldn't kill people because there is a value in not killing that is independent of the outcome of that choice.
You shouldn't kill people because there is a value in not killing that is independent of the outcome of that choice.
As an attempt to remove the programming jargon (I don't know of any words or expressions which express the same concept without math or programming jargon of some kind):
For that example, skipping the traditional "Kill this one or five others die!" dilemma, if we suppose the person to be killed will revive on their own and thereby become immortal, with no additional side effects, the deontological rule still takes precedence and therefore it is good to let the person later die of old age. Rule consequentialism, in such a corner case, would want the person to end up immortal.
Correct?
That would be a form of deontology, yes. I'm not sure which action neo-Kantians would actually endorse in that situation, though.
I thought the question was about normative, not descriptive ethics. Normative here meaning: how would an ideal agent behave?
Human beings are too messy to ask about their descriptive ethics in a simple survey question.
Ah. The issue being, I think an ideal agent will have to behave meta-ethically, or not at all. Agent implies the presence of multiple agents; in a single-agent universe the distinction between morality and aesthetics collapses. A universe with a single human or single ideal agent in it is morally equivalent to a universe with only Clippy in it.
At least, this is a personal intuition I'm mediocre at unpacking. Certainly, from a naturalistic/evo-psych perspective, our notions of morality derive quite directly from our social functioning and our aesthetics. Icky things are impure-evil because we have some intuition that they will poison or damage us somehow; murder is evil because it hurts other human beings with whom we had social interactions. (In fact, killing a dehumanized/socially decontextualized human is usually considered wrong only by people who've significantly extrapolated and rationalized their ethics away from their moral intuitions!)
I think an ideal agent will have to behave meta-ethically, or not at all. [...] A universe with a single human or single ideal agent in it is morally equivalent to a universe with only Clippy in it.
I think your comment needs more unpacking than that. I don't understand most of it, especially the sentences above.
in a single-agent universe the distinction between morality and aesthetics collapses.
What would prevent it from collapsing in a multiple-agent universe?
Certainly, from a naturalistic/evo-psych perspective, our notions of morality derive quite directly from...
Are you implying this is at odds with consequentialism, or something else? Consequentialism is compatible with ethical egoism, altruism, utilitarianism and many other moral philosophies, or, you could say those are subcategories of consequentialism. You have to have some terminal values for it to make any sense, but that doesn't imply virtue ethics, which is a mistake you seem to be making in the OP.
You have to have some terminal values for it to make any sense, but that doesn't imply virtue ethics, which is a mistake you seem to be making in the OP.
I'm not even trying to imply anything about virtue ethics in the OP :-/.
What would prevent it from collapsing in a multiple-agent universe?
I think that our concept of morality as distinct from aesthetics seems to be primarily a social thing. Morality is about how we handle other people, or at least some abstracted sense of an Other with real agency. People, certain animals, Nature, and God are thus all considered valid subjects for human morality to deal with, but we usually have no moral intuitions or even deductions about, say, paper-clips or boot laces as such.
A religious person might care that God legislates a proper order for tying your boot laces (it's left shoe followed by right shoe, halachically speaking ;-)), but even they don't normally have a preexisting, terminal moral value over the bootlaces themselves.
So, to sum up the unpacking, I think that on a psychological level, morality is fundamentally concerned with other people/agents and their treatment, it's a social function.
From the OP:
In such situations, the only drawback is that naive consequentialism fails to consider consequences on the person acting (ie: me). Once I make that more virtue-ethical adjustment ...
Consequentialism would cover you just fine, if you just happened to have any terminal values concerning you. Or, do you mean consequentialism implies too much computation for you? If so, using simpler moral heuristics is still consequentialism, if you predict it is useful to maximize your values in certain situations.
I think that our concept of morality as distinct from aesthetics seems to be primarily a social thing. Morality is about how we handle other people
Or animals, just like you said. It could also include how you handle your future or past self, and I don't think that is about aesthetics. Alas, we seem to be arguing about definitions here, probably not very useful.
Deontology and Virtue Ethics are reducible to counting non-obvious consequences of your actions. If you choose to lie, people are more likely to disbelieve you - so there's a reason to follow a "no lying" rule that a naive consequentialist misses. Similarly, doing calisthenics every morning helps turn you into a disciplined and vigorous person, and these consequences are also easy to miss.
In other words, I'm a consequentialist that uses deontological and virtue-based thinking as a cue to consider the consequences of following policies or the consequences of being the kind of person who does X.
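Roughly, the cue amounts to widening the utility calculation (the numbers below are invented for illustration; this is just a sketch of the bookkeeping, not a real calibration):

```python
# Sketch: a consequentialist calculation that includes the "non-obvious"
# consequences a naive one misses (reputation, effects on your own character).
# All utility numbers are invented.

def naive_utility(action):
    return {"lie": 10, "tell_truth": 0}[action]                  # immediate payoff only

def policy_utility(action):
    reputation_effect = {"lie": -15, "tell_truth": 5}[action]    # people trust you less/more
    character_effect = {"lie": -3, "tell_truth": 2}[action]      # the kind of person it makes you
    return naive_utility(action) + reputation_effect + character_effect

print(max(["lie", "tell_truth"], key=naive_utility))   # 'lie'
print(max(["lie", "tell_truth"], key=policy_utility))  # 'tell_truth'
```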
Deontology and Virtue Ethics are reducible to counting non-obvious consequences of your actions. If you choose to lie, people are more likely to disbelieve you - so there's a reason to follow a "no lying" rule that a naive consequentialist misses.
I don't believe this is a reduction. A deontologist will not lie even when he has built up an immense base of trust and would "win" a whole lot from the lie. He just won't do it, because to him it's completely unethical.
Furthermore, the consequentialist might reason the other way around. A deontologist non-liar might decide that he can use Exact Words or You Didn't Ask to engage in some necessary deception. A long-term consequentialist will note that actually doing so gets you a reputation for being a Manipulative Bastard -- which then segues right into "Virtue Ethics as Timeless Decision Theory or ethics under repeated games".
(By the way, what is actually the distinction between timeless decision theory and a decision model under which all scenarios are treated as repeated even before they happen the first time?)
EDIT: What I actually do, myself, is to sometimes lie using the Moral Equivalent of the Truth: a lie designed not to poison other people's decision-making. Lying outright about having an errand to do instead of sleeping in (insert other minor vices here...) is more-or-less ok, but using Exact Words about making a contract and becoming a magical girl... is evil.
(Yes, that was a Madoka Magica reference.)
EDIT EDIT: Which definitely does seem consequentialist, in the limit, but includes consequential reasoning over how my actions affect other people's decision-making, which then involves Timelessness and virtue-reasoning.
A deontologist will not lie even when he has built up an immense base of trust and would "win" a whole lot from the lie.
If you make a deontologist out of whole cloth with non-contradicting rules, sure. An actual human using deontological thinking is reducible to consequentialism plus large penalties for rule breaking. I mean, at some point the deontologist has to choose between two kinds of rule-breaking (say, between "always tell the truth" and "do not kill people, or through inaction allow people to die"), and the way to do that is by figuring out which rule is more important, which sounds an awful lot like consequentialism (I suppose you could make rules for which rules to follow when, but that way lies making way too many rules).
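In sketch form (the penalty weights are made up; the point is only that a conflict between rules gets resolved by comparing how heavily each one is weighted, which is the consequentialist-looking step):

```python
# "Consequentialism plus large penalties for rule-breaking", as a sketch.
# The weights are invented for illustration.

RULE_PENALTIES = {
    "lied": 100,
    "let_someone_die": 10_000,
}

def utility(consequence_value, rules_broken):
    return consequence_value - sum(RULE_PENALTIES[r] for r in rules_broken)

# Forced choice: lie to save a life, or tell the truth and let someone die.
print(utility(0, {"lied"}))             # -100
print(utility(0, {"let_someone_die"}))  # -10000, so lying "wins"
```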
(By the way, what is actually the distinction between timeless decision theory and a decision model under which all scenarios are treated as repeated even before they happen the first time?)
I believe that TDT is a formalization of the intuition that if you make a certain choice in certain circumstances, then everything that makes decisions in a similar enough manner is going to make the same choice in similar enough circumstances. That's a complicated sentence, let's see if I can do better:
Your current decision isn't an isolated event. You make that decision for certain reasons - the kind of person you are, the situation you are in, the kind of logic you use, and maybe some other things that I'm missing at the moment. That decision-making process is linked, logically rather than causally, to every other decision that is similar enough to the one you're currently making. So if you want to get the right answer for certain classes of problems - say, playing the prisoner's dilemma against a copy of you, or Newcomb's problem - then you need a decision theory that explicitly takes this link into account. TDT is one such formal decision theory.
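For the copy case specifically, the point can be made very concretely (standard illustrative payoffs; this is the intuition, not the TDT formalism): because the copy's output is constrained to equal yours, the only outcomes on the table are mutual cooperation and mutual defection.

```python
# Prisoner's dilemma against an exact copy of yourself. Since the copy runs the
# same decision procedure, only the (C, C) and (D, D) outcomes are reachable.
# Standard illustrative payoffs.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def payoff_against_copy(my_choice):
    copy_choice = my_choice          # the copy decides exactly as I do
    return PAYOFF[(my_choice, copy_choice)]

print(payoff_against_copy("C"))  # 3
print(payoff_against_copy("D"))  # 1, so cooperating is the better output
```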
I mean, at some point the deontologist has to choose between two kinds of rule-breaking (say, between "always tell the truth" and "do not kill people, or through inaction allow people to die"), and the way to do that is by figuring out which rule is more important, which sounds an awful lot like consequentialism
Sorta agreed. But note that rewriting some conflicting rules into consequentialist values automatically produces the instrumental goal of "avoid getting into situations where the rules would conflict", whereas the original deontologist might or might not have that as one of their rules.
Yes. I also thought a bit about this question and chose "Other" for the same reason. But I didn't think about it as long as you did, and your exposition fits.
An obvious shortcut to this end: if there are multiple competing schools of philosophical inquiry, usually all of them are right somehow and a synthesis is in order. You just provided one.
I think that all ethical systems are just rationalizations, hence all the difficulties in using them consistently.
One thing that I sometimes do when I'm uncertain of an ethical decision is to take a majority vote of different ethical systems. If there's a clear majority going one way, then that's useful evidence that whatever my actual moral position, that's what I should do. This reflects that I suspect at some level that humans either have an inherently inconsistent moral framework or we haven't really come up with a good way of articulating what our actual morality is. This works as an approximation.
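As a sketch of the voting procedure (the verdict functions are invented placeholders, not serious analyses under each framework):

```python
# Majority vote across ethical frameworks, as a rough decision heuristic.
# Each verdict function is a made-up placeholder returning approve/disapprove.

from collections import Counter

def consequentialist_verdict(action): return action in ("white_lie", "keep_promise")
def deontological_verdict(action):    return action == "keep_promise"
def virtue_ethics_verdict(action):    return action in ("white_lie", "keep_promise")

def majority_vote(action):
    verdicts = [consequentialist_verdict(action),
                deontological_verdict(action),
                virtue_ethics_verdict(action)]
    tally = Counter(verdicts)
    return "permissible" if tally[True] > tally[False] else "impermissible"

print(majority_vote("white_lie"))      # 'permissible' (2 of 3 approve)
print(majority_vote("keep_promise"))   # 'permissible'
print(majority_vote("break_promise"))  # 'impermissible'
```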
This seems closer to a mixture of Joshua Greene's and Nick Bostrom's positions on the issues involved than any other intuitive idea about morality I have ever seen. But likely this is because you read them before and then forgot that it came from them, or it's just coincidence.
On the recent LessWrong/CFAR Census Survey, I hit the following question:
To my own surprise, I couldn't come up with a clear answer. I certainly don't consistently apply one of these things across every decision I make in my life, and yet I consider myself at least mediocre on the scale of moral living, if not actually Neutral Good. So what is it I'm actually doing, and how can I behave more ethical-rationally?
Well, to analyze my own cognitive algorithms, I do think I can actually place these various codes of ethics in relation to each other. Basically, looked at behavioristically/algorithmically, they vary across how much predictive power I have, my knowledge of my own values, and what it is I'm actually trying to affect.
Consequentialism is the ethical algorithm I consider useful in situations of greatest predictive power and greatest knowledge of my own values. It is, so to speak, the ethical-algorithmic ideal. In such situations, the only drawback is that naive consequentialism fails to consider consequences on the person acting (ie: me). Once I make that more virtue-ethical adjustment, consequentialism offers a complete ideal for ethical action over a complete spectrum of moral values for affecting both the universe and myself (but I repeat: I'm part of the universe).
However, in almost all real situations, I don't have perfect predictive knowledge -- not of the "external" universe and not of my own values. In these situations, I can, however, use my incomplete and uncertain knowledge to find acceptable heuristics that I can expect to yield roughly monotonic behavior: follow those rules, and my actions will generally have positive effects. This kind of thinking quickly yields up recognizable, regular moral commandments like, "You will not murder" or "You will not charge interest above this-or-that amount on loans". Yes, of course we can come up with corner-case exceptions to those rules, and we can also elaborate logically on the rules to arrive at more detailed rules covering more circumstances. However, by the time we've fully elaborated out the basic commandments into a complete, obsessively-compulsively detailed legal code (oh hello Talmud), we've already covered most of the major general cases of moral action. We can now invent a criterion for how and when to transition from one level of ethical code to the one below it: our deontological heuristics should be detailed enough to handle any case where we lack the information (about consequences and values) to resort to consequentialism.
At first thought, virtue ethics seems like an even higher-level heuristic than deontological ethics. The problem is that, unlike deontological and consequentialist ethics, it doesn't output courses of action to take, but instead short- and long-term states of mind or character that can be considered virtuous. So we don't have the same thing here; it's not a higher-level heuristic but a seemingly completely different form of ethics. I do think we can integrate it, however: virtue ethics just consists of a set of moral values over one's own character. "What kind of person do I think is a good person?" might, by default, be a tautological question under strict consequentialism or deontology. However, when we take an account of the imperfect nature of real people (we are part of the universe, after all), we can observe that virtue ethics serves as a convenient guide to heuristics for becoming the sort of person who can be relied upon to take right actions when moral issues present themselves. Rather than simply saying, "Do the right thing no matter what" (an instruction that simply won't drive real human beings to actually do the right thing), virtue ethics encourages us to cultivate virtues, moral cognitive biases towards at least a deontological notion of right action.
It's also possible we might be able to separate virtue ethics into both heuristics over our own character, and actual values over our own character. These two approaches to virtue ethics should then converge in the presence of perfect information: if I knew myself utterly, my heuristics for my own character would exactly match my values over my own character.
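For the algorithmically inclined, here is a very rough sketch of the layering I have in mind, with every threshold and function name invented purely for illustration: run the full (virtue-adjusted) consequentialist calculation when predictive power and knowledge of my values are high enough, and fall back on deontological heuristics when they aren't.

```python
# A hedged sketch of the layered scheme described above; all names, thresholds,
# and rules are invented for illustration.

def expected_utility(action, world_values, character_values):
    # Consequences for the world plus consequences for the kind of person
    # doing the acting (the "virtue-ethical adjustment" above).
    return world_values(action) + character_values(action)

def decide(options, predictive_power, value_knowledge,
           world_values, character_values, rules):
    if predictive_power > 0.8 and value_knowledge > 0.8:   # arbitrary thresholds
        return max(options, key=lambda a: expected_utility(a, world_values, character_values))
    # Otherwise fall back on deontological heuristics detailed enough to cover
    # the cases where the full calculation isn't available.
    permitted = [a for a in options if all(rule(a) for rule in rules)]
    return permitted[0] if permitted else "seek more information"

rules = [lambda a: a != "murder", lambda a: a != "lie"]
print(decide(["lie", "tell_truth"], predictive_power=0.3, value_knowledge=0.5,
             world_values=lambda a: 0, character_values=lambda a: 0, rules=rules))
# 'tell_truth'
```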
This is my first effort at actually blogging on rationality subjects, so I'm hoping it's not covering something hashed and rehashed, over and over again, in places like the Sequences, of which I certainly can't claim full knowledge.