Virtue ethics might be reframed for the LW audience as "habit ethics": it's the notion of ethics appropriate for a mind that precomputes its behavior in most situations based on its own past behavior. (Deontology might be reframable as "Schelling point ethics" or something.)
I've had the same kind of insight. If you compute the consequences of following certain habits, the best plan looks an awful lot like virtue ethics. You're not just someone who eats ice cream, you're someone who has an ice cream eating habit.
Similarly, if you compute the consequences of setting and following rules, you get back a lot of deontology. A doctor can't just cut up one person for their organs to save a dozen without risking the destruction of valuable societal trust that certain expectations will be upheld (like not being killed for your organs when you go to the doctor).
Yes - and to extend the point, I am an "all three" ethicist at heart, and I think most people are. We need to assess outcomes, we need to assess habits, and we need to assess fairness. Of course, this leaves wide open the possibility that two or one of {utility, virtue, justice} could be more fundamental and explain the other(s), but at the day to day level we need them all.
Utilitarians do not judge people based on the consequences of their actions. They judge people based on the consequences of judging them.
There are times when my instincts take over. This is probably a good thing, but it would happen even if I didn't know this. Nonetheless, when I make a decision, normally what I am concerned about is what will happen.
When you think about the source code of the players you are being a 'virtue ethicist.' When you optimize outcomes, you are being a 'consequentialist.' You can do both at once.
Beware: the philosopher's virtue ethics is very different from the habit ethics version being discussed here (which isn't really a normative ethical theory at all, but rather a descriptive one). The philosopher's virtue ethics is tied to the concept of teleology (purpose) and the objective ends of human beings, and makes no sense under the reductionist framework usually held here.
I am not sure how to classify religious fanaticism
I always thought of that as less a moral difference and more a matter of actually taking beliefs seriously, combined with a failure to check equally seriously whether those beliefs are true.
I'm guessing more than a few rationalists who grew up in religious contexts once upon a time took those religious beliefs much more seriously than their peers did, and consequentially might have shown signs of "fanaticism".
I'm pretty sure that both me and my dogs are virtue ethicists at heart. I don't think natural selection had any sort of a way to code in any other kind of morality, nor does it seem likely that natural selection would have anything to gain by even trying to code in a different kind of morality.
Yes, I presume (or really post-sume, having read a lot of random stuff) that the bulk of my moral sentiments are 1) inborn, 2) started being put in us long before we were humans, and 3) are indeed sentiments. I think moral sentiments are the human words for what make...
Sure, I agree that my instinctive judgments of right and wrong are more about judging people (including myself) than they are about the consequences I expect from various actions. This is especially true when the people involved are in categories I am motivated to judge in particular ways.
What judgments I endorse when I have time and resources to make a more considered decision is an entirely different question.
Which of those reflects my "at heart" ethical philosophy is a difficult question to answer... I suspect because it's not well defined.
I agree, we tend to instinctively rely on virtue ethics. And this means that we are not psychopaths.
Our apparent reliance on virtue ethics is a result of the operant conditioning of 'good' and 'bad' that has been drilled into us since birth. "Bad Timmy! Stealing candy from the store is WRONG!" is strong punishment of a behavior.
If we could truly abandon our trained value system for pure consequentialism, then we would all be really good at running companies. But most people are not psychopaths, and more importantly most people d...
An ideally moral agent would be a consequentialist (though I won't say "utilitarian" for fear of endorsing the Mere Addition Population Ethic). However, actual humans have very limited powers of imagination, very limited knowledge of ourselves, and very little power of prediction. We can't be perfect consequentialists, because we're horrible at imagining and predicting the consequences of our actions -- or even how we will actually feel about things when they happen.
We thus employ any number of other things as admissible heuristics. Virtue eth...
Indeed, it makes perfect sense for us to be virtue ethicists in the sense that we care about forming the right habits. But in order for virtue ethics not to be vacuous or circular, we need some independent measure of which habits are good and which habits are bad. This is where consequentialism comes in for many LessWrongers. (When I read professional philosophy, the impression I formed was that people who talked about "virtue ethics" generally didn't realise this and ended up with something incomprehensible or vacuous.)
From Wikipedia:
a consequentialist may argue that lying is wrong because of the negative consequences produced by lying—though a consequentialist may allow that certain foreseeable consequences might make lying acceptable. A deontologist might argue that lying is always wrong, regardless of any potential "good" that might come from lying. A virtue ethicist, however, would focus less on lying in any particular instance and instead consider what a decision to tell a lie or not tell a lie said about one's character and moral behavior.
Under this sch...
Are you a virtue ethicist at heart?
No, but I'm a deontologist at heart. Only in death does duty end.
OK, so you use virtue ethics (doing one's duty is virtuous) and deontology as shortcuts for consequentialism, given that you lack resources and data to reliably apply the latter. This makes perfect sense. Your wife applies bounded consequentialism, which also makes sense. Presumably your shortcuts will keep her schemes in check, and her schemes will enlarge the list of options you can apply your rules to.
I upvoted this post, and I want to qualify that upvote. I upvoted it because I believe it raises a substantial point, but I feel like it doesn't have enough, for lack of a better term, punch to it. Part of my lack of conviction comes from not being very well-educated in the matters of moral psychology, or philosophy either, and I suspect this would be cleared up if I were to study them more. Shminux, you might not recognize me, but I'm Evan from the meetup. Anyway, I remember at the last meetup we both attended a couple of weeks ago when we discu...
Spoiler Alerts
An example from fiction. In The Dark Knight, Batman refuses to kill the Joker. From a consequentialist point of view, it would save many more lives if Batman just killed the damn Joker. He refuses to do this because it would make him a killer, and he doesn't want that. Yet, intuitively, we view Batman as virtuous for not killing him.
One could also give this a deontological interpretation: Batman strictly follows "Thou shall not kill". I think, in general, that deontology and virtue ethics have a lot in common: if you follow deontology,...
Yet, intuitively, we view Batman as virtuous for not killing him.
I don't.
I'm frequently annoyed with supposed "good guys" letting the psychopathic super baddy live, taking their foot off their throats, only to lose many more lives and have to stop the bad guy again and again. I don't view them as virtuous, I view them as holding the idiot ball to keep the narrative going. It's like a bad guy stroking a white cat who sends James Bond off to die some elaborate ceremonial death, instead of clubbing him unconscious, putting a few rounds in his head, and having him rolled up in a carpet and thrown out.
Note that the storyline often allows the hero to have his "virtue" and execution too, as the bad guy will often overpower the idiot security forces holding him, pull a gun, and shoot at the hero, allowing the hero to return fire in self-defense. How transparent and tiresome. Generally, "moral dilemmas" in movies are just this kind of dishonest exercise in having your cake and eating it too. How I long for a starship to explode when the Captain ignores the engineer and says "crank it to 11", or to see some bozo snuffed out the moment he says "never tell me the odds".
Bond actually refused to play that game in GoldenEye.
[Bond is holding Trevelyan by his foot on top of the satellite antenna.]
Trevelyan: For England, James?
Bond: No. For me. [lets Trevelyan fall to his death]
Becoming sympathetic to consequentialist thought has definitely ruined most (almost all?) pop culture artifacts involving morality for me. I just sit there thinking, "Wow, they should definitely put a bullet in that guy's head ASAP," interleaved with, "Wait, what's the big deal, I don't see anyone getting hurt here," depending on the genre. Watch Star Trek TNG episodes with this in mind and you will quickly think that they are simultaneously completely incompetent and morally monstrous (the Prime Directive is one of the most evil rules imaginable).
I found myself thinking along similar lines about a year ago when I was faced with a legitimate moral dilemma. Situations which I can view in the abstract, or which I'm distanced from, I can generally apply dispassionate cost-benefit analysis to; but if I actually find myself in a position where I have to make decisions with moral consequences, I'll find myself agonising over what kind of person it makes me.
There's an extra frustrating element to this, because some decisions only have moral consequences as far as "what kind of person they make me"...
However, I posit that most of us intuitively use virtue ethics, and not deontology or consequentialism. In other words, when judging one's actions we intuitively value the person's motivations over the rules they follow or the consequences of said actions.
Consequentialism has nothing to do with how to judge someone else's actions. If I am trying to poison my friend, but by some miracle the poison doesn't kill him and instead manages to cure his arthritis, then I am still a bad person. Virtue ethics seems like a rational framework to judge other people by, perhaps tautologically.
In real-life trolley problems, even committed utilitarians (like commanders during wartime) are likely to hesitate before sacrificing lives to save more.
This, at least, seems to me to be entirely appropriate for a utilitarian. If you don't hesitate before sacrificing lives, you're likely to miss opportunities to accomplish the same goal without sacrificing lives.
If you have one option which is clearly superior to your known alternatives, but that option still leads to outcomes you would seriously want to avoid, then you should probably make full use of whatever time you have to look for other possible options which would be superior.
A pretty common trope in moral philosophy is the idea that since we've all met plenty of (and have many historical examples of) decent, good, and sometimes extraordinarily good people, it just can't be the case that the pre-theoretical intuitions of such people are just plain wrong. The direction of fit in a moral theory is theory->world: if our theory doesn't capture the way (decent or good) people actually do think about moral problems, it's probably wrong. If that's right, the fact that we are all virtue ethicists at heart (or whatever we are) would be pretty good evidence for virtue ethics as the correct theory.
What do you think of this?
I posit that most of us intuitively use virtue ethics, and not deontology or consequentialism.
I suspect that this is true, and that such differences in intuition account for the existence of these differing theories in the first place, e.g., Kant was intuitively deontological while Aristotle was intuitively a virtue ethicist.
Also, there may already be research into moral psychology that explores whether people's disagreements over ethical frameworks correlate with different personality traits. If so, this would speak to your claim.
If only a small minority of people are consequentialists by default, then coldly calculated actions with good consequences would more likely be a sign of a callous character than of a finely tuned moral compass, which in turn could lead to bad consequences in other situations. People might not be as irrational in judging these example situations as it seems.
I'm a virtue ethicist and a consequentialist, as the two are orthogonal. As I see it, the claim "being virtuous makes you happy, and that's why you should be virtuous" falls within both virtue ethics and consequentialism.
Is it possible that your definitions of consequentialist and virtue ethicist overlap? Consequentialism tells you to take the actions that will result in the greatest expected good, but it does not necessarily follow that the greatest expected good is obtained by doling out punishments and rewards to other people based on the immediate consequences of their actions.
Examples:
What sort of moral system to use should depend on what you're using it for. I find virtue ethics the most useful way to view the world, generally.
My sense is that we mostly can't evaluate things from a consequentialist perspective. We're not very good at predicting consequences, and we're even worse at evaluating whether somebody else is behaving in a proper consequentialist way, given the information at their disposal.
Moreover, consequentialism requires us to pin down what we mean by "consequence" and "cause", and those are hard. If a...
Disclaimer: I am not a philosopher, so this post will likely seem amateurish to the subject matter experts.
LW is big on consequentialism, utilitarianism and other quantifiable ethics one can potentially program into a computer to make it provably friendly. However, I posit that most of us intuitively use virtue ethics, and not deontology or consequentialism. In other words, when judging one's actions we intuitively value the person's motivations over the rules they follow or the consequences of said actions. We may reevaluate our judgment later, based on laws and/or actual or expected usefulness, but the initial impulse still remains, even if overridden. To quote Casimir de Montrond, "Mistrust first impulses; they are nearly always good" (the quote is usually misattributed to Talleyrand).
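To make the contrast concrete, here is a minimal toy sketch (my own illustration, not anything from actual work on machine ethics; all numbers, labels, and function names are invented): a "quantifiable" consequentialist judgment is a function of expected outcomes, while the intuitive virtue-ethics judgment is a function of the agent's motivation, as in the attempted-poisoning example mentioned elsewhere in the thread.

```python
# Toy sketch (illustrative only; all numbers and labels are made up).
# consequentialist_judgment scores an action by its expected utility over
# possible outcomes; virtue_judgment scores it by the agent's motivation.

def consequentialist_judgment(outcomes):
    """Expected utility: sum of probability * utility over possible outcomes."""
    return sum(p * u for p, u in outcomes)

def virtue_judgment(motivation):
    """Crude 'character' score based only on the agent's motivation."""
    scores = {"malicious": -1.0, "careless": -0.3, "benevolent": 1.0}
    return scores.get(motivation, 0.0)

# The attempted poisoning that accidentally cures the victim's arthritis:
outcomes = [(1.0, 5.0)]       # the realized consequence happened to be good
motivation = "malicious"      # but the intent was to harm

print(consequentialist_judgment(outcomes))  # 5.0  -> fine, judged by outcome alone
print(virtue_judgment(motivation))          # -1.0 -> the intuitive verdict: bad person
```

The point of the sketch is only that the two judgments take different inputs, which is why they can disagree even when both are computed correctly.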
Some examples:
I am not sure how to classify religious fanaticism (or other bigotry), but it seems to require a heavy dose of virtue ethics (feeling righteous), in addition to following the (deontological) tenets of whichever belief, with some consequentialism (for the greater good) mixed in.
When I try to introspect my own moral decisions (like whether to tell the truth, or to cheat on a test, or to drive over the speed limit), I can usually find a grain of virtue ethics inside. It might be followed or overridden, sometimes habitually, but it is always there. Can you?