Eliezer_Yudkowsky comments on On Terminal Goals and Virtue Ethics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (205)
"Good people are consequentialists, but virtue ethics is what works," is what I usually say when this topic comes up. That is, we all think that it is virtuous to be a consequentialist and that good, ideal rationalists would be consequentialists. However, when I evaluate different modes of thinking by the effect I expect them to have on my reasoning, and evaluate the consequences of adopting that mode of thought, I find that I expect virtue ethics to produce the best adherence rate in me, most encourage practice, and otherwise result in actually-good outcomes.
But if anyone thinks we ought not to be consequentialists on the meta-level, I say unto you that lo they have rocks in their skulls, for they shall not steer their brains unto good outcomes.
If ever you want to refer to an elaboration and justification of this position, see R. M. Hare's two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30-36).
That's very interesting, but isn't the level-1 thinking closer to deontological ethics than virtue ethics, since it is based on rules rather than on the character of the moral agent?
My understanding is that when Hare says rules or principles for level-1 he means it generically and is agnostic about what form they'd take. "Always be kind" is also a rule. For clarity, I'd substitute the word 'algorithm' for 'rules'/'principles'. Your level-2 algorithm is consequentialism, but then your level-1 algorithm is whatever happens to consequentially work best - be it inviolable deontological rules, character-based virtue ethics, or something else.
Level-1 thinking is actually based on habit and instinct more than rules; rules are just a way to describe habit and instinct.
Level-1 is about rules which your habit and instinct can follow, but I wouldn't say they're ways to describe it. Here we're talking about normative rules, not descriptive System 1/System 2 stuff.
And the Archangel has decided to take some general principles (which are rules) and implant them in the habit and instinct of the children. I suppose you could argue that the system implanted is a deontological one from the Archangel's point of view, and merely instinctual behaviour from the children's point of view. I'd still feel that calling instinctual behaviour 'virtue ethics' is a bit strange.
Not quite. The initial instincts are the system-1 "presets". These can and do change with time. A particular entity's current system-1 behavior constitutes its "habits".
Funny, I always thought it was the other way around... consequentialism is useful on the tactical level once you've decided what a "good outcome" is, but on the meta-level, trying to figure out what a good outcome is, you get into questions that you need the help of virtue ethics or something similar to puzzle through. Questions like "is it better to be alive and suffering or to be dead", or "is causing a human pain worse than causing a pig pain", or "when does it become wrong to abort a fetus", or even "is there good or bad at all?"
I think that the reason may be that consequentialism requires more computation; you need to re-calculate the consequences for each and every action.
The human brain is mainly a pattern-matching device - it uses pattern-matching to save on computation cycles. Virtues are patterns which lead to good behaviour. (Moreover, these patterns have gone through a few millennia of debugging - there are plenty of cautionary tales about people with poorly chosen virtues to serve as warnings). The human brain is not good at quickly recalculating long-term consequences from small changes in behaviour.
What actually happens is you should be consequential at even-numbered meta-levels and virtue-based on the odd numbered ones... or was it the other way around? :p
Say I apply consequentialism to a set of end states I can reliably predict, and use something else for the set I cannot. In what sense should I be a consequentialist about the second set?
In the sense that you can update on evidence until you can marginally predict end states?
I'm afraid I can't think of an example where there's a meta-level but no predictive capacity on that meta-level. Can you give an example?
I have no hope of being able to predict everything... there is always going to be a large set of end states I can't predict.
Then why have ethical opinions about it at all? Again, can you please give an example of a situation where this would come up?
Lo! I have been so instructed-eth! See above.