Douglas_Knight comments on Ethics as a black box function - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Ah, you're right, I left out a few inferential steps. The important point is that over time, the frameworks take on a moral importance of their own - they cease to be mere models and instead become axioms. (More about this in my addendum.) That also makes the meanings of "models that best explain intuitions" and "models that best justify intuitions" blend together, especially since a consistent ethical framework is also good for your external image.
To put it briefly: by "all forms of utilitarianism", I wasn't referring to the classical meaning of utilitarianism as maximizing the happiness of everyone, but instead to the meaning it seems to have taken on in common parlance: any theory where decisions are made by maximizing expected total utility. Nobody (that I know of) has principles that are entirely absolute: they are always weighed against other principles and possible consequences, implying that people assign different weights to them and compare the combinations to find the one that produces the best result (interpretable as the one that produces the highest utility). I suppose you could reject this and say that people just have this insanely huge preference ordering over different outcomes, but that sounds more than a bit implausible. (Not to mention that you can construct a utility function for any given preference ordering, anyway.) Of course, it looks politically better to claim that your principles are absolute and not subject to negotiation, so people instinctively want to reject any such thoughts.
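The parenthetical claim - that any preference ordering yields a utility function - is easy to illustrate for the finite case: just number the outcomes by rank. A minimal sketch (the outcome names here are invented for illustration, and the ordering is assumed to be a total preorder, i.e. complete and transitive):

```python
# For a finite total preference ordering, an ordinal utility function
# always exists: assign each outcome its rank. Outcomes are grouped
# into indifference classes, listed worst to best.
# (These particular outcomes are hypothetical examples.)
preference_ordering = [
    ["lie to a friend"],                     # worst
    ["break a promise", "miss a deadline"],  # indifferent between these
    ["keep your word"],                      # best
]

def utility_from_ordering(ordering):
    """Assign each outcome its tier index as its utility."""
    return {outcome: rank
            for rank, tier in enumerate(ordering)
            for outcome in tier}

u = utility_from_ordering(preference_ordering)
# Maximizing u picks out the top of the original ordering:
best = max(u, key=u.get)
```

This only gives an ordinal utility - the numbers encode ranking, not strength of preference - but that is all the parenthetical claim requires.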
I don't think that's the common usage. Maybe the shared etymology means that any difference must eventually erode, but I think it's worth fighting. A related distinction I think is important is consequentialism vs. utilitarianism. I think the modern meaning of consequentialism is using "good" purely in an ordinal sense and purely based on consequences, though I'm not sure what Anscombe meant by the term. Decision theory says that coherent consequentialism is equivalent to maximizing a utility function.