steven0461 comments on Hacking the CEV for Fun and Profit - Less Wrong
It seems to me there's a pretty strong correlation between philosophical competence and endorsement of utilitarian (vs. egoist) values, and also that most who endorse egoist values do so because they're confused about, e.g., various issues around personal identity and the difference between pursuing one's self-interest and following one's own goals.
Can we taboo "utilitarian", since nobody ever seems to be able to agree on what it means? Also, do you have any references to strong arguments for whatever you mean by utilitarianism? I've yet to encounter any good arguments in favour of it, but given how many apparently intelligent people seem to consider themselves utilitarians, they presumably exist somewhere.
Utility is just a basic way to describe "happiness" (or, if you prefer, "preferences") in an economic context. The unit of measurement for utility is sometimes called the utilon. To say you are a utilitarian just means that you'd prefer the outcome that results in the largest total number of utilons over the human population. (Or over the universe, if you allow for Babyeaters, Clippies, Utility Monsters, Super Happies, and so on.)
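As a minimal sketch of that decision rule (the outcome names and utilon numbers below are hypothetical, chosen only for illustration):

```python
# Each outcome maps to a list of per-person utilons (hypothetical numbers).
outcomes = {
    "status_quo": [3, 3, 3],       # total: 9
    "redistribution": [2, 4, 5],   # total: 11
}

# A total utilitarian picks whichever outcome has the largest summed utilons.
best = max(outcomes, key=lambda name: sum(outcomes[name]))
print(best)  # -> redistribution
```

Note that this already bakes in one contested choice: summing treats every person's utilons as interchangeable, which is exactly the agent-neutrality and weighting question raised below.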
Alicorn, who I think is more of an expert on this topic than most, had this to say:
Just the other day I debated with PhilGoetz whether utilitarianism is supposed to imply agent-neutrality or not. I still don't know what most people mean on that issue.
Even assuming agent neutrality there is a major difference between average and total utilitarianism. Then there are questions about whether you weight agents equally or differently based on some criteria. The question of whether/how to weight animals or other non-human entities is a subset of that question.
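The distinctions above can be written out explicitly. For individual utilities $u_i$ over $n$ agents, with optional weights $w_i$ (notation mine, not from the thread):

```latex
U_{\text{total}} = \sum_{i=1}^{n} u_i, \qquad
U_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} u_i, \qquad
U_{\text{weighted}} = \sum_{i=1}^{n} w_i \, u_i
```

Total and average utilitarianism diverge whenever population size can change: adding a person whose utility is positive but below the current average raises $U_{\text{total}}$ while lowering $U_{\text{avg}}$. The weighting question (animals, other non-human entities) is the question of how to choose the $w_i$.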
Given all these questions it tells me very little about what ethical system is being discussed when someone uses the word 'utilitarian'.
It does substantially reduce the decision space. For example, it is generally a safe bet that the individual does not subscribe to deontological claims like "killing humans is always bad." I'd thus be very surprised to ever meet a pacifist utilitarian.
It probably is fair to say that given the space of ethical systems generally discussed on LW, talking about utilitarianism doesn't narrow the field down much from that space.
I haven't seen any stats on that issue. Is there any evidence relating to the topic?
Depending on how you define 'philosophical competence' the results of the PhilPapers survey may be relevant.
Here are the stats for Philosophy Faculty or PhD, All Respondents
And for Philosophy Faculty or PhD, Area of Specialty Normative Ethics
As utilitarianism is a subset of consequentialism, it appears you could conclude that utilitarians are a minority in this sample.
Thanks! For perspective:
Unfortunately, the survey doesn't directly address the main distinction in the original post, since utilitarianism and egoism are both forms of consequentialism.