I am confused by discussions about utilitarianism on LessWrong. My understanding, which comes mostly from the SEP article, was that pretty much all variants of utilitarianism are based on the idea that each person's quality of life can be quantified--that person's "utility"--and that these utilities can be aggregated. Under preference utilitarianism, a person's utility is determined by the degree to which their preferences are satisfied. Under all of the classical formulations of utilitarianism, everyone's utility has equal weight in the aggregation, hence the catchy phrase "greatest good for the greatest number".
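To make my understanding concrete, here's a minimal sketch of the classical aggregation step, assuming each person's quality of life has already been quantified as a single number (the names and values are made up, purely for illustration):

```python
def total_utility(utilities):
    """Classical utilitarian aggregation: every person's utility
    counts equally, so the social good is just the unweighted sum."""
    return sum(utilities)

# Hypothetical per-person utilities.
population = {"alice": 7.0, "bob": 4.5, "carol": 6.0}
print(total_utility(population.values()))  # 17.5
```

The point is that nothing in this picture asks how much *I* value alice relative to bob; everyone enters the sum symmetrically.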
However, I have also seen LW posts and comments talk about utilitarianism in relation to how much you should value the lives of people close to you compared to other people, and how much you should value abstract things like "freedom" relative to people's lives. This comment thread is one example. These discussions about valuing the lives of others and quantifying abstract values sound a lot like utility maximization under rational choice theory rather than utilitarianism.
So are people conflating utility maximization and utilitarianism, am I getting confused and misunderstanding the distinction, or is something else going on?
It's true that people often conflate utilitarianism with consequentialism, but I don't think that's what's going on here. I think it is quite reasonable to include under "utilitarianism" moral theories that are close to it, such as those that weight people unequally when aggregating. If people think that raw utilitarianism doesn't describe human morality, isn't it more useful for the term to cover the theories that depart from that starting point, rather than the single theory itself? Abstract values that are not per-person are more problematic to include under the umbrella, but searching for "f...
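A hedged sketch of that "weighted" departure from raw utilitarianism, assuming (hypothetically) that an agent discounts strangers relative to family; equal weights recover the classical equal-counting case:

```python
def aggregate(utilities, weights=None):
    """Weighted sum of per-person utilities. With no weights given,
    everyone counts equally, which is classical utilitarian aggregation;
    unequal weights model the partiality discussed above."""
    if weights is None:
        weights = {person: 1.0 for person in utilities}
    return sum(weights[person] * u for person, u in utilities.items())

# Illustrative values only: two people with identical welfare.
utilities = {"family_member": 5.0, "stranger": 5.0}
print(aggregate(utilities))                                        # 10.0
print(aggregate(utilities, {"family_member": 2.0, "stranger": 1.0}))  # 15.0
```

The second call shows how a partial agent's aggregation can still have the utilitarian shape--a weighted sum over everyone's welfare--while departing from the equal-weight baseline.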
This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous thread is at close to 500 comments.