In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism.
What is your evidence for this? In The Preference Utilitarian’s Time Inconsistency Problem, the top-voted comments didn't try to solve the problem posed for preference utilitarians; instead they made general arguments against preference utilitarianism.
The real answer to torture vs. dust specks is to recognize that the answer to the scenario as posed is torture, but the scenario itself has a prior probability so astronomically low that no evidence could ever convince you that you were in it, since at most a fraction k/3^^^3 of all people can affect the fate of 3^^^3 people at once (where k is the average number of times a person's fate is affected). However, there are higher-probability scenarios that look like torture vs. 3^^^3 dust specks, but are actually torture vs. nothing or torture vs. not-enough-specks-to-care. In philosophical pr...
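To spell out the counting step behind that bound, here is my own rough back-of-the-envelope version (N is the total number of people and A is the number of agents who each affect the fate of 3^^^3 people; both symbols are just my notation, and I write $3\uparrow\uparrow\uparrow 3$ for 3^^^3):

$$
k N \;\ge\; A \cdot \left(3\uparrow\uparrow\uparrow 3\right)
\quad\Longrightarrow\quad
\frac{A}{N} \;\le\; \frac{k}{3\uparrow\uparrow\uparrow 3},
$$

since the left side counts all affect-events in the population and each high-leverage agent alone accounts for $3\uparrow\uparrow\uparrow 3$ of them. So the prior probability that you happen to be one of those agents is at most of order k/3^^^3.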
I've been thinking about this on and off for half a year or so, and I have come to the conclusion that I cannot agree with any proposed moral system that answers "torture" to dust specks and torture. If this means my morality is scope-insensitive, then so be it.
(I don't think it is; I just don't think utilitarianism with an aggregation function of summation over all individuals is correct; I think the correct aggregation function should probably be different. I am not sure what the correct aggregation function is, but maximizing the minimum ind...
I think Torture vs Dust Specks makes a hidden assumption that the two things are comparable. It appears that people don't actually think like that: for most people, a single person being tortured or dying is worse than even an infinite number of dust specks. People arbitrarily place some bad things into a category that is infinitely worse than another category.
So, I'd say the issue isn't scope insensitivity; you are simply placing 50 years of torture as infinitely worse than a dust speck, so that no number of people getting dust specks can possibly be worse than 50 years of torture.
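One way to make that concrete is to compare the usual sum-over-everyone aggregation with a worst-off-person (maximin) rule, which is one aggregation that behaves the way described: no number of specks ever outweighs the torture. A toy sketch, with made-up disutility numbers and 10**18 standing in for 3^^^3 (which no computer can represent):

```python
# Toy comparison of two aggregation functions on torture vs. dust specks.
# All numbers are hypothetical stand-ins; 3^^^3 is replaced by 10**18.

N_SPECKED = 10**18           # stand-in for 3^^^3 people
SPECK = -1e-9                # assumed disutility of one dust speck
TORTURE = -1e7               # assumed disutility of 50 years of torture

def utility_profile(choice):
    """Per-person utilities for a choice, as (value, how_many_people) pairs."""
    if choice == "torture":
        return [(TORTURE, 1), (0.0, N_SPECKED)]
    return [(SPECK, N_SPECKED), (0.0, 1)]

def total_sum(profile):
    """Classic utilitarian aggregation: sum over all individuals."""
    return sum(value * count for value, count in profile)

def maximin(profile):
    """Rawlsian-style aggregation: judge outcomes by the worst-off individual."""
    return min(value for value, _ in profile)

for aggregate in (total_sum, maximin):
    best = max(("torture", "specks"), key=lambda c: aggregate(utility_profile(c)))
    print(f"{aggregate.__name__}: choose {best}")
    # total_sum picks torture (-1e7 beats -1e9 of total speck harm);
    # maximin picks specks (worst-off person: -1e-9 vs. -1e7).
```

With summation the answer flips back and forth depending on how many specks there are; with maximin (or any rule that puts torture in a lexically worse category) the specks can never add up to it.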
Really? Preference utilitarianism prevails on Less Wrong? I haven't been around too long, but I would have guessed that moral anti-realism (in several forms) prevailed.
Isn't this a confusion of levels, with preference utilitarianism being an ethical theory, and moral anti-realism being a metaethical theory?
Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation?
If we feel like it. I personally would say yes. What would you say?
I find it impossible to engage thoughtfully with philosophical questions about morality because I remain unconvinced of the soundness of the first principles that are applied in moral judgments. I am not interested in a moral claim that does not have a basis in some fundamental idea with demonstrable validity. I will try to confine my critique to those claims that at least attempt what I take to be this basic level of intellectual rigor.
Note 1: I recognize that I introduced many terms in the above statement that are open to challenge as loaded and...
I would predict, based on human nature, that if the 3^^^3 people were asked whether they wanted a dust speck in each of their eyes in exchange for another individual not being tortured for 50 years, they would probably vote for the dust specks.
Each one with a probability of order 1/3^^^3? Well, that's what I call overconfidence.
I think the answer is that morality has to be counted, but we also have to count changes to morality. If moral preferences were entirely a matter of intellectual commitment, this might lead to double counting, but in fact people really do experience pride, guilt, and so on - and I doubt that morality could have any effect on their behavior if it didn't.
Counting the changes to morality can cut both ways. For instance: some people have a strong inclination to have sex with people of the same sex, while many people (sometimes the same ones) are deeply morally...
I would predict, based on human nature, that if the 3^^^3 people were asked whether they wanted a dust speck in each of their eyes in exchange for another individual not being tortured for 50 years, they would probably vote for the dust specks.
I think you've nailed my problem with this scenario: anyone who wouldn't go for this, I would be disinclined to listen to.
I am fairly sure that we aren't talking past each other, I just disagree with you on some points. Just to try and clarify those points...
You seem to believe that a moral theory must, first and foremost, be compelling... if moral theory X does not convince others, then it can't do much worth doing. I am not convinced of this. For example, working out my own moral theory in detail allows me to recognize situations that present moral choices, and identify the moral choices I endorse, more accurately... which lowers my chances of doing things that, if I understood better, I would reject. This seems worth doing, even if I'm the only person who ever subscribes to that theory.
You seem to believe that if moral theory X is not rationally compelling, then we cannot come to agree on the specific claims of X except by chance. I'm unconvinced of that. People come to agree on all kinds of things where there is a payoff to agreement, even where the choices themselves are arbitrary. Heck, people often agree on things that are demonstrably false.
Relatedly, you seem to believe that if X logically entails Y, then everyone in the world who endorses X necessarily endorses Y. I'd love to live in that world, but I see no evidence that I do. (That said, it's possible that you are actually making a moral claim that having logically consistent beliefs is good, rather than a claim that people actually do have such beliefs. I'm inclined to agree with the former.)
I can have a moral intuition that bears clubbing baby seals is wrong, also. Now, I grant you that I, as a human, am less likely to have moral intuitions about things that don't affect humans in any way... but my moral intuitions might nevertheless be expressible as a general principle which turns out to apply to non-humans as well.
You seem to believe that things I'm biologically predisposed to desire, I will necessarily desire. But lots of biological predispositions are influenced by local environment. My desire for pie may be stronger in some settings than others, it may be brought lower than my desire for the absence of pie via a variety of mechanisms, and so on. Sure, maybe I can't "will myself to unlove it," but I have stronger tools available than unaided will, and we're developing still-stronger tools every year.
I agree that the desire to be rational is a desire like any other. I intended "much of anything else" to denote an approximate absence of desire, not a complete one.
I think an important part of our disagreement, at least for me, is that you are interested in people generally and in morality as it is now (at least, your examples come from this set), while I am trying to restrict my inquiry to the most rational type of person, so that I can discover a morality that all rational people can be brought to through reason alone, without need for error or chance. If such a morality does not exist among people generally, then I have no interest in the morality of people generally. To bring it up is a non sequitur in such a ...
In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism. The fundamental idea is that the correct moral action is the one that satisfies the strongest preferences of the most people. Preferences are discussed with units such as fun, pain, death, torture, etc. One of the biggest dilemmas posed on this site is the Torture vs. Dust Specks problem. I should say, up front, that I would go with dust specks, for some of the reasons I mentioned here. I mention this because it may be biasing my judgments about my question here.
I had a thought recently about another aspect of Torture vs. Dust Specks, and wanted to submit it to some Less Wrong Discussion. Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation? I would predict, based on human nature, that if the 3^^^3 people were asked whether they wanted a dust speck in each of their eyes in exchange for another individual not being tortured for 50 years, they would probably vote for the dust specks.
Should we assign weight to other people's moral intuitions, and if so, how much weight should they have?
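To make the question concrete, here is a toy version of the calculation I have in mind. Every number is a made-up stand-in (INTUITION is a hypothetical per-person preference weight for "don't torture", and 10**18 substitutes for 3^^^3):

```python
# Toy sketch: does counting people's moral intuitions as preferences flip the answer?
# All values are made-up stand-ins; 10**18 substitutes for 3^^^3.

N = 10**18          # stand-in for 3^^^3 people
SPECK = -1e-9       # assumed disutility of one dust speck
TORTURE = -1e7      # assumed disutility of 50 years of torture
INTUITION = -1e-6   # assumed disutility each person assigns to the torture happening anyway

def total_utility(choice, count_intuitions):
    """Straight-sum preference-utilitarian total for a choice."""
    if choice == "torture":
        total = TORTURE
        if count_intuitions:
            # Each of the N people prefers (votes) that the torture not happen.
            total += N * INTUITION
        return total
    return N * SPECK  # "specks"

for count_intuitions in (False, True):
    best = max(("torture", "specks"),
               key=lambda c: total_utility(c, count_intuitions))
    print(f"counting intuitions as preferences={count_intuitions}: choose {best}")
    # Ignoring intuitions, the sum favors torture (-1e7 vs. -1e9);
    # counting them, the thwarted preferences swamp it (about -1e12 vs. -1e9) and specks win.
```

On these (entirely arbitrary) numbers, whether the calculation says torture or dust specks turns precisely on whether the bystanders' intuition-driven preferences get counted, which is why I think the question matters.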