In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism. The fundamental idea is that the correct moral action is the one that satisfies the strongest preferences of the greatest number of people. Preferences are discussed in terms of quantities such as fun, pain, death, and torture. One of the biggest dilemmas posed on this site is the Torture vs. Dust Specks problem. I should say, up front, that I would go with dust specks, for some of the reasons I mentioned here. I mention this because it may be biasing my judgments about the question I pose below.
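For concreteness, the straight utilitarian arithmetic that generates the dilemma can be sketched as follows. The torture-to-speck disutility ratio below is an invented illustrative number, not anything canonical; the point is only that any finite ratio gets swamped by 3^^^3 people, so the comparison has to be done in logarithms.

```python
import math

# Invented, illustrative assumption: suppose 50 years of torture is
# "only" 10^12 times worse than one dust speck in one eye.
log10_torture_vs_speck_ratio = 12

# 3^^^3 is astronomically large; even the far smaller 3^^3 = 3^(3^3)
# stand-in used here already dwarfs any finite ratio.
# log10(3^(3^27)) = 3^27 * log10(3)
log10_num_specks = 3**27 * math.log10(3)

# Total speck disutility exceeds the torture's disutility iff
# log10(number of specks) > log10(torture/speck ratio).
print(log10_num_specks > log10_torture_vs_speck_ratio)  # specks dominate
```

This is why, for anyone who treats the two harms as finitely comparable, the utilitarian calculation comes out in favor of torture; the disagreement in the thread below is partly about whether that comparability premise should be accepted.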
I had a thought recently about another aspect of Torture vs. Dust Specks, and wanted to submit it to Less Wrong Discussion. Namely: do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation? I would predict, based on human nature, that if the 3^^^3 people were asked whether they would each accept a dust speck in the eye in exchange for sparing another individual 50 years of torture, they would probably vote for dust specks.
Should we assign weight to other people's moral intuitions, and if so, how much weight should they carry?
I mean "comparable" as the negation of this line of thought.
Ah.
So, you meant something like: if I think A is worse than B, but not infinitely worse than B, and I don't have some kind of threshold (e.g., a threshold of probability) below which I no longer evaluate expected utility of events at all, then my beliefs about B are irrelevant to my decisions because my decisions are entirely driven by my beliefs about A?
I mean, that's trivially true, in the sense that a false premise justifies any conclusion, and any finite system will have some threshold below which it simply doesn't evaluate events.
But in a less trivial sense... hm.
OK, thanks for clarifying.