In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism.
What is your evidence for this? In The Preference Utilitarian’s Time Inconsistency Problem, the top voted comments didn't try to solve the problem posed for preference utilitarians, but instead made general arguments against preference utilitarianism.
The real answer to torture vs. dust specks is to recognize that the answer to the scenario is torture, but the scenario itself has a prior probability so astronomically low that no evidence could ever convince you that you were in it, since at most k/3^^^3 people can affect the fate of 3^^^3 people at once (where k is the number of times a person's fate is affected). However, there are higher-probability scenarios that look like torture vs. 3^^^3 dust specks, but are actually torture vs. nothing or torture vs. not-enough-specks-to-care. In philosophical pr...
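The bound in this argument can be written out explicitly. This is my gloss on the comment's arithmetic, with $N$ standing in for 3^^^3 and $k$ for the maximum number of times any one person's fate is affected:

```latex
% Let N = 3\uparrow\uparrow\uparrow 3. If each person's fate is affected at most
% k times, there are at most kN fate-affecting "slots" among N people, so at most
% a k/N fraction of people can occupy the chooser's position. Hence the prior:
\[
  P(\text{the torture-vs.-}N\text{-specks choice is real}) \;\le\; \frac{k}{N},
\]
% and the expected number of specks genuinely at stake is bounded by a constant:
\[
  \mathbb{E}[\text{specks at stake}] \;\le\; \frac{k}{N} \cdot N \;=\; k,
\]
% i.e. the astronomically small prior exactly cancels the astronomically large
% stakes, so the expected disutility never scales with N.
```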
I've been thinking about this on and off for half a year or so, and I have come to the conclusion that I cannot agree with any proposed moral system that answers "torture" to dust specks and torture. If this means my morality is scope-insensitive, then so be it.
(I don't think it is; I just don't think utilitarianism with an aggregation function of summation over all individuals is correct; I think the correct aggregation function should probably be different. I am not sure what the correct aggregation function is, but maximizing the minimum ind...
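The truncated contrast here (summation versus, apparently, maximizing the minimum) can be sketched concretely. This is an illustration with made-up placeholder disutilities, not anything from the original comment; `10**12` merely stands in for 3^^^3, which no computer could represent:

```python
# Each outcome is summarized as (disutility per affected person, number affected).
# The numbers are illustrative placeholders only.
torture = (-1_000_000, 1)       # one person tortured for 50 years
specks = (-1, 10**12)           # one dust speck each; 10**12 stands in for 3^^^3

def total(outcome):
    """Summation aggregation: add up disutility across all affected people."""
    per_person, n = outcome
    return per_person * n

def worst_off(outcome):
    """Maximin aggregation: judge an outcome by its worst-off person."""
    per_person, _ = outcome
    return per_person

# Summation prefers torture: -10**6 total beats -10**12 total.
sum_choice = "torture" if total(torture) > total(specks) else "specks"
# Maximin prefers specks: a worst-off value of -1 beats -10**6.
min_choice = "torture" if worst_off(torture) > worst_off(specks) else "specks"
```

The point of the sketch is only that the answer flips with the aggregation function: summation lets enough tiny harms outweigh one enormous harm, while maximin never does.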
I think Torture vs. Dust Specks makes a hidden assumption that the two things are comparable. It appears that people don't actually think like that; to them, not even an infinite number of dust specks is worse than a single person being tortured or dying. People arbitrarily place some bad things into a category that's infinitely worse than another category.
So, I'd say that you aren't weighing preferences; you are simply placing 50 years of torture in a category infinitely worse than a dust speck, so that no number of people getting dust specks can possibly be worse than 50 years of torture.
Really? Preference utilitarianism prevails on Less Wrong? I haven't been around too long, but I would have guessed that moral anti-realism (in several forms) prevailed.
Isn't this a confusion of levels, with preference utilitarianism being an ethical theory, and moral anti-realism being a metaethical theory?
Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation?
If we feel like it. I personally would say yes. What would you say?
I find it impossible to engage thoughtfully with philosophical questions about morality because I remain unconvinced of the soundness of the first principles that are applied in moral judgments. I am not interested in a moral claim that does not have a basis in some fundamental idea with demonstrable validity. I will try to contain my critique to those claims that do attempt at least what I think to be this basic level of intellectual rigor.
Note 1: I recognize that I introduced many terms in the above statement that are open to challenge as loaded and...
I would predict, based on human nature, that if the 3^^^3 people were asked whether they wanted to have a dust speck inflicted in each of their eyes, in exchange for not torturing another individual for 50 years, they would probably vote for dust specks.
Each one with probability of order 1/3^^^3? Well, that's what I call overconfidence.
I think the answer is that morality has to be counted, but we also have to count changes to morality. If moral preferences were entirely a matter of intellectual commitment, this might lead to double counting, but in fact people really do experience pride, guilt, and so on - and I doubt that morality could have any effect on their behavior if it didn't.
Counting the changes to morality can cut both ways. For instance: some people have a strong inclination to have sex with people of the same sex, while many people (sometimes the same ones) are deeply morally...
I would predict, based on human nature, that if the 3^^^3 people were asked whether they wanted to have a dust speck inflicted in each of their eyes, in exchange for not torturing another individual for 50 years, they would probably vote for dust specks.
I think you've nailed my problem with this scenario: anyone who wouldn't go for this, I would be disinclined to listen to.
I am not convinced that, if a fat man were actually standing there waiting to be shoved piteously onto the tracks, the human mind would necessarily function the same way it does when one is sitting in a café discussing the fate of said to-be switch-pusher.
If I were to stake the distinction between the actual and the theoretical on anything, it would be on the above point. What data do we have on the reliability of these thought experiments (and I think you must agree that, regardless of the hypothetical opinions of medieval scholar types, the Torture vs. Dust Specks scenario is abstract for us here and now) to predict human behavior when, to retreat to the cliché, one is actually in the trenches?
I... don't think the point of such thought experiments was ever to predict what a human will do. That we do not make the same choices under pressure that we do when given reflection and distance is quite obvious. If you are interested in predicting what people will do, you should look at psychological battery tests, which (should) strive to strike a balance between realism and measurability.
The point of train-tracks-type experiments was to force one to demand some coherence from one's "moral intuition", and to this end the fact that you're making such choices sitting in a café is a feature, not a bug. It lets you carefully work out logical conclusions on which (at least in theory) you will then be able to unthinkingly rely once you're in the heat of the moment; probably not an actual train-track scenario, but a situation like giving money to beggars or voting during jury duty, where you have only seconds or hours to make a choice. When you're actually in the trenches, as you put it, your brain is going to be overwhelmed by a zillion more cognitive biases than usual, so it's very much in your interest to try to pre-make as many choices as possible while you have the luxury of double-checking every one of your assumptions and implications.
One problem I have with the way such thought experiments are phrased is that they often ask "what would you do?" rather than "what's the best thing to do?", which muddles this notion of being more interested in my moral intuitions about the latter than my predictions about the former.
But I realize that people and cultures vary widely in how they interpret phrases like that.
In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism. The fundamental idea is that the correct moral action is the one that satisfies the strongest preferences of the most people. Preferences are discussed with units such as fun, pain, death, torture, etc. One of the biggest dilemmas posed on this site is the Torture vs. Dust Specks problem. I should say, up front, that I would go with dust specks, for some of the reasons I mentioned here. I mention this because it may be biasing my judgments about my question here.
I had a thought recently about another aspect of Torture vs. Dust Specks, and wanted to submit it to some Less Wrong Discussion. Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation? I would predict, based on human nature, that if the 3^^^3 people were asked whether they wanted to have a dust speck inflicted in each of their eyes, in exchange for not torturing another individual for 50 years, they would probably vote for dust specks.
Should we assign weight to other people's moral intuitions, and how much weight should it have?