In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism.
What is your evidence for this? In The Preference Utilitarian's Time Inconsistency Problem, the top-voted comments didn't try to solve the problem posed for preference utilitarians; instead they made general arguments against preference utilitarianism.
The real answer to torture vs. dust specks is to recognize that the answer to the scenario as posed is torture, but the scenario itself has a prior probability so astronomically low that no evidence could ever convince you that you were in it, since at most a fraction k/3^^^3 of all people can each affect the fate of 3^^^3 people at once (where k is the number of times each person's fate is affected). However, there are higher-probability scenarios that look like torture vs. 3^^^3 dust specks, but are actually torture vs. nothing or torture vs. not-enough-specks-to-care. In philosophical pr...
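A sketch of the counting argument behind that bound, assuming each person's fate is affected at most k times: there are at most

$$
\#\{(\text{agent},\,\text{affected person})\ \text{pairs}\} \;\le\; k \cdot 3\uparrow\uparrow\uparrow 3,
$$

and an agent who affects the fate of all $3\uparrow\uparrow\uparrow 3$ people accounts for $3\uparrow\uparrow\uparrow 3$ pairs single-handedly, so at most

$$
\frac{k \cdot 3\uparrow\uparrow\uparrow 3}{3\uparrow\uparrow\uparrow 3} \;=\; k
\quad\Longrightarrow\quad
\Pr[\text{you are such an agent}] \;\lesssim\; \frac{k}{3\uparrow\uparrow\uparrow 3}.
$$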
I've been thinking about this on and off for half a year or so, and I have come to the conclusion that I cannot agree with any proposed moral system that answers "torture" to dust specks and torture. If this means my morality is scope-insensitive, then so be it.
(I don't think it is; I just don't think utilitarianism with summation over all individuals as its aggregation function is correct; the correct aggregation function is probably something different. I am not sure what it is, but maximizing the minimum ind...
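To make the disagreement concrete, here is a minimal sketch of how the choice of aggregation function flips the verdict, reading "maximizing the minimum ind..." as a maximin rule over individuals; the magnitudes are illustrative assumptions, not claims about the true badness of torture or specks:

```python
# Hypothetical disutilities; all numbers are made up for illustration.
TORTURE = 10**9          # disutility of 50 years of torture for one person
SPECK = 1                # disutility of one dust speck for one person
N = 10**30               # stand-in for 3^^^3, which no machine can represent

# Summation aggregation: total disutility across all individuals.
sum_torture = TORTURE            # one person tortured, everyone else at zero
sum_specks = SPECK * N           # N people each get one speck

# Maximin aggregation: judge a world by its worst-off individual
# (for disutility, that is the maximum individual disutility).
maximin_torture = TORTURE
maximin_specks = SPECK

print("summation prefers:", "torture" if sum_torture < sum_specks else "specks")
print("maximin prefers:  ", "torture" if maximin_torture < maximin_specks else "specks")
# summation prefers: torture
# maximin prefers:   specks
```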
I think Torture vs Dust Specks makes a hidden assumption that the two things are comparable. It appears that people don't actually think like that; a single person being tortured or dying is worse than even an infinite number of dust specks. People arbitrarily place some bad things into a category that's infinitely worse than another category.
So, I'd say that your morality isn't scope-insensitive; you are simply treating 50 years of torture as infinitely worse than a dust speck: no number of people getting dust specks can possibly be worse than 50 years of torture.
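One way to formalize "infinitely worse" without literal infinities is a lexicographic ordering: compare outcomes on the worse category first, and break ties with the lesser category. A minimal sketch, with categories and counts as illustrative assumptions:

```python
from typing import NamedTuple

class Outcome(NamedTuple):
    tortures: int  # harms in the "infinitely worse" category
    specks: int    # harms in the lesser category

def is_worse(a: Outcome, b: Outcome) -> bool:
    """True if a is worse than b. Python compares tuples lexicographically,
    so specks only matter when the torture counts are tied."""
    return (a.tortures, a.specks) > (b.tortures, b.specks)

torture_world = Outcome(tortures=1, specks=0)
speck_world = Outcome(tortures=0, specks=3**33)  # any finite speck count

print(is_worse(torture_world, speck_world))   # True
print(is_worse(speck_world, torture_world))   # False
```

On this ordering no finite number of specks ever outweighs one torture, which matches the intuition described above, at the cost of total scope-insensitivity across categories.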
Really? Preference utilitarianism prevails on Less Wrong? I haven't been around too long, but I would have guessed that moral anti-realism (in several forms) prevailed.
Isn't this a confusion of levels, with preference utilitarianism being an ethical theory, and moral anti-realism being a metaethical theory?
Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation?
If we feel like it. I personally would say yes. What would you say?
I find it impossible to engage thoughtfully with philosophical questions about morality because I remain unconvinced of the soundness of the first principles applied in moral judgments. I am not interested in a moral claim that does not have a basis in some fundamental idea with demonstrable validity. I will try to confine my critique to those claims that at least attempt this basic level of intellectual rigor.
Note 1: I recognize that I introduced many terms in the above statement that are open to challenge as loaded and...
I would predict, based on human nature, that if the 3^^^3 people were asked whether they would each accept a dust speck in the eye in exchange for sparing another individual 50 years of torture, they would probably vote for dust specks.
Each one with probability of order 1/3^^^3? Well, that's what I call overconfidence.
I think the answer is that morality has to be counted, but we also have to count changes to morality. If moral preferences were entirely a matter of intellectual commitment, this might lead to double counting, but in fact people really do experience pride, guilt, and so on - and I doubt that morality could have any effect on their behavior if it didn't produce such feelings.
Counting the changes to morality can cut both ways. For instance: some people have a strong inclination to have sex with people of the same sex, while many people (sometimes the same ones) are deeply morally...
I would predict, based on human nature, that if the 3^^^3 people were asked whether they would each accept a dust speck in the eye in exchange for sparing another individual 50 years of torture, they would probably vote for dust specks.
I think you've nailed my problem with this scenario: anyone who wouldn't go for this, I would be disinclined to listen to.
If I take you correctly, you are pointing out that thought experiments, now abstract, can become actual through progress and the change of time, circumstance, technology, etc., and thus are useful in understanding morality.
If this is an unfair assessment, correct me!
I agree with you, but I also hold to my original claim, as I do not think that they contradict. I agree that the thought experiment can be a useful tool for talking about morality as a set of ideas and reactions out-of-time. However, I do not agree that the thought experiments I have read have convinced me of anything about morality in actual practice. This is for one reason alone: I am not convinced that the operation of human reason is the same in all cases, and in particular in the two cases of the theoretical and the physical/actual.
I am not convinced that if a fat man were actually standing there waiting to be shoved piteously onto the tracks that the human mind would necessarily function in the same way it does when sitting in a cafe and discussing the fate of said to-be switch-pusher.
If I were to stake the distinction between the actual and the theoretical on anything, it would be on the above point. What data have we on the reliability of these thought experiments -- I think you must agree that, regardless of the hypothetical opinions of medieval scholar types, the Torture vs. Dust Specks scenario is abstract for us here and now -- to predict human behavior when, to retreat to the cliche, one is actually in the trenches?
This may have some connection to the often-experienced conversational phenomenon of casual nonchalance and liberalism about issues that do not affect the speaker, and a sudden and contradictory conservatism about issues that do. This is a phenomenon I encounter very often as a college student. It costs nothing to be easy-going about topics that never impact oneself, but when circumstances change and a price is paid, reason does not reliably produce similar conclusions. Perhaps this is not a fair objection, however, as we could claim that such a person is being More Wrong.
If you can convince me of a reliable connection, you'll have convinced me of the larger point.
In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism. The fundamental idea is that the correct moral action is the one that satisfies the strongest preferences of the most people. Preferences are discussed in units such as fun, pain, death, torture, etc. One of the biggest dilemmas posed on this site is the Torture vs. Dust Specks problem. I should say, up front, that I would go with dust specks, for some of the reasons I mentioned here. I mention this because it may be biasing my judgments about my question here.
I had a thought recently about another aspect of Torture vs. Dust Specks, and wanted to submit it to some Less Wrong Discussion. Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation? I would predict, based on human nature, that if the 3^^^3 people were asked whether they would each accept a dust speck in the eye in exchange for sparing another individual 50 years of torture, they would probably vote for dust specks.
Should we assign weight to other people's moral intuitions, and if so, how much weight should they have?
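To make the question concrete, here is a toy sketch, not a claim about how such calculations are actually run here, in which each person's moral intuition enters the tally as a preference with weight w; all magnitudes, and the assumption that a frustrated intuition carries disutility, are illustrative:

```python
# Toy preference-utilitarian tally where other people's moral intuitions
# count as preferences with weight w. All numbers are illustrative.
N = 10**30              # stand-in for 3^^^3
SPECK = 1               # direct disutility of one dust speck
TORTURE = 10**9         # direct disutility of 50 years of torture
INTUITION = 1           # disutility each person incurs from a frustrated
                        # moral intuition, if torture is chosen anyway

def total_disutility(choice: str, w: int) -> int:
    """Direct harms plus w-weighted frustrated moral intuitions."""
    if choice == "torture":
        # One person tortured, plus N frustrated "vote for specks" intuitions.
        return TORTURE + w * INTUITION * N
    # N people get one speck each; nobody's intuition is frustrated.
    return SPECK * N

for w in (0, 1):
    pick = min(("torture", "specks"), key=lambda c: total_disutility(c, w))
    print(f"w={w}: choose {pick}")
# w=0: choose torture
# w=1: choose specks
```

Whether w should be 0, 1, or something in between is exactly the question being asked.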