In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism.
What is your evidence for this? In The Preference Utilitarian's Time Inconsistency Problem, the top-voted comments didn't try to solve the problem posed for preference utilitarians, but instead made general arguments against preference utilitarianism.
The real answer to torture vs. dust specks is to recognize that, within the scenario, the answer is torture, but the scenario itself has a prior probability so astronomically low that no evidence could ever convince you that you were in it: at most k/3^^^3 people can affect the fate of 3^^^3 people at once (where k is the number of times a person's fate is affected). However, there are higher-probability scenarios that look like torture vs. 3^^^3 dust specks but are actually torture vs. nothing, or torture vs. not-enough-specks-to-care. In philosophical pr...
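As a rough sketch of the expected-value structure of this argument (toy numbers only: 3^^^3 cannot actually be represented, so a small stand-in N is used, and k and both disutility figures are invented for illustration):

```python
# Toy sketch of the anthropic-discount argument above.
# N stands in for 3^^^3 (the real number is uncomputably large),
# and all disutility figures are invented for illustration.

N = 10**12          # stand-in for 3^^^3
k = 100             # assumed bound on how often any one fate is affected
p_scenario = k / N  # prior that you can really affect N people at once

disutility_torture = 10**9  # made-up disutility of 50 years of torture
disutility_speck = 1        # made-up disutility of one dust speck

# Expected disutility of choosing "specks": the N specks only happen in
# the astronomically unlikely worlds where the scenario is real, so the
# expectation collapses to roughly k * disutility_speck, independent of N.
ev_specks = p_scenario * N * disutility_speck
ev_torture = disutility_torture

print(round(ev_specks, 6))     # ~k * disutility_speck = 100.0
print(ev_specks < ev_torture)  # True: the discounted specks cost less
```

The point is only structural: once the prior scales as k/N, the N in the specks column cancels, so the decision never behaves like a genuine torture-vs-3^^^3 trade.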
I've been thinking about this on and off for half a year or so, and I have come to the conclusion that I cannot agree with any proposed moral system that answers "torture" to dust specks and torture. If this means my morality is scope-insensitive, then so be it.
(I don't think it is; I just don't think utilitarianism with an aggregation function of summation over all individuals is correct; I think the correct aggregation function should probably be different. I am not sure what the correct aggregation function is, but maximizing the minimum ind...
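Assuming the truncated suggestion is maximin (judging an outcome by its worst-off individual), the contrast with summation can be shown with a toy calculation; every utility number here is invented purely to show that the aggregation function changes the verdict:

```python
# Two toy outcomes as vectors of individual utilities (made-up numbers).

n_people = 100_000  # small stand-in for 3^^^3

torture = [-10_000] + [0] * (n_people - 1)  # one person tortured
specks = [-1] * n_people                    # a dust speck for everyone

# Sum aggregation (total utilitarianism): the torture world has the
# higher total here, so summation answers "torture".
print(sum(torture), sum(specks))  # -10000 -100000

# Maximin aggregation (maximize the worst-off individual's utility):
# the specks world has the better minimum, so maximin answers "specks".
print(min(torture), min(specks))  # -10000 -1
```

Note that maximin gives "specks" regardless of how large n_people grows, which is exactly the scope-insensitivity being discussed.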
I think Torture vs. Dust Specks makes a hidden assumption that the two things are comparable. It appears that people don't actually think like that; for most people, not even an infinite number of dust specks is worse than a single person being tortured or dying. People arbitrarily place some bad things into a category that is infinitely worse than another category.
So, I'd say that you aren't being scope-insensitive; you are simply treating 50 years of torture as infinitely worse than a dust speck, so that no number of people getting dust specks can possibly be worse than 50 years of torture.
Really? Preference utilitarianism prevails on Less Wrong? I haven't been around too long, but I would have guessed that moral anti-realism (in several forms) prevailed.
Isn't this a confusion of levels, with preference utilitarianism being an ethical theory, and moral anti-realism being a metaethical theory?
Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation?
If we feel like it. I personally would say yes. What would you say?
I find it impossible to engage thoughtfully with philosophical questions about morality because I remain unconvinced of the soundness of the first principles that are applied in moral judgments. I am not interested in a moral claim that does not have a basis in some fundamental idea with demonstrable validity. I will try to contain my critique to those claims that do attempt at least what I think to be this basic level of intellectual rigor.
Note 1: I recognize that I introduced many terms in the above statement that are open to challenge as loaded and...
I would predict, based on human nature, that if the 3^^^3 people were asked whether they would each accept a dust speck in the eye in exchange for another individual not being tortured for 50 years, they would probably vote for dust specks.
Each one with a probability of order 1/3^^^3? Well, that's what I call overconfidence.
I think the answer is that morality has to be counted, but we also have to count changes to morality. If moral preferences were entirely a matter of intellectual commitment, this might lead to double counting, but in fact people really do experience pride, guilt, and so on - and I doubt that morality could have any effect on their behavior if it didn't.
Counting the changes to morality can cut both ways. For instance: some people have a strong inclination to have sex with people of the same sex, while many people (sometimes the same ones) are deeply morally...
I would predict, based on human nature, that if the 3^^^3 people were asked whether they would each accept a dust speck in the eye in exchange for another individual not being tortured for 50 years, they would probably vote for dust specks.
I think you've nailed my problem with this scenario: anyone who wouldn't go for this, I would be disinclined to listen to.
I think we may indeed be talking past each other, so I will try to state my case more cogently.
I am not denying that people do possess ideas about something named "morality". It would be absurd to claim otherwise, as we are here discussing such ideas.
I am denying that individuals who claim these ideas as more-than-subjective (by which I mean that they claim their ideas can be applied to a group rather than only to one man, the holder of the ideas) can convince me, even if I accept all of their assumptions, that these ideas are not wholly subjective and individual-dependent.
If it is the case that morality is individual only, then that is an interesting conclusion and something to talk about, but it does seem, at least to a first approximation, that for a judgment to be considered moral, it must have some broader applicability among individuals, rather than concerning but one person. What can Justice be if it is among one man only? This seems a critical part of what is meant by "morality". It is in this latter, broad case, that moral philosophy appears null.
If you possess an idea of morality and desire that I consider it to have some connection with the world and with all persons --- and surely I must require that it have such a connection, as moral claims attempt to dictate the interaction between people, and thus cannot be content to be contained in one mind alone --- at least enough of a connection that you can, through reasoned argument, convince me that your claims are both valid and sound, then surely your ideas must make reference to principles that I can discover individually to both exist and serve as predicates to your ideas. If you cannot elucidate these foundations, then how can I be brought to your view through reason? This was the intent of my original criticism: to ask why these foundations are so lousy, and to beg that someone make them otherwise if moral claims are to be made.
I think that this is the crux of my objection. I cannot find moral claims that I can be brought to accept through reason alone, as even in the most impressive cases such claims are deeply infected by subjective assumptions that are incommunicable and --- dare I write it? --- irrational.
(This is to change the subject somewhat, but I find that the quality of an idea that allows it to be communicated is necessary to its being considered the result of reason and objective. I use that last word with 10,000 pounds of hesitation.)
However, and now I think that we are talking to each other directly, if, when you write of moral ideas, you refer only to those ideas that currently do exist, whether logically well-constructed or not, and you say that you are interested in studying these for their effects, then I am agreed.
I certainly agree that, whether I am convinced of its validity or use, morality does exist as a thing in the minds of men and thus as an influence on human life. But, I think that restricting ourselves to this case has gargantuan ramifications for the definition of "moral" and drastically cuts the domain of objects on which moral ideas can act. It seems this domain can include only those which involve human beings in some fashion. If morality is exclusively a consequence of the history of human evolution and particular to our biology -- and I do agree that it is -- then I feel that I am bound by it only as far as my own biology has imprinted this moral sense upon me. If it is just biological and not possible to derive through application of reason, then, if I desire to make of myself a creature of reason alone, what care have I for it, but as a curiosity of anthropology?
I suspect that we agree, but that I took a bottom-up approach to get there and left the conclusion implicit, if present at all. All apologies.
I have avoided, in this post, any struggle with the word "morality" itself. I suspect we could write reams on that. If you think it worthwhile, we should, as the debate may swing on the ability or inability to pin down this notion.
(Note: As for SIAI, I think imprinting upon an AI human notions of moral judgments would be hideously dangerous for two reasons: 1) Human beings seem capable in almost every situation of overthrowing such judgments. If said AI is bound in similar manner, then what matters it for controlling or predicting its behavior? 2) If said AI is to possess a notion of justice and of a being who has abdicated certain rights due to immoral conduct, what will its judgment be of the humanity that has taught it morals? Can it not glance, not at history, but simply at the current state of the world and find immediately and with disgust ample grounds for the conclusion that very many humans have surrendered any claim to the moral life? It would be a strange moral algorithm if an AI did not come to this conclusion. Perhaps that is rather the point, as morality even among humans is a strange and often-blind algorithm.)
I agree with your basic point that moral intuitions reflect psychological realities, and that attempts to derive moral truths without explicitly referring to those realities will inevitably turn out to implicitly embed them.
That said, I think you might be introducing unnecessary confusion by talking about "subjective" and "individual." To pick a simple and trivial objection, it might be that two people, by happenstance, share a set of moral intuitions, and those intuitions might include references to other people. For example, they mig...
In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism. The fundamental idea is that the correct moral action is the one that satisfies the strongest preferences of the most people. Preferences are discussed with units such as fun, pain, death, torture, etc. One of the biggest dilemmas posed on this site is the Torture vs. Dust Specks problem. I should say, up front, that I would go with dust specks, for some of the reasons I mentioned here. I mention this because it may be biasing my judgments about my question here.
I had a thought recently about another aspect of Torture vs. Dust Specks, and wanted to submit it to some Less Wrong Discussion. Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation? I would predict, based on human nature, that if the 3^^^3 people were asked whether they would each accept a dust speck in the eye in exchange for another individual not being tortured for 50 years, they would probably vote for dust specks.
Should we assign weight to other people's moral intuitions, and if so, how much weight should they have?
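One way to make the question concrete is a toy calculation (every number here is a made-up assumption, including the idea that each person's anti-torture intuition weighs about twice as much to them as a dust speck):

```python
# Toy calculation: does counting each bystander's moral intuition
# against torture as a preference flip the utilitarian verdict?
# All weights are invented for illustration.

n_people = 10**10            # stand-in for 3^^^3
speck_disutility = 1         # per-person disutility of a dust speck
torture_disutility = 10**9   # disutility of 50 years of torture

# Plain preference-utilitarian sum over experienced harms:
cost_torture = torture_disutility          # 10**9
cost_specks = n_people * speck_disutility  # 10**10
print("torture" if cost_torture < cost_specks else "specks")  # torture

# Now suppose each person also holds a preference against the torture
# outcome itself, stronger than their distaste for a speck (assumed: 2).
intuition_weight = 2
cost_torture += n_people * intuition_weight  # adds 2 * 10**10
print("torture" if cost_torture < cost_specks else "specks")  # specks
```

On these assumptions the intuition terms scale with the population just as the specks do, so once each person's intuition outweighs their speck, the verdict flips no matter how large the population gets.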