Sean300

If you don't accept the additivity of harm, then you accept that for any harm x there is some number of people y for which 2^y people suffering harm x is the same, welfare-wise, as y people suffering harm x.
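One way to see this claim is with a toy non-additive aggregator. The function below is purely illustrative (the comment names no specific function): it bounds total harm at a ceiling B, so once enough people suffer, adding exponentially more people changes total harm by almost nothing.

```python
import math

def bounded_harm(k, x, B=100.0, c=10.0):
    # Hypothetical non-additive aggregator: total harm from k people
    # each suffering harm x saturates at ceiling B instead of growing
    # linearly. Illustration only, not a claim about any real theory.
    return B * (1.0 - math.exp(-k * x / c))

x = 1.0
y = 200
# With a bounded aggregator, 2**y people and y people end up with
# essentially identical total harm once the ceiling is approached.
print(bounded_harm(2**y, x), bounded_harm(y, x))
```

This is the consequence the comment points at: denying additivity forces some scale at which multiplying the number of sufferers no longer registers.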

(Not to mention that, when normalized across people, utils are meant to allow direct and simple mathematical comparison. Here it doesn't really matter how the normalization is done, since the inequality holds for any dust-speck harm epsilon greater than zero.)

Polling people to ask whether they would take the dust speck attaches an external harm to the torture (e.g., mental distress at the thought of someone being tortured). Since they would prefer the dust speck, they must find the thought of someone being subjected to 50 years of torture (Harm a) worse than a dust speck (Harm b). Harm a > Harm b, so n Harm a > n Harm b, and it doesn't even matter what Harm a or Harm b is, nor does the additional, undistributed harm of the actual torture. How could this be gamed? Replace the dust speck with the least harm greater than the distress at the thought of someone being tortured for 50 years: say, the thought of someone being tortured for 51 years.

Sean300

The issue with a utility function U(T,S) = ST + S is that there is no motivation for torture's utility to depend on dust's utility. They are distinct, independent events, and in no way do additional specks make the torture worse. If it is posited instead that dust specks asymptotically approach a bound lower than torture's bound, ordering issues arise: there should then be rational preferences that rank certain evils so high that people could do nothing but act to prevent them.

There are further problems here, such as the idea that distributing a dust speck across the group requires calculation in the group's net utility function rather than in each individual's. That is, if a group of ten people has 600 apples, their utility is not 600*U(A), nor U(600A), but U_1(A_1) + ... + U_10(A_10). Adding one more apple changes net utility by exactly the marginal utility to the person who receives it. That result is in utils, and utils do sum linearly.
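The apples example can be sketched directly. The individual utility function below is a hypothetical stand-in (any concave function with diminishing returns would do); the point is only that group utility is the sum of per-person utilities, and one extra apple moves the sum by exactly the recipient's marginal utility.

```python
import math

def u(apples):
    # Hypothetical individual utility with diminishing returns
    # (illustration only; the comment specifies no particular shape).
    return math.log(1 + apples)

holdings = [60] * 10          # ten people, 600 apples total
group_utility = sum(u(a) for a in holdings)

# Give one extra apple to person 0: the change in group utility
# equals person 0's marginal utility, measured in utils.
marginal = u(holdings[0] + 1) - u(holdings[0])
holdings[0] += 1
assert abs(sum(u(a) for a in holdings) - (group_utility + marginal)) < 1e-12
```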

I'll say that again: Utils sum linearly. It's what they do. The rational utilitarian favors n utils gained from a chocolate as much as he favors avoiding -n utils of a stubbed toe. Summing -n utils over m people will have an identical effect on total or average utility as granting -(n*m) utility to one person.
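The linear-summation claim above reduces to simple arithmetic, sketched here with arbitrary illustrative numbers: spreading -n utils over m people moves total (and hence average) utility exactly as much as assigning -(n*m) utils to one person.

```python
# Arbitrary illustrative values: each util count is an integer so the
# comparison is exact.
n, m = 2, 1000

spread = [-n] * m             # -n utils to each of m people
concentrated = [0] * m
concentrated[0] = -(n * m)    # -(n*m) utils to one person

# Identical effect on total utility, and therefore on the average.
assert sum(spread) == sum(concentrated) == -(n * m)
```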

If you reject any of the utilitarian or rational premises of the question, point them out, suggest a fix, and defend it.

Caledonian: The idea is to make the math obvious. If you can't get the right answer with the math clean and easy, how can you do it on your own? If you insist there is a natural number greater than the cardinality of the reals, you will run into problems somewhere else. (And on the other hand, if you reject any of the concepts such as cardinality, reals, or "greater than", you probably shouldn't be taking a math class.)

Sean390

A dust speck in the eye with no external ill effects was chosen as the smallest non-zero negative utility. Torture for any finite time, absent external effects (e.g., suicide), is a finite amount of negative utility. Death in a world of literal immortality cuts off an infinite amount of utility. There is a break in the continuum here.

If you don't accept that dust specks are negative utility, you didn't follow the rules. Pick a new tiny ill effect (like a stubbed toe) and rethink the problem.

If you still don't like it because, for a given utility n, n + n != 2n, then you run into circular preferences. Two units of utility are defined to be twice as "utilitous" as one unit of utility. (This is not saying that two dollars are twice as good as one dollar.)

Sean310

Maybe I'm mistaken, but I think this is a pretty good example of how easily people get hung up on a false dichotomy.