I guess Parfit's already said everything that should be said here; we're almost following him line for line, no? Parfit doesn't like self-defeating theories, is all. Mostly my hidden agenda is to point out that real utilitarianism would not look like choosing torture. It would look like saying "hey people, I'm your servant, tell me what you want me to be and I'll mold myself into it as best I can". But that's really suspect meta-ethically; that's not what morality is. And I think that becomes clearer when you show where utilitarianism ends up.
"Oh you don't know what love is --- you just do as you're told."
ETA: Basically, I'm with Richard Chappell. But, uh, as a theist: where he says "rational agent upon infinite reflection" or whatever, I say "God", and that makes for some differences, e.g. moral disagreement works differently. (Also I try to push it up to super mega meta.)
That servant posture can also lead to a regress: if everyone decides to be a utilitarian, you wind up with a bunch of people asking each other what they want and all answering "I want whatever the group wants".
Reasoning with a representation of human utility that's a simple continuum from pain to pleasure, as Torture vs Specks does, flattens away the complexity of value.
Making moral decisions of such vast scope without understanding the full multidimensionality of human experience and utility is completely irresponsible. An AI using the kind of reasoning found in Torture vs Specks would probably just wirehead everyone for huge-integer-pleasure for eternity, as in the toy sketch below.
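To make that failure mode concrete, here's a minimal sketch; the option names and scores are invented purely for illustration, not drawn from any actual proposal. An agent that scores world-states by a single pleasure integer picks the degenerate option, because nothing else it might care about appears anywhere in its objective:

```python
# Toy model: an agent that ranks world-states by one scalar "pleasure" number.
# The options and scores below are made up purely for illustration.

options = {
    "rich, complex human lives": 10_000,
    "wirehead everyone forever": 10**100,  # huge-integer-pleasure
}

# A one-dimensional utility maximizer just takes the max.
best = max(options, key=options.get)
print(best)  # -> wirehead everyone forever

# Friendship, novelty, autonomy, etc. never appear in the objective,
# so no amount of them can ever outweigh a big enough pleasure integer.
```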
I don't pretend to know the correct answer to Torture vs Specks, because I don't have a full understanding of human value, and because I don't understand how to do calculations with hypercomplex numbers. A friendly AI *has* to take into account the full complexity of our value, and not just a one-dimensional continuum, whenever it makes any moral decision. So only a friendly AI that has correctly extrapolated our values can know with high confidence the best answer to Torture vs Specks.
(edit 1) re: Oscar Cunningham
Why does complexity of value apply here specifically, rather than just being a curiosity stopper? Well, consequentialist problems come in different difficulty levels. Torture for 5 years vs torture for 50 years is easy: torture is bad, so less torture is less bad. You are comparing amounts of the same thing, and you don't have to understand complexity of value to do that. Comparing the value of two very different things, like torture and specks, does require you to understand the complexity of value. You can't reduce experiences to integers, because complex value isn't simply an integer (see the sketch below).
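Here's a minimal sketch of that distinction, with the dimensions and numbers invented purely for illustration. Same-kind comparisons reduce to comparing scalars, but if value has several independent dimensions, you only get a partial order, and "which is worse?" can have no answer until you justify a weighting that collapses the dimensions:

```python
from dataclasses import dataclass

# Same-kind comparison: amounts of one thing. A scalar suffices.
torture_5_years, torture_50_years = 5, 50
assert torture_5_years < torture_50_years  # easy: less torture is less bad

# Different-kind comparison: suppose, purely for illustration, that disvalue
# has several independent dimensions rather than one pain-pleasure axis.
@dataclass
class Disvalue:
    suffering: float
    dignity_violation: float
    annoyance: float

torture = Disvalue(suffering=9.9, dignity_violation=9.9, annoyance=0.1)
specks  = Disvalue(suffering=0.1, dignity_violation=0.0, annoyance=9.9)

def strictly_worse(a: Disvalue, b: Disvalue) -> bool:
    """Pareto dominance: worse on every dimension. Only a partial order."""
    return (a.suffering > b.suffering
            and a.dignity_violation > b.dignity_violation
            and a.annoyance > b.annoyance)

# Neither outcome dominates the other, so the one-integer comparison is
# unavailable unless you assume a weighting across dimensions - which is
# exactly the assumption under dispute.
print(strictly_worse(torture, specks))  # False
print(strictly_worse(specks, torture))  # False
```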
The intuition that torture must be outweighed by a large enough number of specks is just that: an intuition. You don't know the dynamics involved in a formal comparison based on a technical understanding of complex value.