shminux comments on A (small) critique of total utilitarianism - Less Wrong
Hedonistic utilitarianism ("what matters is the aggregate happiness") runs into the same repugnant conclusion.
But this happens exactly because interpersonal (hedonistic) utility comparison is possible.
Right, if you cannot compare utilities, you are safe from the repugnant conclusion.
On the other hand, this is not very useful instrumentally, as a functioning society necessarily requires arbitration of individual wants. Thus some utilities must be comparable, even if others are not. Finding a boundary between the two runs into the standard problem that two nearly identical preferences can nonetheless be qualitatively different.
Yes but it doesn't have the problem Vladimir_M described above, and it can bite the bullet in the repugnant conclusion by appealing to personal identity being an illusion. Total hedonistic utilitarianism is quite hard to argue against, actually.
As I mentioned in the other reply, I'm not sure how a society of total hedonistic utilitarians would function without running into the issue of nearly identical but incommensurate preferences.
Hedonistic utilitarianism is not about preferences at all. It's about maximizing happiness, whatever the reason or substrate for it. The utilitronium shockwave would be the best scenario for total hedonistic utilitarianism.
Maybe I misunderstand how total hedonistic utilitarianism works. Don't you ever construct an aggregate utility function?
No, nothing of that sort. You just take the surplus of positive hedonic states over negative ones and try to maximize that. Interpersonal boundaries become irrelevant, in fact many hedonistic utilitarians think that the concept of personal identity is an illusion anyway. If you consider utility functions, then that's preference utilitarianism or something else entirely.
How is that not an aggregate utility function?
Utilons aren't hedons. You have one simple utility function that states you should maximize happiness minus suffering. That's similar to maximizing paperclips, and it avoids the problems discussed above that preference utilitarianism has, namely how interpersonally differing utility functions should be compared to each other.
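To make the "one simple utility function" concrete, here is a toy sketch. It assumes (contentiously, and this is exactly what's disputed below) that hedonic states could ever be assigned real-valued "hedons"; the numbers and the `total_hedonic_value` name are invented for illustration.

```python
# Toy sketch of total hedonistic aggregation, assuming hedonic states
# can be scored as signed real numbers ("hedons"). Nothing here is an
# established method; it only illustrates the aggregation rule.

def total_hedonic_value(hedonic_states):
    """Surplus of positive hedonic states over negative ones.

    Interpersonal boundaries are irrelevant: every state, whoever
    (or whatever) hosts it, goes into one pool and is summed.
    """
    return sum(hedonic_states)

# Two hypothetical worlds: a few very happy beings vs. many beings
# barely above hedonic neutrality.
world_a = [10.0, 9.5, 8.0]   # total = 27.5
world_b = [0.5] * 60         # total = 30.0

# The rule prefers world_b -- which is the repugnant conclusion
# this thread is about.
```

Note that the maximizer itself has one fixed goal (like a paperclip maximizer); the hedons being summed are the thing measured, not competing utility functions to be reconciled.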
You still seem to be claiming that (a) you can calculate a number for hedons (b) you can do arithmetic on this number. This seems problematic to me for the same reason as doing these things for utilons. How do you actually do (a) or (b)? What is the evidence that this works in practice?
I don't claim that I, or anyone else, can do that right now. I'm saying there doesn't seem to be a fundamental reason why that would have to remain impossible forever. Why do you think it will remain impossible forever?
As for (b), I don't even see the problem. If (a) works, then you just do simple math after that. In case you're worried about torture and dust specks not working out, check out section VI of this paper.
And regarding (a), here's an example that approximates the kind of solutions we seek: In anti-depression drug tests, the groups with the actual drug and the control group have to fill out self-assessments of their subjective experiences, and at the same time their brain activity and behavior is observed. The self-reports correlate with the physical data.
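The validation logic in that example is just checking that subjective reports track an objective measurement. A minimal sketch of that check, with invented data (the specific scores and the physical proxy are hypothetical, not from any actual trial):

```python
# Toy sketch: do self-reported mood scores correlate with a physical
# measurement? All data below is invented for illustration.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

self_reports = [2, 3, 5, 6, 8]              # hypothetical self-assessments
brain_measure = [1.1, 1.4, 2.0, 2.3, 3.1]   # hypothetical physical proxy

r = pearson(self_reports, brain_measure)
# A high r is what licenses treating the physical measurement as a
# stand-in for the subjective report -- the approximation gestured at above.
```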
Thanks for the link. I still cannot figure out why utilons are not convertible to hedons, and even if they aren't, why a mixed utilon/hedon maximizer isn't susceptible to Dutch booking. Maybe I'll look through the logic again.
Hedonism doesn't specify what sorts of brain states and physical objects have how much pleasure. There are a bewildering variety of choices to be made in cashing out a rule to classify which systems are how "happy." Just to get started, how much pleasure is there when a computer running simulations of happy human brains is sliced in the ways discussed in this paper?
But aren't those empirical difficulties, not fundamental ones? Don't you think there's a fact of the matter that will be discovered if we keep gaining more and more knowledge? Empirical problems can't bring down an ethical theory, but if you can show that there exists a fundamental weighting problem, then that would be valid criticism.
What sort of empirical fact would you discover that would resolve that? A detector for happiness radiation? The scenario in that paper is pretty well specified.