Gabriel comments on A (small) critique of total utilitarianism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (237)
I feel this way. The linear theories are usually nothing but first-order approximations.
Also, the very idea of summing individual agents' utilities... that's, frankly, nothing but pseudomathematics. Each agent's utility function can be modified without changing the agent's behaviour in any way. The utility function is a phantom: it isn't defined in a way that lets you add two of them together. You can map the same agent's preferences (whenever they are well ordered) onto an infinite variety of real-valued 'utility functions'.
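A minimal sketch of this point, with made-up numbers: rescaling one agent's utility function (a positive affine transform, which represents exactly the same preferences) changes nothing about that agent's choices, yet it flips which outcome "wins" the interpersonal sum.

```python
# Agent A's utility over three outcomes, and an affine rescaling of it
# (u' = 100*u + 5). Both represent the same preference ordering x < y < z.
u_A  = {"x": 0.0, "y": 1.0, "z": 2.0}
u_A2 = {o: 100.0 * v + 5.0 for o, v in u_A.items()}

# Agent B's utility over the same outcomes (hypothetical numbers).
u_B = {"x": 3.0, "y": 2.5, "z": 0.0}

def best(u):
    """Outcome the given utility function ranks highest."""
    return max(u, key=u.get)

# The rescaling leaves A's own choice unchanged...
assert best(u_A) == best(u_A2) == "z"

# ...but the "sum of utilities" now picks a different social optimum:
# with u_A the sums are x=3.0, y=3.5, z=2.0  -> y wins;
# with u_A2 they are    x=8.0, y=107.5, z=205.0 -> z wins.
assert best({o: u_A[o]  + u_B[o] for o in u_A}) == "y"
assert best({o: u_A2[o] + u_B[o] for o in u_A}) == "z"
```

So the sum depends on an arbitrary choice of scale that the agents' preferences themselves do not pin down.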
Utilitarians don't have to sum different utility functions. A utilitarian has a utility function that happens to be defined as a sum of intermediate values assigned to each individual. Those intermediate values are also (confusingly) referred to as utility, but they don't come from evaluating any of the infinite variety of 'true' utility functions of each individual. They come from evaluating the total utilitarian's model of individual preference satisfaction (or happiness, or whatever).
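The distinction being drawn could be sketched like this (all names, scores, and the welfare model here are hypothetical): the utilitarian's single utility function sums values produced by the utilitarian's own model of each individual, on one shared scale, rather than summing the individuals' non-unique utility functions.

```python
# The utilitarian's model: a fixed preference-satisfaction score on one
# shared scale (here 0-10), assigned by the utilitarian, not by the
# individuals' own (scale-arbitrary) utility functions.
WELFARE_MODEL = {
    ("alice", "policy1"): 7.0, ("alice", "policy2"): 4.0,
    ("bob",   "policy1"): 3.0, ("bob",   "policy2"): 9.0,
}

def modeled_welfare(person, outcome):
    """Intermediate value the utilitarian's model assigns to one person."""
    return WELFARE_MODEL[(person, outcome)]

def total_utilitarian_utility(outcome, population):
    """One well-defined utility function: the sum of model-assigned values."""
    return sum(modeled_welfare(p, outcome) for p in population)

pop = ["alice", "bob"]
# policy1 scores 7 + 3 = 10; policy2 scores 4 + 9 = 13, so policy2 wins.
```

Because every summand comes from the same model on the same scale, the sum is well defined; the question pushed back below is whether that model tracks anything real.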
Or at least it seems to me that it should be that way. If I see a simple technical problem that doesn't really affect the spirit of the argument, then the best thing to do is to fix the problem and move on. If total utilitarianism really is commonly defined as summing every individual's utility function, then that is silly, but it's a problem of confused terminology, not really a strong argument against utilitarianism.
But the spirit of the argument is ungrounded in anything. What evidence is there that you can do this stuff at all using actual numbers, without repeatedly bumping into "don't do non-normative things even if you got that answer from a shut-up-and-multiply"?
Well, and then you can have a model where the modelled individual is sad when the real individual is happy, and vice versa, and there would be no problem with that.
You've got to ground the symbols somewhere. The model has to be defined to approximate reality for it to make sense, and for the model to approximate reality it has to somehow process the individual's internal state.