Larks comments on Peter Singer and Tyler Cowen transcript - Less Wrong

39 Post author: jkaufman 06 April 2012 12:25PM

Comment author: jkaufman 07 April 2012 02:03:19AM 0 points

For a Total Utilitarian it's not a problem to be missing a zero point (unless you're talking about adding/removing people).

For an Average Utilitarian, or a Total Utilitarian considering births or deaths, you need to identify the point at which a life stops being worth living, and you estimate it as well as you can.

Comment author: Larks 07 April 2012 04:08:55AM 3 points

Multiplication by a positive constant is itself an affine transformation, so even fixing the zero point leaves the scale of each utility function undetermined. This clearly is a very big problem.

Comment author: Dre 08 April 2012 06:23:15AM -2 points

But all we want is an ordering of choices, and affine transformations (with a positive multiplicative constant) are order preserving.
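Dre's order-preservation claim is easy to check with a small sketch (the outcome names and utility numbers below are invented for illustration):

```python
# Hypothetical utilities for three outcomes.
utilities = {"outcome_A": 10.0, "outcome_B": 3.5, "outcome_C": -2.0}

def affine(u, a, b):
    """Apply the positive affine transformation u -> a*u + b (a > 0)."""
    return a * u + b

# Rank outcomes by utility, best first.
ranking = sorted(utilities, key=utilities.get, reverse=True)

# Rescale every utility with the same positive affine transformation.
transformed = {o: affine(u, a=7.0, b=-4.0) for o, u in utilities.items()}
ranking_after = sorted(transformed, key=transformed.get, reverse=True)

# The ordering of choices is unchanged.
assert ranking == ranking_after
```

This is why, for a single agent choosing among fixed options, the missing zero point and scale don't matter: any positive affine rescaling yields the same choice.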

Comment author: jkaufman 07 April 2012 03:36:34PM -2 points

Doesn't "multiplication by a constant" mean births and deaths? Which puts you in my second paragraph: you try to figure out at what point it would be better to never have lived at all. The point at which a life is a net negative is not very clear, and many Utilitarians disagree on where it is. I agree that this is a "big problem", though I think I would prefer the phrasing "open question".

Comment author: Nisan 07 April 2012 07:42:10PM 4 points

Asking people to trade off various goods against risk of death allows you to elicit a utility function with a zero point, where death has zero utility. But such a utility function is only determined up to multiplication by a positive constant. With just this information, we can't even decide how to distribute goods among a population consisting of two people. Depending on how we scale their utility functions, one of them could be a utility monster. If you choose two calibration points for utility functions (say, death and some other outcome O), then you can make interpersonal comparisons of utility — although this comes at the cost of deciding a priori that one person's death is as good as another's, and one person's outcome O is as good as another's, ceteris paribus, independently of their preferences.
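A numerical sketch of the scaling problem Nisan describes (the people, outcomes, and utility numbers below are all invented for illustration):

```python
# Two people whose utility functions each pin death at 0,
# but whose scales are otherwise arbitrary.
alice = {"death": 0.0, "status_quo": 1.0, "gets_good": 2.0}
bob   = {"death": 0.0, "status_quo": 1.0, "gets_good": 1.5}

def gain(person):
    """Utility gained by this person if given the good."""
    return person["gets_good"] - person["status_quo"]

# Under these scales, giving the good to Alice maximizes total utility...
assert gain(alice) > gain(bob)

# ...but multiplying Bob's utilities by 10 is equally valid (it preserves
# the zero at death), and it makes Bob the "utility monster".
bob_rescaled = {k: 10 * v for k, v in bob.items()}
assert gain(bob_rescaled) > gain(alice)

# Fixing a second calibration point -- say, requiring that some outcome O
# (here, "status_quo") have utility 1 for everyone -- pins down the scale:
# the only rescaling u -> a*u with a > 0 that keeps status_quo at 1 is a = 1.
```

So with death alone as a calibration point the distribution problem is underdetermined, and a second shared point removes the freedom to rescale, at the interpersonal cost Nisan notes.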

Comment author: Larks 08 April 2012 07:43:43PM 2 points

Yes, thank you for taking the time to explain.