Blueberry comments on Normal Cryonics - Less Wrong

58 Post author: Eliezer_Yudkowsky 19 January 2010 07:08PM


Comment author: komponisto 26 January 2010 07:12:57PM 0 points

Other theories about what's important when making decisions (deontology, virtue ethics) could possibly be expressed as utility functions, but are not amenable to it.

Why not, if they're about preferences?

My understanding is that a utility function is nothing but a scaled preference ordering, and I interpret ethical debates as disputes about what one's preferences -- i.e., one's utility function -- ought to be.

For example (to oversimplify and caricature): the "consequentialist" might argue that one should be willing to torture one person to save 1000 from certain death, while the "deontologist" argues that one should not because Torture is Wrong. Both sides of this argument are asserting preferences about the state of the world: the "consequentialist" assigns higher utility to the situation in which 1000 people are alive and you're guilty of torture, and the "deontologist" assigns higher utility to the situation in which the 1000 have perished but your hands are clean.

Comment author: Blueberry 26 January 2010 07:17:46PM 0 points

You may run into problems trying to create a utility function for some forms of deontology, at least if you're mapping into the real numbers. For instance, some deontologists would say that killing a person has infinite negative utility which can't be cancelled out by any number of positive utility outcomes.

Comment author: komponisto 26 January 2010 07:23:38PM 0 points

That wouldn't be mapping into the real numbers, of course, since infinity isn't a real number.

As I understand it, utility functions are supposed to be equivalence classes of mappings into the real numbers, where two such mappings are said to be equivalent if they are related by a (positive) affine transformation (x -> ax + b where a>0).
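A minimal Python sketch of this equivalence, with illustrative utility numbers not taken from the thread: a positive affine transformation x -> ax + b (a > 0) preserves both the ordering of outcomes and the ordering of gambles by expected utility.

```python
# Hypothetical utilities over three outcomes (illustrative numbers).
u = {"A": 1.0, "B": 2.0, "C": 4.0}

def affine(utils, a, b):
    """Apply a positive affine transformation x -> a*x + b (requires a > 0)."""
    assert a > 0
    return {k: a * x + b for k, x in utils.items()}

v = affine(u, a=3.0, b=-7.0)

# Same preference ordering over outcomes...
assert sorted(u, key=u.get) == sorted(v, key=v.get)

def expected(utils, gamble):
    """Expected utility of a gamble given as {outcome: probability}."""
    return sum(p * utils[o] for o, p in gamble.items())

# ...and the same ranking of gambles: a 50/50 bet on A and C
# versus B for certain comes out the same way under u and v.
g1 = {"A": 0.5, "C": 0.5}
g2 = {"B": 1.0}
assert (expected(u, g1) > expected(u, g2)) == (expected(v, g1) > expected(v, g2))
```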

Comment author: wnoise 02 February 2010 12:20:38AM 0 points

Why do you think this restricts to positive affine transformations, rather than any strictly monotonic transformation?

Comment author: Nick_Tarleton 02 February 2010 12:23:54AM 3 points

Other monotonic transformations don't preserve preferences over gambles.

Comment author: wnoise 02 February 2010 12:45:21AM 0 points

Ah, right, that's what I was missing. Thanks.

Comment author: Jordan 02 February 2010 12:29:26AM 0 points

A strictly monotonic transformation will preserve your preference ordering of states but not your preference ordering for actions to achieve those states. That is, only affine transformations preserve the ordering of expected values of different actions.
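A small Python sketch of this point, with made-up numbers: a strictly monotonic but non-affine transformation (here, the square root) keeps the outcomes in the same order, yet flips which gamble has the higher expected value.

```python
import math

# Illustrative utilities: a sure thing vs. a 50/50 gamble.
u_low, u_sure, u_high = 0.0, 4.0, 10.0

ev_gamble = 0.5 * u_low + 0.5 * u_high   # 5.0
ev_sure = u_sure                         # 4.0
assert ev_gamble > ev_sure               # gamble preferred under u

# sqrt is strictly monotonic on [0, inf), so the ordering of the
# outcomes themselves is preserved...
assert math.sqrt(u_low) < math.sqrt(u_sure) < math.sqrt(u_high)

# ...but the ranking of the gambles flips, because sqrt is not affine:
ev_gamble2 = 0.5 * math.sqrt(u_low) + 0.5 * math.sqrt(u_high)  # ~1.58
ev_sure2 = math.sqrt(u_sure)                                   # 2.0
assert ev_gamble2 < ev_sure2             # sure thing now preferred
```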

Comment author: Blueberry 26 January 2010 07:27:27PM 0 points

Right, which is why I was saying that some ethical theories can't be expressed by a utility function. And there could be many such incomparable qualities: even adding in infinity and negative infinity may not be enough (though the transfinite ordinals, or the surreal numbers, might be).
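One way to picture such a theory, as a hedged sketch not taken from the thread: represent "utilities" as tuples compared lexicographically, so the first coordinate (a deontological constraint, here a hypothetical killing count) dominates no matter how large the second coordinate (ordinary welfare) gets. A standard result is that a lexicographic ordering like this over continuous quantities cannot be represented by any real-valued function.

```python
# Lexicographic "utility": Python compares tuples coordinate by
# coordinate, which is exactly the dominance ordering we want.
def outcome(killings, welfare):
    # Negate killings so that fewer killings is always better.
    return (-killings, welfare)

clean_hands = outcome(killings=0, welfare=0)
torture_but_save = outcome(killings=1, welfare=10**9)

# The constraint wins regardless of how much welfare is at stake:
assert clean_hands > torture_but_save
assert outcome(0, 1) > outcome(1, 10**12)
```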

I'm surprised at that +b, because that doesn't preserve utility ratios.

Comment author: komponisto 26 January 2010 07:48:11PM 1 point

Right, which is why I was saying that some ethical theories can't be expressed by a utility function.

Ah, I see. But I'm still not sure that's actually true... see below.

I'm surprised at that +b, because that doesn't preserve utility ratios.

Indeed not; utilities are measured on an interval scale, not a ratio scale. There's no "absolute zero". (I believe Eliezer made a youthful mistake along these lines, IIRC.) This expresses the fact that utility functions are just (scaled) preference orderings.
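A quick Python illustration of the interval-scale point, with made-up numbers: a shift by +b destroys ratios of utilities, but preserves ratios of utility *differences*, which is all that expected-utility comparisons ever depend on.

```python
# Illustrative utilities and a pure shift (a=1, b=10).
u = {"A": 2.0, "B": 4.0, "C": 8.0}
v = {k: x + 10.0 for k, x in u.items()}

# Utility ratios are not preserved...
assert u["B"] / u["A"] == 2.0
assert v["B"] / v["A"] != 2.0   # 14/12, not 2

# ...but ratios of utility differences are:
assert ((u["C"] - u["B"]) / (u["B"] - u["A"])
        == (v["C"] - v["B"]) / (v["B"] - v["A"]))
```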