Eliezer_Yudkowsky comments on One Life Against the World - Less Wrong

Post author: Eliezer_Yudkowsky 18 May 2007 10:06PM


Comment author: Eliezer_Yudkowsky 23 May 2007 06:54:20PM 2 points

Paul, since my background is in AI, it is natural for me to ask how a "duty" gets cashed out computationally, if not as a contribution to expected utility. If I'm not using some kind of moral points, how do I calculate what my "duty" is?

How should I weigh a 10% chance of saving 20 lives against a 90% chance of saving one life?

If saving life takes lexical priority, should I weigh a 1/googolplex (or 1/Graham's Number) chance of saving one life equally with a certainty of making a billion people very unhappy for fifty years?
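The two comparisons above can be sketched numerically. This is a minimal illustration, not anything from the comment itself, and it assumes the naive "moral points" model being questioned: score each option by expected lives saved.

```python
import math

# Question 1: 10% chance of saving 20 lives vs. 90% chance of saving 1 life.
# Under an expected-lives-saved model, the gamble dominates.
ev_gamble = 0.10 * 20   # expected lives saved = 2.0
ev_sure   = 0.90 * 1    # expected lives saved = 0.9
print(ev_gamble > ev_sure)  # True: the risky option wins on expectation

# Question 2: a 1/googolplex chance of saving one life.
# A googolplex is 10**(10**100), far too large to represent directly,
# so we work in log10 space: log10(EV) = -10**100.
log10_ev = -1e100 + math.log10(1)
print(log10_ev)  # about -1e100: the expected value is negligibly tiny,
                 # yet lexical priority would still rank it above any
                 # amount of non-lethal suffering
```

The point of the second computation is that a lexical rule ignores magnitudes entirely, while expected utility lets an astronomically small probability be traded off against a large, certain cost.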

Such questions form the basis of some pretty strong theorems showing that consistent preferences must cash out as some kind of expected utility maximization.