Perplexed comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong

23 Post author: multifoliaterose 14 June 2011 03:19AM


Comment author: Wei_Dai 16 June 2011 08:56:45PM *  3 points [-]

At the end of the day I am left with the decision either to abandon unbounded utility maximization or to indulge in the craziness of infinite ethics.

How about, for example, assigning .5 probability to a bounded utility function (U1), and .5 probability to an unbounded (or practically unbounded) utility function (U2)? You might object that taking the average of U1 and U2 still gives an unbounded utility function, but I think the right way to handle this kind of value uncertainty is by using a method like the one proposed by Bostrom and Ord, in which case you ought to end up spending roughly half of your time/resources on what U1 says you should do, and half on what U2 says you should do.
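The allocation Wei_Dai describes can be sketched numerically. This is a toy illustration (not from the thread, and the credences and resource figure are made up): under a Bostrom–Ord-style treatment of value uncertainty, resources end up divided roughly in proportion to the credence placed in each utility function.

```python
# Toy resource split under value uncertainty: each candidate utility
# function gets a share of resources proportional to the credence in it.
credences = {"U1 (bounded)": 0.5, "U2 (unbounded)": 0.5}
total_resources = 1.0  # e.g. one's whole time/budget, normalized

allocation = {name: p * total_resources for name, p in credences.items()}
print(allocation)  # {'U1 (bounded)': 0.5, 'U2 (unbounded)': 0.5}
```

With 0.5 credence in each, you spend roughly half of your resources on what U1 recommends and half on what U2 recommends, rather than letting the unbounded U2 swamp the average.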

Comment author: Perplexed 17 June 2011 02:39:01PM *  1 point [-]

Why spend only half on U1? Spend a fraction (1 - epsilon), and hold a lottery that hands control to the U2-oriented decision maker with probability epsilon. Since epsilon * infinity = infinity, you still get infinite expected utility according to U2, and you also come pretty close to the maximum possible according to U1.
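The arithmetic behind this can be checked directly. A minimal sketch (not from the thread; the payoff numbers are invented, and Python's `float("inf")` stands in for U2's unbounded payoff):

```python
# Lottery: follow U1's preferred action with probability (1 - epsilon),
# hand control to the U2-oriented decision maker with probability epsilon.
epsilon = 1e-9

u1_best = 100.0           # hypothetical payoff of U1's preferred action, per U1
u1_other = 0.0            # U1's payoff if the U2 branch wins the lottery
u2_payoff = float("inf")  # U2 assigns unbounded value to its preferred action

# Per U2: epsilon times infinity is still infinity.
eu_according_to_u2 = epsilon * u2_payoff
print(eu_according_to_u2)  # inf

# Per U1: nearly the full maximum of 100 is retained.
eu_according_to_u1 = (1 - epsilon) * u1_best + epsilon * u1_other
print(eu_according_to_u1)
```

So by U2's lights any nonzero epsilon yields infinite expected utility, while by U1's lights the loss shrinks with epsilon, which is exactly why the argument pushes epsilon toward zero.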

Infinity has uses even beyond allocating hotel rooms. (HT to A. Hajek)

Of course, Hajek's reasoning also makes it difficult to locate exactly what it is that U2 "says you should do".

Comment author: Will_Sawin 17 June 2011 03:40:35PM 0 points [-]

In general, it should be impossible to allocate 0 to U2 in this sense. What's the probability that an angel comes down and magically forces you to do the U2 decision? Around epsilon, I'd say.

U2 then becomes totally meaningless, and we are back with a bounded utility function.