Gabriel comments on Real-life expected utility maximization [response to XiXiDu] - Less Wrong

Post author: Gabriel 12 March 2012 07:03PM

Comment author: Gabriel 14 March 2012 01:38:04AM 1 point

Here is the problem. If I use expected utility maximization (EU) on big, unintuitive problems like existential risk to decide what I should do about them; if I use EU to decide how to organize my life by and large; if I use EU to decide to pursue a terminal goal but then stop using it to decide which goals are instrumental to achieving the desired outcome, then how does it help to use EU at all? And if I shouldn't use it everywhere, how do I decide where to draw the line?

You can't be perfect, but that doesn't mean you can't do better. It also doesn't mean that you can. Maybe thinking about all this rationality business is pretty useless after all, but complaining that you can't apply expected utility perfectly is not a good argument for that conclusion.

People closely associated with SIAI/LW do use EU in support of their overall goals, yet ignore EU when it comes to flying to NY or writing a book about rationality:

They don't use EU in the sense of building a big complicated model, plugging probabilities into it, and concluding, "gee, option A has 13.743% larger expected utility than option B; A it is." I think they reasoned qualitatively and arrived at the conclusion that some subset of actions has much greater potential impact than others. You don't have to do precise calculations when comparing a mountain with a pebble. The references to expected utility in those quotes don't read to me as claims that all of the beliefs were arrived at through formal mathematical methods, but rather as reminders of the counterintuitive fact that the magnitudes of outcomes should affect your decisions.
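For concreteness, here is a minimal sketch (in Python, with purely made-up probabilities and utilities that appear nowhere in the original discussion) of the kind of explicit calculation at issue. The point it illustrates: when expected utilities differ by orders of magnitude, the qualitative comparison survives any reasonable perturbation of the inputs.

```python
# Toy expected-utility comparison. EU(action) = sum of P(outcome) * U(outcome).
# All numbers below are illustrative assumptions, not anyone's actual estimates.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Two hypothetical actions:
option_a = [(0.01, 1_000_000), (0.99, -10)]  # small chance of a huge payoff
option_b = [(0.90, 50), (0.10, -5)]          # near-certain modest payoff

eu_a = expected_utility(option_a)  # 0.01 * 1e6 + 0.99 * (-10) = 9990.1
eu_b = expected_utility(option_b)  # 0.90 * 50  + 0.10 * (-5)  = 44.5

print(f"EU(A) = {eu_a}, EU(B) = {eu_b}")
# The qualitative point: when the gap is a mountain versus a pebble,
# you don't need three decimal places to pick A.
```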

It's unreasonable to say that unless you are a perfect reasoner yourself, you should never talk about the theoretical principles underlying perfect reasoning, even when faced with simple situations where those principles apply trivially. Again, one can argue that the decision to direct effort at existential risk mitigation isn't as overdetermined as claimed, and that you should therefore do some calculation before invoking expected utility in that context. But that case can't be made by pointing out that Yudkowsky doesn't calculate the expected utility of plane trips.