Eugine_Nier comments on [SEQ RERUN] Pascal's Mugging: Tiny Probabilities of Vast Utilities - Less Wrong

5 Post author: MinibearRex 01 October 2011 02:59AM


Comment author: lessdazed 01 October 2011 05:24:16AM *  0 points [-]

Why isn't something like this the answer?

The statement "Do X or I will cause maximum badness according to your desires by using magic powers" is so unlikely to be true that I don't see how one can justify being confident that the being uttering it is more likely to do as it says than to do the opposite: if you give the being five dollars as it asked, it creates and painfully kills 3^^^^3 people; if you do not, nothing happens (even though it had asked for five dollars as payment for *not* creating and torturing people).

How can you be confident that a magic being, one that either cares about your money or is obviously testing you, would actually do as it said it would?

Comment author: Eugine_Nier 02 October 2011 03:11:47AM 1 point [-]

If one attempts to do calculations taking all permutations of Pascal's mugging into account, one gets ∞ − ∞ as the result of all one's expected utility calculations.
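A toy sketch of this point, assuming we model the 3^^^^3-sized payoff as an IEEE infinity (the tiny prior and the two-hypothesis structure here are invented purely for illustration):

```python
import math

# Hypothetical illustration: treat a 3^^^^3-sized (dis)utility as
# effectively infinite for the purposes of the calculation.
VAST = float('inf')

# Hypothesis A: the mugger is honest, so paying averts a vast loss.
# Hypothesis B: the mugger is "inverted", so paying triggers the vast loss.
# Under symmetric tiny priors, the two hypotheses contribute opposite
# infinite terms to the expected utility of paying.
p = 1e-30  # tiny, arbitrary prior assigned to each hypothesis

eu_pay = p * VAST + p * (-VAST)  # inf + (-inf)
print(eu_pay)  # nan: the calculation is undefined, as with inf - inf
```

The point is not the specific numbers but that any permutation of the mugging can be mirrored by an opposite-sign permutation, leaving the expected-utility sum of the form ∞ − ∞, which is undefined.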

Comment author: lessdazed 02 October 2011 03:13:55AM 0 points [-]

What are the consequences of that?

Comment author: Eugine_Nier 02 October 2011 03:22:50AM 1 point [-]

We have no idea how to do expected utility calculations in these kinds of situations. Furthermore, even if the AI figured out some way, e.g., using some form of renormalization, we have no reason to believe the result would at all resemble our preferences.