Eugine_Nier comments on [SEQ RERUN] Pascal's Mugging: Tiny Probabilities of Vast Utilities - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Why isn't something like this the answer?
The statement "Do X or I will use magic powers to cause maximum badness according to your desires" is so unlikely to be true that I don't see how one can justify being confident that the being uttering it is more likely to do as it says than to do the opposite: perhaps if you give the being the five dollars it asked for, it creates and painfully kills 3^^^^3 people, and if you refuse, nothing happens (even though it asked for the five dollars as payment for not creating and torturing those people).
How can you say that a magic being that either cares about your money or is obviously testing you would likely do as it said it would?
If one attempts expected utility calculations taking all permutations of Pascal's mugging into account, one gets ∞ − ∞ as the result.
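A toy numerical sketch of why the sum comes out as ∞ − ∞ (the specific probabilities and utilities here are made up for illustration; the real case uses figures like 3^^^^3, which no float can hold): suppose hypothetical mugger n is believed with probability e^(−n) but promises an outcome of utility ±e^(2n), so utilities grow faster than credences shrink.

```python
import math

def expected_utility_terms(n_muggers):
    """One term per hypothetical mugger: credence e^(-n) times promised
    utility (-1)^n * e^(2n). Each term is (-1)^n * e^n, so the terms
    themselves grow without bound in magnitude."""
    return [math.exp(-n) * ((-1) ** n) * math.exp(2 * n)
            for n in range(1, n_muggers + 1)]

terms = expected_utility_terms(20)
positive = sum(t for t in terms if t > 0)
negative = sum(t for t in terms if t < 0)

# Both the positive and negative contributions diverge as more muggers
# are considered, so the "total" is of the form infinity minus infinity,
# and the partial sum even flips sign depending on where you stop:
print(positive, negative)
print(sum(expected_utility_terms(20)), sum(expected_utility_terms(21)))
```

The sign of the partial sum depends entirely on the (arbitrary) cutoff and ordering, which is the sense in which the calculation has no well-defined answer.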
What are the consequences of that?
We have no idea how to do expected utility calculations in this kind of situation. Furthermore, even if the AI figured out some way to do them, e.g., using some form of renormalization, we have no reason to believe the result would at all resemble our preferences.