Today's post, Pascal's Mugging: Tiny Probabilities of Vast Utilities, was originally published on 19 October 2007. A summary (taken from the LW wiki):
An Artificial Intelligence coded using Solomonoff Induction would be vulnerable to Pascal's Mugging. How should we, or an AI, handle situations in which it is very unlikely that a proposition is true, but if the proposition is true, it has more moral weight than anything else we can imagine?
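The vulnerability can be made concrete with a small sketch. Assuming a naive expected-utility maximizer and purely illustrative numbers (the probabilities and utilities below are hypothetical, not from the post), the mugger's claimed stakes can grow far faster than our credence in the claim shrinks:

```python
# Hypothetical sketch of why a naive expected-utility maximizer is
# vulnerable to Pascal's Mugging. All numbers are illustrative
# assumptions chosen for the example, not from the original post.

def expected_utility(prob, utility):
    """Expected utility of an outcome with the given probability."""
    return prob * utility

# The mugger: "Give me $5 or I will harm 3^^^3 people."
cost_of_paying = -5
prob_mugger_truthful = 1e-20       # vanishingly small, but nonzero
claimed_harm = -(10 ** 100)        # stand-in for an astronomically large disutility

eu_if_refuse = expected_utility(prob_mugger_truthful, claimed_harm)
eu_if_pay = cost_of_paying

# The tiny probability times the vast disutility still dominates,
# so the naive maximizer concludes it should pay.
print(eu_if_refuse < eu_if_pay)
```

The point of the sketch is that no matter how low the probability assigned, the mugger can simply name a larger number; a complexity-based prior alone does not shrink the probability fast enough to cancel the claimed utility.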
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was "Can't Say No" Spending, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
That's a different problem than Pascal's Wager. Taking it back to the original, it would be like saying "Convert to Christianity pro forma for a chance at heaven rather than no chance of heaven, ignoring all other magical options." The problem with this isn't the quantities of utility involved; it's the assumption that a god who cares about such conversions to Christianity is the only option for the divine, rather than a God of Islam who would burn Christian converts hotter than atheists, or a PC Christian god who would have a heaven for all who were honest with themselves and didn't go through pro forma conversions. The answer to the wager is that it is a dumb assumption that all forms of magic but one have less probability than that one story about magic.
It's fine to consider Pascal's Wager*, where Pascal's Wager* is under the assumption that our interlocutor is trustworthy, but that's a different problem and is well articulated as the lifespan dilemma, which is legitimately posed as a separate problem.
As probability is in the mind, when I ask "what would a magical being of infinite power be doing if it asked me for something in a context where it was disguised as a probably not magical being?", my best guess is that it is a test with small consequences, and I can't distinguish between the chances of "it's serious" and "it's a sadistic being who will do the opposite of what it said."
Each of these possibilities has some probability associated with it. Taking them all into account, what is the expected utility of being a Christian? One may ignore thos...