Today's post, Pascal's Mugging: Tiny Probabilities of Vast Utilities, was originally published on 19 October 2007. A summary (taken from the LW wiki):
An Artificial Intelligence coded using Solomonoff Induction would be vulnerable to Pascal's Mugging. How should we, or an AI, handle situations in which it is very unlikely that a proposition is true, but if the proposition is true, it has more moral weight than anything else we can imagine?
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was "Can't Say No" Spending, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Physicists deal with this issue daily, and they invented renormalization and cutoff techniques to make divergent quantities converge. This has been discussed before; I'm not sure why it wouldn't work here.
I don't think physicists actually have the right answer when they do that. You can use Feynman path integrals in quantum physics, and they give the right answer if you cheat like that, but I'd bet the underlying reality is actually more related to linear equations, which don't require cheating.
Physicists use renormalization and cutoff techniques. The universe doesn't.
Also, Pascal's mugging seems to be looking at a special case where the calculation does converge. If you actually used Solomonoff induction, the expected utility wouldn't converge at all, because of the possibility of this sort of hypothesis, whether or not anyone ever actually makes the threat.
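The divergence claim can be illustrated with a toy sketch (my own construction, not from the post or the comment): under a simplicity-style prior, the probability assigned to the n-th hypothesis falls off only as fast as the description length of n, while a short description can name a utility that grows far faster, so the expected-utility partial sums grow without bound.

```python
import math

# Toy model: hypothesis n gets a simplicity-style prior that shrinks
# roughly like 2**(-bits needed to write n), i.e. only polynomially in n,
# while the utility it promises grows like 2**n. Both rates are
# illustrative assumptions, not anything from Solomonoff's actual measure.

def prior(n):
    # crude stand-in for 2**(-K(n)): short descriptions can name huge n,
    # so probability decays far slower than the utilities below grow
    return 1.0 / (n * (math.log2(n) + 1) ** 2)

def utility(n):
    # utilities describable by short programs can grow exponentially
    # (the mugger's 3^^^^3 grows incomparably faster still)
    return 2.0 ** n

# Partial sums of expected utility keep growing as more hypotheses
# are included, because each term's utility outruns its probability.
partial_sums = []
total = 0.0
for n in range(1, 60):
    total += prior(n) * utility(n)
    partial_sums.append(total)

print(partial_sums[-1] > partial_sums[len(partial_sums) // 2] * 1000)
```

The point of the sketch is only that no threat needs to be made: the divergent terms are already present in the hypothesis space, so an expected-utility maximizer over such a prior has a problem before any mugger shows up.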