Today's post, Pascal's Mugging: Tiny Probabilities of Vast Utilities was originally published on 19 October 2007. A summary (taken from the LW wiki):
An Artificial Intelligence coded using Solomonoff Induction would be vulnerable to Pascal's Mugging. How should we, or an AI, handle situations in which it is very unlikely that a proposition is true, but if the proposition is true, it has more moral weight than anything else we can imagine?
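To make the dilemma concrete, here is a minimal sketch of the arithmetic that makes a naive expected-utility maximizer vulnerable. The numbers are illustrative assumptions, not from the post: a mugger claims a payoff so vast that even an astronomically small probability of it being real dominates any ordinary, certain alternative.

```python
# Illustrative only: the probabilities and utilities below are made up
# to show the shape of the problem, not taken from the original post.

def expected_utility(p, u):
    """Expected utility of an outcome with probability p and utility u."""
    return p * u

# Paying the mugger: a tiny probability of an astronomically large payoff,
# taken at face value by a naive expected-utility maximizer.
pay = expected_utility(1e-10, 10**100)

# Refusing: a certain, modest outcome (keep your five dollars, say utility 5).
refuse = expected_utility(1.0, 5)

# The tiny-probability option dominates -- which is exactly the problem.
assert pay > refuse
```

No matter how small the probability, the mugger can always name a payoff large enough to swamp it, so the maximizer hands over the money.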
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was "Can't Say No" Spending, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
There should be a limit to utility, based on the pattern theory of identity, a finite number of possible sentient patterns, and identical patterns counting as one.
I phrased this as confidently as I did in the hope that it would provoke downvotes with attached explanations of why it is wrong. I am surprised to see it without downvotes and, granting that, even more surprised to see it without upvotes.
In truth I am not so certain of some of the above, and would appreciate comments. I'm asking nicely this time! Is identity about being in a pattern? Is there a limit to the number of sentient patterns? Do identical patterns count as one for moral purposes?
Finally: is it truly impossible to infinitely care about a finite thing?
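The bounded-utility idea in the comment above can be sketched briefly. This is an assumption-laden illustration, not an endorsed solution: if utility is capped at some finite bound (here a hypothetical `U_MAX`), the mugger's claimed payoff is clipped, and a tiny probability can no longer dominate a certain, modest alternative.

```python
# Illustrative sketch (my assumption, not from the comment): a finite
# upper bound on utility defuses the mugging, because the mugger's
# claimed payoff is clipped before multiplying by its tiny probability.

U_MAX = 10**6  # hypothetical finite bound on achievable utility

def bounded_expected_utility(p, u):
    """Expected utility with the payoff clipped at the finite bound U_MAX."""
    return p * min(u, U_MAX)

# The mugger's offer: clipped to p * U_MAX = 1e-10 * 10**6 = 1e-4.
pay = bounded_expected_utility(1e-10, 10**100)

# Refusing: the certain, modest outcome wins once utilities are bounded.
refuse = bounded_expected_utility(1.0, 5)

assert refuse > pay
```

Whether such a bound is justified is exactly what the comment's questions ask: it holds only if there really are finitely many sentient patterns and identical patterns count once.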