This doesn't work with an unbounded utility function, for standard reasons:
1) The mixed strategy. If there is at least one lottery with infinite expected utility, then any combination of taking that lottery and other actions also has infinite expected utility. For example, in the traditional Pascal's Wager involving taking steps to believe in God, you could instead go around committing Christian sins: since there would be nonzero probability that this would lead to your 'wagering for God' anyway, it would also have infinite expected utility. See Alan Hajek's classic article "Waging War on Pascal's Wager."
Given the mixed strategy, taking and not taking your bet both have infinite expected utility, even if there are no other infinite expected utility lotteries.
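To spell out the arithmetic behind the mixed strategy (a sketch, using the usual extended-real convention that any positive multiple of infinity is infinity): if a mixture puts probability p > 0 on the infinite-expected-utility lottery and the rest on an act with finite expected utility u, then

$$EU(\text{mixture}) = p \cdot \infty + (1 - p) \cdot u = \infty \quad \text{for any } p > 0.$$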
2) To get a decision theory that actually would take infinite expected utility lotteries with high probability we would need to use something like the hyperreals, which would allow for differences in the expected utility of different probabilities of infinite payoff. But once we do that, the fact that your offer is so implausible penalizes it. We can instead keep our money and look for better opportunities, e.g. by ac...
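A toy illustration of that penalty (a sketch only: it pairs an "infinite part" with a "finite part" and compares lexicographically, which is a crude stand-in for hyperreal expected utilities, and the probabilities below are made up):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EU:
    """Toy expected utility of the form infinite * omega + finite."""
    infinite: float  # coefficient on the infinite unit omega
    finite: float    # ordinary real-valued part

    def __add__(self, other):
        return EU(self.infinite + other.infinite, self.finite + other.finite)

    def scale(self, p):
        return EU(p * self.infinite, p * self.finite)

    def __gt__(self, other):
        # Lexicographic comparison: any difference in the infinite part dominates.
        if self.infinite != other.infinite:
            return self.infinite > other.infinite
        return self.finite > other.finite

INFINITE_PRIZE = EU(1.0, 0.0)  # an infinite payoff
KEEP_DOLLAR = EU(0.0, 1.0)     # mundane finite utility of keeping the money

# The implausible mugging: a tiny probability of the infinite prize, dollar lost.
mugging = INFINITE_PRIZE.scale(1e-20)
# A less implausible route to the same prize, and we keep the dollar meanwhile.
better_opportunity = INFINITE_PRIZE.scale(1e-3) + KEEP_DOLLAR.scale(1.0)

print(better_opportunity > mugging)  # True: implausibility now penalizes the offer
```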
Why should I not attach a probability of zero to the claim that you are able to grant unbounded utility?
Let GOD(N) be the claim that you are a god with the power to grant at least 2**N utility. Let P(GOD(N)) be the probability I assign to this. This is a nonincreasing function of N, since GOD(N+1) implies GOD(N).
If I assign a probability to GOD(N) of 4**(-N), then the mugging fails. Of course, this implies that I have assigned GOD(infinity), the conjunction of GOD(N) over all N, a probability of zero, popularly supposed to be a sin. But while I can appreciate the reason for not assigning zero to ordinary, finite claims about the world, such as the existence of an invisible dragon in your garage, I do not see a reason to avoid this zero.
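A rough sketch of why the sum stays finite under that prior, just multiplying 4**(-N) into each term of the post's expected-utility sum (DUT stands for the post's unit of utility and is set to 1 here):

```python
# Each term: P(N tails then a head) * P(GOD(N)) * lower bound on the payoff.
DUT = 1.0  # stand-in for the post's unit of utility

total = 0.0
for N in range(200):
    total += (0.5) ** (N + 1) * 4.0 ** (-N) * DUT * 2 ** N  # = DUT * (1/2)**(2N+1)

print(total)  # converges to (2/3) * DUT instead of diverging
```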
If extraordinary claims demand extraordinary evidence, what do infinite claims require?
This doesn't necessarily show that humans have bounded utility, just that the heuristics we use to estimate our utility break down in some circumstances. We already know that. Does the fact that people have non-transitive preferences over certain bets indicate that they don't have utility functions? If not, how is that argument different from this one?
Let's call your utility function "UTILITY". We assume it takes a state of the universe as an argument.
My utility function takes the entire history of the universe as an argument (past and future). You could call that a "state" but in that context I'd need a clearer definition of:
Counterargument #1:
If you are god, then the universe allows for "gods" which can arbitrarily alter the state of the universe. Therefore, any utility gains I make have an unknown duration - it's entirely possible that an instant after you grant my utility, you'll take it away. Furthermore, if you are god, you're (a) flipping a coin and (b) requiring a donation, so I strongly suspect you are neither friendly nor omni-benevolent. Therefore, I have no reason to favour "god will help me for $1" over "god will hurt me for $1" - you ...
I'm sceptical of the maths. It seems like you may have committed the grave sin of taking an infinity without using a limit, though I'm not sure. Certainly there is something very funny going on when it is a mathematical certainty that my actual winnings will be less than my 'expected winnings'.
Also, even if I bought the maths, why should I give the money to you? You've explicitly claimed not to be God, I'm sure there's some crazy guy I can find who'll happily claim the opposite, it seems like I should have significantly better odds with him :)
This problem is the reason for most of the headache that LW is causing me and I appreciate any attention it receives.
Note that when GiveWell, a charity evaluation service, interviewed the SIAI, they hinted at the possibility that one could consider the SIAI to be a sort of Pascal's Mugging:
...GiveWell: OK. Well that's where I stand - I accept a lot of the controversial premises of your mission, but I'm a pretty long way from sold that you have the right team or the right approach. Now some have argued to me that I don't need to be sold - that even at an in
When people buy insurance, they often plan for events that are less probable than 1%. The intuitive difficulty here is not that you act on an event with probability of 1%, but that you act on an event where the probability (be it 1% or 10% or 0.1%) is estimated intuitively, so that you have no frequency statistics to rely on, and there remains great uncertainty about the value of the probability.
People fear acting on uncertainty that is about to be resolved, for if it's resolved not in their favor, they will be faced with wide agreement that in retrospect their action was wrong. Furthermore, if the action is aimed to mitigate an improbable risk, they even expect that the uncertainty will resolve not in their favor. But this consideration doesn't make the estimated probability any lower, and estimation is the best we have.
One very critical factor you forgot is goal uncertainty! Your argument is actually even better than you think it is. If you assign an extremely low but non-zero probability that your utility function is unbounded, then you must still multiply it by infinity. And 1 is not a probability... There is no possible state that represents sufficient certainty that your utility function is bounded to justify not giving all your money to the mugging.
I WOULD send you my money, except the SIAI is a lot of orders of magnitude more likely than you to be a god (you didn'...
So I explained this to my girlfriend, and she agreed to send you $1.00. Sadly, I apparently managed to completely lock myself out of PayPal the last time I had a grudge against them (they've made the news a few times for shady practices...), so I can't provide the $1.
But, um, congratulations on mugging my girlfriend for $1! :)
(Her comment was "I was going to spend this on soda anyway; giving it away is a net utility gain since it means I won't have it available")
I think that it is possible for me to have unbounded utility, yet still to assign a rather small utility to every outcome in any world in which TimFreeman is God (and I am not).
The same applies to Omega. If I do, in fact, live in a universe in which an omnipotent maniac performs psychological experiments, then much of my joy in living is lost.
There is an implicit assumption in all of these mugging scenarios that the existence of an all-powerful mugger who can intervene at any time has no effect on relative cardinal utilities of outcomes. That assumption seems unjustified.
Upvoted because the objection makes me uncomfortable, and because none of the replies satisfy my mathematical/aesthetic intuition.
However, requiring utilities to be bounded also strikes me as mathematically ugly and practically dangerous: what if the universe turns out to be much larger than previously thought, and the AI says "I'm at 99.999% of achievable utility already, it's not worth it to expand farther or live longer"?
Thus I view this as a currently unsolved problem in decision theory, and a better intuition-pump version than Pascal's Mugging. Thanks for posting.
My poor feeble meat brain can only represent finitely many numbers. My subjective probability that you'll pay off on the bet rounds to zero as the utilities get big. So the sum converges to less than a dollar, even without hitting an upper bound on utilities.
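A sketch of just the convergence point (the resolution limit below is made up; it only illustrates that a mind which rounds small enough probabilities to zero keeps finitely many terms, not the exact size of the resulting sum):

```python
# A mind that cannot represent probabilities below some smallest nonzero
# credence treats them as exactly 0, so only finitely many terms of the
# St. Petersburg sum survive.
SMALLEST_CREDENCE = 1e-30  # made-up resolution limit
DUT = 1.0                  # stand-in for the post's unit of utility

total = 0.0
N = 0
while 0.5 ** (N + 1) >= SMALLEST_CREDENCE:  # smaller probabilities round to zero
    total += 0.5 ** (N + 1) * DUT * 2 ** N  # each surviving term is DUT / 2
    N += 1

print(N, total)  # 99 surviving terms, about 50 * DUT: large, but finite
```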
"I am a god" is to simplistic. I can model it better as a probability, that varies with N, that you are able to move the universe to UN(N). This tracks how good a god you are, and seems to make the paradox disappear.
Are you certain that the likelihood of all your claims being true doesn't depend on the size of the change in the universe you are claiming to be able to effect?
Almost any person can reasonably claim to be a utility generating god for small values of n for some set of common utility functions (and we don't even have to give up our god-like ability). That is how most of us are able to find gainful employment.
The implausible claim is the ability to generate universe changes of arbitrary utility value.
My proposal is that any claim of utility generation ability has pl...
Once again, this only implies that utility has to be controlled by probability, not that utility has to be bounded.
unless you were born with certain knowledge that I am not a god, you have to assign positive probability to it
"I am a god" may sound like a simple enough hypothesis to have positive probability, but if it entails that you can grant arbitrary amounts of utility, and if probability approaches 0 as utility approaches infinity, then there is no escaping the fact that the probability that you are a god is 0.
This doesn't seem to say anything about the boundedness of human utility functions (which I think is pretty likely) that Pascal's mugging doesn't. And heck, Pascal's mugger can just say "give me all your money."
If you are a god, how do I know you won't just make the coin come out tails so you don't have to pay up?
(ETA: But yes, my utility function is bounded.)
I'd say it's an error to give weight to any particular highly-improbable scenario without any evidence to distinguish it from the other highly-improbable scenarios. Here's why.
There is a nonzero possibility that some entity will acquire (or already have) godlike powers later today (as per your "I am a god" definition), and decide to use them to increase utility exponentially in response to a number derived somehow from an arbitrary combination of actions by any arbitrary combination of people in the past and the ever-moving present (and let's rem...
I don't see that "being jerked around by unlikely gods" is necessarily a problem. Doesn't the good sense in donating to SIAI basically boil down to betting on the most-plausible god?
I have an unbounded utility function, but my priors are built in such a way that expected utility is the same regardless of how you calculate it. For example, if there was a 2^-n chance of getting 2^n/n utility and a 2^-n chance of getting -2^n/n utility (before normalizing), you could make the expected utility add to whatever you want by changing the order. As such, my priors don't allow that to happen.
This has two interesting side effects. First, given any finite amount of evidence, my posteriors would follow those same laws, and second, Pascal's mugging and the like are effectively impossible.
Edit: fixed utility example
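To make the rearrangement problem concrete (a sketch; the greedy reordering scheme and the term count are arbitrary): the outcomes above contribute expected-utility terms of +1/n and -1/n, a conditionally convergent series, so by Riemann's rearrangement theorem its sum depends on the order of summation.

```python
# Terms contributed by the example: probability 2**-n of utility +(2**n)/n
# gives +1/n, and probability 2**-n of utility -(2**n)/n gives -1/n.
N_TERMS = 20000
pos = [1.0 / n for n in range(1, N_TERMS + 1)]
neg = [-1.0 / n for n in range(1, N_TERMS + 1)]

# Natural interleaved order: the terms cancel pairwise and the sum is ~0.
natural_order = sum(p + q for p, q in zip(pos, neg))

# Greedy rearrangement aiming at 1.0: add positive terms until we pass the
# target, then negative terms until we drop below it, and repeat.
target, rearranged = 1.0, 0.0
i = j = 0
while i < len(pos) and j < len(neg):
    if rearranged <= target:
        rearranged += pos[i]
        i += 1
    else:
        rearranged += neg[j]
        j += 1

print(natural_order, rearranged)  # ~0.0 versus ~1.0: same terms, different order
```

Priors that force the expected-utility series to converge absolutely make the order irrelevant, which is what rules this out.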
When I thought about it, I realized this seemed very similar to a standard hack used on people that we already rely on computers to defend us against. To be specific, it follows an incredibly similar framework to one of those Lottery/Nigerian 419 Scam emails.
Opening Narrative: Attempt to establish some level of trust and believability. Things with details tend to be more believable than things without details, although the conjunction fallacy can be tricky here. Present the target with two choices: (Hope they don't realize it's a false dichotomy)
Choice A: ...
Couldn't I just believe with equal probability that you are a god and will do exactly the opposite?
This post describes an infinite gamble that, under some reasonable assumptions, will motivate people who act to maximize an unbounded utility function to send me all their money. In other words, if you understand this post and it doesn't motivate you to send me all your money, then you have a bounded utility function, or perhaps even upon reflection you are not choosing your actions to maximize expected utility, or perhaps you found a flaw in this post.
Briefly, I do this with the St. Petersburg Paradox, converted to a mugging along the lines of Pascal's Mugging and then tweaked to extract all of your money instead of just a fixed sum.
I have always wondered if any actual payments have resulted from Pascal's Mugging, so I intend to track payments received for this variation. If anyone does have unbounded utility and wants to prove me wrong by sending money, send it with Paypal to tim at fungible dot com. Annotate the transfer with the phrase "St. Petersburg Mugging", and I'll edit this article periodically to say how much money I received. In order to avoid confusing the experiment, and to exercise my spite, I promise I will not spend the money on anything you will find especially valuable. SIAI would be better charity, if you want to do charity, but don't send that money to me.
Here's the hypothetical (that is, false) offer to persons with unbounded utility:
If I am lying and the offer is real, and I am a god, what utility will you receive from sending me a dollar? Well, the probability of me seeing N Tails followed by a Head is (1/2)**(N + 1), and your utility for the resulting universe is UTILITY(UN(N)) >= DUT * 2**N, so your expected utility if I see N tails is (1/2)**(N + 1) * UTILITY(UN(N)) >= (1/2)**(N + 1) * DUT * 2 ** N = DUT/2. There are infinitely many possible values for N, so your total expected utility is positive infinity * DUT/2, which is positive infinity.
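Here is that arithmetic as a quick sketch, with DUT set to 1 as a stand-in for the unit of utility and using only the lower bound UTILITY(UN(N)) >= DUT * 2**N:

```python
# Partial sums of the offer's expected utility: the term for "N tails then a
# head" is (1/2)**(N+1) * DUT * 2**N = DUT / 2, so the partial sums grow
# without bound as N ranges over the nonnegative integers.
DUT = 1.0  # stand-in for the unit of utility

partial_sum = 0.0
for N in range(1000):
    partial_sum += (0.5) ** (N + 1) * DUT * 2 ** N  # each term equals DUT / 2

print(partial_sum)  # 500 * DUT after 1000 terms, and it keeps growing
```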
I hope we agree that it is unlikely that I am a god, but it's consistent with what you have observed so far, so unless you were born with certain knowledge that I am not a god, you have to assign positive probability to it. Similarly, the probability that I'm lying and the above offer is real is also positive. The product of two positive numbers is positive. Combining this with the result from the previous paragraph, your expected utility from sending me a dollar is infinitely positive.
If you send me one dollar, there will probably be no result. Perhaps I am a god, and the above offer is real, but I didn't do anything beyond flipping the first coin because it came out Tails. In that case, nothing happens. Your expected utility for the next dollar is also infinitely positive, so you should send the next dollar too. By induction you should send me all your dollars.
If you don't send money because you have bounded utility, that's my desired outcome. If you do feel motivated to send me money, well, I suppose I lost the argument. Remember to send all of it, and remember that you can always send me more later.
As of 7 June 2011, nobody has sent me any money for this.
ETA: Some interesting issues keep coming up. I'll put them here to decrease the redundancy: