A mugger appears and says "For $5 I'll offer you a set of deals from which you can pick any one. Each deal, d(N), will be N bits in length and I guarantee that if you accept d(N) I will run UTM(d(N)) on my hypercomputer, where UTM() is a function implementing a Universal Turing Machine. If UTM(d(N)) halts you will increase your utility by the number of bits written to the tape by UTM(d(N)). If UTM(d(N)) does not halt, I'll just keep your $5. Which deal would you like to accept?"
The expected increase in utility of any deal is p(d(N)) * U(UTM(d(N))), where p(d(N)) is the probability that accepting d(N) actually yields as many utilons as the number of bits a halting UTM(d(N)) writes to its tape. A non-empty subset of the programs of length N will write BB(N) bits to the tape, where BB(X) is the busy-beaver function for programs of bit length X. Since BB(X) is at least the output of UTM(F) for any halting program F of bit length X, for every finite agent there is some N for which p(UTM(d(N)) = BB(N)) * BB(N) > 0. To paraphrase: even though the likelihood of being offered a deal that actually yields BB(N) utilons is incredibly small, the fact that BB(X) grows at least as fast as any computable function expressible in X bits means that an agent which can itself be emulated on a UTM by a program of M bits cannot assign a non-zero probability to d(M) small enough to make the expected utility of accepting d(M) negative. In practice N can probably be much smaller than M.
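For concreteness, here is a sketch of that claim in symbols of my own choosing (c for the utilon cost of the $5, p_N for the agent's probability that accepting d(N) really pays out, f for a computable bound on that probability; none of these appear in the deal itself):

```latex
% A sketch, not a proof. Assumptions (mine): the $5 costs a fixed c utilons,
% and p_N is the agent's probability that accepting d(N) really pays BB(N) utilons.
\[
  \mathbb{E}\big[\Delta U \mid \text{accept } d(N)\big]
    \;\ge\; p_N \cdot BB(N) \;-\; c .
\]
% Roughly: if the agent is computable and p_N is a never-zero computable
% function of N, then p_N >= 1/f(N) for some computable f; since BB
% eventually dominates every computable function (e.g. BB(N) > N f(N) for
% all large N), the discounted payoff still diverges:
\[
  p_N \cdot BB(N) \;\ge\; \frac{BB(N)}{f(N)} \;\to\; \infty
  \quad \text{as } N \to \infty ,
\]
% so there is some finite N at which accepting d(N) has positive expected utility.
```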
Since p("UTM(d(X)) = BB(X)") >= 2^-X for d(X) with bits selected at random it doesn't make sense for the agent to assign p(d(X))=0 unless the agent has other reasons to absolutely distrust the mugger. For instance, discounting the probability of a deal based on a function of the promised number of utilons won't work; no discounting function grows as fast as BB(X) and an agent can't compute an arbitrary UTM(d(X)) to get a probability estimate without hypercomputational abilities. Any marginal-utility calculation fails in a similar manner.
I'm not sure where to go from here. I don't think it's rational to spend the rest of my life trying to find the largest integer I can think of so as to acausally accept d(biggest-integer) from some Omega. So far the strongest counterargument I've been able to come up with is to manage the risk of accepting the mugging by buying insurance of some sort. For example, a mugger offering intractably large amounts of utility for $5 shouldn't mind extending the agent a loan for $5 (or even $10,000) if the agent can immediately pay it back with astronomical interest out of the wealth that would almost certainly become available once the mugger fulfilled the deal. In short, it doesn't make sense to exchange utility now for utility in the future *unless* the mugger will accept what is essentially a counter-mugging that yields more long-term utility for the mugger at the cost of some short-term disutility. The mugger should have some non-zero probability p of fulfilling the deal at which zhe is indifferent between "have $10 after fulfilling the deal" and "have $5 now". If the mugger acts as though p = 0 for this lottery, why can't the agent?
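A worked version of the loan argument, with my own symbols (q for the mugger's probability of actually being able to fulfil the deal, R for the repayment, and dollars treated as linear in the mugger's utility, all assumptions made purely for illustration):

```latex
% The mugger lends the agent \$5 now and is repaid R dollars out of the
% payoff if the deal is fulfilled (probability q, by the mugger's own lights).
\[
  \mathbb{E}\big[\text{mugger's gain from the loan}\big] \;=\; -5 + q\,R \;>\; 0
  \quad\Longleftrightarrow\quad
  q \;>\; \frac{5}{R}.
\]
% If the mugger refuses the loan for every repayment R, however astronomical,
% zhe is acting as though q = 0; the agent seems equally entitled to use
% q = 0 when evaluating the original \$5 deal.
```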
Question: Is Pascal's Mugging Isomorphic to the Two Generals' Problem, or am I confused?
I tried to start making a comparison between them, since each additional messenger should grant you utility, with Busy Beaver levels of utility corresponding to Busy Beaver numbers of messengers. But the conclusion I came to is that even if you trust the other person 100%, you can never actually be safe on the attack/mugging unless the other person says "I will attack/pay with certainty, regardless of any replies you send," and then does.
At that point, the worst thing that can happen to you is that the other general gets slaughtered because you didn't hear them, or chose not to trust them. The best possible result for you is still for them to be the one who takes the risk of moving first with certainty, so either way you get as good a result as you can.
The Pascal's Mugging equivalent of this would seem to be for the mugger to appear and say, "I am going to take a chance: the first thing I'll do is give you a small, small chance of Fabulously large utility, and I'm going to do that regardless of whether you pay me or not. But after I DO that... I really need you to send me 5 dollars."
But that doesn't seem like a mugging anymore!
I guess that means a possible reply is essentially "If this offer is so good, then pay me first, and then we'll talk."
If they resist or say no, then yes, you can just reply "But there's got to be SOME payment I can offer to you so that you move first, right?" But that's basically the offer they made to you initially!
If they are isomorphic, it makes sense that we would have trouble solving the mugging, since there IS no solution to the Two Generals' Problem, according to the wiki:
http://en.wikipedia.org/wiki/Two_Generals%27_Problem
If they aren't isomorphic, that's weird and I am confused, because they have a lot of similarities when I look at them.
One difference I do see: in one, utility approaches an upper bound, while in the other it grows without bound.
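A tiny sketch of that contrast, under simplifying assumptions that are entirely mine (each messenger independently arrives with probability 1 - EPS, and successful coordination is worth a fixed V utilons); this also glosses over the common-knowledge subtlety that makes the real Two Generals' Problem unsolvable:

```python
# Illustrative sketch, not a model of either problem's full subtlety.
# Assumed numbers: each messenger arrives with probability 1 - EPS, and
# coordination is worth a fixed V utilons, so expected utility is bounded
# above by V no matter how many messengers are sent. The mugging's promised
# payoff, by contrast, can grow like BB(N), with no upper bound at all.

EPS = 0.1   # chance any single messenger is lost
V = 100.0   # value of successful coordination

def expected_coordination_utility(k: int) -> float:
    """Expected utility when coordination succeeds iff at least one of k messengers arrives."""
    return V * (1.0 - EPS ** k)

for k in (1, 2, 5, 10, 50):
    print(f"{k:3d} messengers -> {expected_coordination_utility(k):.6f} (bound: {V})")
# The output climbs toward 100 but never reaches it, whereas no finite bound
# caps the utility promised by d(N) as N grows.
```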