Still very much a work in progress

EDIT: model/existence proof of unlosing agents can be found here.

Why do we bother about utility functions on Less Wrong? Well, because of the von Neumann-Morgenstern results, which showed that, essentially, if you make decisions, you'd better use something equivalent to expected utility maximisation. If you don't, you lose. Lose what? It doesn't matter - money, resources, whatever: the point is that any other system can be exploited by other agents or the universe itself to force you into a pointless loss. A pointless loss is a loss that gives you no benefit or possibility of benefit - it's really bad.

The justifications for the axioms of expected utility are, roughly:

  1. (Completeness) "If you don't decide, you'll probably lose pointlessly."
  2. (Transitivity) "If your choices form loops, people can make you lose pointlessly."
  3. (Continuity/Archimedean) This axiom (and acceptable weaker versions of it) is much more subtle than it seems; "No choice is infinitely important" is what it seems to say, but " 'I could have been a contender' isn't good enough" is closer to what it does. Anyway, that's a discussion for another time.
  4. (Independence) "If your choices aren't independent, people can expect to make you lose pointlessly."

 

Equivalency is not identity

A lot of people believe a subtly different version of the result:

  • If you don't have a utility function, you'll lose pointlessly.

This is wrong. The correct result is:

  • If you don't lose pointlessly, then your decisions are equivalent to having a utility function.

What's the difference? I'll illustrate with Eliezer's paraphrase of Omohundro:

If you would rather be in Oakland than San Francisco, and you would rather be in San Jose than Oakland, and you would rather be in San Francisco than San Jose, you're going to spend an awful lot of money on taxi rides.

If you believed the first bullet point, then you would decide which of the three cities you preferred (and by how much), this would be your "utility function", and you'd then implement it and drive to the top city. But you could just as well start off in San Francisco, drive to Oakland, then to San Jose, be tempted to drive to San Francisco again, realise that that's stupid, and stay put. No intransitivity. Or, even better, notice the cycle, and choose to stay put in San Francisco.

Phrased that way, the alternative method seems ridiculous. Why should you let your choice of city be determined by the arbitrary accident of your starting point? But actually, the utility function approach is just as arbitrary. Humans are far from rational, and we're filled with cycles of intransitive preferences. We need to break these cycles by using something from outside of our preference ordering (because those are flawed). And the way we break these cycles can depend on considerations just as random and contingent as "where are we located right now" - our moods, the availability of different factors in the three cities, etc...
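Here's a minimal sketch of the two behaviours just described, in Python; the cities, the taxi fare and the `prefers` table are all made up for the illustration:

```python
# Toy illustration of the taxi money pump and of cycle-breaking.
# The fare and the pairwise preference table are invented for the example.

TAXI_FARE = 50

# Intransitive pairwise preferences: Oakland over SF, San Jose over Oakland, SF over San Jose.
prefers = {
    ("San Francisco", "Oakland"): "Oakland",
    ("Oakland", "San Jose"): "San Jose",
    ("San Jose", "San Francisco"): "San Francisco",
}

def next_city(current):
    """Return the city preferred over the current one, if any."""
    for (a, b), winner in prefers.items():
        if a == current and winner == b:
            return b
        if b == current and winner == a:
            return a
    return None

def naive_agent(start, max_moves=10):
    """Always moves to a preferred city, paying the fare every time."""
    spent, city = 0, start
    for _ in range(max_moves):
        city = next_city(city)
        spent += TAXI_FARE
    return city, spent           # ten fares later, still circling

def unlosing_agent(start):
    """Moves only until it notices it would revisit a city, then stays put."""
    visited, spent, city = {start}, 0, start
    while True:
        target = next_city(city)
        if target in visited:    # moving again would close a pointless loop
            return city, spent
        city, spent = target, spent + TAXI_FARE
        visited.add(city)

print(naive_agent("San Francisco"))     # -> ('Oakland', 500)
print(unlosing_agent("San Francisco"))  # -> ('San Jose', 100): stops once the loop is noticed
```

The naive agent keeps paying for rides around the loop; the second agent uses nothing but its (flawed) pairwise preferences plus the meta-rule "don't go back", and where it ends up depends on where it started.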

 

Unlosing agents

You could start with an agent that has a whole host of incomplete, intransitive and dependent preferences (such as those of a human) and program it with the meta rule "don't lose pointlessly". It would then proceed through life, paying attention to its past choices, and breaking intransitive cycles or dependencies as needed, every time it faced a decision which threatened to make it lose pointlessly.

This agent is every bit as good as an expected utility maximiser, in terms of avoiding loss. Indeed, the more numerous and varied the choices it faced, the more it would start to resemble an expected utility maximiser, and the more its preferences would resemble a utility function. Ultimately, it could become an expected utility maximiser, if it faced enough choices and decisions. In fact, an expected utility maximiser could be conceived of as simply an unlosing agent that had actually faced every single imaginable choice between every single possible lottery.
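As a rough sketch (not a formal construction; the class and method names are mine), an unlosing agent might look like this: it records every pairwise choice it has made, and refuses any new preference that would close an intransitive loop with its past:

```python
# Sketch of an unlosing agent: it keeps a record of revealed preferences and
# breaks any choice that would create an intransitive cycle with its past.
# All names here are illustrative, not a standard API.

class UnlosingAgent:
    def __init__(self):
        self.better_than = {}   # option -> set of options it has been chosen over

    def _ranked_below(self, start, goal):
        """Is `goal` already (transitively) recorded as worse than `start`?"""
        stack, seen = [start], set()
        while stack:
            x = stack.pop()
            if x == goal:
                return True
            if x in seen:
                continue
            seen.add(x)
            stack.extend(self.better_than.get(x, ()))
        return False

    def choose(self, a, b, tentative_preference):
        """Record a pairwise choice, overriding it if it would close a cycle."""
        winner, loser = (a, b) if tentative_preference == a else (b, a)
        if self._ranked_below(loser, winner):
            # Accepting this preference would make the record intransitive,
            # so the agent breaks the cycle by sticking with its past commitments.
            winner, loser = loser, winner
        self.better_than.setdefault(winner, set()).add(loser)
        return winner
```

The more choices it records, the more its `better_than` relation fills in and becomes complete and transitive, which is the sense in which it drifts towards behaving like an expected utility maximiser.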

So we cannot say that an unlosing agent is better or worse than an expected utility maximiser. The difference between them has to be determined by practical considerations. I can see several relevant ones (there are certainly more):

  • Memory capacity and speed. An unlosing agent has to remember all its past decisions and may need to review them before making a new decision, while an expected utility maximiser could be much faster.
  • Predictability 1: the actions of an expected utility maximiser are more predictable, if the utility function is easy to understand.
  • Predictability 2: the actions of an unlosing agent are more predictable, if the utility function is hard to understand.
  • Predictability 3: over very long time scales, the expected utility maximiser is probably more predictable.
  • Graceful degradation: small changes to an unlosing agent should result in smaller differences in decisions than small changes to an expected utility maximiser.
  • Dealing with moral uncertainty: an unlosing agent is intrinsically better set up to deal with moral uncertainty, though its ultimate morality will be more contingent on circumstances than an expected utility maximiser.

If we wanted to formalise human preferences, we could either front-load the effort (devise the perfect utility function) or back-load it (set up a collection of flawed preferences for the agent to update). The more complicated the perfect utility function would be, the more the back-loaded approach seems preferable.

In practice, I think the appeal of the expected utility maximiser is that it is more attractive to philosophers and mathematicians: it involves solving everything perfectly ahead of time, and then everything is implementation. I can see the unlosing agent being more attractive to an engineer, though.

The other objection could be that the unlosing agent would have different preferences in different universes. But this is no different from an expected utility maximiser! Unless the Real True morality rises from hell to a chorus of angels, any perfect utility function is going to depend on choices made by its designer for contingent or biased reasons. Even on a more formal level, the objection depends on a particular coarse graining of the universe. Let A and B be universes, A' and B' the same universes with a particular contingent fact changed. Then an expected utility maximiser that prefers A to B would also prefer A' to B', while the unlosing agent could have its preferences reversed. But note that this depends on the differences being contingent, which is a human definition rather than an abstract true fact about universes A, B, A', and B'.

Another way in which an unlosing agent could seem less arbitrary is if it could adjust its values according to its expected future, not just its known past. Call that a forward-thinking unlosing agent. We'll see an example of this in the next section.

 

Applications

Unlosing agents can even provide some solutions to thorny decision theory problems. Take Pascal's mugging:

Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."

Let's assume that the odds you assign to the person telling the truth are greater than 1/3^^^^3. One thing that is clear is that if you faced that decision 3^^^^3 times, each decision independent from the others... then you should pay each time. When you aggregate independent decisions, it narrows your total variance, forcing you closer to an expected utility maximiser (see this post).
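As a scaled-down numerical illustration of that aggregation point (the cost, loss, probability and number of repetitions below are arbitrary stand-ins, since 3^^^^3 repetitions can't be simulated): when the same low-probability, high-stakes gamble is repeated independently many times, the total outcome of "always refuse" concentrates around its expectation, and the policy with the better expected value reliably wins.

```python
import random

# Arbitrary stand-in numbers: pay 5 to avert a loss of 10,000,000 that
# happens with probability 1e-6 each time if you refuse.
COST, LOSS, P, N = 5, 10_000_000, 1e-6, 10_000_000

def total_if_always_refusing():
    return -sum(LOSS for _ in range(N) if random.random() < P)

random.seed(0)
print("always pay:   ", -COST * N)                   # deterministic: -50,000,000
print("always refuse:", total_if_always_refusing())  # random; mean -100,000,000, wide spread
```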

But if you were sure that you'd face it only a few thousand times, what then? Take a forward-thinking unlosing agent. If it expected that it would get Pascal mugged only a few thousand times, it could perfectly well reject all of them without hesitation (and derive all the advantages of this). If it expected that there was a significant risk of getting Pascal mugged over and over and over again, it would decide to accept.

In more traditional terms, this would be an expected utility maximiser with a utility function that's unbounded in universes with a high risk of 3^^^^3 or more Pascal's muggings, and bounded in other universes.

Similar unlosing agent designs can work in some cases of infinite ethics, or with the torture and dust speck example. These arguments often share common features: the decision is clear and easy in some universes (eg many independent choices), but not in others. And it's then argued that expected utility arguments must push the decision from the clear and easy universes onto the others. But a forward-thinking unlosing agent is perfectly placed to break that link, and decide one way in the "clear and easy" universes, and another way in the "others".

If you allow agents that are not perfectly unlosing (which is reasonable, for a bounded agent), it could even move between decision modes depending on what universe it's in. A certain cost (in pointless loss) for a certain benefit in flexibility.

Anyway, there is certainly more to be said about unlosing agents and similar ideas, but I'll stop here for the moment and ask people what they think.

Comments

Just a sidenote, but IMO the solution to Pascal mugging is simply using a bounded utility function. I don't understand why people insist on unboundedness.

Rejecting components with extremely low probabilities even if they have extremely large utilities works better, avoids selection bias and is also what common sense would tell you to do. I think of it as "ignore noise". At least it works for humans, an AGI may have to do something else.

I don't understand in what sense it works better or the relation to selection bias. Ignoring components with low probability is a good heuristic since you have bounded computing resources. However it seems reasonable to require your decision theory to deal with extreme cases in which you can spare the computing resource to take these components into account. On the other hand, I agree that it is rarely practical for humans to emulate perfect Bayesian reasoners.

Selection bias: when you are presented with a specific mugging scenario, you ought to realize that there are many more extremely unlikely scenarios where the payoff is just as high, so selecting just one of them to act on is suboptimal.

As for the level at which to stop calculating, bounded computational power is a good heuristic. But I suspect that there is a better way to detect the cutoff (it is known as infrared cutoff in physics). If you calculate the number of choices vs their (log) probability, once you go low enough, the number of choices explodes. My guess is that for reasonably low probabilities you would get exponential increase for the number of outcomes, but for very low probabilities the growth in the number of outcomes becomes super-exponential. This is, of course, a speculation, I would love to see some calculation or modeling, but this is what my intuition tells me.

for very low probabilities the growth in the number of outcomes becomes super-exponential

There can't be more than 2^n outcomes each with probability 2^-n.

OK, I have thought about it some more. The issue is how accurately one can evaluate the probabilities. If the best you can do is, say, 1%, then you are forced to count even the potentially very unlikely possibilities at 1% odds. The accuracy of the probability estimates would depend on something like the depth of your Solomonoff induction engine. If you are confronted with a Pascal's mugger and your induction engine returns "the string required to model the mugger as honest and capable of carrying out the threat is longer than the longest algorithm I can process", you are either forced to use the probability corresponding to the longest string, or to discard the hypothesis outright. What I am saying is that the latter is better than the former.

If you are confronted with a Pascal's mugger and your induction engine returns "the string required to model the mugger as honest and capable of carrying out the threat is longer than the longest algorithm I can process", you are either forced to use the probability corresponding to the longest string, or to discard the hypothesis outright.

The primary problem with Pascal's Mugging is that the Mugging string is short and easy to evaluate. 3^^^3 is a big number; it implies a very low probability but not necessarily 1 / 3^^^3; so just how outrageous can a mugging be without being discounted for low probability? That least-likely-but-still-manageable Mugging will still get you. If you're allowed to reason about descriptions of utility, and not just shut up and multiply to evaluate the utility of simulated worlds, then in the worst case you have to worry about the Mugger that offers you BusyBeaver(N) utility, where 2^-N is the lowest probability that you can process. BusyBeaver(N) is well-defined, although uncomputable, and it is at least as large as any other function of length N. Unfortunately that means BusyBeaver(N) * 2^-N > C, for some N-bit constant C, or in other words EU(Programs-of-length-N) is O(BusyBeaver(N)). It doesn't matter what the mugger offers, or if you mug yourself. Any N-bit utility calculation program has expected utility O(BB(N)) because it *might* yield BB(N) utility.

The best not-strictly-bounded-utility solution I have against this is discounting the probability of programs as a function of their running time as well as their length. Let 1/R be the probability that any given step of a process will cause it to completely fail as opposed to halting with output or never halting. Solomonoff Induction can be redefined as the sum over programs, P, producing an output S in N steps, of 2^-Length(P) * ((R - 1) / R)^N. It is possible to compute a prior probability with error less than B, for any sequence S and finite R, by enumerating all programs shorter than log_2(1 / B) bits that halt in fewer than ~R / B steps. All un-enumerated programs have cumulative probability less than B of generating S.
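A toy rendering of that weighting (my own reading of the formula above, with made-up values of R and program length):

```python
import math

def discounted_weight(program_length_bits, steps, R):
    """2^-length * ((R - 1) / R)^steps -- the proposed prior weight, where 1/R is
    the assumed per-step probability that the process fails outright."""
    return 2.0 ** -program_length_bits * ((R - 1) / R) ** steps

# Made-up numbers: a 10-bit program, R = 10^12.
R = 10 ** 12
for steps in (10 ** 10, 10 ** 12, 10 ** 13, 10 ** 14):
    w = discounted_weight(10, steps, R)
    print(steps, w, "~", 2.0 ** -10 * math.exp(-steps / R))  # exp(-N/R) approximation
```

The point of the sketch is only that the ((R - 1) / R)^N factor behaves like exp(-N / R), so programs whose running time is many multiples of R contribute essentially nothing to the prior.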

For Pascal's Mugging it suffices to determine B based on the number of steps required to, e.g. simulate 3^^^3 humans. 3^^^3 ~= R / B, so either the prior probability of the Mugger being honest is infinitesimal, or it is infinitesimally unlikely that the universe will last fewer than the minimum 3^^^3 Planck Units necessary to implement the mugging. Given some evidence about the expected lifetime of the Universe, the mugging can be rejected.

The biggest advantage of this method over a fixed bounded utility function is that R is parameterized by the agent's evidence about its environment, and can change with time. The longer a computer successfully runs an algorithm, the larger the expected value of R.

OK, I guess I understand your argument that the mugger can construct an algorithm producing very high utility using only N bits. Or that I can construct a whole whack of similar algorithms in response. And end up unable to do anything because of the forest of low-probability high-utility choices. Which are known to be present if only you spend enough time looking for them. So that's why you suggest limiting not (only) the number of states, but (also) the number of steps. I wonder what Eliezer and others think about that.

You are right, of course. I have to rethink and rephrase it.

This may be an impossible task, but can you elaborate on your intuition? My intuitions, such as they are, are Solomonoffy and do not predict super-exponential growth.

I tried to elaborate in another comment, suggesting that we should reject all hypotheses more complicated than our Solomonoff engine can handle.

I probably agree in practice, but let's see if there's other avenues first...


I don't understand why people insist on unboundedness.

Possibly because there do appear to be potential solutions to Pascal's mugging that do not require bounding your utility function.

Example:

I claim that, in general, I find it reasonable to submit and give the Pascal's mugger the money; and I am currently being Pascal's mugged and considering giving the mugger money.

I also consider: What is the chance that a future mugger will make a Pascal's mugging with a higher level of super exponentiation and that I won't be able to pay?

And I claim that the answer appears to be: terribly unlikely, but considering the risks of failing at a higher level of super exponentiation, likely enough that I shouldn't submit to the current Pascal's mugger.

Except, that's ALSO true for the next Pascal's mugger.

So despite believing in Pascal's mugging, I act exactly as if I don't, and claiming that I 'believe' in Pascal's mugging doesn't actually pay rent (for the muggers).

End of Example.

Since there exist examples like this and others that appear to solve Pascal's mugging without requiring a bounded utility function, a lot of people wouldn't want to accept a utility bound just because of the mugging.

I'm afraid that this kind of reasoning cannot avoid the real underlying problem, namely that Solomonoff expectation values of unbounded utility functions tend to diverge, since utility grows as BB(n) and probability falls only as 2^{-n}.


What if the utility function is bounded, but the bound itself is expandable without limit in at least some cases?

For instance, take a hypothetical utility function, Coinflipper bot.

Coinflipper bot has utility equal to the number of fair coins it has flipped.

Coinflipper bot has a utility bound equal to the 2^(greatest number of consecutive heads on fair coins it has flipped+1)

For instance, a particular example of Coinflipper bot might have flipped 512 fair coins and its current record is 10 consecutive heads on fair coins, so its utility is 512 and its utility bound is 2^(10+1) or 2048.

On the other hand, a different instance of Coinflipper bot might have flipped 2 fair coins, gotten 2 tails, and have a utility of 2 and a utility bound of 2^(0+1)=2.

How would the math work out in that kind of situation?

I don't understand what you mean by "utility bound". A bounded utility function is just a function which takes values in a finite interval.


Let me try rephrasing this a bit.

What if, depending on other circumstances(say the flip of a fair coin), your utility function can take values in either a finite(if heads) or infinite(if tails) interval?

Would that entire situation be bounded, unbounded, neither, or is my previous question ill posed?

If you use a bounded utility function, it will inevitably be saturated by unlikely but high-utility possibilities, rendering it useless.

For any possible world W, |P(W) · BoundedUtility(W)| < |P(W) · UnboundedUtility(W)| as P(W) goes to zero.

Maybe you bound your utility function so you treat all universes that produce 100 billion DALYs/year as identical. But then you learn that the galaxy can support way more than 100 billion humans. Left with a bounded utility function, you're unable to make good decisions at a civilizational scale.

This is not how bounded utility functions work. The fact it's bounded doesn't mean it reaches a perfect "plateau" at some point. It can approach its upper bound asymptotically. For example, a bounded paperclip maximizer can use the utility function 1 - exp(-N / N0) where N is the "number of paperclips in the universe" and N0 is a constant.
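For concreteness, a tiny sketch of that utility function (the constant N0 below is arbitrary): it is strictly increasing, so more paperclips are always better, yet it can never exceed 1.

```python
import math

def paperclip_utility(n, n0=1_000_000):
    # Bounded but strictly increasing: approaches 1 asymptotically, never reaches it.
    return 1 - math.exp(-n / n0)

for n in (0, 10 ** 6, 10 ** 7, 10 ** 8):
    print(n, paperclip_utility(n))
```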

OK, here's a reason to be against a utility function like the one you describe. Let's use H to indicate the number of happily living humans in the universe. Let's say that my utility function has some very high threshold T of happy lives such that as H increases past T, although the function does continue to increase monotonically, it's still not very far above the value it takes on at point T. Now suppose civilization has 2T people living in it. There's a civilization-wide threat. The civilization-wide threat is not very serious, however. Specifically, with probability 0.00001 it will destroy all the colonized worlds in the universe except for 4. However, there's a measure suggested that will completely mitigate the threat. This measure will require sacrificing the lives of half of the civilization's population, bringing the population all the way back down to T. If I understand your proposal correctly, an agent operating under your proposed utility function would choose to take this measure, since a world with T people does not have an appreciably lower utility than a world with 2T people, relative to a tiny risk of an actual dent in the proposed utility function taking place.

To put it more succinctly, an agent operating under this utility function with total utilitarianism that was running an extremely large civilization would have no qualms throwing marginal lives away to mitigate remotely unlikely civilizational threats. We could replace the Pascal's Mugging scenario with a Pascal's Murder scenario: I don't like Joe, so I go tell the AI running things that Joe has an un-foilable plan to infect the extremely well-defended interstellar internet with a virus that will bring humanity back to the stone ages (and that furthermore, Joe's hacking skills and defenses are so comprehensive that the only way to deal with Joe is to kill him immediately, as opposed to doing further investigation or taking a humane measure like putting him in prison). Joe's life is such an unappreciable dent in the AI's utility function compared to the loss of all civilization that the AI complies with my request. Boom--everyone in society has a button that lets them kill arbitrary people instantly.

It's possible that you could (e.g.) ditch total utilitarianism and construct your utility function in some other clever way to avoid this problem. I'm just trying to demonstrate that it's not obviously a bulletproof solution to the problem.

It seems to me that any reasoning, be it with bounded or unbounded utility, will support avoiding unlikely civilizational threats at the expense of a small number of lives for sufficiently large civilizations. I don't see anything wrong with that (in particular I don't think it leads to mass murder, since that would have a significant utility cost).

There is a different related problem, namely that if the utility function saturates around (say) 10^10 people and our civilization has 10^20 people, then the death of everyone except some 10^15 people will be acceptable to prevent an event killing everyone except some 10^8 people with much lower probability. However, this effect disappears once we sum over all possible universes weighted by the Solomonoff measure, as we should (like done here). Effectively it normalizes the utility function to saturate at the actual capacity of the multiverse.

And the utility function doesn't have to be bounded by a constant. An agent will "blow out its speakers" if it follows a utility function whose dynamic range is greater than the agent's Kolmogorov complexity + the evidence the agent has accumulated in its lifetime. The agent's brain's subjective probabilities will not have sufficient fidelity for such a dynamic utility function to be meaningful.

Super-exponential utility values are OK, if you've accumulated a super-polynomial amount of evidence.

The Solomonoff expectation value of any unbounded computable utility function diverges. This is because the program "produce the first universe with utility > n^2" is roughly of length log n + O(1), therefore it contributes 2^{-log n + O(1)} · n^2 = O(n) to the expectation value.
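Spelling that argument out as a sum (my reconstruction of the parent comment's arithmetic, with n ranging over the utility targets):

```latex
\sum_{n} 2^{-(\log_2 n + O(1))} \cdot n^2
  \;=\; \sum_{n} \Theta\!\left(\frac{n^2}{n}\right)
  \;=\; \sum_{n} \Theta(n)
  \;=\; \infty
```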

...whose dynamic range is greater than the agent's Kolmogorov complexity + ...

Oops, that's not quite right. But I think that something like that is right :-). 15 Quirrell points to whoever formalizes it correctly first.

Because utility functions are not bounded. Without using infinities, could you describe an outcome so bad that it could never get worse?

Without using infinities, could you describe an outcome so bad that it could never get worse?

Doesn't mean my utility function is unbounded; it might have a finite infimum but never attain it.

You are thinking in terms of numbers (never gets worse than -1000), when what matters is outcomes. Your finite infimum would have to represent some event that you could describe or else it would have no meaning in ordinal utility terms. (At least I think this is how it works.)

Why? Suppose my utility function is 1/(number of staples in the universe). The infimum of my utility function would be for there to be infinitely many staples in the universe, which I cannot describe without using infinities.

I think you are correct. Good example.


You don't need infinities for Pascal's mugging to work.

With a bounded utility function U, you never pay the mugger $X if the probability she is telling the truth is below X / (Umax - Umin).
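Spelled out (treating $X as worth roughly X units of utility near your current wealth, which is my own simplifying assumption):

```latex
p \;<\; \frac{X}{U_{\max} - U_{\min}}
\;\Longrightarrow\;
\underbrace{p\,(U_{\max} - U_{\min})}_{\text{largest possible expected gain from paying}}
\;<\; \underbrace{X}_{\text{sure cost of paying}}
```

So even if the mugger offers the best outcome the bounded function can represent, paying has negative expected utility below that threshold.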

(I liked your post, but here's a sidenote...)

It bothers me that we keep talking about preferences without actually knowing what they are. I mean yes, in the VNM formulation a preference is something that causes you to choose one of two options, but we also know that to be insufficient as a definition. Humans have lots of different reasons for why they might choose A over B, and we'd need to know the exact reasons for each choice if we wanted to declare some choices as "losing" and some as "not losing". To use Eliezer's paraphrase, maybe the person in question really likes riding a taxi between those locations, and couldn't in fact use their money in any better way.

The natural objection to this is that in that case, the person isn't "really" optimizing for their location and being irrational about it, but is rather optimizing for spending a lot of time in the taxi and being rational about it. But 1) human brains are messy enough that it's unclear whether this distinction actually cuts reality at the joints; and 2) "you have to look deeper than just their actions in order to tell whether they're behaving rationally or not" was my very point.

Valid point, but do let me take baby steps away from vNM and see where that leads, rather than solving the whole preference issue immediately :-)

That's reasonable. :-)

Unlosing agents, living in a world with extorters, might have to be classically irrational in the sense that they would not give in to threats even when a rational person would. Furthermore, unlosing agents living in a world in which other people can be threatened might need to have an irrationally strong desire to carry out threats, so as not to lose the opportunity of extorting others. These examples assume that others can correctly read your utility function.

Generalizing, an unlosing agent would have an attitude towards threats and promises that maximized his utility given that other people know his attitude towards threats and promises. I strongly suspect that this situation would have multiple equilibria when multiple unlosing agents interacted.

The problem isn't solved for expected utility maximisers. Would unlosing agents be easier to solve?

I don't understand how precisely an unlosing agent is supposed to work, but either it is equivalent to some expected utility maximizer, or it violates at least one of the VNM axioms. Which of these do you expect to be the case? If it is equivalent to some expected utility maximizer, then what do you get by formulating it as an unlosing agent instead? If it violates the VNM axioms, then isn't that a good reason not to be an unlosing agent?

One thing that is clear is that if you faced that decision 3^^^^3 times, each decision independent from the others... then you should pay each time.

If you face that decision 3^^^^3 times and attempt to pay every time, you will almost instantly run out of money and find it physically impossible to pay every time. Unless you start out with $5*3^^^^3, in which case the marginal value of $5 to you is probably so trivially low that it makes sense to pay the mugger even in a one-shot Pascal's mugging. I don't see how the repetition changes anything.

... this would be an expected utility maximiser with a utility that's unbounded....

There is literally no such thing as an expected utility maximiser with unbounded utility. It's mathematically impossible. If your utility is unbounded, then there exist lotteries with infinite expected utility, and then you need some way of comparing series and picking a "bigger" one even though neither one of them converges. If you do that, that makes you something other than an expected utility maximiser, since the expected utility of such lotteries is simply "undefined". This basically comes down to the fact from functional analysis that continuous linear maps on a Banach space are bounded. I have gone over this before, and am disappointed that people are still trying to defend them. It's like saying that it is foolish to ride a horse because unicorns are more awesome, so you should ride one of those instead.

but either it is equivalent to some expected utility maximizer, or it violates at least one of the VNM axioms.

It's equivalent to an expected utility maximiser, but the utility function that it maximises is not determined ahead of time.

If there is more than one utility function that it could end up maximizing, then it is not an expected utility maximizer, because any particular utility function is better maximized by maximizing it directly than by possibly maximizing some other utility function depending on certain circumstances. As an example, suppose you could end up using one of two utility functions: u and v, there are three possible outcomes: X, Y, and Z, and u(X)>u(Y) while v(X)<v(Y). Consider two possible circumstances: 1) You get to choose between X and Y. 2) You get to choose between the lotteries .5X+.5Z and .5Y+.5Z.

If you would end up using u if (1) happens but end up using v if (2) happens, then you violate the independence axiom.
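Written out, the clash with Independence looks like this (same u, v, X, Y, Z as above):

```latex
\text{Independence:}\quad
X \succ Y
\;\iff\;
\tfrac{1}{2}X + \tfrac{1}{2}Z \;\succ\; \tfrac{1}{2}Y + \tfrac{1}{2}Z
```

Using u in case (1) gives X ≻ Y, while using v in case (2) gives ½Y + ½Z ≻ ½X + ½Z; the two choices together contradict the displayed equivalence.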

Here's a better proof of the existence of unlosing agents: http://lesswrong.com/r/discussion/lw/knv/model_of_unlosing_agents/

Relate this to value loading. If the programmer says cake, you value cake; if they say death, you value death. You could see this as choosing between two utilities, or you could see it as having a single utility function where "what the programmer says" strongly distinguishes between otherwise identical universes.

Stimulating post. Thanks!

The justifications for the axioms of expected utility are, roughly: (Completeness) "If you don't decide, you'll probably lose pointlessly." [...] (Independence) "If your choices aren't independent, people can expect to make you lose pointlessly."

Just to record my objections: the axioms of Completeness and Independence are stronger than needed to guard against the threats mentioned.

I probably agree with you, but what's your exact point?

You can avoid losing pointlessly without having complete preference orderings. Having complete preference orderings is unnecessary work. Like Dilbert, I love the sweet smell of unnecessary work!

You can avoid being Dutch Booked without Independence. A sufficient principle to avoid being Dutch Booked is "don't get Dutch Booked." That can be achieved without Independence. For example, people who choose as Allais noted most do in the Allais Paradox thereby violate (the complete axiom set including, it seems especially including) Independence. But they do not thereby get Dutch Booked.

"avoid getting dutch booked" is essentially what unlosing agents do.

And people who make non-independent choices in the Allais paradox will get dutch booked if they remain consistent to their non-independent choices.

Your point about completeness not being needed ahead of time is very valid.

Yes, I like your "unlosing agents" approach a lot. It is more modest than some interpretations of utility, and largely for that reason, a big step in the right direction, in my view.

I disagree that Allais choosers will get Dutch Booked if they remain consistent, unless perhaps you mean "consistent" to build in some very strong set of other axioms of decision theory. They simply make more distinctions among gambles and sequences of gambles than traditional theory allows for. An Allais chooser can reasonably object, for example, that a sequence of choices and randomized events is different from a single choice followed by a single randomized event, even if decision theory treats them as "equivalent".

If you're an active investor, the markets or the universe can punish you for deviating from independence unless you're paying very close attention.

But this is again my general point - the more decisions you have to make (including decisions not to do something), the closer an unlosing agent resembles an expected utility maximiser.

In practice, I think the appeal of the expected utility maximiser is that it is more attractive to philosophers and mathematicians: it involves solving everything perfectly ahead of time, and then everything is implementation. I can see the unlosing agent being more attractive to an engineer, though.

Postulating a utility function makes for cleaner exposition. It probably is more realistic to suppose that one's utility function is only imperfectly known and/or difficult to calculate (at least outside a narrow setting), so some other approach might not be a bad idea.

But if you were sure that you'd face it only a few thousand times, what then? Take a forward-thinking unlosing agent. If it expected that it would get Pascal mugged only a few thousand times, it could perfectly well reject all of them without hesitation (and derive all the advantages of this). If it expected that there was a significant risk of getting Pascal mugged over and over and over again, it would decide to accept.

If it expected a significant risk of getting mugged over and over, it would take its $5*3^^^^3 and build an army capable of utterly annihilating any known mugger.

Let's assume that the odds you assign of the person telling the truth is greater than 1/3^^^^3. One thing that is clear is that if you faced that decision 3^^^^3 times, each decision independent from the others... then you should pay each time. When you aggregate independent decisions, it narrows your total variance, forcing you closer to an expected utility maximiser (see this post).

The odds I assign to the person telling the truth are themselves uncertain. I can't assign odds accurately enough to know if it's 1/3^^^^3, or a millionth of that, or a billion times that.

Now, one typical reply to "the odds I assign are themselves uncertain" is "well, you can still compute an overall probability from that--if you think that the odds are X with probability P1 and the odds are Y with probability 1-P1, you should treat the whole thing as having odds of X·P1 + Y·(1-P1)".

But that reply fails in this situation. In that case, if you face 3^^^^3 such muggings, then if the odds the first time are X, the odds the second time are likely to be X too. In other words, if you're uncertain about exactly what the odds are, the decisions aren't independent and aggregating the decisions doesn't reduce the variance, so the above is correct only in a trivial sense.
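A small simulation of that non-independence point (all numbers made up): if the unknown odds are shared across every repetition, the totals stay spread out in a way that extra repetitions don't wash away, unlike the genuinely independent case.

```python
import random
import statistics

# Made-up numbers: the "true" per-mugging probability is either high or low,
# and in the correlated case that uncertainty is shared across every repetition.
P_HIGH, P_LOW, N, TRIALS = 1e-3, 1e-7, 10_000, 500

def totals(shared_uncertainty):
    results = []
    for _ in range(TRIALS):
        if shared_uncertainty:
            p = P_HIGH if random.random() < 0.5 else P_LOW  # one draw covers all N muggings
            results.append(sum(1 for _ in range(N) if random.random() < p))
        else:
            # each mugging independently redraws the odds (what the "trivial" reply assumes)
            results.append(sum(1 for _ in range(N)
                               if random.random() < (P_HIGH if random.random() < 0.5 else P_LOW)))
    return results

random.seed(0)
for label, shared in (("correlated odds ", True), ("independent odds", False)):
    t = totals(shared)
    print(label, "mean:", statistics.mean(t), "stdev:", round(statistics.pstdev(t), 2))
```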

Furthermore, the problem with most real-life situations that even vaguely resemble a Pascal's mugging is that the probability that the guy is telling the truth depends on the size of the claimed genocide, in ways that have nothing to do with Kolmogorov complexity or anything like that. Precisely because a higher value is more likely to make a naive logician willing to be mugged, a higher value is better evidence for fraud.