I'll keep this quick:

In general, the problem presented by the Mugging is this: As we examine the utility of a given act for each possible world we could be in, in order from most probable to least probable, the utilities can grow much faster than the probabilities shrink. Thus it seems that the standard maxim "Maximize expected utility" is impossible to carry out, since there is no such maximum. When we go down the list of hypotheses, multiplying the utility of the act on each hypothesis by the probability of that hypothesis and summing, the result does not converge to anything.
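For concreteness, here are toy numbers of my own, just to illustrate the divergence: if hypothesis H_n has probability 2^(-n) but the act yields 3^n utils under H_n, then the n-th term of the expected-utility sum is 2^(-n) · 3^n = (3/2)^n, and the sum of (3/2)^n over all n diverges.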

Here's an idea that may fix this:

For every possible world W of complexity N, there's another possible world of complexity N+c that's just like W, except that it has two parallel, identical universes instead of just one. (If it matters, suppose that they are connected by an extra dimension.) (If this isn't obvious, say so and I can explain.)

Moreover, there's another possible world of complexity N+c+1 that's just like W except that it has four such parallel identical universes.

And a world of complexity N+c+X that has R parallel identical universes, where R is the largest number that can be specified in X bits of information. 
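One way to make "the largest number that can be specified in X bits" precise (this is just my attempted formalization, in the spirit of the Busy Beaver function I mention below): R(X) = the largest n such that some program of length at most X bits outputs n on the universal machine underlying the complexity measure.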

So, take any given extreme mugger hypothesis like "I'm a matrix lord who will kill 3^^^^3 people if you don't give me $5." Uncontroversially, the probability of this hypothesis will be something much smaller than the probability of the default hypothesis. Let's be conservative and say the ratio is 1 in a billion. 

(Here's the part I'm not so confident in)

Translating that into hypotheses with complexity values, that means that the mugger hypothesis has about 30 more bits of information in it than the default hypothesis. 
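(Sanity check on that conversion, under the usual convention that each extra bit of description halves the prior probability: 2^30 = 1,073,741,824 ≈ 10^9, so a 1-in-a-billion ratio corresponds to log2(10^9) ≈ 29.9, i.e. about 30 extra bits.)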

So, assuming c is small (and actually I think this assumption can be done away with), there's another hypothesis, as likely as the Mugger hypothesis, which is that you are in a world exactly like the one in the default hypothesis, except that the universe has R duplicates, where R is the largest number we can specify in 30 bits.

That number is very large indeed. (See the Busy Beaver function.) My guess is that it's going to be way way way larger than 3^^^^3. (It takes less than 30 bits to specify 3^^^^3, no?)

So this isn't exactly a formal solution yet, but it seems like it might be on to something. Perhaps our expected utility converges after all.

Thoughts?

(I'm very confused about all this which is why I'm posting it in the first place.)

 


"It takes less than 30 bits to specify 3^^^^3, no?"

That depends on the language you specify it in.

It also depends on the implied probability curve of other things you might specify and the precision you intend to convey. There's no way to distinguish between all the integers up to and including that one using only 30 bits.

Oh, and that's only a counting of identical/fungible things. Specifying the contents of that many variants is HUGE.

Yes, but I don't think that's relevant. Any use of complexity depends on the language you specify it in. If you object to what I've said here on those grounds, you have to throw out Solomonoff, Kolmogorov, etc.

More specifically, it seems that your c must include information about how to interpret the X bits. Right? So it seems slightly wrong to say "R is the largest number that can be specified in X bits of information" as long as c stays fixed. c might grow as the specification scheme changes.

Alternatively, you might just be wrong in thinking that 30 bits are enough to specify 3^^^^3. If c indicates that the number of additional universes is specified by a standard binary-encoded number, 30 bits only gets you about a billion.

It only takes less than 30 bits if your language supports the ^^^^ notation, and that's not standard notation.

True. So maybe this only works in the long run, once we have more than 30 bits to work with.

Let's be conservative and say the ratio is 1 in a billion.

Why?

Why not 1 in 10? Or 1 in 3^^^^^^^^3?

Choosing an arbitrary probability has a good chance of leading us unknowingly into circular reasoning. I've seen too many cases of, for example, Bayesian reasoning about something we have no information about: it went like "assuming the initial probability was x", then some result after a lot of calculation, then a defense of the result as accurate because Bayes' rule was applied, so it must be infallible.

It's arbitrary, but that's OK in this context. If I can establish that this works when the ratio is 1 in a billion, or lower, then that's something, even if it doesn't work when the ratio is 1 in 10.

Especially since the whole point is to figure out what happens when all these numbers go to extremes--when the scenarios are extremely improbable, when the payoffs are extremely huge, etc. The cases where the probabilities are 1 in 10 (or arguably even 1 in a billion) are irrelevant.

See https://arxiv.org/abs/0712.4318 ; you need to formally reply to that.

Update: The conclusion of that article is that the expected utilities don't converge for any utility function that is bounded below by a computable, unbounded utility function. That might not actually be in conflict with the idea I'm grasping at here.

The idea I'm trying to get at here is that maybe even if EU doesn't converge in the sense of assigning a definite finite value to each action, maybe it nevertheless ranks each action as better or worse than the others, by a certain proportion.

Toy model:

The only hypotheses you consider are H_1, H_2, H_3, ... etc. You assign 0.5 probability to H_1, and each H_{N+1} has half the probability of the previous hypothesis, H_N.

There are only two possible actions: A or B. H_1 says that A gives you 2 utils and B gives you 1. Each H_{N+1} says that A gives you 10 times as many utils as it did under the previous hypothesis, H_N, and moreover that B gives you 5 times as many utils as it did under the previous hypothesis, H_N.

In this toy model, expected utilities do not converge, but rather diverge to infinity, for both A and B.

Yet clearly A is better than B...
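Here's a minimal sketch of that toy model in Python (my own illustration; it just transcribes the numbers in the description above):

```python
# P(H_N) = 0.5**N; under H_N, action A pays 2 * 10**(N-1) utils and
# action B pays 1 * 5**(N-1) utils.

def contributions(n):
    """Expected-utility contributions of hypothesis H_n for actions A and B."""
    p = 0.5 ** n
    return p * 2 * 10 ** (n - 1), p * 1 * 5 ** (n - 1)

# A's n-th contribution works out to 5**(n-1) and B's to 0.5 * 2.5**(n-1),
# so both partial sums grow without bound:
partial_a = sum(contributions(n)[0] for n in range(1, 31))
partial_b = sum(contributions(n)[1] for n in range(1, 31))
print(partial_a, partial_b)  # both already astronomically large and still climbing

# ...and yet A's contribution beats B's under every single hypothesis:
print(all(a > b for a, b in map(contributions, range(1, 31))))  # True
```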

I suppose one could argue that the expected utility of both A and B is infinite and thus that we don't have a good reason to prefer A to B. But that seems like a problem with our ability to handle infinity, rather than a problem with our utility function or hypothesis space.

In your example, how much should you spend to choose A over B? Would you give up an unbounded amount of utility to do so?

This was helpful, thanks!

As I understand it, you are proposing modifying the example so that on H_1 through H_N for some N, choosing A gives you less utility than choosing B, but thereafter choosing A is better, because there is some cost you pay for A which is the same in each world.

It seems like the math tells us that any price would be worth it, that we should give up an unbounded amount of utility to choose A over B. I agree that this seems like the wrong answer. So I don't think whatever I'm proposing solves this problem.

But that's a different problem than the one I'm considering. (In the problem I'm considering, choosing A is better in every possible world.) Can you think of a way they might be parallel--any way that the "I give up" which I just said above applies to the problem I'm considering too?

The problem there, and the problem with Pascal's Mugging in general, is that outcomes with a tiny amount of probability dominate the decisions. A could be massively worse than B 99.99999% of the time, and still naive utility maximization says to pick A.

One way to fix it is to bound utility. But that has its own problems.

The problem with your solution is that it's not complete in the formal sense: you can only say some things are better than other things if they strictly dominate them, but if neither strictly dominates the other you can't say anything.

I would also claim that your solution doesn't satisfy framing invariants that all decision theories should arguably follow. For example, what about changing the order of the terms? Let us reframe utility as already multiplied by probability, so we can move stuff around without changing the numbers. E.g. if I say utility 5, p: .01, that really means you're getting utility 500 in that scenario, so it adds 5 to the total in expectation. Now, consider the following utilities:

1 < 2, p: .5
2 < 3, p: .5^2
3 < 4, p: .5^3
...
n < n+1, p: .5^n
...

So if you're faced with choosing between something that gives you the left side or the right side, choose the right side.

But clearly re-arranging terms doesn't change the expected utility, since that's just the sum of all terms. So the above is equivalent to:

1 > 0, p: .5
2 > 0, p: .5^2
3 > 2, p: .5^3
4 > 3, p: .5^4
...
n > n-1, p: .5^n
...

So your solution is inconsistent if it satisfies the invariant of "moving around expected utility between outcomes doesn't change the best choice".
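A small sketch of that rearrangement (my own illustration; the numbers are the expectation contributions listed above, i.e. utility already multiplied by the scenario's probability 0.5^n, truncated to the first N scenarios):

```python
N = 10

left = list(range(1, N + 1))                  # 1, 2, 3, ..., N
right_original = [n + 1 for n in left]        # 2, 3, 4, ..., N+1
# Move the right-hand column down by two scenarios; its first two slots are empty.
right_shifted = [0, 0] + right_original[:-2]  # 0, 0, 2, 3, ..., N-1

# The right-hand side is the same collection of contributions in both framings
# (up to the two terms pushed past the truncation point), so the total expected
# utility is essentially unchanged...
print(sum(right_original), sum(right_shifted))

# ...but the scenario-by-scenario comparison flips:
print(all(r > l for l, r in zip(left, right_original)))  # True: right side dominates
print(all(l > r for l, r in zip(left, right_shifted)))   # True: left side dominates
```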

Again, thanks for this.

"The problem with your solution is that it's not complete in the formal sense: you can only say some things are better than other things if they strictly dominate them, but if neither strictly dominates the other you can't say anything."

As I said earlier, my solution is an argument that in every case there will be an action that strictly dominates all the others. (Or, weaker: that within the set of all hypotheses of complexity less than some finite N, one action will strictly dominate all the others, and that this action will be the same action that is optimal in the most probable hypothesis.) I don't know if my argument is sound yet, but if it is, it avoids your objection, no?

I'd love to understand what you said about re-arranging terms, but I don't. Can you explain in more detail how you get from the first set of hypotheses/choices (which I understand) to the second?

I'd love to understand what you said about re-arranging terms, but I don't. Can you explain in more detail how you get from the first set of hypotheses/choices (which I understand) to the second?

I just moved the right hand side down by two spaces. The sum still stays the same, but the relative inequality flips.

As I said earlier, my solution is an argument that in every case there will be an action that strictly dominates all the others.

Why would you think that? I don't really see where you argued for that, could you point me at the part of your comments that said that?

OH ok I get it now: "But clearly re-arranging terms doesn't change the expected utility, since that's just the sum of all terms." That's what I guess I have to deny. Or rather, I accept that (I agree that EU = infinity for both A and B) but I think that since A is better than B in every possible world, it's better than B simpliciter.

The reshuffling example you give is an example where A is not better than B in every possible world. That's the sort of example that I claim is not realistic, i.e. not the actual situation we find ourselves in. Why? Well, that was what I tried to argue in the OP--that in the actual situation we find ourselves in, the action A that is best in the simplest hypothesis is also better... well, oops, I guess it's not better in every possible world, but it's better in every possible finite set of possible worlds such that the set contains all the worlds simpler than its most complex member.
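To try to state that comparison more precisely (this is just my own attempt at a formalization): say A beats B if, for every finite set S of possible worlds that contains, along with each of its members, every world simpler than that member, Σ_{W∈S} P(W)·U_A(W) > Σ_{W∈S} P(W)·U_B(W).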

I'm guessing this won't be too helpful to you since, obviously, you already read the OP. But in that case I'm not sure what else to say. Let me know if you are still interested and I'll try to rephrase things.

Sorry for taking so long to get back to you; I check this forum infrequently.

Yes, I've read it, but not at the level of detail where I can engage with it. Since it is costly for me to learn the math necessary to figure this out for good, I figured I'd put the basic idea up for discussion first just in case there was something obvious I overlooked.

Edit: OK, now I think I understand it well enough to say how it interacts with what I've been thinking. See my other comment.

If someone makes a claim about modest numbers, there is little need to differentiate between the scenario described and the utterance being evidence that such a scenario actually holds.

To me it's not that there is only a certain amount of threat per letter you can use (where 3^^^3 tries to be efficient), but that the communicative details of the threat lose significance in the limit.

It's about how much credible threat can be conveyed in a speech bubble. And I don't think that has the form of "well, that depends on how many characters the bubble can fit". One does not raise one's threat level by being able to say large numbers and then saying "I am going to hurt you X much". In the limit, an act that would register in my mind as you credibly making such a big threat would hardly be recognised as speaking any more. It's the point at which, instead of making air vibrate, you initiate supernovas to make your point.