Pascal's Mugging, Finite or Unbounded Resources?
This article addresses Pascal's Mugging; for background on the scenario, see here.
I am going to attack the problem by breaking it up into two separate cases, one in which the mugger claims to be from a meta-world with large but finite resources, and one in which the mugger claims to be from a meta-world with unbounded resources. I will demonstrate that in both cases the mugging fails, for different reasons, and argue that much of the appeal of the mugging comes from the conflation of these two cases.
Large but finite resources
In this case, the mugger claims to be from a world with a bounded amount of resources, but still large enough to torture n people, e.g. n = 3^^^^3. I will argue that the prior for such a world should be of the order of 1/n or lower, and in particular not 1/complexity(n). With a prior of 1/n or less, the mugging fails, because no matter how large a number the mugger claims, the likelihood of their claim being true decreases proportionally. Thus there need be no value of n for which the claim is more worrisome than implausible.
We're faced with uncertainty because the world the mugger claims to be from is outside our universe. We have no information on which to base our estimate of its size, other than that it is substantially bigger than our universe (at least in the case of a matrix-like simulating world this is necessary). However, that ignorance is also our strength, because there is a standard prior distribution for complete ignorance of a scale, and that prior is 1/n.
The first reason not to use a complexity prior is that there is simply no reason to use one. What reason is there that a world with a particular finite number of resources would be more likely to be of a concisely describable size? If you were to guess the size of our universe, certainly you might round the number to the nearest power of 10, but not because you think a round number is more likely to be correct. A world of difficult-to-describe size is just as likely to exist as a world of a similar but easily describable size.
A critical point here is that the complexity of a world of size n is itself proportional to n, not to complexity(n). For a computer program to model the behaviour of a world of size n, it does not suffice to generate the number n itself; it needs to model the behaviour of every single one of the n elements that make up the world. Such a program would need memory of size n just to track a single time step. To say that such a world should be given a prior of 1/complexity(n) is to conflate complexity(n) with complexity(world(n)). If AIXI were to consider such a world, it would need to treat that world as having a complexity of n. Otherwise it would be like AIXI measuring the complexity of the size of the program that could generate its inputs, rather than the complexity of the program itself.
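A minimal sketch makes the distinction concrete (the sizes here are hypothetical stand-ins, chosen small enough to run in ordinary memory):

```python
# Describing the *number* n takes only O(log n) bits, but simulating a
# *world* of n elements needs state for every element: O(n) memory for
# even a single time step. complexity(n) != complexity(world(n)).
n = 10**6  # stand-in for an unimaginably larger world size

bits_to_describe_n = n.bit_length()   # just 20 bits to write n down
world_state = bytearray(n)            # a million bytes to hold one state

print(bits_to_describe_n, len(world_state))  # -> 20 1000000
```

The gap only widens: doubling the length of the description of n squares the size of the world, while the memory needed to simulate it grows in lockstep with n itself.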
You may have noticed that the 1/n prior is itself unnormalisable, due to its infinite integral (at both zero and towards infinity in this case). Ignorance priors all have this property of being "improper" priors, which cannot be normalised. They work because once you add a single piece of evidence, the resulting distribution can be normalised. Which raises the question: What is that additional evidence in this case?
Well, in the particular case of a matrix-like simulating world, there is one other piece of knowledge we have: that it is large enough to simulate our universe. Aside from setting a lower bound (which helps with the infinite integral near zero, but not towards infinity), you might then ask: given a world of a particular size, what are the chances that it would simulate a universe of specifically the size of ours? The number of alternatively sized universes it could simulate is proportional to n for sufficiently large n, so the chance of ours being the size it is becomes 1/n. Combined with the ignorance prior you reach 1/n^2, and now you can actually integrate and normalise.
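A numeric sketch of the difference (assuming, for illustration, a lower bound of n0 = 1 on candidate world sizes): the total mass of the 1/n prior keeps growing without limit as larger sizes are included, while the 1/n^2 mass converges, so only the latter can be normalised.

```python
# Sum the prior mass over candidate world sizes n = 1..N.
# The 1/n series diverges (an "improper" prior), while the 1/n^2
# series converges (to pi^2/6 ~= 1.6449) and so can be normalised.
def mass(prior, N, n0=1):
    return sum(prior(n) for n in range(n0, N + 1))

for N in (10**3, 10**6):
    print(f"N={N}: 1/n mass = {mass(lambda n: 1/n, N):.2f}, "
          f"1/n^2 mass = {mass(lambda n: 1/n**2, N):.4f}")
```

The 1/n total climbs by roughly log(1000) ~= 6.9 each time N gains three orders of magnitude, whereas the 1/n^2 total barely moves after the first thousand terms.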
Thus I would conclude that overall the plausibility of the large but finite world of size n which the mugger claims to be from is proportional to 1/n^2, making the desire to pay lower, not higher, as 'n' grows. Note that either of the two arguments here is sufficient for the mugging to fail.
An aside on sufficient evidence
One final aside on this case: in Pascal's Muggle: Infinitesimal Priors and Strong Evidence, Eliezer ridicules the idea of assigning priors this low to an event, on the grounds that it would imply that even compelling evidence to the contrary would be unable to convince you otherwise. However, this is a flat-out misapplication of probability theory.
p(unlikely scenario | extreme evidence) = p(unlikely scenario) * p(extreme evidence | unlikely scenario) / p(extreme evidence)
In order for p(unlikely scenario | extreme evidence) ~= 1 in the face of the prior p(unlikely scenario) ~= 1/3^^^^3, all that's required is p(extreme evidence) ~= 1/3^^^^3 (with p(extreme evidence | unlikely scenario) ~= 1). That is to say, the likelihood of seeing such evidence at all must be low. Forget "no amount of evidence": just one such piece of evidence would be sufficient. All that's required is that the evidence itself is unlikely, and evidence which can only be generated by an unlikely scenario will of course itself be unlikely. As a simple example, imagine I found a method of picking a random integer between 0 and 3^^^^3 (assume for the sake of argument that such a thing were possible). I would correctly assign a probability of 1/3^^^^3 to seeing the number '7'. But if I performed the method and saw the output 7, I wouldn't "fail to consider this sufficient evidence to convince me" that the result was 7. Arguments relating to the bandwidth of our sensory system fail to account for (inefficient) encodings of that information, which may have some configurations with arbitrarily low likelihood.
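The same calculation can be run at a small scale (all numbers here are hypothetical, scaled down from 3^^^^3 to something a computer can represent):

```python
# Bayes' rule applied to a toy version of the argument: a device picks
# an integer uniformly from 0..N-1 and a reliable display shows it.
# The prior on "the result was 7" is a tiny 1/N, yet a single sighting
# of "7" yields a posterior of ~1, because the evidence itself was
# exactly as unlikely a priori as the scenario it confirms.
N = 10**12
p_scenario = 1 / N               # p(result was 7)
p_evidence_given_scenario = 1.0  # the display faithfully shows the result
p_evidence = 1 / N               # p(display shows 7), marginalised a priori

posterior = p_scenario * p_evidence_given_scenario / p_evidence
print(posterior)  # -> 1.0
```

The tiny prior never blocks the update; it is cancelled exactly by the equally tiny marginal probability of the evidence.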
Of course in practice, in these unlikely situations, competing theories that start with "I'm dreaming" or "I'm delusional" may dominate. All scenarios markedly less likely than those have the burden of disproving those possibilities first. But this is not an impossible burden, and is in any case exactly as it should be.
Unbounded resources
I'm going to use access to a machine with unlimited computing resources as my working example here, but I hope that the points translate well enough to other settings. I'm also going to briefly make a distinction between "infinite" and "unbounded": there are infinitely many of something if the cardinality of the set of such things in existence is infinite. There are unboundedly many of something if, for any number n, it would be possible to generate n of those things. Unbounded is a weaker requirement, but is sufficient for this discussion. I make this distinction mostly just to explain why I'm using the term at all (since you might otherwise expect "infinite").
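In programming terms, a generator captures the distinction neatly. A minimal sketch (the naming is purely illustrative):

```python
from itertools import count, islice

# An *unbounded* supply: no infinite collection ever exists in memory,
# yet for any n we can generate n items on demand.
def unbounded_people():  # hypothetical name, for illustration
    for i in count():
        yield f"person_{i}"

n = 1000
first_n = list(islice(unbounded_people(), n))
print(len(first_n))  # -> 1000; any finite prefix is obtainable
```

At no point does an infinite set exist, yet no finite bound constrains how many items can be produced.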
In contrast to the finite resources scenario, in the infinite or unbounded resources scenario I think it's quite correct to say that the difficulty of generating a program that would torture n people is in proportion to the complexity, rather than the scale, of 'n'. Given unlimited resources, the only barrier is writing the program itself, the difficulty of which is barely any more work than required by the definition of complexity.
However, in this scenario, there's no need for a mugger at all! We've mugged ourselves already with our own moral mathematics. The 3^^^^3 people the mugger wishes to torture are utterly insignificant in the face of the 3^^^^^3 people whom we could simulate in paradise, if we outsmart or overpower the mugger and take control of those resources ourselves. Does it sound unlikely that we'd be able to overcome them? Of course, but how unlikely? That probability certainly doesn't scale with the value at stake, so as with the original dilemma, just pick a bigger number if you need to (which you don't, I can assure you: it's big enough).
And yet even that is insufficiently ambitious. I would posit that with unbounded resources available, any course of action we could describe is dominated by an only slightly more complicated but substantially more important alternative. We're frozen into inaction in the face of the utter futility of anything we're even capable of thinking of. And we don't even need a mugger to trigger this catastrophe: so long as we assign a non-zero probability to such a mugging occurring in the future, we should be worrying about it right now.
The point is that in this situation, just paying the mugger and carrying on cannot be the best course of action, because it's not the right choice if they're lying, and if they're not then it's dominated by other much larger considerations. Thus the mugging still fails, not necessarily because of the implausibility of their threat but because of the utter irrelevance of it in the face of unboundedly more important other considerations.
Bounding the unbounded
Although this is tangential to my main point, I will consider how the concept of unbounded resources could be handled. Even though I've demonstrated that the mugging fails, the larger issue of how to reason about the possibility of unbounded resources still seems a little unresolved. Here are a few options, each of which I take seriously but none of which I'm completely convinced of yet. In some cases I also talk about how the resolution impacts the mugging. I'll add that they are not at all mutually exclusive; they could all be valid.
* Ignore the possibility, at least until we actually have to deal with it, which will most likely be never and in any case gives us time to work out the maths in the meantime. A practical if thoroughly unsatisfying solution. A sub-case of this would be to plan to completely reinvent or even abandon quantitative morality in the face of the collapse of quantitative limits. What we replace it with is hard to say without better understanding the nature of the unlimited resources available.
* Ignore the possibility by symmetry. We know nothing about worlds with unbounded resources, so any action we take is just as likely to hurt as to help our chances of utilising them for unbounded good. The question then is whether a mugger as described would be sufficient to break that symmetry. Personally I don't think so, in the same way that I don't think the religions on earth break the symmetry of what a god might want were one to exist: I see no reason to privilege their hypotheses over their negations. Similarly, the threats of a mugger who is clearly psychopathic, and in any case has absolutely no need of my money, may not break the symmetry on what I might expect to happen whether I pay or not. Essentially, I'm saying to trust the mugger no more than you distrust them. Still, even if you accept this claim, it feels a little like dodging the question; it shouldn't be that hard to reformulate the scenario in a way that's sufficient to break the symmetry.
* Assign probability zero to infinite (and unbounded?) hypotheticals. Note that mathematically, something can be "possible" and still have probability 0; one example is the chance of a Real number chosen uniformly at random within (0, 1) being rational. This would be the natural extension of the 1/n prior for resources of scale n. While mathematically plausible and philosophically satisfying, I'm willing to be convinced, but am not yet quite convinced, that this is correct. The trouble I have is that infinite things seem in some ways far less complex than large finite things. Generating an infinite loop is one of the easiest things to program a computer to do. In saying so, though, am I making the same mistake I describe above, conflating complexity(X) with complexity(size(X))? AIXI may consider an unbounded space of programs and unbounded computing resources, but it certainly does not integrate over programs of themselves infinite length (and indeed would get nowhere if it tried). Do unbounded resources correspond to a program of infinite length, or just a finite program running on unbounded hardware? I'm not yet sure either way.
* Fail to lose sleep over it regardless. Personally, I act to optimise my own utility. That utility does honestly consider the utility of others, but it is nonetheless my own. It is also bounded within any time-frame, because there's only so happy or sad I can be, and bounded over time by geometric discounting. Being just my own utility, it's not subject to being multiplied by an arbitrary number of people (and no, I don't care if they're copies of me either). In being bounded, the harsh reality is that there's only so much I can care about the scale of a tragedy before it all just becomes numbers. So call me evil if you like, but either way I'm not motivated to pay, nor, more generally, motivated to worry about the possibility of unbounded resources existing. Of course this doesn't really resolve the mugging itself. You could modify the scenario to replace my having to pay with a small, plausible, but entirely moral threat (e.g. "I'll punch that guy in the face"). I would then be motivated to make the correct moral decision regardless of bounds on my utility (though I suppose my motivation to be correct is itself bounded). It makes me wonder, actually: nobody wants to pay themselves, but how many people actually would pay in this alternative case of an entirely moral trade-off?
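The rationals-in-(0, 1) example from the third option above can be probed numerically. A minimal Monte Carlo sketch (the denominator cap and tolerance are arbitrary choices for illustration):

```python
import random

# Rationals are dense in (0, 1) yet have measure zero: the chance that
# a uniform draw lands within eps of some p/q with small denominator q
# shrinks towards zero with eps, hinting at "possible but probability 0".
def near_small_rational(x, max_q=100, eps=1e-9):
    for q in range(1, max_q + 1):
        p = round(x * q)
        if 0 < p < q and abs(x - p / q) < eps:
            return True
    return False

hits = sum(near_small_rational(random.random()) for _ in range(20_000))
print(hits)  # almost always 0 at this tolerance
```

Tightening eps (or raising the number of draws proportionally) drives the hit rate towards zero, which is the discrete shadow of the measure-zero claim.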
Conclusion
In the finite resources case, the decision to be made is real, and not dominated by unavoidable larger considerations: the scenario itself is reasonable and entirely finite.
In the unbounded resources case, the plausibility of the mugger's threat is only as low as 1/complexity(n), and thus they are able to make a threat which scales faster than its implausibility.
By not making it entirely clear which of these cases is considered, the original presentation of Pascal's Mugging served to generate a scenario which appeared to have the merits of both cases and the weaknesses of neither. However, by separating these two cases it becomes clear that the mugging fails in both, either because of the implausibility of finite but large resources, or the overwhelming, moral-system destroying power of unbounded resources. Although the unbounded resources problem is still unresolved (to my satisfaction at least), any resolution of it would be very likely to also resolve this case of the mugging (or if not then at least change our thinking about it substantially). Thus, in no case is it correct to pay, at least without the mugger providing unimaginably stronger evidence than is presented.
The collapse of our moral systems in the face of unlimited resources may have been the key point Eliezer was making with Pascal's Mugging, and I certainly haven't contradicted that here. But I have, I hope, made it clear that unbounded resources, not just large numbers, are required to do this, and that the hypothetical muggers are the least of our problems in these scenarios.
Justification Through Pragmatism
In this article, I propose a new method of justifying fundamental philosophical assumptions.
The fundamental assumptions on which we base our thinking cannot ever be proven true, as any such proof must rely on our own thinking and so be circular. The proposed alternative is, rather than worrying about whether these assumptions are true at all, to demonstrate that regardless of their truth, there is no conceivable benefit to negating them. This article demonstrates the point by way of a number of examples of such assumptions.
0. Basic Capacity for Reason
Assumption: That one has the most basic ability to understand anything correctly.
Negation: One has no ability to understand anything nor to make any rational decisions whatsoever. One may have a delusion of understanding, but it bears no correlation with truth.
I start at 0 because this is such a fundamental assumption that it sits on a level below everything else. Descartes started by doubting even his own existence, but found a proof of his existence in the very doubt itself. However, he could have gone a step further and doubted his basic ability to even understand that proof. Sure, it seems a compelling argument that you must exist in order to doubt your own existence, but merely seeming compelling doesn't make it true. How do we know for sure that anything at all we think is true?
The fact is, we demonstrably don't. One visible property of the insane is that they often do not know they are insane, and no-one is able to tell them. Every thought we have really could be worthless, and its seeming correspondence with reality a delusion. But if so, well, there's nothing you can really do about it, is there? So why worry about it?
If every thought is meaningless, then it doesn't matter what you think. So you might as well assume that at least some of what you think makes sense. Thus the assumption that we have some basic capacity for reason may be justified pragmatically, without any concern for whether it is even true.
This assumption of basic capacity for reason is absolutely not to be confused with assuming every thought you have to be correct, nor even assuming any particular thought or belief to be correct. By all means question your beliefs and your thought processes. Indeed, choosing not to do so is a good step towards failing to live up to this assumption in the first place. This assumption is simply that such questioning need not go on forever in an endless chain. At some point we have to just accept that at least some of our methods of basic reasoning actually work.
1. Better and Worse
Assumption: There exist experiences which are better and worse than other experiences.
Negation: Every possible experience is identical in merit. Nothing is better, more desirable, preferable or superior to anything else.
I say "experiences" to bring it down to the most fundamental interface, and avoid even implicitly assuming the existence of a real world.
If nothing is better or worse than anything else, in any way, then it fundamentally does not matter what we do. So there cannot be any harm in acting as if better and worse really do exist.
Of course this says nothing about what better and worse actually are, nor even how to go about figuring such a thing out. It's also possible that both the existence and nature of better and worse can be learned through experience, or even that they, or aspects of them, are fundamentally self-evident. So this assumption may or may not be either necessary or helpful, but it remains another good example of a pragmatically justifiable assumption.
2. Future, Causality, Free Will and Control
Assumption: That there are experiences to be had in the future, and that choices we make have some impact on those experiences.
Negation: Either there will not be any future experiences, or there will be but we have no control whatsoever about what they will be.
Yes, this is quite clearly about four assumptions rolled into one - as listed in the name. However, it's really quite difficult to talk about any of them on their own. It's hard to even define any of these without the ones earlier in the list, but each is also somewhat worthless without the next.
Without some sort of control over our future experiences, all of our choices and actions are meaningless. Could time suddenly come to an unexpected halt? Could we simply be riding along on some sort of experience-movie, under the illusion that our minds control the ride? Sure, it's possible, but if we really do have no control over the future, it doesn't really matter what we do. So again, there can be no harm in assuming that we do have control, just in case it is true.
Again, whether this needs to be assumed or can be learned is a separate issue, the point is there's no benefit to removing this assumption so it might just as well be made regardless.
3. Sufficient Information
Assumption: That we are able to determine (using #0), which of the choices (#2) we make will lead to a better (#1) outcome.
Negation: That although we may have motivation and ability to control our future, we have no way to figure out what the right choice actually is.
To explain the need for this, consider the hypothetical universe of the left-handed god (this is not my own original idea, but I have no clue who I've stolen it from, so, well, sorry, whoever you are). In this universe, people who have their left hand raised (and explicitly not their right hand, nor neither hand) when they die go to heaven, and experience an eternity of peace and fulfillment. Everyone else is doomed to an eternity of torment. However, in this universe there is also talk of a right-handed god, a strikingly similar entity but somewhat reflected in nature. The trouble is, the living inhabitants of this universe have no means of determining which of these gods is the true figure. The world has a built-in symmetry about it, and no clue was given.
Such situations, or at least their smaller-scale approximations, can and do occur in life. But in these situations, there's nothing for it but to pick a hand and move on. We might as well assume, though, that not every situation is like that, and figure out and concentrate our efforts on the ones which are not.
Again, note that this assumption is not that we have sufficient information about everything, only that we have sufficient information about something. Distinguishing left-handed gods from solvable dilemmas is still clearly a worthwhile task.
Overall then I have shown four examples of assumptions which may be taken for purely pragmatic reasons, regardless of the actual likelihood of their truth. It is my further view that these assumptions (and indeed possibly even just #0) are sufficient in the sense that no other base assumptions are necessary. But that is a much longer story.