The most common formalizations of Occam's Razor, Solomonoff induction and Minimum Description Length, measure the program size of a computation used in a hypothesis, but don't measure the running time or space requirements of the computation. What if this makes a mind vulnerable to finite forms of Pascal's Wager? A compactly specified wager can grow in size much faster than it grows in complexity. The utility of a Turing machine can grow much faster than its prior probability shrinks.
Consider Knuth's up-arrow notation:
- 3^3 = 3*3*3 = 27
- 3^^3 = (3^(3^3)) = 3^27 = 7625597484987 (a product of twenty-seven 3s)
- 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = 3^(3^(3^(... 7625597484987 times ...)))
In other words: 3^^^3 describes an exponential tower of threes 7625597484987 layers tall. Since this number can be computed by a simple Turing machine, it contains very little information and requires a very short message to describe. This, even though writing out 3^^^3 in base 10 would require enormously more writing material than there are atoms in the known universe (a paltry 10^80).
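For readers who want to poke at these numbers, the up-arrow recursion is short enough to write down directly. This is just an illustrative sketch (the function name `arrow` is mine, not standard); only the smallest inputs are feasible to evaluate:

```python
def arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow notation: a ↑^n b.

    One arrow (n=1) is ordinary exponentiation; each additional
    arrow applies the previous operator b-1 times to a.
    """
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = arrow(a, n - 1, result)
    return result

print(arrow(3, 1, 3))  # 3^3  = 27
print(arrow(2, 2, 3))  # 2^^3 = 2^(2^2) = 16
print(arrow(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
# arrow(3, 3, 3) would be a tower of threes 7625597484987 layers
# tall -- far beyond anything a real machine can evaluate.
```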
Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."
Call this Pascal's Mugging.
"Magic powers from outside the Matrix" are easier said than done - we have to suppose that our world is a computing simulation run from within an environment that can afford simulation of arbitrarily large finite Turing machines, and that the would-be wizard has been spliced into our own Turing tape and is in continuing communication with an outside operator, etc.
Thus the Kolmogorov complexity of "magic powers from outside the Matrix" is larger than the mere English words would indicate. Therefore the Solomonoff-inducted probability, two to the negative Kolmogorov complexity, is exponentially tinier than one might naively think.
But, small as this probability is, it isn't anywhere near as small as 3^^^^3 is large. If you take a decimal point, followed by a number of zeros equal to the length of the Bible, followed by a 1, and multiply this unimaginably tiny fraction by 3^^^^3, the result is pretty much 3^^^^3.
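To make "pretty much 3^^^^3" concrete, one can work in log space. The sketch below assumes a Bible length of roughly four million characters (my figure, for illustration), and uses only 3^^4 - the mere fourth level of the tower defining 3^^^3 - yet the tiny multiplier already vanishes:

```python
import math

# log10 of 3^^4 = 3^(3^^3) = 3^7625597484987.  This is only the
# fourth level of the tower whose height defines 3^^^3.
log10_tower = 7_625_597_484_987 * math.log10(3)

# A probability written as a decimal point, ~4 million zeros, then
# a 1: its log10 is about -4,000,000.
log10_tiny_prob = -4_000_000

log10_product = log10_tower + log10_tiny_prob
print(f"log10(3^^4)              ~ {log10_tower:.6e}")
print(f"log10(tiny prob * 3^^4)  ~ {log10_product:.6e}")
# The two logs agree to about six significant figures: multiplying
# by the Bible-length probability barely dents even this early
# stage of the tower.
```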
Most people, I think, envision an "infinite" God that is nowhere near as large as 3^^^^3. "Infinity" is reassuringly featureless and blank. "Eternal life in Heaven" is nowhere near as intimidating as the thought of spending 3^^^^3 years on one of those fluffy clouds. The notion that the diversity of life on Earth springs from God's infinite creativity, sounds more plausible than the notion that life on Earth was created by a superintelligence 3^^^^3 bits large. Similarly for envisioning an "infinite" God interested in whether women wear men's clothing, versus a superintelligence of 3^^^^3 bits, etc.
The original version of Pascal's Wager is easily dealt with by the gigantic multiplicity of possible gods, an Allah for every Christ and a Zeus for every Allah, including the "Professor God" who places only atheists in Heaven. And since all the expected utilities here are allegedly "infinite", it's easy enough to argue that they cancel out. Infinities, being featureless and blank, are all the same size.
But suppose I built an AI which worked by some bounded analogue of Solomonoff induction - an AI sufficiently Bayesian to insist on calculating complexities and assessing probabilities, rather than just waving them off as "large" or "small".
If the probabilities of various scenarios considered did not exactly cancel out, the AI's action in the case of Pascal's Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.
You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability: Pascal's Mugger is just a philosopher out for a fast buck.
But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not. An AI is not given its code like a human servant given instructions. An AI is its code. What if a philosopher tries Pascal's Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override everything else in the AI's calculations? What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?
How do I know to be worried by this line of reasoning? How do I know to rationalize reasons a Bayesian shouldn't work that way? A mind that worked strictly by Solomonoff induction would not know to rationalize reasons that Pascal's Mugging mattered less than Earth's existence. It would simply go by whatever answer Solomonoff induction obtained.
It would seem, then, that I've implicitly declared my existence as a mind that does not work by the logic of Solomonoff, at least not the way I've described it. What am I comparing Solomonoff's answer to, to determine whether Solomonoff induction got it "right" or "wrong"?
Why do I think it's unreasonable to focus my entire attention on the magic-bearing possible worlds, faced with a Pascal's Mugging? Do I have an instinct to resist exploitation by arguments "anyone could make"? Am I unsatisfied by any visualization in which the dominant mainline probability leads to a loss? Do I drop sufficiently small probabilities from consideration entirely? Would an AI that lacks these instincts be exploitable by Pascal's Mugging?
Is it me who's wrong? Should I worry more about the possibility of some Unseen Magical Prankster of very tiny probability taking this post literally, than about the fate of the human species in the "mainline" probabilities?
It doesn't feel to me like 3^^^^3 lives are really at stake, even at very tiny probability. I'd sooner question my grasp of "rationality" than give five dollars to a Pascal's Mugger because I thought it was "rational".
Should we penalize computations with large space and time requirements? This is a hack that solves the problem, but is it true? Are computationally costly explanations less likely? Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is exponentially cheaper than real quantum physics? Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?
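For concreteness, here is a minimal sketch of one way that hack could be formalized: a Levin-style prior that charges for runtime as well as description length, weighting a hypothesis by 2^-(length + log2(steps)). The specific weighting is one standard choice from the literature, not something this post endorses:

```python
def solomonoff_weight(program_length_bits: int) -> float:
    """Prior weight penalizing description length only."""
    return 2.0 ** -program_length_bits

def levin_weight(program_length_bits: int, steps: int) -> float:
    """Prior weight also charging log2(steps) bits for runtime.

    2^-(length + log2(steps)) simplifies to 2^-length / steps.
    """
    return 2.0 ** -program_length_bits / steps

# A 100-bit program that runs for 2^50 steps: under the runtime
# penalty it weighs exactly as much as a 150-bit instant program.
print(solomonoff_weight(100))       # length penalty only
print(levin_weight(100, 2 ** 50))   # length plus time penalty
```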
Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?
If I could formalize whichever internal criterion was telling me I didn't want this to happen, I might have an answer.
I talked over a variant of this problem with Nick Hay, Peter de Blanc, and Marcello Herreshoff in summer of 2006. I don't feel I have a satisfactory resolution as yet, so I'm throwing it open to any analytic philosophers who might happen to read Overcoming Bias.
This might be overly simplistic, but it seems relevant to consider the probability per murder. I feel a bit of scope insensitivity on that particular probability, as it is far too small for me to intuit directly, so I need to go through the steps.
If someone tells me that they are going to murder one person if I don't give them $5, I have to consider the probability of it: not every attempted murder is successful, after all, and I don't have nearly as much incentive to pay someone if I believe they won't be successful. Further, most people don't actually attempt murder, and the cost to that person of telling me they will murder someone if they don't get $5 is much, much smaller than the cost of actually murdering someone. Consequences usually follow from murder, after all. I also have to consider the probability that this person is insane and doesn't care about the consequences, only about the $5.
Still, only .00496% of people are murdered in a year (according to Wolfram Alpha, at least). And while I would assign a higher probability to a person claiming they will murder someone, it wouldn't jump dramatically - they could be lying, they could try but fail, etc. Even if I let "I will kill someone" be a 90% accurate test with only a 10% false positive rate - which I think is generous in the case of $5 with no additional evidence - the updated probability comes out at only about .045%. Even at 99% accurate with a 1% false positive rate, EXTREMELY generous odds, there is only about a .5% total probability of the murder occurring.
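A quick Bayes update with the figures above (a .00496% base rate, and the threat treated as a test with the stated sensitivity and false-positive rates) can be sketched as follows; the function name `posterior` is mine:

```python
def posterior(prior: float, sensitivity: float, false_pos: float) -> float:
    """P(murder | threat) via Bayes' theorem, treating the threat
    as a diagnostic test for an actual impending murder."""
    return (sensitivity * prior) / (
        sensitivity * prior + false_pos * (1 - prior)
    )

prior = 0.0000496  # 0.00496% of people murdered per year

for sens, fp in [(0.90, 0.10), (0.99, 0.01)]:
    p = posterior(prior, sens, fp)
    print(f"{sens:.0%} test, {fp:.0%} false positives -> "
          f"P(murder | threat) ~ {p:.4%}")
```

With the 90%/10% test the posterior is about .045%, and even the extremely generous 99%/1% test yields only about .5%.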
In reality, I think there would be some evidence in the case of one murder. At the very least I could get strong sociological cues that the person was likely to be telling the truth. However, since I am moving to an end point where they will be killing 3^^^^3 people, I'll leave that aside as it is irrelevant to the end example.
If such a person claimed they would murder 2 people, it would depend on whether I thought the two murders were dependent or independent events: whether his killing one person made it more likely, given the threat in question, that he would kill a second.
Now, if he says he will kill two people, and he kills one, he is unlikely to stop before killing another. BUT, there are more chances for complication or failure, and the cost:benefit for him shrinks by half, making the probability that he manages to or tries to kill anyone smaller. These numbers in reality would be affected by circumstance: it is a lot easier to kill two people with a pistol or a bomb than it is with your bare hands. But since I see no bomb or pistol and he is claiming some mechanism I have no evidence for, we'll ignore that reality for now.
I had trouble finding information on the ratio of double homicides to single homicides to use as a baseline, but it seems likely that the two murders are neither totally dependent nor totally independent. In order to believe the threat credible, I have to believe (after hearing the threat) that they will attempt to kill two people, successfully kill one, AND successfully kill another. And if I put the probability of A+B at about .045%, I can't very well put A+B+C any higher. Since I used a 90% figure for my initial calculation, let's apply it twice: 81%. We'll assume that the false negative rate (he murders people even when he says he won't) stays constant.
This means that each additional murder is slightly more than 90% as likely to occur as the murder before it. Now, these numbers aren't exact, and they get really, really small, so I'm using 3^3 as a reference point.
At 3^3, the cost has gone up 27x if he kills everyone, but the probability of the event has gone down to about .065 of what it was (0.9^26). So the expected cost is something like 1.7x that of a single murder, given what was said above.
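The arithmetic behind that estimate, under the assumed geometric model where each additional murder is 90% as likely as the one before it:

```python
def relative_expected_cost(n: int, r: float = 0.9) -> float:
    """Expected cost of an n-murder threat relative to a single
    murder: the cost scales by n, while the probability scales by
    r**(n-1) (each extra murder is r times as likely as the last)."""
    return n * r ** (n - 1)

for n in [1, 3, 27]:
    print(f"n = {n:>2}: {relative_expected_cost(n):.2f}x a single murder")
# At n = 27 (i.e. 3^3), the probability factor 0.9^26 is about
# .065, so 27 * 0.065 gives roughly 1.7x a single murder.
```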
But all this was dependent on several assumed figures. So at what point does it balance out?
I'm a little tired for doing all the math right now, but some quick work showed that being only 80% sure of the test, with a 10% false positive rate, is enough for the expected cost to peak within a handful of victims and then shrink geometrically. So if I am less than 80% sure of the test of "he says he will murder one person if I don't give him 5 dollars," then I can be sure that the probability that he will kill 3^^^^3 people shrinks far, far faster than the cost grows.
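A quick check of where the expected cost stops growing, under the same geometric model (the per-victim retention rate r is the assumed figure here): the ratio of successive terms of n * r^(n-1) is ((n+1)/n) * r, which drops below 1 once n exceeds r/(1-r), so for any r < 1 the expected cost peaks at some finite n and then decays geometrically - and geometric decay in n crushes the linear factor n long, long before n reaches 3^^^^3.

```python
def peak_n(r: float) -> int:
    """Smallest n from which n * r**(n-1) is strictly decreasing,
    found by checking when the successive-term ratio falls below 1."""
    n = 1
    while (n + 1) / n * r >= 1:
        n += 1
    return n

for r in (0.8, 0.9):
    print(f"r = {r}: expected cost peaks by n = {peak_n(r)}")
# With r = 0.8 the peak comes by n = 5; with r = 0.9, by n = 10.
# Past the peak, every further claimed victim lowers the expected
# cost of the threat.
```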
I'm assuming that I am getting my math right here, and I am quite tired, so if anyone wishes to correct me on some portion of this I would be happy for the criticism.