The most common formalizations of Occam's Razor, Solomonoff induction and Minimum Description Length, measure the program size of a computation used in a hypothesis, but don't measure the running time or space requirements of the computation. What if this makes a mind vulnerable to finite forms of Pascal's Wager? A compactly specified wager can grow in size much faster than it grows in complexity. The utility of a Turing machine can grow much faster than its prior probability shrinks.
Consider Knuth's up-arrow notation:
- 3^3 = 3*3*3 = 27
- 3^^3 = (3^(3^3)) = 3^27 = 3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3 (27 threes) = 7625597484987
- 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = 3^(3^(3^(... 7625597484987 times ...)))
In other words: 3^^^3 describes an exponential tower of threes 7625597484987 layers tall. Since this number can be computed by a simple Turing machine, it contains very little information and requires a very short message to describe. This, even though writing out 3^^^3 in base 10 would require enormously more writing material than there are atoms in the known universe (a paltry 10^80).
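To make "computed by a simple Turing machine" concrete, here is a minimal sketch of the recursion (the function name and test values are mine; the definition is Knuth's standard one):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^..^ b with n arrows; n = 1 is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

assert up_arrow(3, 1, 3) == 27             # 3^3
assert up_arrow(3, 2, 3) == 7625597484987  # 3^^3 = 3^27
# up_arrow(3, 3, 3) is 3^^^3: the recursion is correct, but the result
# is far too large to ever be computed or stored, so don't run it.
```

The entire generator of 3^^^3 fits in a few lines; that is the sense in which the number "contains very little information."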
Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."
Call this Pascal's Mugging.
"Magic powers from outside the Matrix" are easier said than done - we have to suppose that our world is a computing simulation run from within an environment that can afford simulation of arbitrarily large finite Turing machines, and that the would-be wizard has been spliced into our own Turing tape and is in continuing communication with an outside operator, etc.
Thus the Kolmogorov complexity of "magic powers from outside the Matrix" is larger than the mere English words would indicate. Therefore the Solomonoff-inducted probability, two to the negative Kolmogorov complexity, is exponentially tinier than one might naively think.
But, small as this probability is, it isn't anywhere near as small as 3^^^^3 is large. If you take a decimal point, followed by a number of zeros equal to the length of the Bible, followed by a 1, and multiply this unimaginably tiny fraction by 3^^^^3, the result is pretty much 3^^^^3.
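To see why in rough numbers (a back-of-the-envelope sketch; the Bible's length here is an assumed figure):

```python
import math

# Assume the Bible is ~4 million characters, so the tiny fraction is
# 10**(-4_000_000): a decimal point, 4 million zeros, then a 1.
penalty_digits = 4_000_000

# 3^^^^3 can't be constructed, so use a stand-in that is already
# hyper-astronomical: 3^(3^27) = 3^7625597484987, a tower of only
# four 3s. Its decimal expansion has 7625597484987 * log10(3) digits.
stand_in_digits = 7_625_597_484_987 * math.log10(3)

print(f"{stand_in_digits:.3e}")                   # about 3.6 trillion digits
print(f"{stand_in_digits - penalty_digits:.3e}")  # still about 3.6 trillion
# Multiplying by the tiny fraction deletes 4 million digits from a
# 3.6-trillion-digit number; the product is "pretty much" the number.
```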
Most people, I think, envision an "infinite" God that is nowhere near as large as 3^^^^3. "Infinity" is reassuringly featureless and blank. "Eternal life in Heaven" is nowhere near as intimidating as the thought of spending 3^^^^3 years on one of those fluffy clouds. The notion that the diversity of life on Earth springs from God's infinite creativity, sounds more plausible than the notion that life on Earth was created by a superintelligence 3^^^^3 bits large. Similarly for envisioning an "infinite" God interested in whether women wear men's clothing, versus a superintelligence of 3^^^^3 bits, etc.
The original version of Pascal's Wager is easily dealt with by the gigantic multiplicity of possible gods, an Allah for every Christ and a Zeus for every Allah, including the "Professor God" who places only atheists in Heaven. And since all the expected utilities here are allegedly "infinite", it's easy enough to argue that they cancel out. Infinities, being featureless and blank, are all the same size.
But suppose I built an AI which worked by some bounded analogue of Solomonoff induction - an AI sufficiently Bayesian to insist on calculating complexities and assessing probabilities, rather than just waving them off as "large" or "small".
If the probabilities of various scenarios considered did not exactly cancel out, the AI's action in the case of Pascal's Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.
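As a toy model with made-up numbers (exact rationals, since probabilities this small underflow ordinary floats to zero):

```python
from fractions import Fraction

p_pay_saves = Fraction(2, 10 ** 100)  # hypothetical: paying averts the deaths
p_pay_dooms = Fraction(1, 10 ** 100)  # hypothetical: paying somehow causes them
lives = 10 ** 1000                    # stand-in for 3^^^^3, which is far too
                                      # large to construct
everything_else = 10 ** 10            # utility of the "mere Earth"

# The tiny differential, times a vast enough payoff, swamps Earth:
eu_of_paying = (p_pay_saves - p_pay_dooms) * lives - 5  # minus five dollars
print(eu_of_paying > everything_else)  # True
```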
You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability: Pascal's Mugger is just a philosopher out for a fast buck.
But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not. An AI is not given its code like a human servant given instructions. An AI is its code. What if a philosopher tries Pascal's Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override everything else in the AI's calculations? What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?
How do I know to be worried by this line of reasoning? How do I know to rationalize reasons a Bayesian shouldn't work that way? A mind that worked strictly by Solomonoff induction would not know to rationalize reasons that Pascal's Mugging mattered less than Earth's existence. It would simply go by whatever answer Solomonoff induction obtained.
It would seem, then, that I've implicitly declared my existence as a mind that does not work by the logic of Solomonoff, at least not the way I've described it. What am I comparing Solomonoff's answer to, to determine whether Solomonoff induction got it "right" or "wrong"?
Why do I think it's unreasonable to focus my entire attention on the magic-bearing possible worlds, faced with a Pascal's Mugging? Do I have an instinct to resist exploitation by arguments "anyone could make"? Am I unsatisfied by any visualization in which the dominant mainline probability leads to a loss? Do I drop sufficiently small probabilities from consideration entirely? Would an AI that lacks these instincts be exploitable by Pascal's Mugging?
Is it me who's wrong? Should I worry more about the possibility of some Unseen Magical Prankster of very tiny probability taking this post literally, than about the fate of the human species in the "mainline" probabilities?
It doesn't feel to me like 3^^^^3 lives are really at stake, even at very tiny probability. I'd sooner question my grasp of "rationality" than give five dollars to a Pascal's Mugger because I thought it was "rational".
Should we penalize computations with large space and time requirements? This is a hack that solves the problem, but is it true? Are computationally costly explanations less likely? Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is exponentially cheaper than real quantum physics? Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?
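One way to make the hack concrete (a sketch in the spirit of Levin's Kt complexity and Schmidhuber's speed prior; nothing here commits to it being the right fix):

```python
import math

# Charge a hypothesis for the log of its running time as well as for
# its program length: log2-prior ~ -(|program| + log2(time)).
def penalized_log2_prior(program_bits: int, time_steps: int) -> float:
    return -(program_bits + math.log2(time_steps))

print(penalized_log2_prior(1000, 10 ** 9))  # short program, long run

# The prior now scales as 1/time. A hypothesis that must actually
# simulate 3^^^^3 people needs at least ~3^^^^3 steps, so its prior
# shrinks about as fast as the claimed utility grows - which is why
# the hack defuses the mugging, whether or not it is true.
```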
Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?
If I could formalize whichever internal criterion was telling me I didn't want this to happen, I might have an answer.
I talked over a variant of this problem with Nick Hay, Peter de Blanc, and Marcello Herreshoff in summer of 2006. I don't feel I have a satisfactory resolution as yet, so I'm throwing it open to any analytic philosophers who might happen to read Overcoming Bias.
Regarding the comments about exploding brains, it's a wonder to me that we are able to think about these issues and not lose our sanity. How is it that a brain evolved for hunting/gathering/socializing is able to consider these problems at all? Not only that, but we seem to have some useful intuitions about these problems. Where on Earth did they come from?
Nick> Does your proposal require that one accepts the SIA?
Yes, but using a complexity-based measure as the anthropic probability measure implies that the effect of the SIA (the Self-Indication Assumption) is limited. For example, consider two universes, the first with 1 observer and the second with 2. If all of the observers have the same complexity, you'd assign a higher prior probability (i.e., 2/3) to being in the second universe. But if the second universe has an infinite number of observers, the sum of their measures can't exceed the measure of the universe as a whole, so the "presumptuous philosopher" problem is not too bad.
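In toy numbers (the equal per-observer measure m is an assumption of the example):

```python
from fractions import Fraction

m = Fraction(1, 8)           # hypothetical per-observer measure
universe_1 = 1 * m           # one observer
universe_2 = 2 * m           # two observers
print(universe_2 / (universe_1 + universe_2))  # 2/3

# With infinitely many observers, the per-observer measures must still
# sum to no more than the measure of the universe as a whole, which is
# what keeps the "presumptuous philosopher" effect bounded.
```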
Nick> If I understand your suggestion correctly, you propose that the same anthropic probability measure should also be used as a measure of moral importance.
Yes, in fact I think there are good arguments for this. If you have an anthropic probability measure, you can argue that it should also be used as the measure of moral importance, since everyone would prefer that arrangement from behind the veil of ignorance. On the other hand, if you have a measure of moral importance, you can argue that for decisions not involving externalities, the globally best outcome can be obtained if people use that measure as the anthropic probability measure and simply consider their own self-interest.
BTW, when using both anthropic reasoning and moral discounting, it's easy to accidentally apply the same measure twice. For example, suppose the two universes both have 1 observer each, but the observer in the second universe has twice the measure of the one in the first universe. If you're asked to guess which universe you're in with some payoff if you guess right, you don't want to think "There's 2/3 probability that I'm in the second universe, and the payoff is twice as important if I guess 'second', so the expected utility of guessing 'second' is 4 times as much as the EU of guessing 'first'."
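The trap in numbers (a sketch; the measures are the ones stipulated above):

```python
from fractions import Fraction

m1, m2 = Fraction(1), Fraction(2)  # second observer has twice the measure
total = m1 + m2

# Use the measure once, as the anthropic probability:
eu_first = (m1 / total) * 1        # 1/3, with payoff 1 for a right guess
eu_second = (m2 / total) * 1       # 2/3

# The mistake is applying it a second time as moral weight:
bad_first = (m1 / total) * m1      # 1/3 * 1
bad_second = (m2 / total) * m2     # 2/3 * 2
print(bad_second / bad_first)      # 4 - the spurious "4 times" factor
```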
I think that to avoid this kind of confusion and other anthropic reasoning paradoxes (see http://groups.google.com/group/everything-list/browse_frm/thread/dd21cbec7063215b/), it's best to consider all decisions and choices from a multiversal objective-deterministic point of view. That is, when you make a decision between choices A and B, you should think "would I prefer if everyone in my position (i.e., having the same perceptions and memories as me) in the entire multiverse chose A or B?" and ignore the temptation to ask "which universe am I likely to be in?".
But that may not work unless you believe in a Tegmarkian multiverse. If you don't, you may have to use both anthropic reasoning and moral discounting, being very careful not to double-count.
To be fair, humans are surrounded by thousands of other species that evolved under the same circumstances and can't consider these problems at all.