Many thanks to Unknowns for inventing the scenario that led to this post, and to Wei Dai for helpful discussion.
Imagine you subscribe to the universal prior. Roughly, this means you assign credence 2^-k to each program of length k whose output matches your sensory inputs so far, and credence 0 to every program that has failed to match. Does this imply you should assign credence 2^-m to any statement about the universe ("hypothesis") that has length m? Or maybe Kolmogorov complexity m?
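The weighting scheme above can be sketched in a few lines. This is a toy illustration only: the "programs" here are hypothetical bit strings paired with hand-picked output sequences, not an actual enumeration of machines on a universal Turing machine.

```python
from fractions import Fraction

# Hypothetical toy "programs": bit strings with hand-picked predicted outputs.
# A program of length k gets prior weight 2^-k; a program whose output
# contradicts the observations so far gets weight 0.
programs = {
    "0": "HHH",    # length 1: predicts the sequence starting "HHH"
    "10": "HTH",   # length 2
    "110": "TTT",  # length 3
}

def posterior(programs, observed):
    weights = {
        p: Fraction(1, 2 ** len(p)) if out.startswith(observed) else Fraction(0)
        for p, out in programs.items()
    }
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# After observing "H", the length-3 program is eliminated and the survivors
# renormalize in proportion to their 2^-k weights:
print(posterior(programs, "H"))  # {"0": 2/3, "10": 1/3, "110": 0}
```

The renormalization step is what "matching your sensory inputs so far" does to the prior: mass from falsified programs flows to the survivors.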
The answer is no. Consider the following examples:
1. The complexity of "A and B and C and D" is roughly equal to the complexity of "A or B or C or D", but we know for certain that the former hypothesis can never be more probable than the latter, no matter what A, B, C and D are.
2. The hypothesis "the correct theory of everything is the lexicographically least algorithm with K-complexity 3^^^^3" is quite short, but the universal prior for it is astronomically low.
3. The hypothesis "if my brother's wife's first son's best friend flips a coin, it will fall heads" has quite high complexity, but should be assigned credence 0.5, just like its negation.
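Example 1 can be checked mechanically. Here is a toy Monte Carlo sketch (random joint distributions of my own choosing, nothing from the post itself) confirming that the conjunction is never more probable than the disjunction, whatever the dependence between A, B, C and D:

```python
import itertools
import random

random.seed(0)

def random_joint():
    # A random probability distribution over the 16 truth assignments of (A, B, C, D).
    w = [random.random() for _ in range(16)]
    s = sum(w)
    return {bits: x / s
            for bits, x in zip(itertools.product([0, 1], repeat=4), w)}

for _ in range(1000):
    joint = random_joint()
    p_and = sum(p for bits, p in joint.items() if all(bits))
    p_or = sum(p for bits, p in joint.items() if any(bits))
    # The conjunction event is a subset of the disjunction event,
    # so this holds for every joint distribution, not just independent ones.
    assert p_and <= p_or
print("held in all 1000 random joint distributions")
```

The inequality is a fact about events, not about description length, which is exactly the point of the example.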
Instead, the right way to derive a prior over hypotheses from a prior over predictors is to construct the set of all predictors (world-algorithms) that "match" the hypothesis, and see how "wide" or "narrow" that set is. There's no direct connection to the complexity of the hypothesis itself.
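This "width" measure can be sketched concretely. Again the world-programs below are hypothetical toy bit strings, but the computation is the one described: a hypothesis' prior is the total weight of the programs satisfying it, not 2^-(length of the statement).

```python
from fractions import Fraction

# Hypothetical toy world-programs (bit strings) with hand-picked outputs;
# each gets weight 2^-length, as in the universal prior.
programs = {"0": "HH", "10": "TH", "110": "HT", "111": "TT"}

def hypothesis_prior(matches):
    # The hypothesis' prior is how "wide" the set of satisfying
    # world-programs is -- the sum of their weights.
    return sum(Fraction(1, 2 ** len(p))
               for p, out in programs.items() if matches(out))

# Two statements of identical length get very different priors,
# because different sets of world-programs satisfy them:
print(hypothesis_prior(lambda out: out[0] == "H"))  # programs "0" and "110"
print(hypothesis_prior(lambda out: out[0] == "T"))  # programs "10" and "111"
```

In this toy world the first hypothesis gets 5/8 and the second 3/8, even though the two statements are equally "complex" as strings.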
An exception is if the hypothesis gives an explicit way to construct a predictor that satisfies it. In that case the correct prior for the hypothesis is bounded from below by the "naive" prior implied by length, so it can't be too low. This isn't true for many interesting hypotheses, though. For example, the words "Islam is true", even expanded into the complete meanings of these words as encoded in human minds, don't offer you a way to implement or predict an omnipotent Allah, so the correct prior value for the Islam hypothesis is not obvious.
This idea may or may not defuse Pascal's Mugging - I'm not sure yet. Edit: sorry, I was wrong about that; see Spurlock's comment and my reply.
Can you elaborate on how it might defuse Pascal's Mugging? It seems the problem there is that, no matter how low your prior, the mugger can just increase the number of victims until the expected utility of paying up overwhelms that of not paying. Hypothesis complexity doesn't seem to enter into it, and even if I were using it to assign a low prior, the mugger could still overcome it.
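The commenter's arithmetic point can be sketched with made-up numbers: for any prior p > 0, there is a finite (if absurd) number of victims N at which the expected loss p * N exceeds any fixed cost of paying up.

```python
import math

def victims_needed(prior, cost_of_paying):
    # Hypothetical illustration: smallest N with prior * N > cost_of_paying.
    # The mugger just names this N (or anything bigger).
    return math.floor(cost_of_paying / prior) + 1

# Even a prior as low as 2^-100 is overwhelmed by a finite claim:
print(victims_needed(2.0 ** -100, cost_of_paying=10))
```

Since the claimed N grows the stated payoff faster than its description length can shrink the prior, no fixed low prior blocks the mugging by itself.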
That said, any solution to the problem (Robin's of course being a good start) is more than welcome.
Isn't the solution the same as for Pascal's Wager? That is, just as Muslim Heaven/Hell cancels out Christian Heaven/Hell, the possibility that hell is triggered if you give in to the mugger cancels out the possibility that the mugger is telling the truth.