Many thanks to Unknowns for inventing the scenario that led to this post, and to Wei Dai for helpful discussion.
Imagine you subscribe to the universal prior. Roughly, this means you assign credence 2^-k to each program of length k whose output matches your sensory inputs so far, and 0 to all programs that have failed to match. Does this imply you should assign credence 2^-m to any statement about the universe ("hypothesis") that has length m? Or maybe Kolmogorov complexity m?
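Here's a minimal sketch of that setup, assuming a hand-picked finite set of candidate "programs" (real Solomonoff induction enumerates all programs and is uncomputable; the names, bit-lengths, and output rules below are invented for illustration):

```python
# Toy universal prior over a hand-picked, finite set of "programs".
# Each entry is (length k in bits, function from time step to predicted bit);
# a program's prior weight is 2**-k, and it drops to 0 the moment it
# mispredicts an observation.

candidates = {
    "all_zeros": (5,  lambda i: 0),
    "all_ones":  (5,  lambda i: 1),
    "alternate": (8,  lambda i: i % 2),
    "baroque":   (20, lambda i: (i * i) % 2),  # same predictions as "alternate", longer program
}

def posterior(observations):
    """Renormalized 2^-k weights over the programs matching the data so far."""
    weights = {name: 2.0 ** -k
               for name, (k, f) in candidates.items()
               if all(f(i) == obs for i, obs in enumerate(observations))}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(posterior([0, 1, 0, 1]))
# "alternate" and "baroque" both survive, but the shorter program
# carries almost all of the credence (about 0.9998 vs 0.0002).
```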
The answer is no. Consider the following examples:
1. The complexity of "A and B and C and D" is roughly equal to the complexity of "A or B or C or D", but we know for certain that the former hypothesis can never be more probable than the latter, no matter what A, B, C and D are: every world satisfying the conjunction also satisfies the disjunction (see the short check after this list).
2. The hypothesis "the correct theory of everything is the lexicographically least algorithm with K-complexity 3^^^^3" is quite short, but the universal prior for it is astronomically low: roughly speaking, any world-program satisfying it must itself have length around 3^^^^3, and therefore weight around 2^-3^^^^3.
3. The hypothesis "if my brother's wife's first son's best friend flips a coin, it will fall heads" has quite high complexity, but should be assigned credence 0.5, just like its negation.
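To make the first example mechanical, here's a brute-force check that the conjunction implies the disjunction under every truth assignment, so no probability distribution over worlds can rank it higher:

```python
# Brute-force check over all 16 truth assignments to A, B, C, D:
# the conjunction is true only in worlds where the disjunction is too,
# so under ANY probability distribution over worlds,
# P(A and B and C and D) <= P(A or B or C or D).
from itertools import product

for a, b, c, d in product([False, True], repeat=4):
    assert (a and b and c and d) <= (a or b or c or d)
print("conjunction implies disjunction in every possible world")
```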
Instead, the right way to derive a prior over hypotheses from a prior over predictors is to construct the set of all predictors (world-algorithms) that "match" the hypothesis, and see how "wide" or "narrow" that set is, i.e. how much total weight it receives under the universal prior. There's no necessary connection to the complexity of the hypothesis itself.
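In symbols, a rough sketch (writing |p| for the length of world-program p; the notation here is illustrative, not standard):

$$P(H \mid \text{data}) \;\propto \sum_{\substack{p \text{ matches the data} \\ p \text{ satisfies } H}} 2^{-|p|}$$

The description length of H itself appears nowhere on the right-hand side.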
An exception is if the hypothesis gives an explicit way to construct a predictor that satisfies it. In that case the correct prior for the hypothesis is bounded from below by the "naive" prior implied by its length, roughly 2^-m, so it can't be too low. This isn't true for many interesting hypotheses, though. For example, the words "Islam is true", even expanded into the complete meanings of these words as encoded in human minds, don't offer you a way to implement or predict an omnipotent Allah, so the correct prior value for the Islam hypothesis is not obvious.
This idea may or may not defuse Pascal's Mugging - I'm not sure yet. (Edit: sorry, I was wrong about that; see Spurlock's comment below and my reply.)
It doesn't apply in quite the same way. You would have to be able to assert that there was an equal or greater chance that the mugger would do the opposite of what he says.
If there is a 99% chance (obviously it's much higher, but you see the idea) that he's lying and won't do anything, that still doesn't cancel out the 1% chance that he's telling the truth, because that 1%, multiplied by the disutility to 3^^^3 people (or whatever), still overwhelms everything else. Now if you could say it was equally likely that he would torture those people only if you DID pay him, that would nullify it. But it's not clear that you can do this, because most muggers are not playing tricksy opposite-day games when they threaten you. And if the guy is really evil enough to set up a trick like that, it seems like he'd just go ahead and torture the people without consulting you.
Evidence on the actual tendencies of omnipotent muggers is lacking, but you can at least see why it's not clear that these cancel out.
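To see the cancellation condition concretely, here is a toy expected-utility calculation; all the probabilities are made up, and 10**100 stands in for the unwritably larger 3^^^3:

```python
# Toy expected-utility check for the mugging, with made-up numbers.
# p_truth: credence that the mugger tortures iff you refuse to pay.
# p_opposite: credence that he tortures iff you DO pay (the "opposite-day" case).
# 3^^^3 can't be written down, so 10**100 is a stand-in for the harm.

HUGE = 10**100  # stand-in disutility of torturing 3^^^3 people

def net_loss_of_refusing(p_truth, p_opposite):
    """Expected disutility of refusing minus that of paying (ignoring the $5)."""
    return p_truth * HUGE - p_opposite * HUGE

print(net_loss_of_refusing(0.01, 0.0))     # 1e+98: the 1% chance dominates
print(net_loss_of_refusing(0.01, 0.01))    # 0.0: exact cancellation
print(net_loss_of_refusing(0.01, 0.0099))  # ~1e+96: near-cancellation is still astronomical
```

Only if the two credences were exactly equal would the astronomical stakes drop out of the decision, which is the point being made above.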