Many thanks to Unknowns for inventing the scenario that led to this post, and to Wei Dai for helpful discussion.
Imagine you subscribe to the universal prior. Roughly, this means you assign credence 2^-k to each program of length k whose output matches your sensory inputs so far, and 0 to every program that has failed to match. Does this imply you should assign credence 2^-m to any statement about the universe ("hypothesis") that has length m? Or maybe Kolmogorov complexity m?
The answer is no. Consider the following examples:
1. The complexity of "A and B and C and D" is roughly equal to the complexity of "A or B or C or D", but we know for certain that the former hypothesis can never be more probable than the latter, no matter what A, B, C and D are.
2. The hypothesis "the correct theory of everything is the lexicographically least algorithm with K-complexity 3^^^^3" is quite short, but the universal prior for it is astronomically low.
3. The hypothesis "if my brother's wife's first son's best friend flips a coin, it will fall heads" has quite high complexity, but should be assigned credence 0.5, just like its negation.
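Point 1 can be checked mechanically in a toy model. The sketch below uses a made-up weighting over bitstring "worlds" as a stand-in for the universal prior (the weighting and the particular predicates A, B, C, D are illustrative assumptions, not anything from the post); it verifies that the conjunction's probability mass can never exceed the matching disjunction's.

```python
from itertools import product

# Toy "universal prior" over worlds: each world is a bitstring of length
# 1..N, weighted 2^(-2*len) so the total mass is finite. This is an
# illustrative choice, not the real Solomonoff measure.
def worlds(max_len=8):
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** (-2 * n)

# Four arbitrary sub-hypotheses of roughly similar descriptive complexity.
A = lambda w: w.startswith("1")
B = lambda w: w.endswith("0")
C = lambda w: "11" in w
D = lambda w: len(w) % 2 == 0

def prob(event):
    # Probability of an event = total weight of the worlds satisfying it.
    return sum(p for w, p in worlds() if event(w))

p_and = prob(lambda w: A(w) and B(w) and C(w) and D(w))
p_or  = prob(lambda w: A(w) or B(w) or C(w) or D(w))
print(p_and <= p_or)  # the conjunction never outweighs the disjunction
```

The inequality holds by construction: every world satisfying the conjunction also satisfies the disjunction, whatever A, B, C and D are.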
Instead, the right way to derive a prior over hypotheses from a prior over predictors is to construct the set of all predictors (world-algorithms) that "match" the hypothesis, and see how "wide" or "narrow" that set is. There's no necessary connection to the complexity of the hypothesis itself.
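That recipe can be sketched in a few lines, assuming a toy space of "predictors" (bitstrings with a made-up 2^(-2·len) weighting, not the real Solomonoff measure): the prior of a hypothesis is just the total weight of the predictors matching it, regardless of how long or short its statement is.

```python
from itertools import product

# Toy stand-in for the space of world-programs: bitstrings up to length N,
# each weighted 2^(-2*len). Illustrative normalization only.
def predictors(max_len=10):
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def weight(p):
    return 2.0 ** (-2 * len(p))

def hypothesis_prior(matches):
    """Prior of a hypothesis = total weight of the predictors that match it.
    The statement of the hypothesis can be short or long; only the "width"
    of this matching set matters."""
    return sum(weight(p) for p in predictors() if matches(p))

# A hypothesis matched by almost every predictor gets a high prior...
broad = hypothesis_prior(lambda p: p != "0110100110")
# ...while one matched by a single predictor gets a tiny prior, even
# though both hypotheses are specified by strings of similar length.
narrow = hypothesis_prior(lambda p: p == "0110100110")
print(broad > narrow)
```

The two hypotheses here have nearly identical specification length, but their priors differ by many orders of magnitude, which is the point of the examples above.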
An exception is if the hypothesis gives an explicit way to construct a predictor that satisfies it. In that case the correct prior for the hypothesis is bounded from below by the "naive" prior implied by length, so it can't be too low. This isn't true for many interesting hypotheses, though. For example the words "Islam is true", even expanded into the complete meanings of these words as encoded in human minds, don't offer you a way to implement or predict an omnipotent Allah, so the correct prior value for the Islam hypothesis is not obvious.
This idea may or may not defuse Pascal's Mugging - I'm not sure yet. Sorry, I was wrong about that; see Spurlock's comment and my reply.
In other words, the universal prior assigns probability to points, not events. The probability of an event is the sum over its elements, which is not related to the complexity of the event's specification (and hypotheses are usually events, not points, especially in the context of Bayesian updating, which is why high-complexity hypotheses can have high probability under the universal prior).
For an explicit example of a high-complexity event with high prior, let the one-element event S have very high Kolmogorov complexity, so that it contains a single program assigned very low universal prior. Then not-S is an event with about the same K-complexity as S, but one that is assigned near-certain probability by the universal prior (because it includes every possible program except the one high-complexity program in S).
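A toy computation makes this concrete (again using a hypothetical 2^(-2·len) weighting over bitstring programs, not the real measure): the singleton S gets negligible mass, while its complement gets nearly all of it.

```python
from itertools import product

# Toy measure over programs: bitstrings of length 1..12, weight 2^(-2*len),
# so the total mass is close to (but under) 1. Illustrative only.
def universal_mass(event):
    total = 0.0
    for n in range(1, 13):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if event(p):
                total += 2.0 ** (-2 * n)
    return total

# Stand-in for a single "high-complexity" program: any fixed long string.
s = "110100100010"

p_S     = universal_mass(lambda p: p == s)  # one point: tiny mass
p_not_S = universal_mass(lambda p: p != s)  # everything else: near 1
print(p_S < 1e-6 and p_not_S > 0.99)
```

Specifying S and specifying not-S take essentially the same number of bits, yet the two events sit at opposite ends of the probability scale.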
The problem with your example is that 'not-S' is not an event; it's a huge set of events. It's like talking about the integer 452 and the 'integer' not-452.
The beliefs corresponding to the events within that set can be extremely simple. For example, if S is described by the bits 1001001, the beliefs corresponding to not-S would be "the first bit is 0", or "the second bit is 1", or "the third bit is 1", and so forth.