Alice: "I just flipped a coin [large number] times. Here's the sequence I got:
(Alice presents her sequence.)
Bob: No, you didn't. The probability of having gotten that particular sequence is 1/2^[large number]. Which is basically impossible. I don't believe you.
Alice: But I had to get some sequence or other. You'd make the same claim regardless of what sequence I showed you.
Bob: True. But am I really supposed to believe you that a 1/2^[large number] event happened, just because you tell me it did, or because you showed me a video of it happening, or even if I watched it happen with my own eyes? My observations are always fallible, and if you make an event improbable enough, why shouldn't I be skeptical even if I think I observed it?
Alice: Someone usually wins the lottery. Should the person who finds out that their ticket had the winning numbers believe the opposite, because winning is so improbable?
Bob: What's the difference between finding out you've won the lottery and finding out that your neighbor is a 500 year old vampire, or that your house is haunted by real ghosts? All of these events are extremely improbable given what we know of the world.
Alice: There's improbable, and then there's impossible. 500 year old vampires and ghosts don't exist.
Bob: As far as you know. And I bet more people claim to have seen ghosts than have won more than 100 million dollars in the lottery.
Alice: I still think there's something wrong with your reasoning here.
Well, Alice -could- produce a random sequence, then crunch algorithms until she finds an encoding methodology that returns a desired output string from that sequence, or run standard encoding methodologies until one returns an interesting-looking (i.e., seemingly non-random) result, such as the aforementioned ASCII binary encoding for "This is God, quit flipping that coin."
Which is to say: assuming somebody produced a sequence of coin flips and presented a binary encoding, under some arbitrary encoding scheme, that yielded the result "This is God, quit flipping that coin.", there should be a way of determining how nonrandom the result actually is. If we took a sequence and decoded it as ASCII, and it produced that phrase, that would be considerably stronger evidence than if we took a sequence, ran it through a database of character encodings, and discovered that it produced that string after we ran it through 7zip, inserted a 1 every third bit and a 0 every eleventh bit, and Base64-decoded the results.
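To make the decoder-search worry concrete, here's a minimal Python sketch (the decoder list, the names, and the toy message are my own illustration, not anything given above): it decodes a single bit string under several candidate schemes. The more schemes you allow yourself to try, the less a legible "hit" should impress you.

```python
def bits_to_ascii(bits: str) -> str:
    """Decode a bit string as 8-bit ASCII; non-printable bytes become '?'."""
    chars = []
    for i in range(0, len(bits) - 7, 8):
        code = int(bits[i:i + 8], 2)
        chars.append(chr(code) if 32 <= code < 127 else "?")
    return "".join(chars)

def bits_inverted(bits: str) -> str:
    # Flip every bit, then decode as ASCII.
    return bits_to_ascii("".join("1" if b == "0" else "0" for b in bits))

def bits_reversed(bits: str) -> str:
    # Read the sequence back-to-front, then decode as ASCII.
    return bits_to_ascii(bits[::-1])

# A short message encoded as ASCII bits, standing in for Alice's sequence.
sequence = "".join(format(ord(c), "08b") for c in "This is God")

decoders = {"ascii": bits_to_ascii, "inverted": bits_inverted, "reversed": bits_reversed}
for name, decode in decoders.items():
    print(f"{name:>8}: {decode(sequence)!r}")

# Only the direct ASCII decoding produces a striking hit here. If we had
# searched thousands of decoders, finding *some* legible output somewhere
# would be far less surprising, so the evidence must be discounted by the
# size of the search.
```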
The Solomonoff prior of a given random sequence should be, on average, the same as the odds of getting that particular sequence if it were genuinely random (since whatever probability mass is assigned to cheating is taken away from the fair-coin hypothesis, and the number of possible sequences is the same either way). The odds of getting -some- sequence of length N, given that you flip a coin N times, are approximately 1, barring situations where the coin spontaneously stops existing or whatnot. A Solomonoff prior for a truly random number doesn't resolve the dilemma that a given apparently random sequence is an incredibly unlikely outcome, and, by naive analysis, it's -always- more likely that Alice cheated than that this particular sequence came up by chance.
Which is to say, I'm trying to distinguish between the probability of an event occurring and whether that event's occurrence is actually surprising. This is probably related to Kolmogorov complexity, since surprisingness seems like it should be roughly proportional to its inverse: a highly compressible sequence from a supposedly fair coin is surprising, while an incompressible one is not.
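As a sanity check on that naive analysis (the 1-in-1000 prior on cheating below is a number I made up purely for illustration), here's the comparison in Python:

```python
from fractions import Fraction

n = 30                        # number of flips, kept small so the numbers print nicely
p_seq = Fraction(1, 2**n)     # probability of one specific fair sequence
print(p_seq)                  # 1/1073741824
print(p_seq * 2**n)           # 1 -- getting *some* length-n sequence is certain

p_cheat = Fraction(1, 1000)   # assumed prior that Alice cheated at all
print(p_cheat > p_seq)        # True -- the naive comparison always favors cheating
```

The reply below points out why this is the wrong comparison.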
Why? I agree that Pr(Alice cheated) is likely higher than Pr(these coin-flip results on this occasion), but that's the wrong comparison. Pr(Alice cheated to produce these coin-flip results) is typically about 2^-n times Pr(Alice cheated), and in particular is generally smaller than Pr(Alice got these coin-flip results fairly).
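Concretely, with the same made-up 1-in-1000 prior, and my simplifying assumption that a cheater would be equally likely to fabricate any of the 2^n sequences:

```python
from fractions import Fraction

n = 30
p_cheat = Fraction(1, 1000)                      # assumed prior that Alice cheated
p_fair = 1 - p_cheat

p_fair_and_these = p_fair * Fraction(1, 2**n)    # Pr(fair AND these exact results)
p_cheat_and_these = p_cheat * Fraction(1, 2**n)  # Pr(cheated AND these exact results)

print(p_fair_and_these > p_cheat_and_these)      # True -- honest flipping wins
```

The comparison only flips for sequences a cheater would preferentially produce, such as the low-complexity ASCII message discussed above.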