I can't seem to get my head around a simple issue of judging probability. Perhaps someone here can point to an obvious flaw in my thinking.
Let's say we have a binary generator: a machine that, on request, outputs a sequence of ones and zeros according to some internally encapsulated rule (deterministic or probabilistic). All binary generators look alike from the outside, so you can only infer the rule (probabilistically) by looking at the output.
You have two binary generators, A and B. One of them is a true random generator (a fair coin tosser). The other is a biased random generator: stateless (each digit is generated independently of the digits before it), with the probability of outputting zero, p(0), somewhere between zero and one but NOT 0.5; let's say it is uniformly distributed over [0, 0.5) ∪ (0.5, 1]. At this point, the chance that A is the true random generator is 50%.
Now you read the first ten digits output by each machine. Machine A outputs 0000000000. Machine B outputs 0010111101. Knowing this, is the probability that machine A is the true random generator now less than 50%?
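(To make the setup concrete, I think my question amounts to the following calculation; this is only a sketch, and it assumes a uniform prior on p(0) over the whole interval, since excluding the single point 0.5 makes no difference to the integral.)

```python
from math import comb

# Observed outputs
seq_a = "0000000000"
seq_b = "0010111101"

def fair_likelihood(seq):
    """P(seq | fair generator): each digit is 0 or 1 with probability 1/2."""
    return 0.5 ** len(seq)

def biased_likelihood(seq):
    """P(seq | biased generator), averaging over p(0) ~ Uniform[0, 1].

    For a sequence of n digits containing k zeros:
        integral_0^1 p^k (1 - p)^(n - k) dp = 1 / ((n + 1) * C(n, k))
    (a Beta integral; removing the single point p = 0.5 changes nothing).
    """
    n, k = len(seq), seq.count("0")
    return 1.0 / ((n + 1) * comb(n, k))

# Hypothesis H: A is the fair generator and B the biased one (prior 50%);
# not-H is the reverse assignment.
like_h     = fair_likelihood(seq_a) * biased_likelihood(seq_b)
like_not_h = biased_likelihood(seq_a) * fair_likelihood(seq_b)

print(f"P(A is the true random generator | data) = {like_h / (like_h + like_not_h):.4f}")
```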
My intuition says yes.
But the probability that a true random generator will output 0000000000 should be the same as the probability that it will output 0010111101, because all sequences of equal length are equally likely. The biased random generator is also just as likely to output 0000000000 as it is 0010111101.
So there seems to be no reason to think that a machine outputting a run of zeros of any length is any more likely to be the biased stateless random generator than the true random generator.
I know that you can never be certain that a generator is truly random. But surely you can statistically distinguish a fair generator from a biased one?
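(By "statistically distinguish" I mean something like the sketch below: an exact two-sided binomial test on the zero count of a long output. The 1000-digit / 600-zeros figures are made-up numbers, purely for illustration.)

```python
from math import comb

def two_sided_binomial_pvalue(k, n, p=0.5):
    """Exact two-sided binomial test: probability, under the 'fair' hypothesis,
    of a zero count at least as improbable as the observed k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-9))

# Made-up illustration: 1000 digits, 600 of which are zeros.
n_digits, n_zeros = 1000, 600
print(f"p-value under 'fair generator': {two_sided_binomial_pvalue(n_zeros, n_digits):.2e}")
```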
We can test the algorithms we use to predict which atom will decay and when. The variables are part of the theory, not of the atoms. The absence of hidden variables effectively means that there is no regularity from which we could infer a law predicting the state of an arbitrary system at time t1 with certainty* from observations made at time t0 < t1. Nevertheless, any selected atom* either decays by a given time or it doesn't, and afterwards we can observe which was the case. Bayesianism doesn't prohibit updating our beliefs about events after those events have happened; in fact, it says nothing about time at all. The "inherent randomness" of radioactive decay doesn't make the uncertainty non-Bayesian in any meaningful way.
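(As an illustration of "the variables are part of the theory": under an exponential decay-time model we can update a belief about the decay constant in the ordinary Bayesian way, even though each individual decay is irreducibly random. The numbers and the conjugate Gamma prior below are illustrative assumptions, not a claim about any particular isotope.)

```python
import random

# Purely illustrative: exponential decay-time model with a conjugate
# Gamma(shape, rate) prior on the decay constant lambda.  With n observed
# decay times summing to T, the posterior is Gamma(shape + n, rate + T).
random.seed(0)

true_lambda = 0.3                                  # hypothetical decay constant
decay_times = [random.expovariate(true_lambda) for _ in range(50)]

shape0, rate0 = 1.0, 1.0                           # weak prior
shape_post = shape0 + len(decay_times)
rate_post = rate0 + sum(decay_times)

print(f"posterior mean of lambda: {shape_post / rate_post:.3f}  (simulated 'true' value: {true_lambda})")
```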
That said, I am afraid we may start to argue over the silly problem of future contingents and over definitions in general. The right question to ask now is: why do you want to distinguish truly random numbers from apparently random ones? The answer to the question about the quality of quantum randomness may depend on that purpose.
*) Although I know that certainty is impossible to achieve and atoms are indistinguishable, I have formulated the sentences this way for the sake of brevity.