One good property for a logical predictor (a Turing machine which tries to guess the output of an environment that is too complex to simulate) is that it does asymptotically at least as well as any other logical predictor in some class.

We say that a logical predictor $M$ dominates all the logical predictors in a set $S$ if for every $M'$ in $S$, we get $\liminf_{n\to\infty} \frac{P_M(n)}{P_{M'}(n)} > 0$, where $P_M(n)$ is the probability that $M$ and the environment agree on the first $n$ bits. That is to say, in the limit, the probability $M$ assigns to the correct environment is at least a constant times the probability that $M'$ assigns to the correct environment.
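To make the quantity $P_M(n)$ concrete, here is a minimal sketch, assuming (as an illustration only; the predictors below are hypothetical, not from the post) that a predictor emits an independent probability for each bit:

```python
def agreement_prob(pred_probs, env_bits):
    """P_M(n): probability that the predictor's sampled output matches the
    environment's first n bits, assuming independent per-bit predictions.
    pred_probs[i] is the probability the predictor assigns to bit i being 1."""
    p = 1.0
    for q, b in zip(pred_probs, env_bits):
        p *= q if b == 1 else (1.0 - q)
    return p

# Toy example: the environment's first 4 bits, a confident predictor M,
# and a uniform guesser M_prime.
env     = [1, 0, 1, 1]
M       = [0.9, 0.1, 0.9, 0.9]   # assigns 0.9 to the true value each time
M_prime = [0.5, 0.5, 0.5, 0.5]   # guesses randomly

print(agreement_prob(M, env))        # 0.9**4, about 0.656
print(agreement_prob(M_prime, env))  # 0.5**4 = 0.0625
```

Here $M$ dominates $\{M'\}$, since the ratio $P_M(n)/P_{M'}(n)$ actually grows rather than shrinking to zero.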

Theorem: It is not in general possible for a machine which runs in time $O(T(n))$ to dominate the class of all predictors which run in time $O(T(n))$.

Proof: Consider an environment $E$ which flips pseudorandom coins for its output, such that the $n$th bit output by $E$ can be computed in time $2^{k(n)}\,T(n)$, where $k(n)$ is the number of times $2$ divides $n$. Given less than this amount of time, there is no way to predict the coin flips that does asymptotically better than random.
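The function $k(n)$ above is just the 2-adic valuation of $n$. A quick sketch (the $2^{k(n)}\,T(n)$ time schedule is my reconstruction of the construction, not a quote) shows its key property: expensive bits exist at every cost level, but get exponentially rarer as the cost grows.

```python
def k(n):
    """Number of times 2 divides n (the 2-adic valuation)."""
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

# Bit n costs about 2**k(n) * T(n) time, so indices divisible by high
# powers of 2 are exponentially expensive, but exponentially rare:
# a fraction 2**-(c+1) of indices have k(n) == c.
print([k(n) for n in range(1, 17)])
# [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]
```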

Let $M$ be a Turing machine which outputs a prediction of the $n$th bit in time $c\,T(n)$. Consider the set of all $n$ with $2^{k(n)} > c$. $M$ can do no better than random guessing on these bits, and so can get the first $m$ such bits correct with probability at most $(1/2 + o(1))^m$. However, there exists a Turing machine $M'$ which runs in time $2c\,T(n)$, computes the correct answer for all $n$ with $2^{k(n)} \le 2c$, and guesses randomly on the rest. On average, $M'$ is twice as likely as $M$ to get each bit with $c < 2^{k(n)} \le 2c$ correct, and a positive fraction of all indices fall in this band. Therefore, the probability that $M'$ gets the first $n$ bits correct divided by the probability that $M$ gets the first $n$ bits correct must go to infinity, so $M$ does not dominate $M'$.
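The divergence at the heart of this argument is a simple counting fact, which the following sketch makes explicit (the band parameter `c` and the per-bit accounting are my reconstruction, under the assumption that both machines are exactly random outside what they can compute):

```python
import math

def k(n):
    """Number of times 2 divides n."""
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

def log_prob_ratio(N, c):
    """log(P_{M'}(N) / P_M(N)), where M computes bits with 2**k(n) <= 2**c
    and guesses the rest, while M' also computes the next band k(n) == c + 1.
    Both assign probability 1/2 elsewhere, so only the extra band matters:
    each of its bits contributes a factor of 2 to the ratio."""
    extra = sum(1 for n in range(1, N + 1) if k(n) == c + 1)
    return extra * math.log(2)

# The log-ratio grows roughly linearly in N (the band has density
# 2**-(c+2)), so the probability ratio itself goes to infinity.
for N in (100, 1000, 10000):
    print(N, round(log_prob_ratio(N, 2), 1))
```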

In principle, it is still possible for a logical predictor that runs in time $T(n)$ to dominate every logical predictor that runs in time $S(n)$, where $S$ and $T$ are any functions with $S(n) = o(T(n))$. I am working on trying to find a predictor with this property.
