Suppose a new scientific hypothesis, such as general relativity, explains a well-known observation, such as the perihelion precession of Mercury, better than any existing theory. Intuitively, this is a point in favor of the new theory. However, the probability of the well-known observation was already 100%. How can a previously-known statement provide new support for the hypothesis, as if we are re-updating on evidence we've already updated on long ago? This is known as the problem of old evidence, and it is usually leveled as a charge against Bayesian epistemology.
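One way to put the puzzle symbolically: if the evidence $E$ is already known with certainty, then $P(E) = 1$ and $P(E \mid H) = 1$ for any hypothesis $H$ with nonzero probability, so Bayes' rule leaves the hypothesis exactly where it was:

$$
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;=\; \frac{1 \cdot P(H)}{1} \;=\; P(H).
$$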
A typical Bayesian analysis resolves the problem by pretending that all hypotheses have been around "from the very beginning", so that every hypothesis is judged on all the evidence. The perihelion precession of Mercury is very difficult to explain with Newton's theory of gravitation, and therefore quite improbable under it; but it fits quite well with Einstein's theory of gravitation. Therefore, Newton gets "ruled out" by the evidence, and Einstein wins.
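As a sketch of how that comparison works, in the standard posterior-odds form of Bayes' rule (with $E$ standing for the precession data):

$$
\frac{P(\text{Einstein} \mid E)}{P(\text{Newton} \mid E)} \;=\; \frac{P(\text{Einstein})}{P(\text{Newton})} \cdot \frac{P(E \mid \text{Einstein})}{P(E \mid \text{Newton})},
$$

and since the precession is nearly impossible under Newtonian gravity but expected under general relativity, the likelihood ratio on the right is enormous. That is what "ruling out" Newton amounts to.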
A drawback of this approach is that it allows scientists to formulate a hypothesis in light of the evidence, and then use that very same evidence in its favor. Imagine a physicist competing with Einstein, Dr. Bad, who publishes a "theory of gravity" which is just a list of all the observations we have made about the orbits of celestial bodies. Dr. Bad has "cheated" by providing the correct answers without any deep explanation; but "deep explanation" is not an objectively verifiable quality of a hypothesis, so it should not factor into the calculation of scientific merit, if we are to use simple update rules like Bayes' Law. Dr. Bad's theory will predict the evidence as well as or better than Einstein's. So the new picture is that Newton's theory gets eliminated by the evidence, but Einstein's and Dr. Bad's theories remain as contenders....
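To make the three-way comparison concrete, here is a toy numerical sketch in Python. The hypothesis names, priors, and likelihoods are invented purely for illustration (uniform priors, since on this picture "deep explanation" is not allowed to tilt the scales); nothing here is computed from real orbital data.

```python
# Toy Bayesian update over three competing "theories of gravity".
# All numbers are hypothetical, chosen only to illustrate the structure
# of the argument: Newton makes the observed precession nearly impossible,
# Einstein makes it likely, and Dr. Bad's lookup-table "theory" simply
# lists the observations, so it assigns them probability 1.

priors = {"Newton": 1 / 3, "Einstein": 1 / 3, "DrBad": 1 / 3}
likelihoods = {"Newton": 0.001, "Einstein": 0.9, "DrBad": 1.0}  # P(E | H), illustrative

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: mass / total for h, mass in unnormalized.items()}

for hypothesis, posterior in posteriors.items():
    print(f"{hypothesis}: {posterior:.3f}")

# Approximate output: Newton 0.001, Einstein 0.473, DrBad 0.526.
# Newton is effectively eliminated, while Dr. Bad's list of observations
# is never penalized by the likelihood term and so keeps pace with Einstein.
```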