Hi DevilMaster, welcome to LessWrong!
Generally, the answer to your question is Bayes' Theorem. This theorem is essentially the mathematical formulation of how evidence ought to be weighed when testing ideas. If the Wikipedia article doesn't help you much, Eliezer has written an in-depth explanation of what it is and why it works.
The specific answer to your question falls out of plugging into this equation and pinning down what "proof" means. We say that nothing is ever "proven" to 100% certainty, because if it were (again, according to Bayes' Theorem), no amount of new evidence against it could ever refute it. So "proof" should be interpreted as "really, really likely". You can pick a number like "99.9% certain" if you like. But your best bet is to scrap the notion of absolute "proof" and start thinking in likelihoods.
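To make this concrete, here is a minimal sketch of a Bayesian update in Python. The function name and all the numbers are my own illustration, not anything from the thread:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from Bayes' Theorem:
    P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]."""
    return (p_e_given_h * prior) / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )

# Repeated strong evidence drives a 50% prior toward 1, but never to exactly 1.
p = 0.5
for _ in range(5):
    p = bayes_update(p, 0.9, 0.1)  # each observation favors H by 9:1

# A prior of exactly 1 (or 0) is frozen: no counter-evidence can ever move it.
assert bayes_update(1.0, 0.01, 0.99) == 1.0
```

This is why probability 1 behaves pathologically in the formula: the prior multiplies every term, so certainty, once assumed, can never be revised.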
You'll notice that an integral part of Bayes' Theorem is the idea of how strongly we would expect to see a certain piece of evidence. If the Hypothesis A is true, how likely is it that we'll see Evidence B? And additionally, how likely would it be to see Evidence B regardless of Hypothesis A?
For a piece of evidence to be strong, it has to be something that we would expect to see with much greater probability if a hypothesis is true than if it is false. Otherwise there's a good chance it's a fluke. Furthermore, if that evidence is something that we wouldn't expect to see much either way, then it's not very informative when we don't see it.
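The difference between strong and weak evidence comes down to the likelihood ratio. A quick sketch, with numbers that are purely illustrative assumptions:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Posterior P(H|E) by Bayes' Theorem
    return p_e_given_h * prior / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )

prior = 0.5
strong = bayes_update(prior, 0.90, 0.05)  # 18:1 likelihood ratio
weak = bayes_update(prior, 0.30, 0.25)    # 1.2:1 likelihood ratio

# The high-ratio test moves us much further from the prior than the low-ratio one.
assert strong > weak > prior
```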
So you see how this bears on your examples. I'm not especially familiar with astronomy, so I don't know whether it's true that we haven't seen other galaxies with planets, or how powerful our telescopes are. But let's assume that what you've said is all true.
If we know our telescopes aren't powerful enough to see other planets, then the fact that they don't see any is virtually zero evidence. The probability of us seeing other planets is basically the same whether they're out there or not (because we won't see them either way), so our inability to see them doesn't count as evidence at all. This test doesn't actually tell us anything because we already know that it will tell us the same thing either way. It's like counting how many fingers you have to determine if the stock market will go up or down. You're gonna get "ten" no matter what, and this tells you nothing about the market.
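The same toy update shows why a test like this leaves belief unchanged: when the likelihoods match, the posterior equals the prior exactly. (The 50% prior is an arbitrary assumption for illustration.)

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Posterior P(H|E) by Bayes' Theorem
    return p_e_given_h * prior / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )

# "No planets seen" through a telescope too weak to see them:
# the observation is certain whether or not the planets are out there.
prior = 0.5
posterior = bayes_update(prior, 1.0, 1.0)
assert posterior == prior  # the test carried zero information
```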
The same reasoning applies to the bacteria example. If we're not more likely to see them given that they're real than we are given that they're not real, then our inability to see them is not evidence in either direction. The test is a bad one because it fails to distinguish one possibility from the other.
But none of this is to say that it would be valid to reject these ideas based on the absence of this evidence alone. There may be other tests we can run that would be more likely to come out one way or the other depending on whether the hypothesis is true. So no, it wouldn't make sense to reject the existence of planets or bacteria: in both of your examples, people are drawing conclusions from tests that are known to be useless.
> If we're not more likely to see them given that they're real than we are given that they're not real, then our inability to see them is not evidence in either direction. The test is a bad one because it fails to distinguish one possibility from the other.
Thank you. That's what I did not understand.
From Robyn Dawes’s Rational Choice in an Uncertain World:
Consider Warren’s argument from a Bayesian perspective. When we see evidence, hypotheses that assigned a higher likelihood to that evidence gain probability, at the expense of hypotheses that assigned a lower likelihood to the evidence. This is a phenomenon of relative likelihoods and relative probabilities. You can assign a high likelihood to the evidence and still lose probability mass to some other hypothesis, if that other hypothesis assigns a likelihood that is even higher.
Warren seems to be arguing that, given that we see no sabotage, this confirms that a Fifth Column exists. You could argue that a Fifth Column might delay its sabotage. But the likelihood is still higher that the absence of a Fifth Column would perform an absence of sabotage.
Let E stand for the observation of sabotage, and ¬E for the observation of no sabotage. The symbol H1 stands for the hypothesis of a Japanese-American Fifth Column, and H2 for the hypothesis that no Fifth Column exists. The conditional probability P(E | H), or “E given H,” is how confidently we’d expect to see the evidence E if we assumed the hypothesis H were true.
Whatever the likelihood that a Fifth Column would do no sabotage, the probability P(¬E | H1), it won’t be as large as the likelihood that there’s no sabotage given that there’s no Fifth Column, the probability P(¬E | H2). So observing a lack of sabotage increases the probability that no Fifth Column exists.
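Plugging in made-up numbers (my own illustration; nothing here is a historical estimate) shows the direction of the update:

```python
# Hypothetical likelihoods for the sabotage example:
p_h1 = 0.5        # prior P(H1): a Fifth Column exists
p_noE_h1 = 0.2    # P(~E | H1): an existing Fifth Column does no sabotage
p_noE_h2 = 0.99   # P(~E | H2): no Fifth Column, so (almost) surely no sabotage

# Posterior P(H1 | ~E) by Bayes' Theorem
posterior_h1 = p_noE_h1 * p_h1 / (
    p_noE_h1 * p_h1 + p_noE_h2 * (1 - p_h1)
)

# Observing no sabotage *lowers* the probability of a Fifth Column.
assert posterior_h1 < p_h1
```

As long as P(¬E | H1) < P(¬E | H2), the conclusion holds no matter which particular numbers are used.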
A lack of sabotage doesn’t prove that no Fifth Column exists. Absence of proof is not proof of absence. In logic, (A ⇒ B), read “A implies B,” is not equivalent to (¬A ⇒ ¬B), read “not-A implies not-B.”
But in probability theory, absence of evidence is always evidence of absence. If E is a binary event and P(H | E) > P(H), i.e., seeing E increases the probability of H, then P(H | ¬E) < P(H), i.e., failure to observe E decreases the probability of H. The probability P(H) is a weighted mix of P(H | E) and P(H | ¬E), and necessarily lies between the two.[1]
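That “weighted mix” is the law of total probability, and it is easy to check numerically. The probabilities below are arbitrary illustrative values:

```python
# Arbitrary illustrative probabilities
p_e = 0.3
p_h_given_e = 0.8
p_h_given_not_e = 0.4

# Law of total probability: P(H) = P(E) P(H|E) + P(~E) P(H|~E)
p_h = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e

# P(H) lies strictly between the two conditionals, so
# P(H|E) > P(H) forces P(H|~E) < P(H), and vice versa.
assert p_h_given_not_e < p_h < p_h_given_e
```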
Under the vast majority of real-life circumstances, a cause may not reliably produce signs of itself, but the absence of the cause is even less likely to produce the signs. The absence of an observation may be strong evidence of absence or very weak evidence of absence, depending on how likely the cause is to produce the observation. The absence of an observation that is only weakly permitted (even if the alternative hypothesis does not allow it at all) is very weak evidence of absence (though it is evidence nonetheless). This is the fallacy of “gaps in the fossil record”—fossils form only rarely; it is futile to trumpet the absence of a weakly permitted observation when many strong positive observations have already been recorded. But if there are no positive observations at all, it is time to worry; hence the Fermi Paradox.
Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also you might as well have no evidence; no brain and no eyes.
[1] If any of this sounds at all confusing, see my discussion of Bayesian updating toward the end of The Machine in the Ghost, the third volume of Rationality: From AI to Zombies.