Politics is the mind-killer. Debate is war, arguments are soldiers. There is the temptation to search for ways to interpret every possible experimental result to confirm your theory, like securing a citadel against every possible line of attack. This you cannot do. It is mathematically impossible. For every expectation of evidence, there is an equal and opposite expectation of counterevidence.
But it’s okay if your cherished belief isn’t perfectly defended. If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will expect to see what looks like contrary evidence. This is okay. It’s normal. It’s even expected, so long as you’ve got nineteen supporting observations for every contrary one. A probabilistic model can take a hit or two, and still survive, so long as the hits don’t keep on coming in.
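To make the arithmetic concrete, here is a minimal sketch in Python, assuming (purely for illustration) a 50/50 prior and a fair coin as the only rival hypothesis; the function name and the flip sequence are likewise invented for the example:

```python
def update(prior, likelihood_h, likelihood_alt):
    """Posterior probability of the 95%-heads hypothesis via Bayes' theorem."""
    joint_h = prior * likelihood_h
    joint_alt = (1 - prior) * likelihood_alt
    return joint_h / (joint_h + joint_alt)

p = 0.5  # illustrative prior that the coin is the 95%-heads coin
for flip in "H" * 19 + "T":  # nineteen supporting observations, then one contrary
    if flip == "H":
        p = update(p, 0.95, 0.5)  # heads: a small upward shift
    else:
        p = update(p, 0.05, 0.5)  # tails: one larger downward hit
    print(flip, round(p, 3))
```

Each heads multiplies the odds in favor of the 95%-heads hypothesis by 0.95/0.5 = 1.9; each tails multiplies them by 0.05/0.5 = 0.1, a single larger hit that the nineteen smaller gains comfortably absorb.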
Yet it is widely believed, especially in the court of public opinion, that a true theory can have no failures and a false theory no successes.
You find people holding up a single piece of what they conceive to be evidence, and claiming that their theory can “explain” it, as though this were all the support that any theory needed. Apparently a false theory can have no supporting evidence; it is impossible for a false theory to fit even a single event. Thus, a single piece of confirming evidence is all that any theory needs.
It is only slightly less foolish to hold up a single piece of probabilistic counterevidence as disproof, as though it were impossible for a correct theory to have even a slight argument against it. But this is how humans have argued for ages and ages, trying to defeat all enemy arguments, while denying the enemy even a single shred of support. People want their debates to be one-sided; they are accustomed to a world in which their preferred theories have not one iota of antisupport. Thus, allowing a single item of probabilistic counterevidence would be the end of the world.
I just know someone in the audience out there is going to say, “But you can’t concede even a single point if you want to win debates in the real world! If you concede that any counterarguments exist, the Enemy will harp on them over and over—you can’t let the Enemy do that! You’ll lose! What could be more viscerally terrifying than that?”
Whatever. Rationality is not for winning debates, it is for deciding which side to join. If you’ve already decided which side to argue for, the work of rationality is done within you, whether well or poorly. But how can you, yourself, decide which side to argue? If choosing the wrong side is viscerally terrifying, even just a little viscerally terrifying, you’d best integrate all the evidence.
Rationality is not a walk, but a dance. On each step in that dance your foot should come down in exactly the correct spot, neither to the left nor to the right. Shifting belief upward with each iota of confirming evidence. Shifting belief downward with each iota of contrary evidence. Yes, down. Even with a correct model, if it is not an exact model, you will sometimes need to revise your belief down.
If an iota or two of evidence happens to countersupport your belief, that’s okay. It happens, sometimes, with probabilistic evidence for non-exact theories. (If an exact theory fails, you are in trouble!) Just shift your belief downward a little—the probability, the odds ratio, or even a nonverbal weight of credence in your mind. Just shift downward a little, and wait for more evidence. If the theory is true, supporting evidence will come in shortly, and the probability will climb again. If the theory is false, you don’t really want it anyway.
The problem with using black-and-white, binary, qualitative reasoning is that any single observation either destroys the theory or it does not. When not even a single contrary observation is allowed, it creates cognitive dissonance and has to be argued away. And this rules out incremental progress; it rules out correct integration of all the evidence. Reasoning probabilistically, we realize that on average, a correct theory will generate a greater weight of support than countersupport. And so you can, without fear, say to yourself: “This is gently contrary evidence, I will shift my belief downward.” Yes, down. It does not destroy your cherished theory. That is qualitative reasoning; think quantitatively.
For every expectation of evidence, there is an equal and opposite expectation of counterevidence. On every occasion, you must, on average, anticipate revising your beliefs downward as much as you anticipate revising them upward. If you think you already know what evidence will come in, then you must already be fairly sure of your theory—probability close to 1—which doesn’t leave much room for the probability to go further upward. And however unlikely it seems that you will encounter disconfirming evidence, the resulting downward shift must be large enough to precisely balance the anticipated gain on the other side. The weighted mean of your expected posterior probability must equal your prior probability.
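In symbols (the notation here is mine): write H for the hypothesis and E for the anticipated evidence. The law of total probability gives

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E),$$

which rearranges to

$$P(E)\,\bigl[P(H \mid E) - P(H)\bigr] = P(\neg E)\,\bigl[P(H) - P(H \mid \neg E)\bigr].$$

The left side is the probability of confirmation times the upward shift it would produce; the right side is the probability of disconfirmation times the downward shift it would produce. They are always equal, which is exactly the statement that the expected posterior equals the prior.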
How silly is it, then, to be terrified of revising your probability downward, if you’re bothering to investigate a matter at all? On average, you must anticipate as much downward shift as upward shift from every individual observation.
It may perhaps happen that an iota of antisupport comes in again, and again and again, while new support is slow to trickle in. You may find your belief drifting downward and further downward. Until, finally, you realize from which quarter the winds of evidence are blowing against you. In that moment of realization, there is no point in constructing excuses. In that moment of realization, you have already relinquished your cherished belief. Yay! Time to celebrate! Pop a champagne bottle or send out for pizza! You can’t become stronger by keeping the beliefs you started with, after all.
Although I also think psi is bogus, my belief has nothing to do with the fact that previous claims of psi have been bogus. Evidence can never justify a theory, any more than finding 10 white swans in a row proves that there are no black swans! Believing that psi is false because previous claims of psi have turned out to be false is the logical fallacy of inductivism. Most rational people do not believe in psi because it has no logical theoretical/scientific basis and because it does not explain things well.
Much of this type of argument strikes me as nonsense. Something that is true cannot be justified. One can (and should) argue that something is true, but argument is not justification. If an argument explains something well, and it is the best theory available, then one should believe it.
But evidence can never support any argument; it merely corroborates it. The reason you believe a coin is fair is not ultimately that the results of an experiment convince you. It would be easy to set up an algorithm that gives the first 3,000 flips of a computer-simulated coin the right proportion of heads and tails to make the uninformed believe the simulated coin is fair, while the next 10,000 flips yield very different results, all from an easy-to-create mathematical algorithm. No p-value can be assigned even after 3,000 simulated flips. The data never tell a story (to quote someone on another site).
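As a minimal sketch of the kind of algorithm being described (the function name, the 90% bias, and the block sizes are illustrative assumptions, not anything specified above):

```python
import random

def deceptive_flips(n_fair=3000, n_biased=10000, bias=0.9, seed=0):
    """Look fair for the first n_fair flips, then switch to a biased coin."""
    rng = random.Random(seed)
    # First block: exactly half heads and half tails, shuffled, so the
    # early sample statistics look as fair as possible.
    fair_block = ["H"] * (n_fair // 2) + ["T"] * (n_fair - n_fair // 2)
    rng.shuffle(fair_block)
    # Second block: flips drawn from a coin heavily biased toward heads.
    biased_block = ["H" if rng.random() < bias else "T" for _ in range(n_biased)]
    return fair_block + biased_block

flips = deceptive_flips()
print(flips[:3000].count("H") / 3000)    # 0.5: the first 3,000 look fair
print(flips[3000:].count("H") / 10000)   # ~0.9: the next 10,000 do not
```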
The reason we rationally believe the results of the experiment when we flip an actual coin, but not when we watch an apparent computer simulation of coin flips, is this: in the case of the actual coin, we already have explanations of the effects of gravity on two-sided metal objects, well before we have any data about coin flips. The same is not true of the computer simulation, unless we see the program ahead of time.
It is the theory about the effects of gravity on two-sided metal objects (with a particular pattern of metal distribution) that we try to evaluate when we flip coins. The data never tell us a story about whether the coin is fair. We first have a theory about the coin and its properties, and then we use the experiment (the coin flip) to try to falsify the notion that the coin is fair, if the coin looks balanced; or to falsify the notion that the coin is not fair, if our initial theory is that it does not look balanced. Examples of a phenomenon do not increase the probability that it is true.
The reason we may believe that a coin could be fair is that we first evaluate the structure of the material and note that it seems to have a structure that would promote fairness under a standard human coin flip. Only then do we test it. But it is our rational understanding of the properties of the coin, and our expectations about the environment, that make the coin flip a reasonable test. The results of any test tell you nothing (logically, nothing at all) about the fairness of a coin unless you first have a theory and an explanation of why the coin should or should not be considered fair.
The reason we do not believe in psi is that it does not explain anything, violates multiple known laws of physics, and creates no alternative scientific structure that allows us to understand and predict events in our world.
This is pretty muddled and wrong. You use a lot of terms in an unorthodox way. For example, I don't know how something that is true cannot ever be justified (how else would you know it's true?). Also, there is no such thing as science without induction, and no laws of physics or predictions. So I'm pretty confused about what your position is. That's okay, though, because it looks like you've never heard of Bayesian inference, in which case this is a really important day in your life.
The Wikipedia entry
The SEP entry
Eliezer's explanation of the Math
Also: the "Rat... (read more)