I sometimes say that the method of science is to amass such an enormous mountain of evidence that even scientists cannot ignore it; and that this is the distinguishing characteristic of a scientist; a non-scientist will ignore it anyway.
Max Planck was even less optimistic:
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
I am much tickled by this notion, because it implies that the power of science to distinguish truth from falsehood ultimately rests on the good taste of grad students.
The gradual increase in acceptance of many-worlds in academic physics suggests that there are physicists who will only accept a new idea given some combination of epistemic justification and a sufficiently large academic pack in whose company they can be comfortable. As more physicists accept, the pack grows larger, and hence more people cross their individual thresholds for conversion—with the epistemic justification remaining essentially the same.
But Science still gets there eventually, and this is sufficient for the ratchet of Science to move forward, and raise up a technological civilization.
Scientists can be moved by groundless prejudices, by undermined intuitions, by raw herd behavior—the panoply of human flaws. Each time a scientist shifts belief for epistemically unjustifiable reasons, it requires more evidence, or new arguments, to cancel out the noise.
The "collapse of the wavefunction" has no experimental justification, but it appeals to the (undermined) intuition of a single world. Then it may take an extra argument—say, that collapse violates Special Relativity—to begin the slow academic disintegration of an idea that should never have been assigned non-negligible probability in the first place.
From a Bayesian perspective, human academic science as a whole is a highly inefficient processor of evidence. Each time an unjustifiable argument shifts belief, you need an extra justifiable argument to shift it back. The social process of science leans on extra evidence to overcome cognitive noise.
A more charitable way of putting it is that scientists will adopt positions that are insufficiently extreme, compared to the ideal positions they would adopt if they were Bayesian AIs and could trust themselves to reason clearly.
But don't be too charitable. The noise we are talking about is not all innocent mistakes. In many fields, debates drag on for decades after they should have been settled; not because the scientists on both sides refuse to trust themselves and agree they should look for additional evidence, but because one side keeps throwing up more and more ridiculous objections, and demanding more and more evidence, from an entrenched position of academic power, long after it becomes clear from which quarter the winds of evidence are blowing. (I'm thinking here about the debates surrounding the invention of evolutionary psychology, not about many-worlds.)
Is it possible for individual humans or groups to process evidence more efficiently—reach correct conclusions faster—than human academic science as a whole?
"Ideas are tested by experiment. That is the core of science." And this must be true, because if you can't trust Zombie Feynman, who can you trust?
Yet where do the ideas come from?
You may be tempted to reply, "They come from scientists. Got any other questions?" In Science you're not supposed to care where the hypotheses come from—just whether they pass or fail experimentally.
Okay, but if you remove all new ideas, the scientific process as a whole stops working because it has no alternative hypotheses to test. So inventing new ideas is not a dispensable part of the process.
Now put your Bayesian goggles back on. As described in Einstein's Arrogance, there are queries that are not binary—where the answer is not "Yes" or "No", but drawn from a larger space of structures, e.g., the space of equations. In such cases it takes far more Bayesian evidence to promote a hypothesis to your attention than to confirm the hypothesis.
If you're working in the space of all equations that can be specified in 32 bits or less, you're working in a space of 4 billion equations. It takes far more Bayesian evidence to raise one of those hypotheses to the 10% probability level than it does to raise that hypothesis from 10% to 90% probability.
When the idea-space is large, coming up with ideas worthy of testing involves much more work—in the Bayesian-thermodynamic sense of "work"—than merely obtaining an experimental result with p < 0.0001 for the new hypothesis over the old hypothesis.
If this doesn't seem obvious-at-a-glance, pause here and read Einstein's Arrogance.
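The arithmetic behind this comparison is easy to check. The sketch below is a minimal illustration, assuming a uniform prior over the roughly four billion (2^32) candidate equations; the helper `bits_to_reach` is my own name for the calculation, not anything specified in the text.

```python
import math

def bits_to_reach(posterior_prob, prior_odds):
    """Bits of evidence needed to move a hypothesis from the given
    prior odds (for : against) to the given posterior probability."""
    posterior_odds = posterior_prob / (1 - posterior_prob)
    return math.log2(posterior_odds / prior_odds)

# Uniform prior over the 2**32 equations expressible in 32 bits:
# any particular equation starts at odds of 1 : (2**32 - 1).
prior_odds = 1 / (2 ** 32 - 1)

promote = bits_to_reach(0.10, prior_odds)   # promotion to attention
confirm = bits_to_reach(0.90, 0.10 / 0.90)  # confirmation, 10% to 90%

print(f"raise to 10%: {promote:.1f} bits")  # ~28.8
print(f"10% to 90%:   {confirm:.1f} bits")  # ~6.3
```

Locating the right equation at even 10% probability costs about 29 of the bits; the final confirmation from 10% to 90% costs only about 6 more.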
The scientific process has always relied on scientists to come up with hypotheses to test, via some process not further specified by Science. Suppose you came up with some way of generating hypotheses that was completely crazy—say, pumping a robot-controlled Ouija board with the digits of pi—and the resulting suggestions kept on getting verified experimentally. The pure ideal essence of Science wouldn't skip a beat. The pure ideal essence of Bayes would burst into flames and die.
(Compared to Science, Bayes is falsified by more of the possible outcomes.)
This doesn't mean that the process of deciding which ideas to test is unimportant to Science. It means that Science doesn't specify it.
In practice, the robot-controlled Ouija board doesn't work. In practice, there are some scientific queries with such a large answer space that, picking models at random to test, it would take zillions of years to hit on a model that made good predictions—like getting monkeys to type Shakespeare.
At the frontier of science—the boundary between ignorance and knowledge, where science advances—the process relies on at least some individual scientists (or working groups) seeing things that are not yet confirmed by Science. That's how they know which hypotheses to test, in advance of the test itself.
If you take your Bayesian goggles off, you can say, "Well, they don't have to know, they just have to guess." If you put your Bayesian goggles back on, you realize that "guessing" with 10% probability requires nearly as much epistemic work to have been successfully performed, behind the scenes, as "guessing" with 80% probability—at least for large answer spaces.
The scientist may not know he has done this epistemic work successfully, in advance of the experiment; but he must, in fact, have done it successfully! Otherwise he will not even think of the correct hypothesis. In large answer spaces, anyway.
So the scientist makes the novel prediction, performs the experiment, publishes the result, and now Science knows it too. It is now part of the publicly accessible knowledge of humankind, that anyone can verify for themselves.
In between was an interval where the scientist rationally knew something that the public social process of science hadn't yet confirmed. And this is not a trivial interval, though it may be short; for it is where the frontier of science lies, the advancing border.
All of this is more true for non-routine science than for routine science, because the notion of a large answer space applies where the answer is not "Yes" or "No" or drawn from a small set of obvious alternatives. It is much easier to train people to test ideas than to have good ideas to test.
I think that I have only now really understood what Eliezer has been getting at with the past ten or so posts: this idea that you could be a scientist if you generated hypotheses using a robot-controlled Ouija board. I think other readers have already said this numerous times, but this strikes me as terribly wrong.
First of all, good luck getting research funding for such hypotheses (and it wouldn't be fair to leave out funding from the description of Science if you're including institutional inertia and bias).
And I think we all know that in general, someone who used this method would never be able to get anywhere in academia, simply because they wouldn't be respected.
That, I think, teaches an important lesson. Individual scientists are not required to come up with correct or even plausible hypotheses because we all know that individual rationality is flawed. But the aggregate community of scientists and the people who fund them work together to evaluate the plausibility of a given hypothesis, and thereby effectively carry out the Bayesian analysis that Eliezer speaks of.
So one of many thousands of scientists can propose an utterly harebrained theory, and even spend his life on it if he wants, and it will barely register as a blip on the collective scientific radar. But when SR and GR were proposed, it was pretty much taken as a given that they were true, because they HAD to be true. I read somewhere that the experiment done by Eddington to verify the bending of light around the sun was far from accurate enough to actually be a verification of relativity. But it was still taken as a verification, because everyone was pretty much convinced anyway. And conversely, no matter how many experiments the cold fusion people do that show some unexpected effects, nobody takes them very seriously.
Now, you might say that this system is horribly inefficient, and many people say this on a regular basis. But here, the problem is simply that no individual human being can process that much information, and so the time it takes for a given data point to propagate through the community is very long. Of course, the internet helps, and if scientific journals were free, that would probably help also. But ultimately, I think this inefficiency is precisely the cost of a network evaluating all of the priors to find out the plausibility of a theory.
Of course, it also reduces a scientist to nothing more than a cog in a machine, and many people who want to be heroic can't deal with that. But in real life, no scientist is expected to evaluate his own hypothesis. They are expected to come up with a hypothesis, and try to verify it if they can get funding, and let the community decide to what extent the results are valid.
In real life a real scientist must test both their own hypotheses and the hypotheses of others. They must devise and test hypotheses that lend themselves to specific predictions, offering a means of testing their validity. An observation in a given field of science must count either for or against some hypothesis if it is to advance science. Science advances only through investigators who know how to disprove empty theories and are already working on it. Science advances only by disproofs. It can take many years before the scientific community gets it.