Once upon a time, a younger Eliezer had a stupid theory. Eliezer18 was careful to follow the precepts of Traditional Rationality that he had been taught; he made sure his stupid theory had experimental consequences. Eliezer18 professed, in accordance with the virtues of a scientist he had been taught, that he wished to test his stupid theory.
This was all that was required to be virtuous, according to what Eliezer18 had been taught was virtue in the way of science.
It was not even remotely the order of effort that would have been required to get it right.
The traditional ideals of Science too readily give out gold stars. Negative experimental results are also knowledge, so everyone who plays gets an award. So long as you can think of some kind of experiment that tests your theory, and you do the experiment, and you accept the results, you've played by the rules; you're a good scientist.
You didn't necessarily get it right, but you're a nice science-abiding citizen.
(I note at this point that I am speaking of Science, not the social process of science as it actually works in practice, for two reasons. First, I went astray in trying to follow the ideal of Science—it's not like I was shot down by a journal editor with a grudge, and it's not like I was trying to imitate the flaws of academia. Second, if I point out a problem with the ideal as it is traditionally preached, real-world scientists are not forced to likewise go astray!)
Science began as a rebellion against grand philosophical schemas and armchair reasoning. So Science doesn't include a rule as to what kinds of hypotheses you are and aren't allowed to test; that is left up to the individual scientist. Trying to guess that a priori would require some kind of grand philosophical schema, and reasoning in advance of the evidence. As a social ideal, Science doesn't judge you as a bad person for coming up with heretical hypotheses; honest experiments, and acceptance of the results, are virtue unto a scientist.
As long as most scientists can manage to accept definite, unmistakable, unambiguous experimental evidence, science can progress. It may happen too slowly—it may take longer than it should—you may have to wait for a generation of elders to die out—but eventually, the ratchet of knowledge clicks forward another notch. Year by year, decade by decade, the wheel turns forward. It's enough to support a civilization.
So that's all that Science really asks of you—the ability to accept reality when you're beaten over the head with it. It's not much, but it's enough to sustain a scientific culture.
Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment. If 1% of women presenting for a routine screening have breast cancer, and 80% of women with breast cancer get positive mammographies, and 10% of women without breast cancer get false positives, what is the probability that a routinely screened woman with a positive mammography has breast cancer? 7.5%. You cannot say, "I believe she doesn't have breast cancer, because the experiment isn't definite enough." You cannot say, "I believe she has breast cancer, because it is wise to be pessimistic and that is what the only experiment so far seems to indicate." 7.5% is the rational estimate given this evidence, not 7.4% or 7.6%. The laws of probability are laws.
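For the concrete arithmetic, here is a minimal sketch of the Bayes' theorem calculation behind that 7.5% figure, using only the numbers stated above (the variable names are purely illustrative):

```python
# Bayes' theorem applied to the screening numbers above.
p_cancer = 0.01             # prior: 1% of routinely screened women have breast cancer
p_pos_given_cancer = 0.80   # 80% of women with breast cancer test positive
p_pos_given_healthy = 0.10  # 10% of women without breast cancer get false positives

# P(positive) = P(pos | cancer) P(cancer) + P(pos | healthy) P(healthy)
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)

# P(cancer | positive) = P(pos | cancer) P(cancer) / P(positive)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"{p_cancer_given_pos:.3%}")  # ~7.477%, i.e. roughly 7.5%
```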
It is written in the Twelve Virtues, of the third virtue, lightness:
If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims. For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.
In Science, when it comes to deciding which hypotheses to test, you have personal freedom to believe what you like, so long as it isn't already ruled out by experiment, and so long as you move to test your hypothesis. Science wouldn't try to give an official verdict on the best hypothesis to test, in advance of the experiment. That's left up to the conscience of the individual scientist.
Where definite experimental evidence exists, Science tells you to bow your stubborn neck and accept it. Otherwise, Science leaves it up to you. Science gives you room to wander around within the boundaries of the experimental evidence, according to your whims.
And this is not easily reconciled with Bayesianism's notion of an exactly right probability estimate, one with no flex or room for whims, that exists both before and after the experiment. It doesn't match well with the ancient and traditional reason for Science—the distrust of grand schemas, the presumption that people aren't rational enough to get things right without definite and unmistakable experimental evidence. If we were all perfect Bayesians, we wouldn't need a social process of science.
Nonetheless, around the time I realized my big mistake, I had also been studying Kahneman and Tversky and Jaynes. I was learning a new Way, stricter than Science. A Way that could criticize my folly, in a way that Science never could. A Way that could have told me, what Science would never have said in advance: "You picked the wrong hypothesis to test, dunderhead."
But the Way of Bayes is also much harder to use than Science. It puts a tremendous strain on your ability to hear tiny false notes, where Science only demands that you notice an anvil dropped on your head.
In Science you can make a mistake or two, and another experiment will come by and correct you; at worst you waste a couple of decades.
But if you try to use Bayes even qualitatively—if you try to do the thing that Science doesn't trust you to do, and reason rationally in the absence of overwhelming evidence—it is like math, in that a single error in a hundred steps can carry you anywhere. It demands lightness, evenness, precision, perfectionism.
There's a good reason why Science doesn't trust scientists to do this sort of thing, and asks for further experimental proof even after someone claims they've worked out the right answer based on hints and logic.
But if you would rather not waste ten years trying to prove the wrong theory, you'll need to essay the vastly more difficult problem: listening to evidence that doesn't shout in your ear.
Even if you can't look up the priors for a problem in the Handbook of Chemistry and Physics—even if there's no Authoritative Source telling you what the priors are—that doesn't mean you get a free, personal choice of making the priors whatever you want. It means you have a new guessing problem which you must carry out to the best of your ability.
If the mind, as a cognitive engine, could generate correct estimates by fiddling with priors according to whims, you could know things without looking at them, or even alter them without touching them. But the mind is not magic. The rational probability estimate has no room for any decision based on whim, even when it seems that you don't know the priors.
Similarly, if the Bayesian answer is difficult to compute, that doesn't mean that Bayes is inapplicable; it means you don't know what the Bayesian answer is. Bayesian probability theory is not a toolbox of statistical methods, it's the law that governs any tool you use, whether or not you know it, whether or not you can calculate it.
As for using Bayesian methods on huge, highly general hypothesis spaces—like, "Here's the data from every physics experiment ever; now, what would be a good Theory of Everything?"—if you knew how to do that in practice, you wouldn't be a statistician, you would be an Artificial General Intelligence programmer. But that doesn't mean that human beings, in modeling the universe using human intelligence, are violating the laws of physics / Bayesianism by generating correct guesses without evidence.
Nick Tarleton comments:
The problem is encouraging a private, epistemic standard as lax as the social one.
which pinpoints the problem I was trying to indicate much better than I did.
The problem is that when you talk about "ideal Science" it sounds like you mean something scientific practice attempts to achieve but falls short of, when what you're actually discussing is a second-hand, imprecise (idealized) description of science. This sort of "science as hypothesis-testing" is a philosophical model. Historians often use it to interpret the history of science (although this has thankfully changed in recent years), and even scientists will resort to it when pressed for a description of their methods. But it's not used (or aimed for) within science; I didn't get any classes on general scientific method (or logic or inductive probabilism), I just learned a set of practical (including mathematical) skills. Science itself is an institutional and social practice, and like all institutional and social practices we don't presently understand how it works.
To expand on what I said about your other essay: Being able to create relevant hypotheses is an important skill, and one a scientist spends a great deal of his or her time developing. It may not be part of the traditional description of science, but that doesn't mean it's not included in the actual social institution of science that produces actual real science here in the real world; it's your description, and not science, that is faulty. Think of science as movement along a trajectory. The period of apprenticeship in the scientific community that every practicing scientist goes through exists in order to calibrate the budding scientist and set him or her on the right course; it's to get us all moving in the same direction. That this can't be encapsulated into a set of neat rules isn't a failure of science but a failure of descriptions of science.
This isn't unique to science; it's an issue in most institutions. When developing countries try to create a simulacrum of industrial practice from theory and description, the result is usually a failure. When developing countries open themselves to foreign industry, the newly established facilities, run by foreign experts who have causal ties through history to the very site of origin of their practices, impart a skillset to the local population, who often then manage to combine that skillset and their unique understanding of their own culture to create their own businesses that can out-compete foreign industry. This is necessary because we don't have a general understanding of institutions, and therefore any description or theory designed to encapsulate what we need to do in order to copy their practices is necessarily incomplete or wrong.
Now, if you're just saying the problem is that you, Eliezer, had a crappy understanding of Science and therefore went astray, then what I'm saying supports your thesis. But you seem to be going further than that and making a claim about scientific practice. (It's ambiguous, though, so I apologize if I have misinterpreted your intent.) I still, however, would reject the notion that Bayesianism is the hidden structure behind the success of science. What you would perhaps say is that when scientists learn to develop worthy hypotheses they are secretly learning how to become good Bayesians, or learning cognitive practices that approximate what Bayes would tell us to do. But inasmuch as Bayes can be made to fit any scientific inference, it's being used to address pseudo-problems (i.e., problems of justification) that the inferences did not need to be defended against to begin with; it's in this respect that I think it's unnecessary.
The difference between a scientist and a theologian is not a difference of rationality or a difference between how their cognitive processes approximate Bayesian insights. The difference is simply that one studied science and trained as a scientist and now works in a laboratory while the other studied theology and trained as a theologian and now works in the theology department. The scientist avoids coming to theological conclusions about his scientific studies as a matter of socialization. It is not necessary, however, that this socialization involve a general method for coming to the right conclusions. Science doesn't need any such thing.
The Great Secret of Science, the reason scientists more often than not are the ones who produce science, is that science has all the science. Science begets science. What you learn from rolling balls down an inclined plane allows you to predict the trajectories of projectiles, which allows you to discover that the motion is parabolic or analyze the periodic motion of pendulums, and eventually you, or one of your colleagues, develops the calculus, and so on. This doesn't all happen in one institution because of some general methodology or some universal recipe for getting the truth; it happens because that institution has all the experts. It's always going to be the guy who understands the science who uses it to create new science, because you need to understand the old science to create the new science. Beyond that there's really nothing more left to explain; we have a complete causal explanation of science. If we wanted a philosophical justification of why we should accept science in the face of philosophical skepticism, then we would need to invoke Bayes (or whatever), but I'm not sure you think we need one of those. You seem to posit Bayes as the hidden cause of scientific success rather than the philosophical justification.