In my experience, it is rare that someone has a legitimate background in the appropriate use of Occam's razor. I do not agree that this is a straw man, and I think you're also conflating two issues. One issue is that individuals cannot overcome their biases well enough to reliably use Occam's razor when promoting solutions to problems. The scientific community as a whole is much more successful at this, and no one (neither I nor the OP) disagrees. But there is a separate issue: the scientific community tends not to evaluate whether a proposed theory wins (in the Occam's razor sense) at all unless a tremendous stack of easily visible experimental evidence motivates such an evaluation. This is a major reason why single-world views have persisted for so long. Few cling to single-world views because they are "favorite pet theories" (which would make the error one of misapplying Occam's razor). More often, alternative explanations are simply never considered, because they lack the accumulated endorsement of the scientific community.
If Eliezer walked up to Sir Roger Penrose and presented a great argument for why explaining consciousness by appeal to quantum gravity is just a mysterious answer to a mysterious question, and Penrose replied with something like, "Come back and talk to me when you've got 20 years' worth of experimental evidence on your side... I don't want to hear about your retroactive interpretations... it's not worth my time if there isn't a mountain of evidence to persuade me to update to a new position," that would be the type of mistake the OP is trying to point out. And as a grad student at an R-1 university, I can tell you this is anything but a straw man. People go around not updating their maps all the time, and their reasoning is that until a new interpretation becomes overwhelmingly salient through a flurry of brand-new experimental results, they won't even consider that it exists. That's a serious problem from a Bayesian perspective. And as the turnaround time for scientific results shortens, those willing to update sooner will have a distinct advantage.
Finally, I do not understand how you can say that "We can't use Solomonoff induction - because it is uncomputable" is a "criticism" with respect to the ideas in the OP. The OP has absolutely nothing to do with the computability of Solomonoff induction. We can use it in the sense that you mentioned when you said:
Distinguishing between scientific theories is listed as the first application of the razor here.
It's great that it's listed there, just as it has been repeatedly listed and emphasized in major discussions for the last 30 years. But many factors prevent that from trickling down to the work of actual scientists.
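As an aside on the computability point raised above (this is the standard textbook statement, not anything specific to this thread): Solomonoff induction formalizes Occam's razor by treating each hypothesis as a program $p$ for a universal prefix machine $U$ and giving it prior weight $2^{-\ell(p)}$, where $\ell(p)$ is the program's length in bits. The prior probability of seeing data $x$ is then

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-\ell(p)}.$$

The sum ranges over every possible program, and deciding whether a given program ever produces output extending $x$ runs into the halting problem, which is why the prior is uncomputable. That is a fact about computing the ideal, not about the OP's point, which only needs the razor as a way of distinguishing between scientific theories, as in the quoted application above.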
Scott Aaronson suggests that Many-Worlds and libertarianism are similar in that they are both cases of bullet-swallowing, rather than bullet-dodging:
Now there's an analogy that would never have occurred to me.
I've previously argued that Science rejects Many-Worlds but Bayes accepts it. (Here, "Science" is capitalized because we are talking about the idealized form of Science, not just the actual social process of science.)
It furthermore seems to me that there is a deep analogy between (small-'l') libertarianism and Science:
The core argument for libertarianism is historically motivated distrust of lovely theories of "How much better society would be, if we just made a rule that said XYZ." If that sort of trick actually worked, then more regulations would correlate with higher economic growth as society moved from local to global optima. But when some person or interest group gets enough power to start doing everything they think is a good idea, history says that what actually happens is Revolutionary France or Soviet Russia.
The plans that in lovely theory should have made everyone happy ever after, don't have the results predicted by reasonable-sounding arguments. And power corrupts, and attracts the corrupt.
So you regulate as little as possible, because you can't trust the lovely theories and you can't trust the people who implement them.
You don't shake your finger at people for being selfish. You try to build an efficient system of production out of selfish participants, by requiring transactions to be voluntary. So people are forced to play positive-sum games, because that's how they get the other party to sign the contract. With violence restrained and contracts enforced, individual selfishness can power a globally productive system.
Of course none of this works quite so well in practice as in theory, and I'm not going to go into market failures, commons problems, etc. The core argument for libertarianism is not that libertarianism would work in a perfect world, but that it degrades gracefully into real life. Or rather, degrades less awkwardly than any other known economic principle. (People who see Libertarianism as the perfect solution for perfect people, strike me as kinda missing the point of the "pragmatic distrust" thing.)
Science first came to know itself as a rebellion against trusting the word of Aristotle. If the people of that revolution had merely said, "Let us trust ourselves, not Aristotle!" they would have flashed and faded like the French Revolution.
But the Scientific Revolution lasted because—like the American Revolution—the architects propounded a stranger philosophy: "Let us trust no one! Not even ourselves!"
In the beginning came the idea that we can't just toss out Aristotle's armchair reasoning and replace it with different armchair reasoning. We need to talk to Nature, and actually listen to what It says in reply. This, itself, was a stroke of genius.
But then came the challenge of implementation. People are stubborn, and may not want to accept the verdict of experiment. Shall we shake a disapproving finger at them, and say "Naughty"?
No; we assume and accept that each individual scientist may be crazily attached to their personal theories. Nor do we assume that anyone can be trained out of this tendency—we don't try to choose Eminent Judges who are supposed to be impartial.
Instead, we try to harness the individual scientist's stubborn desire to prove their personal theory, by saying: "Make a new experimental prediction, and do the experiment. If you're right, and the experiment is replicated, you win." So long as scientists believe this is true, they have a motive to do experiments that can falsify their own theories. Only by accepting the possibility of defeat is it possible to win. And any great claim will require replication; this gives scientists a motive to be honest, on pain of great embarrassment.
And so the stubbornness of individual scientists is harnessed to produce a steady stream of knowledge at the group level. The System is somewhat more trustworthy than its parts.
Libertarianism secretly relies on most individuals being prosocial enough to tip at a restaurant they won't ever visit again. An economy of genuinely selfish human-level agents would implode. Similarly, Science relies on most scientists not committing sins so egregious that they can't rationalize them away.
To the extent that scientists believe they can promote their theories by playing academic politics—or game the statistical methods to potentially win without a chance of losing—or to the extent that nobody bothers to replicate claims—science degrades in effectiveness. But it degrades gracefully, as such things go.
The part where the successful predictions belong to the theory and theorists who originally made them, and cannot just be stolen by a theory that comes along later—without a novel experimental prediction—is an important feature of this social process.
The final upshot is that Science is not easily reconciled with probability theory. If you do a probability-theoretic calculation correctly, you're going to get the rational answer. Science doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.
Regarding Science as a mere approximation to some probability-theoretic ideal of rationality... would certainly seem to be rational. There seems to be an extremely reasonable-sounding argument that Bayes's Theorem is the hidden structure that explains why Science works. But to subordinate Science to the grand schema of Bayesianism, and let Bayesianism come in and override Science's verdict when that seems appropriate, is not a trivial step!
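To spell out the hidden structure being referred to (this is just the standard theorem, not an argument original to this post): Bayes's Theorem says that the credence assigned to a hypothesis $H$ after seeing evidence $E$ should be

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},$$

so a theory gains probability exactly to the extent that it assigned more probability to the observed evidence than its rivals did, weighted by its prior. The non-trivial step is letting that calculation, rather than the social process of novel prediction and replication, deliver the final verdict.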
Science is built around the assumption that you're too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn't need a social process of science... right?
So, are you going to believe in faster-than-light quantum "collapse" fairies after all? Or do you think you're smarter than that?