In the first half of the 14th century, the Franciscan friar and logician William of Occam proposed a heuristic for deciding between alternative explanations of physical observables. As William put it: "Entities should not be multiplied without necessity." Or, as Einstein reformulated it 600 years later: "Everything should be made as simple as possible, but not simpler."
Occam's Razor, as it became known, was enthusiastically adopted by the scientific community and remains the unquestioned criterion for deciding between alternative hypotheses to this day. In my opinion, its success is traceable to two characteristics:
o Utility: OR is not a logical deduction. Neither is it a statement about which hypothesis is most likely. Instead, it is a procedure for selecting the theory that makes further work as easy as possible. And by facilitating work, we can usually advance further and faster.
o Combinability: OR is fully compatible with each of the epistemological stances that have been adopted within science from time to time (empiricism, rationalism, positivism, falsifiability, etc.).
It is remarkable that such a widely applied principle is exercised with so little thought to its interpretation. I thought of this recently upon reading an article claiming that the multiverse interpretation of quantum mechanics is appealing because it is so simple. Really?? The multiverse explanation proposes the creation of an infinitude of new universes at every instant. To me, that makes it an egregiously complex hypothesis. But if someone decides that it is simple, I have no basis for refutation, since the notion of what it means for a theory to be simple has never been specified.
What do we mean when we call something simple? My naive notion is to begin by counting parts and features. A milling device made up of two stones, one stationary and one mobile, fitted with a stick for rotation by hand, becomes more complex when we add devices to capture and transmit water power for setting the stone in motion. And my mobile phone becomes more complex each time I add a new app. But these notions don't serve to answer the question whether Lagrange's formulation of classical mechanics, based on action, is simpler than the equivalent formulation by Newton, based on his three laws of forces.
Isn't it remarkable that scientists, so renowned for their exactitude, have been relying heavily on so vague a principle for 700 years?
Can we do anything to make it more precise?
Abbreviations: SWE = Schroedinger Wave Equation; SU&C = Shut Up and Calculate; S.I = Solomonoff Induction; O's R = Occam's Razor; MWI/MW = Many-Worlds Interpretation; TM = Turing Machine.
The topic is using S.I to quantify O's R, and S.I is not a measure on assumptions; it is a measure on algorithmic complexity.
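To make "a measure on algorithmic complexity" concrete, here is a toy sketch of the idea behind the Solomonoff prior: each candidate program that reproduces the observed data is weighted by 2 to the minus its length, so shorter generative programs dominate. The candidate programs and the 8-bits-per-character convention are illustrative assumptions; real Solomonoff induction is uncomputable.

```python
# Toy sketch of the Solomonoff prior (illustrative only; the real measure
# ranges over all programs for a universal Turing machine and is uncomputable).

def prior_weight(program: str) -> float:
    """Weight 2^(-L) for a program of length L bits (assume 8 bits per char)."""
    return 2.0 ** (-8 * len(program))

# Two hypothetical programs that both print the observed string "0101...01".
candidates = {
    "short generative rule": "print('01'*50)",
    "verbatim literal":      "print('" + "01" * 50 + "')",
}

weights = {name: prior_weight(prog) for name, prog in candidates.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}
# Nearly all of the posterior mass lands on the shorter program.
```

The point of the sketch: "simplicity" here is program length, a definite quantity, not a count of assumptions.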
Explaining just my POV doesn't stop me making predictions. In fact, predicting the observations of one observer is exactly how S.I is supposed to work. It also prevents various forms of cheating. I don't know why you are using "explain" rather than "predict". Deutsch favours explanation over prediction, but the very relevant point here is that how well a theory explains is an unquantifiable human judgement. Predicting observations, on the other hand, is definite and quantifiable; that's the whole point of using S.I as a mechanistic process to quantify O's R.
Predicting every observer's observations is a bad thing from the POV of proving that MWI is simple, because if you allow one observer to pick out their observations from a morass of data, then the easiest way of generating data that contains any given substring is a PRNG. You basically end up proving that "everything is random" is the simplest explanation. Private Messaging pointed that out, too.
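The PRNG objection can be checked directly: a very short pseudorandom program will, with overwhelming probability, emit any given short bit pattern somewhere in its output, so an observer allowed to "pick out" their data can find it in noise. This is a minimal sketch with an arbitrarily chosen 8-bit pattern and seed.

```python
import random

def contains_pattern(pattern: str, n_bits: int = 100_000, seed: int = 0) -> bool:
    """Generate n_bits pseudorandom bits and check whether pattern occurs."""
    rng = random.Random(seed)
    stream = "".join(rng.choice("01") for _ in range(n_bits))
    return pattern in stream

# A fixed 8-bit pattern is expected to occur about 100_000/256 ≈ 390 times,
# so its absence from the stream would be astronomically unlikely.
found = contains_pattern("01101001")
```

This is why S.I is framed as predicting the *next* observation in order, not as matching observations located anywhere in the output: the ordering requirement is what blocks the cheap "everything is random" win.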
How do you do that with S.I?
No. I run the TM with my experimental conditions as the starting state, and I keep deleting unobserved results, renormalising, and re-running. That's how physics is done anyway -- what I have called Shut Up and Calculate.
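The "delete unobserved results and renormalise" step can be sketched for the simplest case, a two-outcome measurement on an unequal superposition. This is an assumed minimal example, not the author's procedure: keep only the observed amplitude, then rescale to unit norm.

```python
from math import sqrt

def collapse(state, observed_index):
    """Delete the unobserved amplitudes, then renormalise to unit norm."""
    projected = [amp if i == observed_index else 0.0
                 for i, amp in enumerate(state)]
    norm = sqrt(sum(a * a for a in projected))
    return [a / norm for a in projected]

# Unequal two-branch superposition with probabilities 0.2 and 0.8.
psi = [sqrt(0.2), sqrt(0.8)]
psi_after = collapse(psi, observed_index=1)  # suppose outcome 1 was observed
# The unobserved branch is gone and the state is again normalised.
```

After the collapse the state is used as the starting point for the next run, which is the "re-running" in the description above.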
If you perform the same operations with S.I set up to emulate MW, you'll get the same results. That's just a way of restating the truism that all interpretations agree on results. But you need a difference in algorithmic complexity as well.
You seem to be saying that MWI is a simpler ontological picture now. I dispute that, but it's beside the point, because what we are discussing is using S.I to quantify O's R via algorithmic complexity.
I didn't say MW can't make predictions at all. I am saying that, operationally, prediction-making is the same under all interpretations, and that neglect of unobserved outcomes always has to occur.
The point about predicting my observations is that they are the only ones I can test. It's operational, not metaphysical.