To shamelessly steal from Yudkowsky:

If you get one law, like Special Relativity, you can look at the language it's written in, and infer what the next law ought to look like. If the laws are not being generated from the same language, they surely have something in common; and this I refer to as the Tao.
I am very interested in the inhuman nature of underlying reality. This interest led me to physics, which is humanity's best attempt at making predictions about reality that hold up to observation. My (lukewarm?) take as a doctoral student in physics is that the triumph of physics is quantifying the degree to which physical models fail, not necessarily creating models that are true to whatever rules actually run reality (the Tao, if you will). For example, viewing Newtonian physics from this perspective shows that a force is not a thing that causes acceleration, but rather that a force is an error term in the model which says that physical objects move in straight lines at the same speed forever. You should definitely keep predicting your surroundings by assuming that pushing on things makes them move faster, but the best fundamental matter theories we have do not have forces in them. (This should sound familiar.)

Quantum mechanics is the framework that physicists use to explain why matter acts the way it acts. The orbital model you learn in high school chemistry is a product of quantum mechanics. The mass of neutrons and protons which makes up most of the weight of the things we interact with is mostly a product of quantum mechanics. The frequencies of light which emerge from various chemical and nuclear processes are a product of quantum mechanics. As a consequence, the best explanations for the light we see when we look away from the ground and into the universe are from quantum mechanics. The ideal gas laws are commonly derived in statistical mechanics using an assumption of quantized momentum states which is understood to be justified by quantum mechanics.
The universe seems like it is quantum, but what does that actually mean? It does not mean that quantum models perfectly reflect reality, any more than the great success of Newtonian mechanics at explaining the macroscopic world means that forces are fundamental components of reality. Various quantum mechanical models have limitations just like every other physical model. The Schrödinger equation predicts measurements which affect other measurements at distances farther away than light can propagate in a given amount of time, a violation of relativistic causality that has never been observed and would break models which have historically worked very well. Quantum field theory can fix this particular issue, but extracting observable values requires esoteric corrections which indicate that our theories may fall apart at higher energies. Then there is the mystery of dark matter, which interacts with the rest of the universe gravitationally like the matter we have explained with quantum mechanics, but has not yet been observed to interact in any other way. The theories we have created which fit in the quantum framework are the best matter theories we have, but they seem incomplete. My hope is that picking out core principles in all of these successful quantum theories will help design a new theory more in line with the Tao, whether or not it uses the framework of quantum mechanics. A sign of success would be to elegantly explain things like dark matter, the expansion of the universe, and how quantum matter interacts gravitationally. I will list some core implications of quantum mechanics here which must either be recreated in a better replacement theory or must be shown to reemerge by some mechanism in the low-energy systems we can currently predict well with quantum mechanics.
Quantum mechanics in a nutshell
The core insight of quantum mechanics is that the state of a set of particles (which may just be one particle) cannot be defined by listing all of its measurable properties and specifying one value for each property. This was unexpected. For macroscopic objects, we are accustomed to being able to know everything there is to know about an object to the precision of our measuring instruments. But once you get to small enough scales, sometimes you go to measure an attribute of a set of particles and you consistently get one of multiple discrete results, even if you very carefully set up the set of particles the same way every time you measure that attribute. That said, once you measure some attribute, the set of particles will keep returning the same value for that attribute if you measure it again, but only if you don't measure some other attribute with an uncertain value first. In an act of desperate nihilism, an observer could just list out the possible values of one attribute, list out the possible values of some second attribute, and then write down the probabilities of measuring each of the first attribute's values given a system which has been measured to have one of the second attribute's values. One might hope to live in a consistent universe where those probabilities are the same every time you try this, and it so happens that we do live in such a consistent universe.
States which correspond to a single value of one property may correspond to multiple values of another property
I want to expand on this particular bit of quantum theory from the previous paragraph. I facetiously presented the core insight of quantum mechanics as nihilism, and a cynic could object that the theory will work in any universe where you only ever get one value for a given measurement and the same rules are followed any time you make a measurement. If you just list the square roots of the probabilities of every possible measurement outcome given some prepared state, then of course if you pick out the number in your list corresponding to an outcome and square it, you get the probability back. The art of quantum mechanics is designing elegant theories (read roughly as "theories which can be reduced to one equation") which explain all of those probabilities, rather than just presenting a list of values. In particular, quantum mechanical theories define things you can measure as linear operators whose eigenvalues are the values you can measure for the thing.
The more mathematically inclined might have noticed that the procedure I described in the last section, listing all of the probabilities for one state given some second state, sure sounded a lot like listing the elements of a matrix describing some linear transformation. The mechanics of quantum mechanics are just linear algebra. Yes, we don't always get the same value in some states, but we only ever get one value per measurement when we measure some attribute, so we use the states a system can be measured to be in as a basis for a vector space in which vectors are states that the system could be in. We get a complete and orthogonal basis, which is a fancy way to say that we never measure a state that isn't made up of basis vectors and none of our basis vectors contain any of the other basis vectors. To make predictions of what we will measure given a vector corresponding to a state which is a combination of basis vectors, we project the given state onto the basis element which corresponds to a possible measured value and square the magnitude of the projection (which is just a number) to get a weight that tells us the probability of measuring the value. That's probably hard to follow, so I wrote up a toy example of this with numbers if you want details. The place where things get interesting is that you can use some second attribute to build a basis instead, and either of these bases should be equally good. After you measure this second attribute, it is possible to end up in a basis state of the second basis which is a linear combination of the first attribute's basis elements. When you go to measure the first attribute on this new state defined by a single value of the second attribute, the state you have is a sum of states which each correspond to measuring a single value for the first attribute, so you have some probability of measuring any of those values for the first attribute. Again, explaining math with words is hard, and I wrote up a toy example of this as well. This is known as a superposition of states, and I think the concept is poorly understood in the popular culture it permeates. Superposition is an extremely well-vetted consequence of quantum mechanics, and something like it must be in any replacement theory.
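To make that concrete, here is a minimal numerical sketch (my own toy example, separate from the ones linked above) of a two-state system: one attribute's basis is {|0>, |1>}, and a second attribute's basis is {|+>, |−>}, each element of which is a superposition of the first two.

```python
import numpy as np

# Basis for attribute 1 (e.g., spin along z)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Basis for attribute 2 (e.g., spin along x): each is a superposition of the first basis
ket_plus  = (ket0 + ket1) / np.sqrt(2)
ket_minus = (ket0 - ket1) / np.sqrt(2)

def prob(outcome, state):
    """Probability of measuring `outcome` given `state`: |<outcome|state>|^2."""
    amp = np.vdot(outcome, state)          # inner product <outcome|state>
    return float(np.abs(amp) ** 2)

# Prepared with a definite value of attribute 2 (state |+>), attribute 1 is 50/50:
print(prob(ket0, ket_plus), prob(ket1, ket_plus))            # 0.5 0.5

# But measuring attribute 2 again on |+> is certain:
print(prob(ket_plus, ket_plus), prob(ket_minus, ket_plus))   # 1.0 0.0
```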
Measurement is interaction, and the order of measurement matters
In my experience, quantum mechanics is taught as if measurement is a fundamental component of the universe, but it is not. Physics assumes an observer and a frame of reference for its explanations and predictions because physics is performed by observers in a frame of reference, but the universe simply is what it is everywhere all at once. An observer is physics just as much as a thing being observed is physics. Measurement is better understood as the joining of the state of an observed system to the state of an observer via some interaction. I will keep saying measurement because it is a useful frame for interpreting things quantum mechanically, but I think that the thing that actually matters is interactions. As an example of measurement, consider yourself seeing a cat. The electrons in the cat have transferred momentum absorbed from ambient light to the electrons in your eye via an electromagnetic interaction (which temporarily pushes molecules in your eye into a new configuration that starts a chain reaction leading to a signal your brain combines with other signals to build a model of a "colored" "surface" of an "object" some "distance" away from "you"), and this interaction has limited you to future universes in which the state of the cat is compatible with the momentum it transferred to you. When I say that I have made a measurement, I just mean that I have interacted with a system in such a way as to restrict my future states to states in which the observed system was, at the time of measurement, in the state which I measured.
In this paradigm, the statement that order of measurement matters is just the statement that interactions change states. Of course they do! If an interaction changed nothing, then it wouldn't be an interaction! Please note that this is not me saying that you must physically change a system's state in order to measure it, and perhaps we can find ways to measure things which interfere with them less. This is not, for example, me saying that in order to measure an object's location, you have to hit it with light to see it, and that puts a small amount of momentum into the system which can change its position (although I think some objection of this sort will be true for any measurement technique in practice). This is me saying that the actual equations which define our theory of what things are doing, regardless of whether they are measured, say that you will get different answers if you ask questions of them in different orders, regardless of whether you actually measure the world the theory predicts. The issue is that our physics is defined in terms of counterfactual things we would measure if we could, regardless of whether we actually do, as I described above. This worked very well for classical physics, and it was shocking to the creators of quantum mechanics that the order in which you ask for physical values from the universe would change the values that you get. However, this property of quantum theories has led to extremely successful counterintuitive predictions, so we would like to preserve it in future theories, or at least explain why it worked in observer-focused theories at low energy.
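As a minimal sketch of order mattering (my own illustration, assuming ideal projective measurements on the same two-state system as above): starting from a state with a definite value of one attribute, the probability of getting a particular pair of outcomes depends on which attribute you ask about first.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                     # definite value of attribute Z
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # definite value of attribute X

def project(outcome, state):
    """Probability of `outcome` and the post-measurement state (ideal projective measurement)."""
    p = float(np.abs(np.vdot(outcome, state)) ** 2)
    return p, outcome

# Start in |0>. Order A: ask Z first (get +1), then X (get +1).
pz, state = project(ket0, ket0)
px, _ = project(ket_plus, state)
print(pz * px)   # 0.5

# Order B: ask X first (get +1), then Z (get +1).
px, state = project(ket_plus, ket0)
pz, _ = project(ket0, state)
print(px * pz)   # 0.25 -- a different answer for the same pair of outcomes
```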
The probability of measuring state A given state B is the same as B given A
Quantum mechanics is linear algebra. The probability of measuring some state vector which we represent by the ket |ϕ> if we start out with the state |ψ> may be found by projecting the state |ϕ> onto the state |ψ>, which can be calculated via the inner product we represent by <ϕ|ψ> in bra-ket notation. In the more usual vector space of arrows in 3-dimensional space, we can project one vector onto an orthonormal basis element by taking the dot product of the vector with the basis vector, and this is a generalization of that to our more abstract vector space of states. However, the inner product is not the probability we're looking for. For reasons that are unclear, the way that we turn the possibly-complex number <ψ|ϕ> into a probability which we can verify (which is among other things a real number between 0 and 1) is that we normalize each of the states to magnitude 1 in our abstract vector space and multiply the projection of one onto the other by its complex conjugate. However, the complex conjugate of <ψ|ϕ> is <ϕ|ψ> because of the mathematical structure of the inner product we use. As an equation, the probability of measuring |ψ> given |ϕ> looks like the fraction <ψ|ϕ><ϕ|ψ> / (<ψ|ψ><ϕ|ϕ>). Complex numbers commute, so this is exactly the same number as <ϕ|ψ><ψ|ϕ> / (<ϕ|ϕ><ψ|ψ>), which is the probability of measuring |ϕ> given |ψ>. To put this another way, the probability of measuring a state is related to how much the vector representing that state is "pointing in the same direction" as the vector representing your start state in our abstract vector space. It so happens that the start state points that same amount in the direction of the measuring state, just like the dot product of two vectors in Euclidean space is the same no matter what order you multiply the vectors.
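Here is a quick numerical check of that symmetry (my own sketch; the vectors are arbitrary and deliberately left unnormalized so the denominator matters):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # |phi>, not normalized
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # |psi>, not normalized

def prob(a, b):
    """P(measuring a | prepared in b) = <a|b><b|a> / (<a|a><b|b>)."""
    overlap = np.vdot(a, b)                               # <a|b>
    return float((overlap * np.conj(overlap)).real /
                 (np.vdot(a, a).real * np.vdot(b, b).real))

print(prob(psi, phi))   # P(psi | phi)
print(prob(phi, psi))   # P(phi | psi) -- the same number
```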
This is nice and symmetric, but not what you would expect from an arbitrary probabilistic law. In general, P(B|A) is not equal to P(A|B). The fact that quantum mechanics predicts this symmetry seems to me a deep statement about the universe, but I don't know what to do with it. I'm pretty sure that CPT symmetry can be viewed as a downstream consequence of this, and it is one of the most tested physical symmetries the universe has been observed to possess.
The resolution of the universe is ℏ
One of the hallmarks of quantum mechanics is the discretization of states. Classically, you can add however much energy you want into most systems. If it's oscillating or spinning, you can add a little bit more energy to make the frequency slightly higher, and the frequency can take any real value. If you have a bouncing ball, you can add any amount of energy to the system and the ball will just change its maximum height and speed to whatever real values are necessary to contain that energy. In bound quantum systems, only certain energy levels are allowed. A typical consequence of this is that if you add energy to a particle system via light (or electromagnetic interactions in general), the system can only absorb photons of specific energies and relaxes to a lower energy state by releasing photons which have the same energy values every time. This is true across a wide variety of microscopic systems. In typical quantum models, the energies tend to be discrete solutions to some differential equation which derives its energy scale from the presence of the reduced Planck constant ℏ (read as "H-bar" out loud). This constant has units of action, which happen to have the same dimensions as angular momentum, and ℏ is also the amount of angular momentum between the two allowed spin states of an electron. Or of a typical neutron or proton for that matter.
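As one standard textbook illustration of such discrete levels (quoted here rather than derived), the bound electron in a hydrogen atom can only sit at energies

```latex
E_n \approx -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots
```

so, for example, every 2 → 1 transition releases a photon of the same energy, E_2 − E_1 ≈ 13.6 eV × (1 − 1/4) ≈ 10.2 eV, which is why hydrogen's spectral lines are sharp and identical everywhere we look.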
What's up with ℏ? To give some semblance of a rigorous answer, I will introduce the idea of a unitary operator. We expect certain things to be true about a system no matter where or when or from what direction we measure it. We can apply this expectation to quantum mechanical systems by creating operators which represent moving the state to a different location, evolving the state forward or backward in time by some amount, or rotating it to a different orientation, and then making sure that the probabilities we get under whatever theory we build stay the same. These operations tend to have the following properties:
The operation depends on some real parameter which may have units
If you apply the operation using a parameter x and then apply the same operation with the parameter −x, then you end up back where you started
If you want to measure the overlap of two states, you will have the same overlap if you apply the same operation to both states. (Less abstractly, if you move an object three centimeters to the left, but you also move your meter stick three centimeters to the left, then you will measure the object to be at the same point relative to the meter stick as it was before.)
The type of linear operator which has these properties is called a unitary operator, and it has the form exp(−it^H), where t is a real number and ^H is an operator which has real eigenvalues. Operators with real eigenvalues are known as hermitian operators, and they tend to represent things you can measure. There is a subtle issue with this formulation of unitary operators, which is that many parameters which define these symmetry operators have units, and you can't exponentiate something with units and get something with consistent units. To see this, you can replace the exponential with its Taylor expansion, exp(−it^H) = 1 − it^H − (1/2!)t²^H² + (i/3!)t³^H³ + ..., and each of those terms has a different power of t with different units if t has any units. We solve this in quantum mechanics by making our actual general unitary operator ^U(x) = exp(−ix^H/ℏ), where the hermitian operator ^H has whatever units it needs to have so that x^H/ℏ has no units. This is how you can define the momentum operator ^p, which comes from the translation operator exp(−id^p/ℏ), which moves a state a distance d over from where it started. This is also how you can define energy, which comes from the hamiltonian operator ^H in the time evolution operator exp(−iΔt^H/ℏ), which moves a state a time interval Δt forward in time. By pattern matching, you can define the angular momentum operator along one axis, ^J, from the rotation operator exp(−iθ^J/ℏ), which rotates a state by angle θ about that axis.
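Here is a small numerical sketch of the translation operator at work (my own toy example on a discretized line, with ℏ set to 1 and the momentum operator applied in the Fourier basis): applying exp(−id^p/ℏ) shifts a wave packet by d.

```python
import numpy as np

hbar = 1.0
N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = hbar * 2 * np.pi * np.fft.fftfreq(N, d=dx)   # momentum values on the grid

psi = np.exp(-x**2)            # a wave packet centered at x = 0
d = 3.0                        # how far to translate it

# U(d) = exp(-i d p_hat / hbar) is diagonal in the momentum basis,
# so apply it by Fourier transforming, multiplying, and transforming back.
psi_shifted = np.fft.ifft(np.exp(-1j * d * p / hbar) * np.fft.fft(psi))

# The packet is now centered near x = d.
print(x[np.argmax(np.abs(psi_shifted))])   # ~3.0
```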
The major conceptual product of defining all of these operators is a set of commutation relationships for the operators which represent things you can measure. I am not going to reproduce the first chapter of the standard graduate-level quantum mechanics text here, but you can use the argument for the translation operator above to derive the so-called Heisenberg uncertainty principle. The introduction of ℏ into the translation operator puts a physical limit on how finely we can say that a particle is anywhere. The physical limit is not merely conceptual; it is an actual number with units that mean something, and physicists took measurements to make sure those limitations actually hold. I argue that this is effectively a resolution limit on location in the universe. You can define a region of space on a meter stick where a particle can be along one dimension, but if you make the region too small, physics forces you to give the particle some probability of having so much momentum that it can't really be said to be in the region at all. To what extent do all of the spaces between the lines of your meter stick matter if the best you can do is say that a particle is between two of the lines? Hopefully you find this to be a compelling argument, although it is a weaker argument to me than the corresponding argument about angular momentum. The set of rotation operators you get by defining exp(−iθ_i ^J_i/ℏ) about the three spatial axes defines a group which is very well studied in mathematics. If we take the units of ^J_i seriously, we define an angular momentum operator which will only allow values which differ from each other by some integer multiple of ℏ. If a particle can have angular momentum ℏ about some axis, the next lowest angular momentum it can have is 0. It will never have angular momentum ℏ/3 or something. Particles come with possible sets of angular momenta like {−ℏ, 0, ℏ} or {−(3/2)ℏ, −(1/2)ℏ, (1/2)ℏ, (3/2)ℏ}, and that's it. Once we get down to the scale of atoms, we have yet to observe anything else. This quantization is arguably among the most rigorously tested results in particle physics. Angular momentum along any axis changes only in steps of ℏ, if you measure precisely enough. The Pauli exclusion principle for fermions (which is the basis of chemistry) and Bose–Einstein statistics (which allow the particle-exchange theory of interaction at the core of the standard model) are downstream effects.
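A small numerical sketch of the angular momentum claim (my own example, keeping ℏ explicit): the standard spin-1/2 and spin-1 matrices have eigenvalues spaced by exactly ℏ, and the spin-1/2 operators obey the commutation relation [^J_x, ^J_y] = iℏ^J_z that the rotation group forces on them.

```python
import numpy as np

hbar = 1.0

# Spin-1/2: J_i = (hbar / 2) * Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Jx, Jy, Jz = (hbar / 2) * sx, (hbar / 2) * sy, (hbar / 2) * sz

print(np.linalg.eigvalsh(Jz))                           # [-hbar/2, +hbar/2]: spacing hbar
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * hbar * Jz))   # True: [Jx, Jy] = i hbar Jz

# Spin-1: allowed values of J_z are {-hbar, 0, +hbar}, again spaced by hbar
Jz_spin1 = hbar * np.diag([1.0, 0.0, -1.0])
print(np.linalg.eigvalsh(Jz_spin1))                     # [-hbar, 0, +hbar]
```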
Special relativity is a fundamental constraint on the universe
This section is perhaps less certain in my mind than the others. We were very convinced of the principles of Newtonian mechanics until special relativity showed they were a low-energy approximation of a thing which far more elegantly explained discrepancies in electromagnetism and gravity, and also correctly predicted discrepancies in time measurement that proved to be measurable once we put things in space which could go fast enough. Perhaps it's the height of arrogance to assume there is no way special relativity turns out to be a special case of some yet more elegant general theory. However, the ways that special relativity influences quantum mechanics are not only that relativity nudges slightly wrong energy predictions made with non-relativistic theories toward the measured values, but also that the structural constraints relativity places on quantum models have the predictive power to explain some of the weird stuff that shows up when we bang protons together. I have two angles from which to point at this structural constraint:
An interesting result from special relativity is that composing boosts (changes of velocity) along two different directions produces a rotation. We saw a cool result earlier about the types of angular momenta which are allowed on small scales by considering rotations. One might expect that there is a corresponding cool result from considering relativistic frame changes, which are known as Lorentz transformations, and one would be correct. The angular momentum operators generate a subgroup of the Lorentz transformations, which move between frames in a manner that respects special relativity. Those interlocking generators act on a vector space which, properly interpreted in the lowest-dimensional nontrivial representation, predicts spin-ℏ/2 particles with antiparticles. In the next-higher-dimensional representation, you find spin-ℏ particles with no corresponding antiparticles, consistent with the behavior of photons. (A higher representation contains a subgroup which has the properties of the gravitational field in general relativity, but finding a scheme which turns this into testable hypotheses has proven difficult, so I would be cautious about putting much weight on that.) If you want to inject the math behind this paragraph directly into your eyeballs, this YouTube video explains the group theory and makes passing references to some of the physics it represents. There is a more historical way I could have approached the connection between antimatter and special relativity, covered in the Wikipedia article for the Dirac equation, although the article for the positron provides context and interpretation. All that is to say, putting special relativity in a quantum mechanical framework led to the surprising prediction of antimatter, which now has a wealth of experimental evidence and is a load-bearing component of the most successful particle theories we have.
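Here is a small numerical sketch of the boosts-compose-to-a-rotation claim (my own example, in units where c = 1): a pure boost is a symmetric matrix acting on (t, x, y, z), and the product of a boost along x with a boost along y is not symmetric, so it cannot be a pure boost; it is a boost plus a rotation (the Wigner rotation).

```python
import numpy as np

def boost_x(eta):
    """Lorentz boost along x with rapidity eta, acting on (t, x, y, z), with c = 1."""
    c, s = np.cosh(eta), np.sinh(eta)
    return np.array([[c, s, 0, 0],
                     [s, c, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

def boost_y(eta):
    """Lorentz boost along y with rapidity eta."""
    c, s = np.cosh(eta), np.sinh(eta)
    return np.array([[c, 0, s, 0],
                     [0, 1, 0, 0],
                     [s, 0, c, 0],
                     [0, 0, 0, 1]])

combo = boost_x(0.5) @ boost_y(0.5)

# Pure boosts are symmetric matrices; the composition is not,
# so it contains a rotation in addition to a boost.
print(np.allclose(combo, combo.T))   # False
```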
The antimatter prediction alone would not be enough to convince me that special relativity is a fundamental constraint, but the very structure of quantum field theory is inherently constrained by special relativity. The details are tedious, but the object at the center of the theory is a Lagrangian density which must be a Lorentz scalar at every point in spacetime. Basically, we contract fields, which may be vectors or tensors, into scalars in such a way that the equations we write are still valid equations if we move to a frame which is moving at 0.7c with respect to the original or whatever. There is an obvious-to-some-physicists first equation to try, based on taking the relativistic energy relation E² = p² + m² (in units where c = 1) and replacing energy and momentum with derivative operators. This is taken to be the free field equation, and it turns out to have an exact solution which effectively counts the number of particles of a given momentum and mass to give the relativistic energy spectrum you would want if nothing ever interacted with anything else. That energy transforms exactly as you would expect under frame changes, unlike, say, the Schrödinger equation. The art of quantum field theory is generally to add small perturbations to the free theory and chase down the consequences. Quantum field theory in practice is a massive pile of inelegant kludges, but the one thing holding it all together is its insistence on Lorentz invariance, and it has so far been adequate for explaining every experimental result in particle physics we have the technology to measure.
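To spell out the step that paragraph gestures at (a sketch, in units where ℏ = c = 1): making the standard substitutions E → i∂/∂t and p → −i∇ in the relativistic energy relation gives the free (Klein–Gordon) field equation.

```latex
E^2 = \mathbf{p}^2 + m^2
\quad\longrightarrow\quad
\left(\frac{\partial^2}{\partial t^2} - \nabla^2 + m^2\right)\phi(t, \mathbf{x}) = 0,
\qquad\text{i.e.}\qquad
\left(\partial_\mu \partial^\mu + m^2\right)\phi = 0 .
```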