Laws as Rules
We speak casually of the laws of nature determining the distribution of matter and energy, or governing the behavior of physical objects. Implicit in this rhetoric is a metaphysical picture: the laws are rules that constrain the temporal evolution of stuff in the universe. In some important sense, the laws are prior to the distribution of stuff. The physicist Paul Davies expresses this idea with a bit more flair: "[W]e have this image of really existing laws of physics ensconced in a transcendent aerie, lording it over lowly matter." The origins of this conception can be traced back to the beginnings of the scientific revolution, when Descartes and Newton established the discovery of laws as the central aim of physical inquiry. In a scientific culture immersed in theism, it was unproblematic, even natural, to think of physical laws as rules. They are rules laid down by God that drive the development of the universe in accord with His divine plan.
Does this prescriptive conception of law make sense in a secular context? Perhaps if we replace the divine creator of traditional religion with a more naturalist-friendly lawgiver, such as an ur-simulator. But what if there is no intentional agent at the root of it all? Ordinarily, when I think of a physical system as constrained by some rule, it is not the rule itself doing the constraining. The rule is just a piece of language; it is an expression of a constraint that is actually enforced by interaction with some other physical system -- a programmer, say, or a physical barrier, or a police force. In the sort of picture Davies presents, however, it is the rules themselves that enforce the constraint. The laws lord it over lowly matter. So on this view, the fact that all electrons repel one another is explained by the existence of some external entity, not an ordinary physical entity but a law of nature, that somehow forces electrons to repel one another, and this isn't just short-hand for God or the simulator forcing the behavior.
I put it to you that this account of natural law is utterly mysterious and borders on the nonsensical. How exactly are abstract, non-physical objects -- laws of nature, living in their "transcendent aerie" -- supposed to interact with physical stuff? What is the mechanism by which the constraint is applied? Could the laws of nature have been different, so that they forced electrons to attract one another? The view should also be anathema to any self-respecting empiricist, since the laws appear to be idle danglers in the metaphysical theory. What is the difference between a universe where all electrons, as a matter of contingent fact, attract one another, and a universe where they attract one another because they are compelled to do so by the really existing laws of physics? Is there any test that could distinguish between these states of affairs?
Laws as Descriptions
There are those who take the incoherence of the secular prescriptive conception of laws as reason to reject the whole concept of laws of nature as an anachronistic holdover from a benighted theistic age. I don't think the situation is that dire. Discovering laws of nature is a hugely important activity in physics. It turns out that the behavior of large classes of objects can be given a unified compact mathematical description, and this is crucial to our ability to exercise predictive control over our environment. The significant word in the last sentence is "description". A much more congenial alternative to the prescriptive view is available. Instead of thinking of laws as rules that have an existence above and beyond the objects they govern, think of them as particularly concise and powerful descriptions of regular behavior.
On this descriptive conception of laws, the laws do not exist independently in some transcendent realm. They are not prior to the distribution of matter and energy. The laws are just descriptions of salient patterns in that distribution. Of course, if this is correct, then our talk of the laws governing matter must be understood as metaphorical, but this is a small price to pay for a view that actually makes sense. There may be a concern that we are losing some important explanatory ground here. After all, on the prescriptive view the laws of nature explain why all electrons repel one another, whereas on the descriptive view the laws just restate the fact that all electrons repel one another. But consider the following dialogue:
A: Why are these two metal blocks repelling each other?
B: Because they're both negatively charged, which means they have an excess of electrons, and electrons repel one another.
A: But why do electrons repel one another?
B: Because like charges always repel.
A: But why is that?
B: Because if you do the path integral for the electromagnetic field (using Maxwell's Lagrangian) with source terms corresponding to two spatially separated lumps of identical charge density, you will find that the potential energy of the field is greater the smaller the spatial separation between the lumps, and we know the force points in the opposite direction to the gradient of the potential energy.
A: But why are the dynamics of the electromagnetic field derived from Maxwell's Lagrangian rather than some other equation? And why does the path integral method work at all?
B: BECAUSE IT IS THE LAW.
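(As an aside, the force-from-potential step in B's penultimate answer is easy to check numerically. Here is a minimal sketch, assuming a Coulomb-form potential U(r) = 1/r with the charges and Coulomb's constant set to one for illustration:)

```python
# Potential energy of two like charges at separation r (Coulomb form, k = q1 = q2 = 1).
def potential(r):
    return 1.0 / r

# Force along the separation coordinate = -dU/dr, via a central finite difference.
def force(r, h=1e-6):
    return -(potential(r + h) - potential(r - h)) / (2 * h)

# The field's energy grows as the separation shrinks...
assert potential(0.5) > potential(1.0)
# ...so the force on the separation coordinate is positive: the charges repel.
assert force(1.0) > 0
```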
Is the last link in this chain doing any explanatory work at all? Does it give us any further traction on the problem? B might as well have ended that conversation by saying "Well, that's just the way things are." Now, laws of nature do have a privileged role in physical explanation, but that privilege is due to their simplicity and generality, not to some mysterious quasi-causal power they exert over matter. The fact that a certain generalization is a law of nature does not account for the truth and explanatory power of the generalization, any more than the fact that a soldier has won the Medal of Honor accounts for his or her courage in combat. Lawhood is a recognition of the generalization's truth and explanatory power. It is an honorific; it doesn't confer any further explanatory oomph.
The Best System Account of Laws
David Lewis offers us a somewhat worked out version of the descriptive conception of law. Consider the set of all truths about the world expressible in a particular language. We can construct deductive systems out of this set of propositions by picking out some of the propositions as axioms. The logical consequences of these axioms are the theorems of the deductive system. These deductive systems compete with one another along (at least) two dimensions: the simplicity of the axioms, and the strength or information content of the system as a whole. We prefer systems that give us more information about the world, but this greater strength often comes at the cost of simplicity. For instance, a system whose axioms comprised the entire set of truths about the world would be maximally strong, but not simple at all. Conversely, a system whose only axiom is something like "Stuff happens" would be pretty simple, but very uninformative. What we are looking for is the appropriate balance of simplicity and strength [1].
According to Lewis, the laws of nature correspond to the axioms of the deductive system that best balances simplicity and strength. He does not provide a precise algorithm for evaluating this balance, and I don't think his proposal should be read as an attempt at a technically precise decision procedure for lawhood anyway. It is more like a heuristic picture of what we are doing when we look for laws. We are looking for simple generalizations that can be used to deduce a large amount of information about the world. Laws are highly compressed descriptions of broad classes of phenomena. This view evidently differs quite substantially from the Davies picture I presented at the beginning of this post. On Lewis's view, the collection of particular facts about the world determines the laws of nature, since the laws are merely compact descriptions of those facts. On Davies's view, the determination runs the other way. The laws are independent entities that determine the particular facts about the world. Stuff in the world is arranged the way it is because the laws compelled that arrangement.
One last point about Lewis's account. Lewis acknowledges that there is an important language dependence in his view of laws. If we ignore this, we get absurd results. For instance, consider a system whose only axiom is "For all x, x is F" where "F" is defined to be a predicate that applies to all and only events that occur in this world. This axiom is maximally informative, since it rules out all other possible worlds, and it seems exceedingly simple. Yet we wouldn't want to declare it a law of nature. The problem, obviously, is that all the complexity of the axiom is hidden by our choice of language, with this weird specially rigged predicate. To rule out this possibility, Lewis specifies that all candidate deductive systems must employ the vocabulary of fundamental physics.
But we could also regard lawhood as a 2-place function which maps a proposition and vocabulary pair to "True" if the proposition is an axiom of the best system in that vocabulary and "False" otherwise. Lewis has chosen to curry this function by fixing the vocabulary variable. Leaving the function uncurried, however, highlights that we could have different laws for different vocabularies and, consequently, for different levels of description. If I were an economist, I wouldn't be interested (at least not qua economist) in deductive systems that talked about quarks and leptons. I would be interested in deductive systems that talked about prices and demand. The best system for this coarser-grained vocabulary will give us the laws of economics, distinct from the laws of physics.
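The currying move is mechanical enough to write down directly. A minimal sketch, with made-up best systems standing in for the real ones:

```python
# Hypothetical axioms of the best system for each vocabulary (illustrative only).
BEST_SYSTEM = {
    "physics":   {"Like charges repel"},
    "economics": {"Prices rise when demand outstrips supply"},
}

# Lawhood as a 2-place function of (proposition, vocabulary).
def is_law(proposition, vocabulary):
    return proposition in BEST_SYSTEM.get(vocabulary, set())

# Lewis's move: curry the function by fixing the vocabulary variable.
def curry_vocabulary(vocabulary):
    return lambda proposition: is_law(proposition, vocabulary)

physical_law = curry_vocabulary("physics")
assert physical_law("Like charges repel")
assert not physical_law("Prices rise when demand outstrips supply")

# Left uncurried, the same function yields distinct laws at a coarser grain.
assert is_law("Prices rise when demand outstrips supply", "economics")
```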
Lawhood Is in the Map, not in the Territory
There is another significant difference between the descriptive and prescriptive accounts that I have not yet discussed. On the Davies-style conception of laws as rules, lawhood is an element of reality. A law is a distinctive beast, an abstract entity perched in a transcendent aerie. On the descriptive account, by contrast, lawhood is part of our map, not the territory. Note that I am not saying that the laws themselves are a feature of the map and not the territory. Laws are just particularly salient redundancies, ones that permit us to construct useful compressed descriptions of reality. These redundancies are, of course, out there in the territory. However, the fact that certain regularities are especially useful for the organization of knowledge is at least partially dependent on facts about us, since we are the ones doing the organizing in pursuit of our particular practical projects. Nature does not flag these regularities as laws; we do.
This realization has consequences for how we evaluate certain forms of reductionism. I should begin by noting that there is a type of reductionism I tentatively endorse and that I think is untouched by these speculations. I call this mereological reductionism [2]; it is the claim that all the stuff in the universe is entirely built out of the kinds of things described by fundamental physics. The vague statement is intentional, since fundamental physicists aren't yet sure what kinds of things they are describing, but the motivating idea behind the view is to rule out the existence of immaterial souls and the like. However, reductionists typically embrace a stronger form of reductionism that one might label nomic reductionism [3]. The view is that the fundamental laws of physics are the only really existent laws, and that laws in the non-fundamental disciplines are merely convenient shortcuts that we must employ due to our computational limitations.
One appealing argument for this form of reductionism is the apparent superfluity of non-fundamental laws. Macroscopic systems are entirely built out of parts whose behavior is determined by the laws of physics. It follows that the behavior of these systems is also fixed by those fundamental laws. Additional non-fundamental laws are otiose; there is nothing left for them to do. Barry Loewer puts it like this: "Why would God make [non-fundamental laws] the day after he made physics when the world would go on exactly as if they were there without them?" If these laws play no explanatory role, Ockham's razor demands that we strike them from our ontological catalog, leaving only the fundamental laws.
I trust it is apparent that this argument relies on the prescriptive conception of laws. It assumes that real laws of nature do stuff; they push and pull matter and energy around. It is this implicit assumption that raises the overdetermination concern. On this assumption, if the fundamental laws of physics are already lording it over all matter, then there is no room for another locus of authority. However, the argument (and much of the appeal of the associated reductionist viewpoint) fizzles if we regard laws as descriptive. On a Lewisian account, all we have are different best systems, geared towards vocabularies at different resolutions, that highlight different regularities as the basis for a compressed description of a system. There is nothing problematic about having different ways to compress information about a system. In particular, we are not compelled by worries about overdetermination to declare one of these methods of compression more real than another. In response to Loewer's theological question, the proponent of the descriptive conception can say that God does not get to specify the non-fundamental and fundamental laws separately. By creating the pattern of events in space-time, she implicitly fixes them all.
Nomic reductionism would have us believe that the lawhood of the laws of physics is part of the territory, while the lawhood of the laws of psychology is just part of our map. Once we embrace the descriptive conception of laws, however, there is no longer this sharp ontological divide between the fundamental and non-fundamental laws. One reason for privileging the laws of physics is revealed to be the product of a confused metaphysical picture. However, one might think there are still other good reasons for privileging these laws that entail a reductionism more robust than the mereological variety. For instance, even if we accept that laws of physics don't possess a different ontological status, we can still believe that they have a prized position in the explanatory hierarchy. This leads to explanatory reductionism, the view that explanations couched in the vocabulary of fundamental physics are always better because fundamental physics provides us with more accurate models than the non-fundamental sciences. Also, even if one denies that the laws of physics themselves are pushing matter around, one can still believe that all the actual pushing and pulling there is, all the causal action, is described by the laws of physics, and that the non-fundamental laws do not describe genuine causal relations. We could call this kind of view causal reductionism.
Unfortunately for the reductionist, explanatory and causal reductionism don't fare much better than nomic reductionism. Stay tuned for the reasons why!
[1] Lewis actually adds a third desideratum, fit, that allows for the evaluation of systems with probabilistic axioms, but I leave this out for simplicity of exposition. I have tweaked Lewis's presentation in a couple of other ways as well. For his own initial presentation of the view, see Counterfactuals, pp. 72-77. For a more up-to-date presentation, dealing especially with issues involving probabilistic laws, see this paper (PDF).
[2] From the Greek meros, meaning "part".
[3] From the Greek nomos, meaning "law".
There's a widely acknowledged problem involving the Second Law of Thermodynamics. The problem stems from the fact that all known fundamental laws of physics are invariant under time reversal (well, invariant under CPT, to be more accurate) while the Second Law (a non-fundamental law) is not. Now, why is the symmetry at the fundamental level regarded as being in tension with the asymmetry at the non-fundamental level? It is not true that solutions to symmetric equations must generically share those same symmetries. In fact, the opposite is true. It can be proved that generic solutions of systems of partial differential equations have fewer symmetries than the equations. So it's not like we should expect that a generic universe describable by time-reversal symmetric laws will also be time-reversal symmetric at every level of description. So what's the source of the worry then?
I think it comes from a commitment to nomic reductionism. The Second Law is, well, a law. But if you really believe that laws are rules, there is no room for autonomous laws at non-fundamental levels of description. The law-likeness, or "ruliness", of any such law must really stem from the fundamental laws. Otherwise you have overdetermination of physical behavior. Here's a rhetorical question taken from a paper on the problem: "What grounds the lawfulness of entropy increase, if not the underlying dynamical laws, the laws governing the world's fundamental physical ontology?" The question immediately reveals two assumptions associated with thinking of laws as rules: the lawfulness of a non-fundamental law must be "grounded" in something, and this grounding can only conceivably come from the fundamental laws.
So we get a number of attempts to explain the lawfulness of the Second Law by expanding the set of fundamental laws. Examples include Penrose's Weyl curvature hypothesis and Carroll and Chen's spontaneous eternal inflation model. These hypotheses are constructed specifically to account for lawful entropy increase. Now nobody thinks, "The lawfulness of quantum field theory needs grounding. Can I come up with an elaborate hypothesis whose express purpose is accounting for why it is lawful?" (EDIT: Bad example. See this comment) The lawfulness of fundamental laws is not seen as requiring grounding in the same way as non-fundamental laws. If you think of laws as descriptions rather than rules, this starts to look like an unjustified double standard. Why would macroscopic patterns require grounding in a way that microscopic patterns do not?
I can't fully convey my own take on the Second Law issue in a comment, but I can give a gist. The truth of the Second Law depends on the particular manner in which we partition phase space into macrostates. For the same microscopic trajectory through phase space, different partitions will deliver different conclusions about entropy. We could partition phase space so that entropy decreases monotonically (for some finite length of time), increases monotonically, or exhibits no monotonic trend. And this is true for any microscopic trajectory through any phase space. So the existence of some partition according to which the Second Law is true is no surprise. What does require explanation is why this is the natural partition. But which partition is natural is explained by our epistemic and causal capacities. The natural macrostates are the ones which group together microstates which said capacities cannot distinguish and separate microstates which they can. So what needs to be explained is why our capacities are structured so as to carve up phase space in a manner that leads to the Second Law. But this is partly a question about us, and it's the sort of question that invites an answer based on an observation selection effect -- something like "Agency is only possible if the system's capacities are structured so as to carve up its environment in this manner." My view is that the asymmetry of the Second Law is a consequence of an asymmetry in agency -- the temporal direction in which agents can form and read reliable records about a system's state must differ from the temporal direction in which an agent's action can alter a system's state. I could say a lot more here but I won't.
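The partition-dependence claim can be made vivid with a toy model. The sketch below is purely illustrative, not real statistical mechanics: a fixed "microscopic" trajectory through a 16-state phase space, with Boltzmann entropy log|M| computed for the macrostate M occupied at each time, under two different partitions of the same space.

```python
import math

# One fixed microscopic trajectory through a 16-state toy phase space.
trajectory = list(range(16))

def entropy_profile(partition):
    """Boltzmann entropy log|M| of the occupied macrostate at each time step."""
    cell_of = {s: cell for cell in partition for s in cell}
    return [math.log(len(cell_of[s])) for s in trajectory]

# Partition A: later microstates sit in ever larger macrostates.
growing = [(0,), (1, 2), (3, 4, 5, 6), tuple(range(7, 16))]
# Partition B: the same space carved so macrostates shrink over time.
shrinking = [tuple(range(0, 9)), (9, 10, 11, 12), (13, 14), (15,)]

# The very same trajectory yields monotonically rising entropy under one
# partition and monotonically falling entropy under the other.
rising = entropy_profile(growing)
falling = entropy_profile(shrinking)
assert rising == sorted(rising)
assert falling == sorted(falling, reverse=True)
```

Nothing about the microscopic dynamics prefers one profile over the other; what singles out a partition as the natural one is, on the view above, a fact about our epistemic and causal capacities.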
The point is that this sort of explanation is very different from the kind that most physicists are pursuing. I'm not saying it's definitely the right tack to pursue, but it is weird to me that it basically hasn't been pursued at all. And I think the reason for that is that it isn't the kind of grounding that the prescriptive viewpoint leads one to demand. So implicit adherence to this viewpoint has in this case led to a promising line of inquiry being largely ignored.