Decius comments on Natural Laws Are Descriptions, not Rules - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (234)
There's a widely acknowledged problem involving the Second Law of Thermodynamics. The problem stems from the fact that all known fundamental laws of physics are invariant under time reversal (well, invariant under CPT, to be more accurate) while the Second Law (a non-fundamental law) is not. Now, why is the symmetry at the fundamental level regarded as being in tension with the asymmetry at the non-fundamental level? It is not true that solutions to symmetric equations must generically share those same symmetries. In fact, the opposite is true. It can be proved that generic solutions of systems of partial differential equations have fewer symmetries than the equations. So it's not like we should expect that a generic universe describable by time-reversal symmetric laws will also be time-reversal symmetric at every level of description. So what's the source of the worry then?
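The point about symmetric equations with asymmetric solutions can be seen in a one-line example (a sketch of my own, using the harmonic oscillator; it is not from the comment above): the equation x'' = -x is invariant under t → -t, yet an individual solution like x(t) = sin(t) is not.

```python
import math

def solution(t):
    # x(t) = sin(t) solves the time-reversal-symmetric equation x'' = -x
    return math.sin(t)

# Time reversal maps this solution to another valid solution, -sin(t),
# but not to itself: the individual solution breaks the symmetry.
t = 1.0
assert abs(solution(-t) + solution(t)) < 1e-12   # sin(-t) == -sin(t)
assert solution(-t) != solution(t)               # not time-reversal symmetric
```

The symmetry of the law shows up in the solution *set* (time reversal permutes solutions), not in each solution individually.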
I think it comes from a commitment to nomic reductionism. The Second Law is, well, a law. But if you really believe that laws are rules, there is no room for autonomous laws at non-fundamental levels of description. The law-likeness, or "ruliness", of any such law must really stem from the fundamental laws. Otherwise you have overdetermination of physical behavior. Here's a rhetorical question taken from a paper on the problem: "What grounds the lawfulness of entropy increase, if not the underlying dynamical laws, the laws governing the world's fundamental physical ontology?" The question immediately reveals two assumptions associated with thinking of laws as rules: the lawfulness of a non-fundamental law must be "grounded" in something, and this grounding can only conceivably come from the fundamental laws.
So we get a number of attempts to explain the lawfulness of the Second Law by expanding the set of fundamental laws. Examples include Penrose's Weyl curvature hypothesis and Carroll and Chen's spontaneous eternal inflation model. These hypotheses are constructed specifically to account for lawful entropy increase. Now nobody thinks, "The lawfulness of quantum field theory needs grounding. Can I come up with an elaborate hypothesis whose express purpose is accounting for why it is lawful?" (EDIT: Bad example. See this comment) The lawfulness of fundamental laws is not seen as requiring grounding in the same way as non-fundamental laws. If you think of laws as descriptions rather than rules, this starts to look like an unjustified double standard. Why would macroscopic patterns require grounding in a way that microscopic patterns do not?
I can't fully convey my own take on the Second Law issue in a comment, but I can give a gist. The truth of the Second Law depends on the particular manner in which we partition phase space into macrostates. For the same microscopic trajectory through phase space, different partitions will deliver different conclusions about entropy. We could partition phase space so that entropy decreases monotonically (for some finite length of time), increases monotonically, or exhibits no monotonic trend. And this is true for any microscopic trajectory through any phase space. So the existence of some partition according to which the Second Law is true is no surprise. What does require explanation is why this is the natural partition. But which partition is natural is explained by our epistemic and causal capacities. The natural macrostates are the ones which group together microstates which said capacities cannot distinguish and separate microstates which they can. So what needs to be explained is why our capacities are structured so as to carve up phase space in a manner that leads to the Second Law. But this is partly a question about us, and it's the sort of question that invites an answer based on an observation selection effect -- something like "Agency is only possible if the system's capacities are structured so as to carve up its environment in this manner." My view is that the asymmetry of the Second Law is a consequence of an asymmetry in agency -- the temporal direction in which agents can form and read reliable records about a system's state must differ from the temporal direction in which an agent's action can alter a system's state. I could say a lot more here but I won't.
The point is that this sort of explanation is very different from the kind that most physicists are pursuing. I'm not saying it's definitely the right tack to pursue, but it is weird to me that it basically hasn't been pursued at all. And I think the reason for that is that it isn't the kind of grounding that the prescriptive viewpoint leads one to demand. So implicit adherence to this viewpoint has in this case led to a promising line of inquiry being largely ignored.
The Second Law includes the definition of the partitions to which it applies: it specifically allows 'local' reductions in entropy, but for any partition which exhibits a local decrease in entropy, the complementary partition exhibits a greater total increase in entropy.
If you construct your partition creatively, consider the complementary partition which you are also constructing?
Maybe you're thinking of partitions of actual space? He's talking about partitions of phase space.
I think we're using the word "partition" in two different senses. When I talk about a partition of phase space, I'm referring to this notion. I'm not sure exactly what you're referring to.
How can that be implemented to apply to Newtonian space?
The partition isn't over Newtonian space, it's over phase space, a space where every point represents an entire dynamical state of the system. If there are N particles in the system, and the particles have no internal degrees of freedom, phase space will have 6N dimensions, 3N for position and 3N for momentum. A partition over phase space is a division of the space into mutually exclusive sub-regions that collectively exhaust the space. Each of these sub-regions is associated with a macrostate of the system. Basically you're grouping together all the microscopic dynamical configurations that are macroscopically indistinguishable.
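A minimal sketch of these definitions (my own illustration; the function name `macrostate_of` and the binning scheme are invented for the example): a phase-space point for N particles is a 6N-dimensional vector, and a coarse-graining function that maps each microstate to a label induces a partition into mutually exclusive, jointly exhaustive macrostates.

```python
import numpy as np

N = 3                                     # particles, no internal degrees of freedom
rng = np.random.default_rng(0)
microstate = rng.standard_normal(6 * N)   # 3N positions + 3N momenta

def macrostate_of(x, bin_width=1.0):
    """Assign a microstate to a macrostate by binning each coordinate.
    Two microstates share a macrostate iff they fall in the same cell,
    so the set of all labels partitions phase space."""
    return tuple(np.floor(x / bin_width).astype(int))

label = macrostate_of(microstate)
assert len(label) == 6 * N                # one bin index per phase-space coordinate
```

Here "macroscopically indistinguishable" is modeled crudely as "same bin in every coordinate"; real macrostates group microstates by shared macroproperties like temperature or volume.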
Now, describe a state in which the entropy of an isolated system will decrease over some time period. Calculate entropy at the same level of abstraction as you are describing the system: if you describe temperature as temperature, use temperature. If you describe energy states of electrons and velocities of particles, use those instead of temperature to calculate entropy.
When I checked post-Newtonian physics last, I didn't see the laws of thermodynamics included. Clearly some of the conservation rules don't apply in the absence of others which have been provably violated; momentum isn't conserved when mass isn't conserved, for example.
The entropy of a closed system in equilibrium is given by the logarithm of the volume of the region of phase space corresponding to the system's macrostate. So if we partition phase space differently, so that the macrostates are different, judgments about the entropy of particular microstates will change. Now, according to our ordinary partitioning of phase space, the macrostate associated with an isolated system's initial microstate will not have a larger volume than the macrostate associated with its final microstate. However, this is due to the partition, not just the system's actual microscopic trajectory. With a different partition, the same microscopic trajectory will start in a macrostate of higher entropy and evolve to a macrostate of lower entropy.
Of course, this latter partition will not correspond nicely with any of the macroproperties (such as, say, system volume) that we work with. This is what I meant when I called it unnatural. But its unnaturalness has to do with the way we are constructed. Nature doesn't come pre-equipped with a list of the right macroproperties.
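The partition-dependence described above can be made concrete in a toy model (my own sketch; the ten-cell phase space and both partitions are invented for illustration): the same microscopic trajectory gets monotonically increasing Boltzmann entropy S = log|macrostate| under one partition and monotonically decreasing entropy under another.

```python
import math

trajectory = [0, 1, 2, 3]                       # microstates visited in time order

# Partition A: macrostate volumes grow along the trajectory.
partition_a = [{0}, {1, 4}, {2, 5, 6}, {3, 7, 8, 9}]
# Partition B: the same ten cells regrouped so volumes shrink instead.
partition_b = [{0, 4, 5, 6}, {1, 7, 8}, {2, 9}, {3}]

def entropy(microstate, partition):
    """Boltzmann entropy: log of the volume (here, cell count) of the
    macrostate containing the given microstate."""
    macro = next(m for m in partition if microstate in m)
    return math.log(len(macro))

s_a = [entropy(x, partition_a) for x in trajectory]
s_b = [entropy(x, partition_b) for x in trajectory]
assert s_a == sorted(s_a)                       # entropy increases under A
assert s_b == sorted(s_b, reverse=True)         # entropy decreases under B
```

Nothing about the trajectory itself changed between the two calculations; only the grouping of microstates into macrostates did.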
Here's an example: Put a drop of ink in a glass of water. The ink will gradually spread out through the water. This is a process in which entropy increases. There are many different ways the ink could initially be dropped into the water (on the right or left side of the cup, for instance), and we can distinguish between these different ways just by looking. As the ink spreads out, we are no longer able to distinguish between different spread out configurations. Even though we know that dropping the ink on the right side must lead to a microscopic spread out configuration different from the one we would obtain by dropping the ink on the left side, these configurations are not macroscopically distinguishable once the ink has spread out enough. They both just look like ink uniformly spread throughout the water. This is characteristic of entropy increase: macroscopically available distinctions get suppressed. We lose macroscopic information about the system.
Now think of some kind of alien with a weird sensory apparatus. Its senses do not allow it to distinguish between different ways of initially dropping the ink into the water. The percepts associated with an ink drop on the right side of the cup and a drop on the left side of the cup are sufficiently similar that it cannot tell the difference. However, it is able to distinguish between different spread out configurations. To this alien the ink mixing in water would be an entropy decreasing process because its natural macrostates are different from ours. Now obviously the alien's sensory and cognitive apparatus would be hugely different from our own, and there might be all kinds of biological reasons we would not expect such an alien to exist, but the point is that there is nothing in the fundamental laws of physics ruling out its existence.
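The ink example can be simulated crudely (a hedged sketch of my own; the random-walk dynamics and bin counts are stand-ins for real diffusion): particles random-walk on a line, positions are coarse-grained into spatial bins, and the Shannon entropy of the bin occupancy, a proxy for our macrostate entropy, rises as the "ink" spreads out from its initial drop.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_bins, width = 2000, 10, 100.0
positions = np.full(n_particles, width / 2)   # all ink starts at one spot

def occupancy_entropy(pos):
    """Shannon entropy of the coarse-grained (binned) particle distribution."""
    counts, _ = np.histogram(pos, bins=n_bins, range=(0.0, width))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

s_initial = occupancy_entropy(positions)      # 0: all particles in one bin
for _ in range(500):                          # diffusion steps
    positions += rng.standard_normal(n_particles)
    positions = np.clip(positions, 0.0, width)
s_final = occupancy_entropy(positions)
assert s_final > s_initial                    # coarse-grained entropy increased
```

An "alien" coarse-graining, in this toy setting, would just be a different choice of bins: one under which the initial configuration occupies many cells and the spread-out one occupies few.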
It seems likely to me that the laws of motion governing the time evolution of microstates have something to do with determining the "right" macroproperties -- that is, the ones that lead to reproducible states and processes on the macro scale. (Something to do with coarse-graining, maybe?) Then natural selection filters for organisms that take advantage of these macro regularities.
No, you can't redefine the phase space volumes so that more than one macrostate exists within a given partition, and you can't use a different scale to determine macrostate than you do for entropy.
Of course, to discuss a system not in equilibrium, you need to use formulas that apply to systems that aren't in equilibrium. The only time your system is in equilibrium is at the end, after the ink has either completely diffused or settled to the top or bottom.
And the second law of thermodynamics applies to isolated systems, not closed systems. Isolated systems are a subset of closed systems.
We still seem to be talking past each other. Neither of these is an accurate description of what I'm doing. In fact, I'm not even sure what you mean here. I still suspect you haven't understood what I mean when I talk about a partition of phase space. Maybe you could clarify how you're interpreting the concept?
Yes, I recognize this. None of what I said about my example relies on the process being quasistatic. Of course, if the system isn't in equilibrium, its entropy isn't directly measurable as the volume of the corresponding macroregion, but it is the Shannon entropy of a probability distribution that only has support within the macroregion (i.e., it vanishes outside the macroregion). The difference from equilibrium is that the distribution won't be uniform within the relevant macroregion. It is still the case, though, that a distribution spread out over a much larger macroregion will in general have a higher entropy than one spread out over a smaller volume, so using volume in phase space as a proxy for entropy still works.
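A quick numerical check of that claim (my own sketch; the particular probability values are arbitrary): a non-uniform distribution supported on a larger macroregion generally has higher Shannon entropy than one confined to a smaller region, and the uniform distribution over W cells attains the maximum, log W.

```python
import math

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

narrow = [0.7, 0.3]                        # support on 2 cells of the macroregion
wide = [0.4, 0.25, 0.15, 0.1, 0.1]         # support on 5 cells, still non-uniform
assert abs(sum(narrow) - 1) < 1e-12 and abs(sum(wide) - 1) < 1e-12
assert shannon_entropy(wide) > shannon_entropy(narrow)

# Uniform over W cells gives the maximum, log W -- the equilibrium case,
# where Shannon entropy reduces to the log-volume formula.
assert abs(shannon_entropy([1 / 4] * 4) - math.log(4)) < 1e-12
```

This is why phase-space volume remains a serviceable proxy for entropy even away from equilibrium.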
Fair enough. My use of the word "closed" was sloppy. Don't see how this affects the point though.
Now you've put yourself in a position which is inconsistent with your previous claim that diffuse ink can be defined to have a lower entropy than a mixture of concentrated ink and pure water. One response is that they have virtually identical entropy. That's also the correct answer, since the isolated system of the container of water reaches a maximum entropy when temperature is equalized and the ink fully diffuse. The ink does not spontaneously concentrate back into a drop, despite the very small drop in entropy.