EDIT: I somehow missed that John Wentworth and David Lorell have written one post on this same topic here. This sequence will continue!

Introduction to a sequence on the statistical thermodynamics of some things and maybe eventually everything. This will make more sense if you have a basic grasp on quantum mechanics, but if you're willing to accept "energy comes in discrete units" as a premise then you should be mostly fine.
The title of this post has a double meaning:
Forget as much as possible, then find a way to forget some more.
Particle(s) in a Box
All of practical thermodynamics (chemistry, engines, etc.) relies on the same procedure, although you will rarely see it written like this:
For example, consider a particle in a box.
What does it mean to "forget everything"? One way is forgetting where the particle is, so our knowledge of the particle's position could be represented by a uniform distribution over the interior of the box.
Now imagine we connect this box to another box:
If we forget everything about the particle now, we should also forget which box it is in!
If we instead have a lot of particles in our first box, we might describe it as a box full of gas. If we connect this to another box and forget where the particles are, we would expect to find half in the first box and half in the second box. This means we can explain why gases expand to fill space without reference to anything except information theory.
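To see this numerically, here is a quick Python sketch (the particle count is an arbitrary choice): forgetting which box a particle is in amounts to assigning it to either box with probability 1/2, and with many particles the split lands very close to half and half.

```python
import random

random.seed(0)
n_particles = 100_000  # arbitrary choice for the sketch

# Forgetting which box a particle is in = each box is equally likely.
in_box_1 = sum(random.random() < 0.5 for _ in range(n_particles))
print(in_box_1 / n_particles)  # ~0.5: the gas "expands" to fill both boxes
```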
A new question might be: how much have we forgotten? Our knowledge of each gas particle's location has gone from the following distribution over Boxes 1 and 2

$$P(\text{Box}) = \begin{cases} 1 & \text{Box 1} \\ 0 & \text{Box 2} \end{cases}$$

to the distribution

$$P(\text{Box}) = \begin{cases} 0.5 & \text{Box 1} \\ 0.5 & \text{Box 2} \end{cases}$$

which is a loss of 1 bit of information per particle. Now let's put that information to work.
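As a sanity check, a couple of lines of Python give the same 1-bit figure by computing the Shannon entropy of the two distributions above:

```python
import math

def entropy_bits(dist):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

before = [1.0, 0.0]  # particle known to be in Box 1
after = [0.5, 0.5]   # particle equally likely to be in either box

# Information forgotten per particle = increase in entropy.
print(entropy_bits(after) - entropy_bits(before))  # 1.0 bit
```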
The Piston
Imagine a box with a movable partition. The partition restricts particles to one side of the box. If the partition moves to the right, then the particles can access a larger portion of the box:
In this case, to forget as much as possible about the particles means to assume they are in the largest possible space, which involves the partition being all the way over to the right. Of course there is the matter of forgetting where the partition is, but we can safely ignore this as long as the number of particles is large enough.
What if we have a small number of particles on the right side of the partition?
When we forget as much as possible, we might expect the partition to move some, but not all, of the way over. Since the region in which the pink particles can live has decreased, we have gained knowledge about their position. By coupling forgetting and learning, anything is possible. The question is, how much knowledge have we gained?
Maths of the Piston
Let the walls of the box be at coordinates $0$ and $1$, and let $x$ be the horizontal coordinate of the piston. The position of each green particle can be expressed as a uniform distribution over $(0,x)$, which has entropy $\log_2(x)$, and likewise each pink particle's position is uniform over $(x,1)$, giving entropy $\log_2(1-x)$.
If we have $n_g$ green particles and $n_p$ pink particles, the total entropy becomes $n_g \log_2(x) + n_p \log_2(1-x)$, which has a maximum at $x = \frac{n_g}{n_g + n_p}$. This means that the volume occupied by each population of particles is proportional to the number of particles in it.
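A short numerical sketch (the particle counts below are arbitrary) confirms that the total entropy peaks at $x = \frac{n_g}{n_g + n_p}$:

```python
import numpy as np

n_g, n_p = 200, 100                      # arbitrary example counts
x = np.linspace(0.001, 0.999, 100_000)   # candidate partition positions

# Total entropy (in bits) of all particle positions, as a function of x.
total_entropy = n_g * np.log2(x) + n_p * np.log2(1 - x)

x_star = x[np.argmax(total_entropy)]
print(x_star, n_g / (n_g + n_p))         # both ~0.667
```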
If we wanted to ditch this information-based way of thinking about things, we could invent some construct which is proportional to $\frac{n_g}{x}$ for the green particles and $\frac{n_p}{1-x}$ for the pink particles, and demand that the two be equal. Since the region with the higher value of this construct presses harder on the partition, and pushes it away, we might call this construct "pressure".
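Continuing the sketch above, at the entropy-maximizing $x$ the two values of this construct do come out equal:

```python
n_g, n_p = 200, 100          # same arbitrary counts as before
x_star = n_g / (n_g + n_p)

# The "pressure" of each population at the entropy-maximizing partition position.
print(n_g / x_star)          # 300.0
print(n_p / (1 - x_star))    # 300.0, equal as demanded
```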
If we start with $x = 1/3$ and $n_g = 2 n_p$, we will end up with $x = 2/3$. We will have "forgotten" $n_g$ bits of information and learned $n_p$ bits of information. In total this is a net loss of $n_p$ bits of information, which are lost to the void.
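Tallying the entropy changes directly (with $n_p = 100$ as an arbitrary example) reproduces these numbers:

```python
import math

n_p = 100            # arbitrary example count
n_g = 2 * n_p
x_before, x_after = 1 / 3, 2 / 3

# Green particles: uniform on (0, x) widens, so their entropy goes up (bits forgotten).
forgotten = n_g * (math.log2(x_after) - math.log2(x_before))
# Pink particles: uniform on (x, 1) narrows, so their entropy goes down (bits learned).
learned = n_p * (math.log2(1 - x_before) - math.log2(1 - x_after))

print(forgotten, learned, forgotten - learned)  # 200.0 100.0 100.0 = n_p bits lost
```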
The task of building a good engine is the task of minimizing the amount of information we lose.
Conclusions
We can, rather naturally and intuitively, reframe the behaviour of gases in a piston in terms of information first and pressure later. This will be a major theme of this sequence. Quantities like pressure and temperature naturally arise as a consequence of the ultimate rule of statistical mechanics:
You can only forget, never remember.