The canonical example of quantum mechanics in action is the harmonic oscillator, which is something like a mass on a spring. In classical mechanics, it wobbles back and forth periodically when it is given energy. If it's at a position $x$, wobbling about $x_0$ and moving with velocity $v$, we can say its energy contains a potential term $V$ proportional to $(x-x_0)^2$ and a kinetic term $T$ proportional to $v^2$, with an overall form:
$$E = \frac{1}{2}k(x-x_0)^2 + \frac{1}{2}mv^2$$
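For concreteness, here's a minimal numerical sketch of that classical energy; the spring constant, mass, and coordinates are arbitrary illustrative values, not anything from the physics above:

```python
# Illustrative only: the classical energy of a mass on a spring,
# E = (1/2) k (x - x0)^2 + (1/2) m v^2, for made-up parameter values.
def classical_energy(x, v, k=1.0, m=1.0, x0=0.0):
    potential = 0.5 * k * (x - x0) ** 2   # V, proportional to (x - x0)^2
    kinetic = 0.5 * m * v ** 2            # T, proportional to v^2
    return potential + kinetic

print(classical_energy(x=0.3, v=0.1))  # 0.05
```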
We could try and find a distribution over x and v, but continuous distributions tend not to "play well" with entropy. They're dependent on a choice of characteristic unit. Instead we'll go to the quantum world.
One of the major results of quantum mechanics is that systems like this can only exist in certain energy levels. In the harmonic oscillator these levels are equally-spaced, with a spacing proportional to the frequency associated with the classical oscillator. Since the levels are equally-spaced, we can think about the energy coming in discrete units called "phonons".
Our beliefs about the number of phonons $N$ in our system can be expressed as a probability distribution $P(N=n)$ over $n \in \mathbb{N}$.
This is progress: we've reduced an uncountably infinite set of states to a countable one, which is a factor of infinity! But if we do our normal trick and try to find the maximum entropy distribution, we'll still hit a problem: we get P(N=n)=0 for all n∈N.
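To see that problem concretely, here's a small numerical sketch (the cutoffs $M$ are arbitrary): with no constraint, the maximum-entropy distribution over $n \in \{0,\dots,M\}$ is uniform, and as $M$ grows each probability heads to zero while the entropy diverges.

```python
import numpy as np

# With no constraint, the maximum-entropy distribution over n in {0, ..., M}
# is uniform, so each probability is 1/(M+1). As M grows this goes to 0 and
# the entropy diverges, which is the problem described above.
for M in [10, 1000, 10**6]:
    p = 1.0 / (M + 1)           # uniform probability of each state
    entropy = np.log(M + 1)     # H = -sum p ln p = ln(M+1) for the uniform case
    print(f"M={M:>8}  P(N=n)={p:.2e}  H={entropy:.2f}")
```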
The Trick: Distribution Families
Thinking back to our previous post, an answer presents itself: phonons are a form of energy, which is conserved. Since we're uncertain over N, we'll place a restriction on E(N) of our distribution. We can solve the specific case here, but it's actually more useful to solve the general case.
Maths, Lots of Maths, Skippable:
Consider a set of states of a system $s \in S$. To each of these we assign a real numeric value, written $s \mapsto x_s \in \mathbb{R}$. We also assign a probability $s \mapsto p_s \in \mathbb{R}^+$, constrained by the usual condition $\sum_{s\in S} p_s = 1$.
Next, define $E(X) = \sum_{s\in S} p_s x_s$ and $H(S) = -\sum_{s\in S} p_s \ln p_s$.
Imagine we perform a transformation to our distribution, such that the distribution is still valid and E(X) remains the same. We will consider an arbitrary transformation over elements {1,2,3}:
$$p_1 \to p_1 + dp_1,\quad p_2 \to p_2 + dp_2,\quad p_3 \to p_3 + dp_3$$
$$dp_1 + dp_2 + dp_3 = 0$$
$$x_1\,dp_1 + x_2\,dp_2 + x_3\,dp_3 = 0$$
Now let us assume that our original distribution was a maximum of $H(S)$, which can also be expressed as $dH(S)=0$.
$$dH(S) = -d\left(p_1\ln p_1 + p_2\ln p_2 + p_3\ln p_3\right) = 0$$
$$d\left(p_1\ln p_1 + p_2\ln p_2 + p_3\ln p_3\right) = 0$$
$$\ln p_1\,dp_1 + dp_1 + \ln p_2\,dp_2 + dp_2 + \ln p_3\,dp_3 + dp_3 = 0$$
$$\ln p_1\,dp_1 + \ln p_2\,dp_2 + \ln p_3\,dp_3 = 0$$
For this to equal zero for every allowed variation, we need the following relation:
$$\ln p_s = -Bx_s + \text{const} \;\implies\; p_s = A\exp(-Bx_s)$$
We can plug this back into our equation to verify that we do in fact get zero:
$$\ln A\,(dp_1 + dp_2 + dp_3) - B\left(x_1\,dp_1 + x_2\,dp_2 + x_3\,dp_3\right) = 0 - 0 = 0$$
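If you'd like to see this without the calculus of variations, here's a sketch using scipy's constrained optimiser: maximise the entropy numerically under the two constraints and check that the optimum has the exponential form. The values $x_s$ and the target mean are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Maximise H = -sum p ln p over a small toy state space, subject to
# sum p = 1 and a fixed mean E(X), then check the optimum is exponential.
xs = np.array([0.0, 1.0, 2.0, 3.0])
target_mean = 1.2

def neg_entropy(p):
    return np.sum(p * np.log(p))   # minimising this maximises entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: np.dot(p, xs) - target_mean},
]
p0 = np.full(len(xs), 0.25)
result = minimize(neg_entropy, p0, bounds=[(1e-9, 1.0)] * len(xs),
                  constraints=constraints)
p = result.x

# If p_s = A exp(-B x_s), then ln p_s is a straight line in x_s.
slope, intercept = np.polyfit(xs, np.log(p), 1)
print("optimal p:", np.round(p, 4))
print("ln p vs x is linear, slope -B =", round(slope, 4))
```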
The negative sign in the exponent (with $B$ positive) is chosen so that our distribution converges when the values of $x_s$ extend up to $\infty$, which is common for things like energy. We will then get a distribution with the following form:
$$P(S=s) = A\exp(-Bx_s)$$
Where B parameterizes the shape of the distribution and A normalizes it such that our probabilities sum to 1. We might want to write down A in terms of B:
$$A = \frac{1}{\sum_{s\in S}\exp(-Bx_s)}$$
But we will actually get more use out of the following function $Z = 1/A$:
$$Z = \sum_{s\in S}\exp(-Bx_s)$$
First consider the derivative $\frac{dZ}{dB}$:
$$\frac{dZ}{dB} = \sum_{s\in S} -x_s\exp(-Bx_s)$$
$$\frac{dZ}{dB} = -Z\sum_{s\in S} x_s p_s$$
Which gives us the remarkable result:
$$E(X) = -\frac{1}{Z}\frac{dZ}{dB} = -\frac{d\ln Z}{dB}$$
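A quick numerical sanity check of this identity, using an arbitrary toy set of $x_s$ values and an arbitrary $B$:

```python
import numpy as np

# Check E(X) = -d(ln Z)/dB on a toy system; only the identity is being tested.
xs = np.array([0.5, 1.3, 2.0, 4.2])
B = 0.7

def lnZ(B):
    return np.log(np.sum(np.exp(-B * xs)))

p = np.exp(-B * xs) / np.sum(np.exp(-B * xs))
direct = np.dot(p, xs)                              # E(X) = sum p_s x_s
h = 1e-6
from_lnZ = -(lnZ(B + h) - lnZ(B - h)) / (2 * h)     # central finite difference
print(direct, from_lnZ)  # should agree to ~6 decimal places
```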
We can also expand out the value of H(S):
$$H(S) = -\sum_{s\in S} p_s \ln p_s$$
$$H(S) = -\sum_{s\in S} p_s \ln\left(\exp(-Bx_s)/Z\right)$$
$$H(S) = -\sum_{s\in S} p_s\left(-Bx_s - \ln Z\right)$$
$$H(S) = \sum_{s\in S}\left(B\,p_s x_s + p_s\ln Z\right)$$
$$H(S) = B\sum_{s\in S} p_s x_s + \ln Z\sum_{s\in S} p_s$$
$$H(S) = B\,E(X) + \ln Z$$
$$H(S) = -B\frac{d\ln Z}{dB} + \ln Z$$
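Again, a toy numerical check of the identity $H(S) = B\,E(X) + \ln Z$, with the same kind of arbitrary $x_s$ as before:

```python
import numpy as np

# Verify H(S) = B*E(X) + ln Z on a toy system (arbitrary x_s and B).
xs = np.array([0.5, 1.3, 2.0, 4.2])
B = 0.7

w = np.exp(-B * xs)
Z = w.sum()
p = w / Z
H_direct = -np.sum(p * np.log(p))            # -sum p ln p
H_identity = B * np.dot(p, xs) + np.log(Z)   # B*E(X) + ln Z
print(H_direct, H_identity)                  # identical up to rounding
```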
And get this in terms of Z too! We also get one of the most important results from all of statistical mechanics:
$$\frac{dH(S)}{dE(X)} = B + E(X)\frac{dB}{dE(X)} + \frac{d\ln Z}{dE(X)}$$
Now use the substitution:
$$E(X)\frac{dB}{dE(X)} = -\frac{1}{Z}\frac{dZ}{dB}\frac{dB}{dE(X)} = -\frac{1}{Z}\frac{dZ}{dE(X)} = -\frac{d\ln Z}{dE(X)}$$
To get our final result:
$$\frac{dH(S)}{dE(X)} = B$$
So B is not "just" a parameter for our distributions, it's actually telling us something about the system. As we saw last time, finding the derivative of entropy with respect to some constraint is absolutely critical to finding the behaviour of that system when it can interface with the environment.
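If you want to convince yourself of this result numerically, here's a sketch: nudge $B$, watch how $E(X)$ and $H(S)$ move, and compare their ratio to $B$ (toy values of $x_s$ again):

```python
import numpy as np

# Spot-check dH/dE = B by finite differences on a toy system.
xs = np.array([0.5, 1.3, 2.0, 4.2])

def E_and_H(B):
    w = np.exp(-B * xs)
    p = w / w.sum()
    return np.dot(p, xs), -np.sum(p * np.log(p))

B = 0.7
dB = 1e-5
E1, H1 = E_and_H(B - dB)
E2, H2 = E_and_H(B + dB)
print("dH/dE =", (H2 - H1) / (E2 - E1), "  B =", B)  # these should match
```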
</Maths>
To recap the key findings:
The probability of a system state $s$ with value $x_s$ is proportional to $\exp(-Bx_s)$
This parameter $B$ is also the (very important to specify) value of $\frac{dH(S)}{dE(X)}$
We can define a function $Z(B) = \sum_{s\in S}\exp(-Bx_s)$
$E(X) = -\frac{d\ln Z}{dB}$
$H(S) = -B\frac{d\ln Z}{dB} + \ln Z$
Which we can now apply back to the harmonic oscillator.
Back to the Harmonic Oscillator
So we want to find a family of distributions over $n \in \mathbb{N}$. We can in fact assign a real number to each value of $n$, trivially (the inclusion $\mathbb{N} \ni n \hookrightarrow n \in \mathbb{R}$, if you want to be fancy). Now we know that our distribution over $N$ must take the form:
$$P(N=n) = A\exp(-Bn)$$
But we also know that the most important thing about our system is the value of our partition function Z(B):
$$Z = \sum_{n=0}^{\infty}\exp(-Bn)$$
Which is just the sum of a geometric series with $a=1$, $r=e^{-B}$:
$$Z = \frac{1}{1-e^{-B}}$$
$$\ln Z = -\ln\left(1-e^{-B}\right)$$
Which gives us E(N) and H(N) in terms of B:
$$E(N) = \frac{e^{-B}}{1-e^{-B}}$$
$$H(N) = \frac{B\,e^{-B}}{1-e^{-B}} - \ln\left(1-e^{-B}\right)$$
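These closed forms are easy to check against a direct (truncated) sum over phonon numbers; here's a sketch with an arbitrary value of $B$:

```python
import numpy as np

# Compare the closed forms for Z, E(N) and H(N) against a direct sum
# over n = 0, 1, 2, ... (200 terms is plenty for B = 0.8).
B = 0.8
n = np.arange(0, 200)
w = np.exp(-B * n)
Z_sum = w.sum()
p = w / Z_sum

print(Z_sum, 1 / (1 - np.exp(-B)))                                 # Z
print(np.dot(p, n), np.exp(-B) / (1 - np.exp(-B)))                 # E(N)
print(-np.sum(p * np.log(p)),
      B * np.exp(-B) / (1 - np.exp(-B)) - np.log(1 - np.exp(-B)))  # H(N)
```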
T instead of B
Instead of $B$, we usually use the variable $T = 1/B$, for a few reasons. If we want to increase the amount of $X$ in our system (i.e. increase $E(X)$) we have to decrease the value of $B$; as $B$ gets big, $E(X)$ just approaches the minimum value of $x_s$ and our probability distribution approaches a uniform distribution over the states that achieve it. Working in terms of $T$ puts this the intuitive way round: more $T$ means more $X$. Empirically, $T$ is also often easier to measure for physical systems, and variations in $T$ tend to feel more "linear" than variations in $B$.
Let's plot both $E(N)$ and $H(N)$ of our system as a function of $T$:
$E(N)$ converges on the line $T - \frac{1}{2}$. Rather pleasingly, the energy of a quantum harmonic oscillator is actually proportional to $N + \frac{1}{2}$, not $N$. This little correction is called the "zero point energy" and is another fundamental result of quantum mechanics. If we plot the energy $E$ instead of $E(N)$, it will converge on $T$. $H(N)$ converges on $\ln(T) + 1$.
These are general rules: at high temperatures, $E$ is in general proportional to $T$, and $H$ almost always grows like $\ln(T)$ plus a constant.
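A small numerical sketch of those asymptotes (with the phonon spacing set to 1, so these values are purely illustrative):

```python
import numpy as np

# Large-T behaviour: E(N) approaches T - 1/2 and H(N) approaches ln(T) + 1.
for T in [2.0, 10.0, 100.0]:
    B = 1.0 / T
    E_N = np.exp(-B) / (1 - np.exp(-B))
    H_N = B * E_N - np.log(1 - np.exp(-B))
    print(f"T={T:>6}:  E(N)={E_N:.3f} vs {T - 0.5:.3f},  "
          f"H(N)={H_N:.3f} vs {np.log(T) + 1:.3f}")
```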
So far we've ignored the fact that our values of $N$ actually correspond to energy, and therefore there must be a spacing involved. What we've been calling $T$ so far should actually be called $T/E_p$, where $E_p$ is the energy of a single phonon. This is the spacing of the ladder of energy levels.
If we swap $T/E_p$ into our equations and also substitute in the energy $E = E_p\left(E(N) + \frac{1}{2}\right)$ (we will omit the expectation symbol when talking about energy) we get the following equations:
$$E = E_p\left(\frac{e^{-E_p/T}}{1-e^{-E_p/T}} + \frac{1}{2}\right)$$
$$H(N) = \frac{E_p}{T}\,\frac{e^{-E_p/T}}{1-e^{-E_p/T}} - \ln\left(1-e^{-E_p/T}\right)$$
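Here's a sketch of these two functions with the phonon energy $E_p$ made explicit; the parameter values are arbitrary:

```python
import numpy as np

# Oscillator energy and entropy with an explicit phonon energy E_p,
# i.e. B = E_p / T. Parameter values are arbitrary.
def oscillator_E_and_H(T, Ep=1.0):
    x = np.exp(-Ep / T)
    E = Ep * (x / (1 - x) + 0.5)              # includes the zero-point energy
    H = (Ep / T) * x / (1 - x) - np.log(1 - x)
    return E, H

for T in [0.1, 0.5, 1.0, 5.0]:
    E, H = oscillator_E_and_H(T, Ep=1.0)
    print(f"T={T}:  E={E:.4f}  H={H:.4f}")
```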
Both functions now have a "burn-in" region around $T=0$ where they are flat: $H(N)$ sits at zero and $E$ sits at the zero-point energy. This is important. This region is common to almost all quantum thermodynamic systems, and it corresponds to the regime where $T \ll E_p$. When this occurs the exponential term $e^{-E_p/T}$ can be neglected for all states except the lowest-energy one:
$$\ln Z \approx \ln\left(e^{-BE_{\min}}\right) = -BE_{\min} \;\therefore\; E = -\frac{d\ln Z}{dB} \approx E_{\min}$$
Showing E doesn't respond to changes in T. This is the same as saying that the system has a probability ≈1 of being in the lowest energy state, and therefore of having E=Emin.
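A quick numerical illustration of that freeze-out (arbitrary $E_p$ and temperatures):

```python
import numpy as np

# When T << E_p, the probability of the n = 0 state is essentially 1
# and the mean phonon number sits at its minimum value.
Ep = 1.0
for T in [0.05, 0.2, 1.0]:
    n = np.arange(0, 500)
    p = np.exp(-(Ep / T) * n)
    p /= p.sum()
    print(f"T={T}:  P(N=0)={p[0]:.6f}  E(N)={np.dot(p, n):.6f}")
```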
True Names
T stands for temperature. Yep. The actual, regular temperature appears as the inverse of a constant we've used to parameterize our distributions. B is usually called β in thermodynamics, and is sometimes called the "inverse temperature".
In thermodynamics, the energy of a system has a few definitions. What we've been calling E should properly be called U, which is the internal energy of a system at constant volume.
Entropy in thermodynamics has the symbol S. I've made sure to use a roman H for our entropy because H (italic) in thermodynamics is a sort of adjusted version of energy called "enthalpy".
In normal usage, temperature has different units from energy, partly because, written as an energy, everyday temperatures would be very small numbers, and partly because the two were discovered separately. Temperature is measured in kelvin (K), which is converted to energy's joules (J) with something known as the Boltzmann constant $k_B$. For historical reasons which are absolutely baffling, thermodynamics makes the choice to incorporate this conversion into the units of $S$, so $S = k_B H(\text{system})$. This makes entropy far, far more confusing than it needs to be.
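For a sense of scale, a tiny conversion sketch: room temperature expressed as an energy via $k_B$ really is a very small number of joules.

```python
# Converting a temperature in kelvin to an energy in joules via the
# Boltzmann constant (exact by definition since the 2019 SI redefinition).
k_B = 1.380649e-23          # J per K
T_room = 300.0              # K, roughly room temperature
print(k_B * T_room)         # ~4.1e-21 J
```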
Anyway, there are two reasons why I have done this:
I want to avoid cached thoughts. If you already know what energy and entropy are in a normal thermodynamic context, you risk not understanding the system properly in terms of stat mech.
I want to extend stat mech beyond thermodynamics. I will be introducing a framework for understanding agents in the language of stat mech around the same time this post goes up.
Conclusions
Maximum-entropy distributions with constrained $E(X)$ always take the form $e^{-Bx}$
This $B$ represents the derivative $\frac{dH(\text{system})}{dE(X)}$; if $X$ represents energy, we can write $B$ as $\beta$
B is inverse to T which, if X is energy, is the familiar old temperature of the system
We have learnt how to apply these to one of the simplest systems available. Next time we will try them on a more complex system.