This work doesn't seem to be anything fundamentally new; it seems to be a rephrasing of existing ideas. However, if it helps people understand entropy, then I think it's a valuable rephrasing.
Sean Carroll seems to think otherwise, judging by the abstract:
We derive a generalization of the Second Law of Thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution.
[...]
We also derive refined versions of the Second Law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of the Jarzynski equality.
This seems to imply that this is a genuine research result, not just a didactic exposition. Do you disagree?
Sean Carroll et al. posted a preprint with the above title. Sean also has a discussion of it on his blog.
While I am a physicist by training, statistical mechanics and thermodynamics are not my strong suit, and I hope someone with expertise in the area can give their perspective on the paper. For now, here is my summary; apologies for any errors:
There is a tension between different definitions of entropy. Boltzmann entropy, which counts macroscopically indistinguishable microstates, always increases, apart from extremely rare fluctuations downward. Gibbs/Shannon entropy, which measures our uncertainty about the system's exact microstate, can decrease when an observer examines the system and learns something new about it (there is a small toy sketch of this after the quote below). Jaynes had a paper on that topic, Eliezer discussed this in the Sequences, and spxtr recently wrote a post about it. Now Carroll and collaborators propose the "Bayesian Second Law", which quantifies this decrease in Gibbs/Shannon entropy due to a measurement:
[...] we derive the Bayesian Second Law of Thermodynamics, which relates the original (un-updated) distribution at initial and final times to the updated distribution at initial and final times. That relationship makes use of the cross entropy between two distributions [...]
[...] the Bayesian Second Law (BSL) tells us that this lack of knowledge — the amount we would learn on average by being told the exact state of the system, given that we were using the un-updated distribution — is always larger at the end of the experiment than at the beginning (up to corrections because the system may be emitting heat)
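To make the entropy decrease concrete, here is a small toy sketch. It is my own illustration, not anything from the paper: the eight-state system, the uniform prior, and the measurement likelihood are all made-up numbers, chosen only to show a Bayesian update lowering the Gibbs/Shannon entropy of a distribution over microstates.

```python
# Toy illustration (not from the paper): the Gibbs/Shannon entropy of our
# distribution over microstates can drop after a Bayesian update on a
# measurement. All numbers here are hypothetical.
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats, ignoring zero-probability states."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Prior: complete ignorance over 8 microstates.
prior = np.full(8, 1.0 / 8)

# A noisy measurement: likelihood of the observed outcome in each microstate
# (made-up numbers; the outcome strongly favours states 0-3).
likelihood = np.array([0.9, 0.9, 0.8, 0.8, 0.1, 0.1, 0.05, 0.05])

# Bayes' rule: posterior proportional to prior * likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()

print(f"entropy before measurement: {shannon_entropy(prior):.3f} nats")
print(f"entropy after measurement:  {shannon_entropy(posterior):.3f} nats")
# The posterior entropy is smaller: learning the measurement outcome reduced
# our uncertainty, i.e. the Gibbs/Shannon entropy decreased.
```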
The BSL thus seems to resolve the tension between the two definitions of entropy, and it has applications to non-equilibrium processes, where the observer is replaced by the outcome of some natural process, such as RNA self-assembly.
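The BSL itself involves the dynamics of the experiment and a heat term, which a static toy example can't capture, but the cross-entropy bookkeeping it relies on is easy to check numerically. In my own sketch below (the same hypothetical eight-state example as above), the cross entropy between the updated and un-updated distributions equals the updated entropy plus a KL divergence, so it can never be smaller than the updated entropy itself:

```python
# Toy numerical check (my own sketch, not the paper's derivation) of the
# identity behind the BSL's bookkeeping:
#   H(updated, un-updated) = H(updated) + KL(updated || un-updated)
# so the cross entropy is always at least the updated entropy.
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    """-sum_i p_i log q_i: average surprise if we hold beliefs q while the
    updated distribution is p."""
    mask = p > 0
    return -np.sum(p[mask] * np.log(q[mask]))

def kl_divergence(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Un-updated and updated distributions from the measurement example above.
unupdated = np.full(8, 1.0 / 8)
likelihood = np.array([0.9, 0.9, 0.8, 0.8, 0.1, 0.1, 0.05, 0.05])
updated = unupdated * likelihood
updated /= updated.sum()

H_updated = shannon_entropy(updated)
H_cross = cross_entropy(updated, unupdated)
D_kl = kl_divergence(updated, unupdated)

print(f"H(updated)                   = {H_updated:.3f}")
print(f"H(updated, un-updated)       = {H_cross:.3f}")
print(f"H(updated) + KL(upd||unupd)  = {H_updated + D_kl:.3f}")
# The last two lines agree, and both exceed H(updated): measuring our
# uncertainty against the un-updated distribution always overstates it.
```

This cross entropy is the "lack of knowledge" quantity in the quote above: the average amount we would learn on being told the exact state, given that we were still using the un-updated distribution. The BSL then states that, up to heat corrections, this quantity is larger at the end of the experiment than at the beginning.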