
shminux comments on [LINK] The Bayesian Second Law of Thermodynamics - Less Wrong Discussion

Post author: shminux 12 August 2015 04:52PM 8 points


Comment author: passive_fist 13 August 2015 02:35:19AM 12 points

There is so much confusion surrounding the topic of entropy, which is somewhat sad, since it's fundamentally a very well-defined and useful concept. Entropy is my strong suit, and I'll try to see if I can help.

There are no 'different definitions' of entropy. Boltzmann and Shannon entropy are the same concept. The problem is that information theory by itself doesn't give the complete physical picture of entropy. Shannon entropy tells you the entropy of a given distribution, but it doesn't tell you what the distribution of states of a physical system actually is. This is the root of the 'tension' you're describing. Much of the difficulty in reconciling information theory with statistical mechanics has been that we often don't have a clear idea what the distribution of states of a given system is.
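To make the "same concept" claim concrete, here is a quick numerical sketch (my own toy illustration, not something from the post or the paper): for a system whose W macroscopically indistinguishable microstates are all equally likely, the Shannon entropy of that distribution is just ln W, which is the Boltzmann entropy in units of k_B.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy -sum p*ln(p) in nats, ignoring zero-probability states."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

W = 10_000                       # number of equally likely microstates (toy value)
uniform = np.full(W, 1.0 / W)    # maximum-ignorance distribution over those microstates

print(shannon_entropy(uniform))  # ~9.2103 nats
print(np.log(W))                 # ln W = 9.2103..., i.e. the Boltzmann entropy S/k_B
```

Once you fix the distribution over microstates, the two formulas give the same number; the physics is all in deciding what that distribution is.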

which counts macroscopically indistinguishable microstates always increases, except for extremely rare decreases.

The 2nd law is never violated, not even a little. Unfortunately, the idea that entropy itself can decrease in a closed system is a misconception that has become very widespread. Disorder can sometimes decrease in a closed system, but disorder has nothing to do with entropy!

Gibbs/Shannon entropy, which counts our knowledge of a system, can decrease if an observer examines the system and learns something new about it.

This is exactly the same as Boltzmann entropy. This is the origin of Maxwell's demon, and it doesn't violate the 2nd law.
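A toy illustration of that point (my own example, not from the post or the paper): suppose a particle can be in one of four cells with a non-uniform prior, and a detector reports only which half of the box it occupies. Bayesian updating on the detector outcome lowers the observer's Gibbs/Shannon entropy on average -- that's the demon's move, and it's bookkeeping about the observer's knowledge rather than a violation of the 2nd law.

```python
import numpy as np

def entropy(p):
    """Gibbs/Shannon entropy -sum p*ln(p) in nats, ignoring zero-probability states."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Toy prior over four cells; cells 0,1 are the left half, cells 2,3 the right half.
prior = np.array([0.4, 0.3, 0.2, 0.1])

p_left = prior[:2].sum()  # probability the detector reports "left"
post_left = np.concatenate([prior[:2] / p_left, [0.0, 0.0]])            # update on "left"
post_right = np.concatenate([[0.0, 0.0], prior[2:] / (1.0 - p_left)])   # update on "right"

prior_H = entropy(prior)
expected_post_H = p_left * entropy(post_left) + (1.0 - p_left) * entropy(post_right)

print(prior_H)           # ~1.28 nats before the measurement
print(expected_post_H)   # ~0.67 nats after: the observer's entropy has gone down
```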

the Bayesian Second Law (BSL) tells us that this lack of knowledge — the amount we would learn on average by being told the exact state of the system, given that we were using the un-updated distribution — is always larger at the end of the experiment than at the beginning (up to corrections because the system may be emitting heat)

This is precisely correct and is the proper way to view entropy. Ideas similar to this have been floating around for quite some time, and this work doesn't seem to be anything fundamentally new; it just seems to be a rephrasing of existing ideas. However, if it can help people understand entropy, then I think it's quite a valuable rephrasing.
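For anyone who wants to see the quantity in that quote written down, here is how I read it (my own sketch, not code from the paper): "the amount we would learn on average by being told the exact state, given that we were using the un-updated distribution" is a cross-entropy, and it decomposes as H(p_updated, p_original) = H(p_updated) + D_KL(p_updated || p_original), so it can never be smaller than the updated distribution's own Shannon entropy.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    """Average surprise -sum p*ln(q): expected information gained on learning the
    exact state (drawn from p) while our bookkeeping still uses the distribution q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return -np.sum(p[mask] * np.log(q[mask]))

# Hypothetical toy distributions over four states (illustrative numbers only):
p_updated  = np.array([0.70, 0.20, 0.10, 0.00])   # post-measurement, Bayesian-updated
p_original = np.array([0.25, 0.25, 0.25, 0.25])   # the un-updated distribution

print(cross_entropy(p_updated, p_original))  # ln 4 ~ 1.39 nats
print(entropy(p_updated))                    # ~0.80 nats
# cross-entropy = entropy + KL divergence, so it is never smaller than the plain
# entropy of the updated distribution.
```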

I was thinking about writing a series of blog posts explaining entropy in a rigorous yet simple way, and got to the draft stage before real-world commitments caught up with me. But if anyone is interested, knows about the subject, and is willing to offer their time to proofread, I'm willing to have a go at it again.

Comment author: shminux 14 August 2015 05:56:58AM 1 point

this work doesn't seem to be anything fundamentally new; it just seems to be a rephrasing of existing ideas. However, if it can help people understand entropy, then I think it's quite a valuable rephrasing.

Sean Carroll seems to think otherwise, judging by the abstract:

We derive a generalization of the Second Law of Thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution.

[...]

We also derive refined versions of the Second Law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of the Jarzynski equality.

This seems to imply that it's a genuine research result, not just a didactic exposition. Do you disagree?