This month’s column builds on the basic building blocks from last month, namely that, despite the seemingly simple presentation most textbooks afford the idea of entropy, there is an enormous amount of subtlety and nuance in an idea that is well over a hundred years old.  As discussed in that earlier post, Robert Swendsen argues in his 2011 article How physicists disagree on the meaning of entropy (American Journal of Physics 79, 342) that the primary place where things seem to break down is that different people presuppose an implicit set of assumptions not necessarily shared by anyone else.  To quote Swendsen:

When people discuss the foundations of statistical mechanics, the justification of thermodynamics, or the meaning of entropy, they tend to assume that the basic principles they hold are shared by others.  These principles often go unspoken, because they are regarded as obvious. It has occurred to me that it might be good to restart the discussion of these issues by stating basic assumptions clearly and explicitly, no matter how obvious they might seem.

The one area that triggered this realization was his recent work on (and the subsequent debate over) the Gibbs paradox.

The Gibbs paradox, named after Josiah Willard Gibbs, is the derivation from classical statistical mechanics that leads to an entropy expression for the ideal gas that is not extensive.  The expectation that entropy is extensive amounts to saying that the entropy of a system should double when the system itself doubles in size (keeping all other things equal).  Since the ideal gas is the standard textbook example of a nontrivial collection of matter perfectly suited for understanding thermodynamics, finding a result that flies in the face of this expectation casts doubt on the underpinnings of statistical physics.  The usual way this doubt is remedied is to patch up the classical analysis by appealing to quantum mechanics and the indistinguishability of particles.  The concept of indistinguishability among particles, of course, lies at the heart of the Fermi-Dirac and Bose-Einstein statistics for fermions and bosons, respectively.  The idea is basically that there is no way of labeling, of painting, of hanging a number on individual particles and, therefore, that this basic ignorance must be built into the counting we use in statistical mechanics.

Specifically, the classical analysis of an ideal gas made of distinguishable particles (using what Swendsen calls the traditional definition of entropy) leads to the following expression for the entropy (‘CD’ = classical, distinguishable)

\[  S_{CD} = k N \left[ \ln V + \frac{3}{2} \ln \frac{E}{N} + X \right] \; , \]

where $$X$$ is some constant.  The objection is that this expression is not extensive due to the $$\ln V$$ term in brackets.  For example, scaling the system by some overall factor $$\alpha$$ ($$N \rightarrow \alpha N$$, $$E \rightarrow \alpha E$$, and $$V \rightarrow \alpha V$$) gives an entropy of

\[  S_{CD,\alpha} = k \alpha N \left[ \ln ( \alpha V)  + \frac{3}{2} \ln \frac{E}{N} + X \right] \; , \]

which, using $$\ln (\alpha V) = \ln \alpha + \ln V$$, simplifies to

\[ S_{CD,\alpha} = \alpha S_{CD} + k N  \alpha \ln \alpha \; . \]
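This scaling behavior is easy to check numerically.  The following Python sketch (with $$k$$ set to 1 and an arbitrary value for the constant $$X$$, both purely illustrative choices) evaluates $$S_{CD}$$ for a scaled system and compares it against the $$\alpha S_{CD} + k N \alpha \ln \alpha$$ decomposition:

```python
import math

k = 1.0   # Boltzmann constant, set to 1 for illustration
X = 0.5   # arbitrary constant appearing in the entropy expression

def S_CD(N, E, V):
    """Classical, distinguishable-particle entropy of the ideal gas."""
    return k * N * (math.log(V) + 1.5 * math.log(E / N) + X)

N, E, V = 100.0, 50.0, 2.0   # arbitrary illustrative system
alpha = 3.0

scaled = S_CD(alpha * N, alpha * E, alpha * V)
decomposed = alpha * S_CD(N, E, V) + k * N * alpha * math.log(alpha)

# The two agree: the scaled entropy exceeds alpha * S_CD by the
# extra k * N * alpha * ln(alpha) term coming from ln(alpha V)
print(scaled, decomposed)
```

Only the $$\ln V$$ term spoils the scaling; the $$\ln (E/N)$$ term and the constant are untouched because $$E/N$$ is invariant when $$E$$ and $$N$$ scale together.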

On the surface, the lack of extensivity might not seem alarming, but consider the following composite system composed of two tanks of an identical gas placed side-by-side.  Each collection has the same density and average energy per particle, and each has the same volume.  Further suppose that there is a sliding panel at the interface between the tanks.  Removing the partition doubles the tank size (i.e., $$\alpha = 2$$), and the entropy change is

\[ S_{CD,new} - S_{CD,old} = \left( 2 S_{CD} + 2 k N \ln 2 \right) - 2 S_{CD} = 2 k N \ln 2 \; . \]
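The same toy calculation makes the partition-removal bookkeeping concrete.  In this Python sketch (again with $$k = 1$$ and arbitrary illustrative values for $$X$$, $$N$$, $$E$$, and $$V$$), the constant $$X$$ cancels in the difference and the spurious $$2 k N \ln 2$$ survives:

```python
import math

k = 1.0   # Boltzmann constant, illustrative units
X = 0.5   # arbitrary constant; it cancels in the entropy difference

def S_CD(N, E, V):
    """Classical, distinguishable-particle entropy of the ideal gas."""
    return k * N * (math.log(V) + 1.5 * math.log(E / N) + X)

N, E, V = 100.0, 50.0, 2.0          # one tank (arbitrary illustrative values)
S_old = 2 * S_CD(N, E, V)           # two separate identical tanks
S_new = S_CD(2 * N, 2 * E, 2 * V)   # partition removed: one doubled tank

delta = S_new - S_old
# Both quantities equal the spurious mixing entropy 2 k N ln 2
print(delta, 2 * k * N * math.log(2))
```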

At this point, Gibbs notes that something is quite wrong.  The removal of the partition is a reversible process (since the gas is thermodynamically the same on both sides, the presence or absence of the partition shouldn’t make a difference), meaning that the entropy should not increase at all.

The remedy found in most textbooks (e.g., Fundamentals of Statistical and Thermal Physics by Reif, from which the following quoted expressions come) starts by arguing that when we remove the partition and allow the gas molecules in one tank to mix with those in the other, we are implicitly assuming them to be “individually distinguishable, as though interchanging the positions of two like molecules would lead to a physically distinct state of the gas.”  The argument concludes by directing us to correct for the over-counting that “taking classical mechanics too seriously” has foisted upon us.  The correction involves dividing a term earlier in the derivation (the partition function) by $$N!$$, which corrects the entropy (now adapted to indistinguishable particles, hence the change from ‘D’ to ‘I’) to read

\[ S_{CI} = k N \left[ \ln \frac{V}{N} + \frac{3}{2} \ln \frac{E}{N} + X' \right] \; , \]

which is obviously extensive, with the equally obvious implication that the problem is solved and nothing more needs to be done.
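As a quick check that the $$N!$$ correction really does the job, the following Python sketch (with $$k = 1$$ and an arbitrary $$X'$$, both illustrative choices) verifies that scaling $$N$$, $$E$$, and $$V$$ together now scales the entropy by exactly the same factor:

```python
import math

k = 1.0    # Boltzmann constant, illustrative units
Xp = 0.5   # arbitrary constant X' in the corrected expression

def S_CI(N, E, V):
    """Ideal-gas entropy with the N! over-counting correction applied."""
    return k * N * (math.log(V / N) + 1.5 * math.log(E / N) + Xp)

N, E, V = 100.0, 50.0, 2.0   # arbitrary illustrative system
alpha = 2.0

scaled = S_CI(alpha * N, alpha * E, alpha * V)
extensive = alpha * S_CI(N, E, V)

# Because V/N and E/N are invariant under the scaling, the two agree exactly
print(scaled, extensive)
```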

Here the story seems to have stalled for a long period of time (decades), most likely due to the belief that quantum mechanics was the correct viewpoint (or at least the more correct one) for the world at large.  Only fairly recently has interest revived in putting classical statistical mechanics on firmer footing.  The result of this new effort has been the rediscovery of an old definition of entropy that Swendsen, who has been championing this viewpoint for nearly two decades, argues leads to more sensible results than a reflexive appeal to quantum mechanics.  And his most compelling argument to support this revived viewpoint involves a substance that is likely to surprise: simple homogenized milk.  However, that story, in all its glory, will have to wait until next month’s column.