The following article was originally published by Professor William B. Jensen in the Journal of Chemical Education in May 2004, vol. 81, pp. 639-640.

Entropy and Constraint of Motion

I would like to make several observations supplementing and supporting the article by Frank Lambert on entropy as energy dissipation, since this is an approach that I have also used for many years when teaching a qualitative version of the entropy concept to students of general and introductory inorganic chemistry (1).

I begin with the everyday interconversion of potential and kinetic energy as exemplified by either a bouncing ball or an oscillating pendulum:

Potential Energy <—> Kinetic Energy         (1)

According to classical mechanics, this interconversion is totally reversible and the ball should go on bouncing and the pendulum should oscillate forever. The reason this does not happen, of course, is that once the potential energy is converted into kinetic energy, the kinetic energy is increasingly dispersed, and it is the lower probability of this dispersed kinetic energy reconcentrating into the coordinated motion of the ball or pendulum as a whole that ultimately introduces a directionality into the process:

Potential Energy —> Kinetic Energy —> Dispersed Kinetic Energy        (2)

The point here is that it is not just energy in general that is dissipated, but rather kinetic energy, or energy of motion (whether translational, rotational, or vibrational).

This dissipation, dispersion, dilution, or spreading of the kinetic energy need not, however, necessarily correspond to a spreading in space or to a division among a greater number of moving particles, though these are possible mechanisms. A more general concept of dilution is required, based on the fact that all kinetic energy is quantized, even in the case of macroscopic bodies, in which the quantum spacings are small enough to approximate a continuum. It is the number of available quantum levels, or storage modes, to use Leff's terminology (2), used to store a given amount of kinetic energy that determines its degree of dilution or dissipation, and this, in turn, depends on the masses of the moving particles (whether colloids, micelles, molecules, atoms, or even, on occasion, electrons and nucleons) and on the number of constraints on their motion. These constraints may correspond to:

  1. Constraints on the number of independently moving particles; i.e., on whether the particles must move as an aggregate or can move separately.
  2. Constraints on the direction of motion.
  3. Constraints on the volume in which the motion is executed.

The relevance of the above factors in determining the spacing and degeneracy of the quantum levels is most easily demonstrated using the simple "particle in a box" model found in most introductory treatments of quantum mechanics. The fewer the number of constraints on a system's motion, the smaller the energy spacing between quantum levels and the greater the dilution or dissipation of the kinetic energy. In short, the fewer the constraints on how and where the component particles can move, the greater the entropy. The extent to which the mixing or disorder views of entropy, criticized by Lambert in earlier articles (3, 4), are or are not misleading depends on the extent to which they do or do not parallel changes in these constraints (5). This simple correlation between entropy and constraint of motion also allows one to rationalize the qualitative rules for predicting the net sign of the entropy change in simple chemical reactions given by Sanderson for use in introductory chemistry courses (6).
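
A concrete feel for this correlation can be had from the particle-in-a-box gap itself, E_{n+1} - E_n = [(n+1)^2 - n^2] h^2 / (8 m L^2). The short sketch below evaluates that gap for a few masses and box lengths chosen purely for illustration; loosening the spatial constraint (a larger box) or increasing the mass collapses the spacing toward a continuum, i.e., toward more closely spaced storage modes.

```python
# Sketch of the "particle in a box" point above: the gap between adjacent levels,
# E_{n+1} - E_n = [(n+1)^2 - n^2] * h^2 / (8 m L^2), shrinks as the box gets
# larger (a weaker spatial constraint) or the particle gets heavier, so a given
# amount of kinetic energy is spread over more closely spaced storage modes.
# The particular masses and box lengths are chosen only for illustration.

h = 6.626e-34  # Planck constant, J s

def level_spacing(mass_kg, length_m, n=1):
    """Gap E_{n+1} - E_n for a one-dimensional particle in a box, in joules."""
    return ((n + 1)**2 - n**2) * h**2 / (8 * mass_kg * length_m**2)

cases = [
    ("electron in a 1 nm box", 9.109e-31, 1e-9),
    ("He atom in a 1 nm box",  6.646e-27, 1e-9),
    ("He atom in a 1 cm box",  6.646e-27, 1e-2),
]

for label, m, L in cases:
    print(f"{label:24s}  E_2 - E_1 = {level_spacing(m, L):.2e} J")
```

Run as written, the gap drops by roughly four orders of magnitude in going from the electron to the helium atom in the same box, and by a further fourteen in going from the nanometre-sized box to the centimetre-sized one.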

The above approach is based on a fusion of the energy dissipation approach to the second law pioneered by Lord Kelvin in 1852 (7) and widely used in late 19th century British and American textbooks (8), with the insights since provided by quantum mechanics and elementary statistical mechanics, as so aptly summarized in the introductory texts by Nash and Bent (9, 10). Indeed, it is interesting to note that the first of these three ingredients formed the basis of the first English-language monograph to deal specifically with chemical thermodynamics, as distinct from the more limited field of thermochemistry. The book in question was published in 1885 by George Downing Liveing (1827-1924) of Cambridge University and bore the title Chemical Equilibrium the Result of the Dissipation of Energy (11).

Literature Cited

  1. Lambert, F. L. Entropy Is Simple, Qualitatively. J. Chem. Educ. 2002, 79, 1241-1246.
  2. Leff, H. S. Thermodynamic Entropy: The Spreading and Sharing of Energy. Am. J. Phys. 1996, 64, 1261-1271.
  3. Lambert, F. L. Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms – Examples of Entropy Increase? Nonsense! J. Chem. Educ. 1999, 76, 1385-1388.
  4. Lambert, F. L. Disorder – A Cracked Crutch for Supporting Entropy Discussions. J. Chem. Educ. 2002, 79, 187-192.
  5. See, for example, the difference in the entropy change produced by the mixing of two gases, with and without a change in concentration, as detailed in Meyer, E. F. Thermodynamics of 'Mixing' Ideal Gases: A Persistent Pitfall. J. Chem. Educ. 1987, 64, 676.
  6. Sanderson, R. T. Principles of Chemical Reactions. J. Chem. Educ. 1964, 41, 13-22.
  7. Thomson, W. On a Universal Tendency in Nature to the Dissipation of Mechanical Energy. Phil. Mag. 1852, 4, 256-260.
  8. Stewart, B. The Conservation of Energy; Appleton: New York, 1874; Chapter 5.
  9. Nash, L. K. Elements of Statistical Thermodynamics; Addison-Wesley: New York, 1965.
  10. Bent, H. A. The Second Law; Oxford University Press: New York, 1965.
  11. Liveing, G. D. Chemical Equilibrium the Result of the Dissipation of Energy; Deighton, Bell: Cambridge, 1885.

William B. Jensen
Department of Chemistry, University of Cincinnati
Cincinnati, OH 45221-0172
jensenwb@email.uc.edu
Originally published: J. Chem. Educ. 2004, 81, 639-640.
Last revised and updated: July 2004


My brief reply to Professor Jensen was published with his Letter. It is reprinted here after the following supplementary paragraphs.

Jensen is helpful in pointing out that, most commonly in chemistry, energy dispersal involves molecular kinetic energy (my simplification: “motional energy” for rotational/translational motion in liquids and gases, and for vibrational/librational motion in solids). “Motional energy” is a rather important modification of “kinetic energy” in entropy discussions because it includes both the kinetic and the potential energy present in the vibrational energy of crystals. In describing the increase in entropy that occurs in heating a substance from 0 K to a given temperature, such as 298 K, I have always said that this involves an increase in motional energy and phase change energy in the substance as a result of dispersal of motional energy to it from the surroundings. (Phase change energy is potential energy that is stored in the substance when bonds are broken or altered in warming solids or liquids.)
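
A minimal numerical sketch of this bookkeeping, using deliberately hypothetical heat-capacity and fusion values rather than data for any real substance, is given below: the entropy gained on warming is the sum of Cp/T contributions over each temperature interval plus a ΔH/T term at each phase change.

```python
# Schematic of the entropy bookkeeping described above: warming from near 0 K to
# 298 K accumulates entropy as the integral of Cp/T dT plus a (delta_H / T) term
# at each phase change.  All numerical values are placeholders for a hypothetical
# substance, chosen only to show the structure of the calculation; a real
# calculation would use a temperature-dependent Cp that vanishes as T -> 0 K.

def heating_entropy(cp, t_low, t_high, steps=100000):
    """Approximate the integral of Cp/T dT for a constant Cp (J mol^-1 K^-1)."""
    dT = (t_high - t_low) / steps
    return sum(cp / (t_low + (i + 0.5) * dT) * dT for i in range(steps))

cp_solid, cp_liquid = 25.0, 40.0   # heat capacities, J mol^-1 K^-1 (hypothetical)
t_melt, dh_fus = 200.0, 5000.0     # melting point (K) and enthalpy of fusion (J mol^-1)

S = (heating_entropy(cp_solid, 10.0, t_melt)        # warming the solid
     + dh_fus / t_melt                              # melting at constant T
     + heating_entropy(cp_liquid, t_melt, 298.15))  # warming the liquid

print(f"Entropy gained on warming ~10 K -> 298 K: {S:.1f} J mol^-1 K^-1")
```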

Thus, in focusing on chemistry, Professor Jensen is looking at only a part of my generality: “in all spontaneous happenings, some type of energy flows from being localized or concentrated to becoming dispersed or spread out”! Localized EM radiation, electrical charge, and the like certainly and spontaneously disperse their energy if not hindered, but not via the kinetic energy of molecular movement as in chemistry. The second law, of course, goes far beyond chemistry!

Secondly, I readily concede that my early writing about macro chemical thermodynamics used the phrase “dispersal of energy to a larger space” in discussing the entropy change in gas expansion, although I maintain that this is not improper or misleading when considering motional energy on the macro scale. However, contrary to a possible implication by Jensen, I have consistently written, as does he, that the meaning of “energy spreading out” in molecular thermodynamics involves the number of accessible energy levels after a change, and that this determines the number of different ways the energy of the whole system can be distributed, i.e., the number of microstates in the Boltzmann entropy equation.
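
The two descriptions agree numerically, as a small sketch for the textbook case of one mole of an ideal gas expanding isothermally into twice its original volume shows: the macroscopic nR ln(V2/V1) and the Boltzmann count kB ln(W2/W1), with each molecule gaining a factor of two in accessible locations so that W2/W1 = 2^N, give the same entropy change. (The doubled-volume case is chosen here only for illustration.)

```python
# Sketch connecting the macroscopic and Boltzmann descriptions of one change:
# isothermal expansion of 1 mol of ideal gas into double its volume.
# Macroscopically dS = n R ln(V2/V1); microscopically each molecule has twice
# as many accessible locations, so W2/W1 = 2^N and dS = k_B ln(2^N) = N k_B ln 2.

import math

R   = 8.314          # gas constant, J mol^-1 K^-1
N_A = 6.022e23       # Avogadro's number, mol^-1
k_B = R / N_A        # Boltzmann constant, J K^-1

n = 1.0                                    # moles of ideal gas
dS_macro = n * R * math.log(2)             # n R ln(V2/V1), with V2/V1 = 2
dS_micro = k_B * (n * N_A) * math.log(2)   # k_B ln(W2/W1) = N k_B ln 2

print(f"macroscopic dS = {dS_macro:.3f} J/K")
print(f"microstate  dS = {dS_micro:.3f} J/K")
```

Both routes give about 5.76 J/K for the mole of gas.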

In the letter that follows, my emphasis on phase change (using melting as the example) is not trivial. Phase change is a special kind of process: the kinetic energy dispersed from the surroundings to the system goes into intermolecular bond-breaking, a reversible increase in the potential energy of the system. There is no increase in motional energy in the system during melting – a complete contrast to heating (which is the usual effect caused by dispersal of KE from the surroundings to a cooler system). This lack of increase in motional energy is shown by the temperature remaining unchanged at an equilibrium melting point (or boiling point). What occurs so far as motion is concerned is that now – after bonds have been broken – rotation is possible and translation can take place over greater distances and among more locations than solely vibrating “in place” (or as a giant single molecule) as in a crystal. The liquid has no greater motional energy than the solid crystal at the same temperature. Rather, the same motional energy that was in the crystal is partitioned/allocated differently, to rotation and to translation over more locations, because bonds have been broken and these additional motions and additional locations have become accessible.
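
As a one-line worked example of this point, using the commonly tabulated values for ice (an enthalpy of fusion of about 6.01 kJ/mol at 273.15 K), the entropy of fusion follows directly from q(rev)/T even though the temperature, and hence the motional energy, does not change during the melting:

```python
# Worked example of the point above: at an equilibrium melting point the
# temperature (and hence the motional energy) does not rise, yet the entropy
# increases because the dispersed energy goes into breaking intermolecular
# bonds and making new motions and locations accessible.
# Approximate literature values for water are used.

dH_fus = 6010.0   # J mol^-1, enthalpy of fusion of ice (approximate)
T_m    = 273.15   # K, normal melting point

dS_fus = dH_fus / T_m   # q(rev)/T for the reversible phase change
print(f"Entropy of fusion of ice: {dS_fus:.1f} J mol^-1 K^-1")  # ~22 J/(mol K)
```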

Professor Jensen's broad knowledge of the history of thermodynamics may be unequalled, and thus his last paragraph poses two intriguing questions. Why didn't 20th century chemistry texts ever introduce entropy change as simply a combination of Clausius' and Kelvin's ideas? That Clausius' q(rev)/T represents energy that is dispersed or spread out in a process, as a function of temperature, seems obvious once it is plainly stated. Then, after the 1950s, when quantum mechanics had initially matured and statistical mechanics as applied to Boltzmann's entropy equation was widely understood, why wasn't the increase in accessible microstates seen by all text authors as having a “no-brainer relationship” to the number of ways energy could be dispersed – the ‘Kelvin viewpoint'?


My (Lambert's) reply to Jensen, as originally published in JCE

William Jensen's presentation of entropy increase as solely due to kinetic energy dispersion is stimulating.

Searching for examples of kinetic energy dispersal causing an increase in potential energy, I considered phase change. In fusion, for example, the enthalpy of fusion (KE) breaks or alters bonds in the solid to form the liquid, and thus the KE has become dispersed as potential energy (PE). However, this does not necessarily controvert Jensen's general point, because this is a reversible surroundings-system equilibrium process.

Jensen's development of specific constraints that lead to more or less dispersion may indeed be more useful than my generality of "energy spontaneously disperses, if it is not hindered". I trust that in his use of them (and their application to Sanderson (1) is impressive), he does not allow them generally to support "disorder" as an explanation for entropy increase!

Literature Cited

1. Sanderson, R. T. J. Chem. Educ. 1964, 41, 13-22.

 

entropy.lambert@gmail.com
Originally published: J. Chem. Educ. 2004, 81, 639-640.
Last revised and updated: July 2004

 
