Progress in Entropy Principle, as Disclosed by Nine Schools of Thermodynamics, and Its Ecological Implication


Lin-Shu Wang

Department of Mech. Engineering, Stony Brook Univ., 100 Nicolls Rd., Stony Brook NY 11794, US

Corresponding Author Email: lin-shu.wang@stonybrook.edu

Page: 359-372 | DOI: https://doi.org/10.18280/ijdne.160403

Received: 1 April 2021 | Revised: 11 June 2021 | Accepted: 21 June 2021 | Available online: 31 August 2021

© 2021 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

The entropy principle has been commonly considered to be a selection principle. A history/philosophy-of-science analysis of the development of thermodynamic thought was carried out based on a historical account of the contributions to thermodynamics of nine Schools of thermodynamics plus that of Mayer/Joule (the Mayer-Joule principle), the publication of A Treatise of Heat and Energy, the development of the maximum entropy production principle (MEPP), and the process ecology formulated by Ulanowicz. The analysis discloses the dual nature of the entropy principle, as selection principle and causal principle, and a corresponding dual nature in thermodynamics itself: equilibrium thermodynamics (Gibbsian thermodynamics) and “engineering” thermodynamics in a general sense. Entropy-growth-potential (EGP) as the causal agent and the theory of engineering thermodynamics entail the concept of causal necessity, as suggested by Poincare. The recent development of the entropy principle into MEPP is then critically analyzed. Special attention is paid to MEPP’s explanatory power for biological orders vs. that of process ecology: whereas MEPP asserts a universal approach to physics and biology based on physical necessity and efficient causation, the case for “EGP as the causal agent and process ecology” allows biology to be different from physics by allowing the additional presupposition of causal necessity and efficacious causation.

Keywords: 

the mechanical theory of heat, Poincare range and causal necessity, engineering thermodynamics, the entropy principle as causal principle, maximum entropy production principle (MEPP)

1. Introduction

This study of thermodynamics, focusing on the evolution of the concepts of heat and entropy, is, with the exception of the end of the article, mainly a study of these concepts based on the classical thermodynamic view of entropy.

Thermodynamics, a physical discipline that originated from Sadi Carnot’s 1824 study of the economy of heat engines, is a uniquely engineering/technology-oriented physical discipline with a hint of anthropomorphic elements. Bridgman in his The Nature of Thermodynamics noted, “…the laws of thermodynamics have a different feel from most of the other laws of the physicist. There is something more palpably verbal about them—they smell more of their human origin” [1].

Carnot’s engineering thermodynamics, however, was transformed into the mechanical theory of heat (MTH, see Sections 2-3), which became a physics-oriented physical discipline within the framework of mechanical philosophy.

MTH refers to the Thomson-Clausius synthesis, circa 1850-1854, of Carnot’s theory of heat and the Mayer-Joule principle, the synthesis that was identified with the Glasgow school of thermodynamics [2, 3] and the early stage of the Berlin school of thermodynamics [2, 4]. We shall call this the classical MTH. The term MTH also refers to the MIT school of thermodynamics of the 20th century under the leadership of Keenan. The modern MTH of the MIT school, i.e., modern engineering thermodynamics, is basically the classical MTH with the additional incorporation of the later entropy principle, formulated in 1865-1887. The incorporation by the modern MTH took the form of exergy analysis in the 20th century. This article carries out a critical interpretation of the history of thermodynamics, in particular that of nine schools of thermodynamics; but we’ll begin with a story of our thermodynamics teaching experience.

In that experience, we have repeatedly given, to mechanical engineering graduate students, the same survey quiz questions (seven questions in total, with only three reproduced below):

  1. GS-4b: Heat cannot be converted entirely (100%) into mechanical work.

  2. Energy cannot be destroyed.

  3. GS-3.2: Entropy grows spontaneously and universally.

The first question-statement has been referred to as GS-4b [5] (GS is short for General Statement, and the specific designations 4b and 3.2 [the third question-statement] refer to the designation system adopted in Table II of Ref. [5]). What is remarkable about these three (out of the seven) is that the surveyed students, to a person, answered in the affirmative to all three questions. This shows that students of thermodynamics know the defining characteristic of energy to be its constancy (it can be neither destroyed nor created) as well as that of entropy to be its inevitable growth. Their equally unshakable commitment to the “truth” of GS-4b, however, is a monstrous misconception [5, 6].

In association with exposing this misconception, as demonstrated by Wang and Shi [5], this article carries out a review/interpretation of the contributions made to thermodynamics by nine Schools of thermodynamics plus that of Mayer/Joule (the Mayer-Joule principle), the publication of A Treatise of Heat and Energy [6], and that of the American ecologist/thermodynamicist/philosopher Ulanowicz. The nine Schools are a subset of the “twelve founding schools of thermodynamics” presented by Thims [2], a figure from which is reproduced as Figure 1. We selected the subset of nine, with the addition of the Mayer-Joule principle and the contributions of A Treatise, so that a better theme-focus can be maintained in the account of the early history of thermodynamics shown in Figures 2 to 4—and we added the contribution of the maximum entropy production principle and that of the process ecology of Ulanowicz so that our account gives a more meaningful picture of the current state of affairs, as shown in Figures 5 to 7.

Figure 1. Twelve founding schools of thermodynamics (reproduced from Ref. [2]: http://www.eoht.info/m/page/Schools+of+thermodynamics)

The nine schools, as shown in the revised version of the original figure, Figure 2, are, chronologically: Ecole Polytechnique, Glasgow School, Berlin School, Edinburgh School, Vienna School, Yale (Gibbs) School, Berkeley (Lewis) School, Brussels School, and MIT School. The presentation of the major evolution in thermodynamic thought is organized in five stages/sections, the first four of which are in roughly chronological order:

Sect.2. The classical mechanical theory of heat

Sect.3. Theory of equilibrium thermodynamics

Sect.4. The modern mechanical theory of heat: The entropy principle as selection principle

Sect.5. The dual nature of the entropy principle: publication of A Treatise of Heat and Energy

Sect.6. Engineering and biology: causation in maximum entropy production principle vs. causation in theory of “engineering” thermodynamics and process ecology.

Sections 3 to 4 review the evolving understanding of the entropy principle and thermodynamics based on mechanical philosophy, or philosophical mechanism, i.e., the view that the only kind of causality is efficient causality and the only kind of necessity is physical necessity. Sects. 5 and 6 advance the premise of the paper that the dual nature of the entropy principle, a new version of the entropy principle as developed in A Treatise [6], entails a new metaphysics, the metaphysics of physical necessity and causal necessity—and, in Sect. 6, that engineering and the biological sciences become viable enterprises only with the adoption of the new metaphysics.

2. The Classical Mechanical Theory of Heat (MTH)

Scientists/engineers connected to the Ecole Polytechnique School include Fourier, Sadi Carnot, Clapeyron, Regnault, and Poincare. Carnot, the founder of thermodynamics, formulated in 1824 a problem-based theory with the basic premise that mechanical work is derived from the transfer of heat [7]: the defining problem is “how much mechanical work can be produced” from a given amount of transferred heat. Ulanowicz, the American theoretical ecologist, commented on the philosophical significance of Carnot’s contribution: contrary to the usual account of the history of science, which attributes the demise of the Newtonian worldview to the advent of the relativity and quantum theories, Carnot’s earlier contribution already gave rise to the statistical, changing world of Boltzmann and Gibbs, challenging the deterministic, unchanging block-universe of Newton [8].

In the period between 1842 and 1847, Mayer and Joule, using entirely different approaches, arrived almost simultaneously at the conclusion that heat and mechanical work were numerically equivalent: a given amount of work could be transformed into a quantitatively predictable amount of heat. This is known as the Mayer-Joule principle, or the mechanical equivalent of heat (MEH, i.e., the equivalence principle). The principle suggested the premise, alternative to that of Carnot, that mechanical work is derived from the consumption of heat [7]. Notably, the equivalence principle further lent support to the mechanical conception of heat as a dynamic form of energy. For a problem-based theory, then, was mechanical work derived from the transfer of heat or from the consumption of heat?

The stage was set for the involvement of the Glasgow School (Figure 2), the scientists/engineers connected to which include Black and Watt in the 18th century and the Thomson brothers (James and William) and Rankine in the 19th century. Clausius of the Berlin School was another prominent contributor to this phase of development. In the hands of William Thomson (later, Lord Kelvin) and Clausius, the Glasgow School synthesis of the two competing premises was achieved, transforming MEH into the mechanical theory of heat (MTH). This version of MTH, the classical MTH, quickly supplanted the caloric theory of heat in the middle of the 19th century—which would lead to the great triumph of mechanical philosophy, or the presupposition of mechanical explanation of everything in physics. The cornerstone of the classical MTH was the equivalence principle, with Carnot’s theory reduced to a “selection” principle—the final form of which at this stage was the energy principle as the second law of the classical MTH [9].

At this stage of development, the theory remained very much an engineering discipline and we may call this classical MTH the Glasgow School of engineering thermodynamics (see Figure 2).

The key claim in Kelvin’s formulation of the energy principle, that energy degradation is universal, was general conclusion 2 in Ref. [9], which he presented as an “unargued statement” (i.e., a self-evident truth), as Uffink characterized it [10]. Universality in the energy principle (i.e., general conclusion 2) was rejected by Planck [11]. Details of Planck’s refutation have been worked out in Ref. [6]; at this point, we can say that the refutation of general conclusion 2 is incontrovertible. Sadly, general conclusion 2 still lives on today: the acceptance of “heat cannot be entirely converted to work” as a corollary of “entropy cannot be annihilated” is universal, as evidenced by the universal acceptance of GS-4b (see the demonstration of its falsity in Sect. 5.3).

The highlighted Schools, including Ecole Polytechnique (Carnot, Poincare), Glasgow (Black, Watt, James Thomson, William Thomson, and Rankine), Yale (Gibbs), and MIT (Keenan, Hatsopoulos, Tisza, Callen, Tribus, Bejan), represent Schools of engineering thermodynamics.

Figure 2. The Glasgow School of engineering thermodynamics

NOTE: The red-circled ones—Polytechnique and Glasgow, plus the Mayer-Joule principle and the Berlin School—are those that made contributions to the classical MTH. The Berlin (Helmholtz and Clausius) School’s contribution to thermodynamics is crucially significant for its scientific application rather than its engineering application; for that reason, it is not included among the highlighted Schools of engineering thermodynamics.

In this development of the classical MTH exemplified by the Glasgow School, the theory of heat and work and their interconversion was reformulated as a theory of energy conversion. In accordance with the energy principle, i.e., the universal degradation of mechanical energy [9], energy conversion in the spontaneous direction is the driving force for all processes: conversion of energy in the spontaneous direction is limitless, with all forms of energy eventually turning into heat. In the opposite (reverse) direction, mechanical work can be derived from the consumption of heat [7] (or, in general, consumption of energy)—with the caveat that there is a restriction on the amount of the energy that can be turned into mechanical work.

Since the problem of work production is a problem of such reversed energy conversion, the Carnot problem became “how much mechanical work can be produced” in association with the reverse energy conversion of heat to work. While mechanical work can be converted entirely into heat, it seems obvious that heat cannot be entirely converted into mechanical work. Indeed, according to the Carnot-Kelvin formula (see [6]) $W_{R e v}=Q_{H}\left(1-\frac{T_{L}}{T_{H}}\right)$, out of the high-temperature heat of amount $Q_{H}$ at $T_{H}$ only the part $Q_{H}\left(1-\frac{T_{L}}{T_{H}}\right)$ can be converted to work, with the rest, $Q_{H}\left(\frac{T_{L}}{T_{H}}\right)$, rejected to the sink at $T_{L}$.
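As a minimal numerical sketch of this split (the heat quantity and temperatures below are arbitrary illustrative values, not taken from the cited references):

```python
# Minimal numerical illustration of the Carnot-Kelvin formula,
# W_rev = Q_H * (1 - T_L/T_H); the numbers below are arbitrary.

def carnot_kelvin_split(Q_H, T_H, T_L):
    """Split high-temperature heat Q_H (at T_H) into the part convertible
    to reversible work and the part rejected to the sink at T_L."""
    W_rev = Q_H * (1.0 - T_L / T_H)      # convertible part
    Q_rejected = Q_H * (T_L / T_H)       # part rejected to the sink
    return W_rev, Q_rejected

Q_H, T_H, T_L = 1000.0, 600.0, 300.0     # kJ, K, K (illustrative values)
W_rev, Q_rej = carnot_kelvin_split(Q_H, T_H, T_L)
print(f"W_rev = {W_rev:.1f} kJ, Q_rejected = {Q_rej:.1f} kJ")
# -> W_rev = 500.0 kJ, Q_rejected = 500.0 kJ
```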

This was how the Glasgow School of thermodynamics, with the contribution of the early stage of the Berlin school through Clausius, understood classical MTH, as shown in Figure 2. The notable achievements were the introduction of the absolute temperature, T, the introduction of internal energy, U, the first law energy equation $d E=d(U+K E+P E)=\delta Q-\delta W$, and the Carnot-Kelvin formula (see above paragraph) as the principal example of a reversed energy conversion of heat to work. All these achievements were organized under the framework of the problematic energy principle.

The genius of Clausius and Kelvin was in realizing that synthesizing Carnot’s theory and the Mayer-Joule principle could be achieved by formulating two laws of thermodynamics as the dual core foundations of one theory, MTH. This theory, unlike Carnot’s innovation, did not challenge the Newtonian metaphysic. In fact, it was received as a triumph of the Newtonian paradigm: as Boltzmann would later declare the 19th century to be “the century of the mechanical view of nature, the century of Darwin.” Herein we have a great puzzle: Clausius and Kelvin were able to make a great advance in thermodynamics even though they totally ignored Carnot’s profound challenge to the Newtonian reversible, deterministic world, an idealized fiction that stands in contradiction to the Carnotein irreversible, contingent world. Because of that, their success/triumph was tainted, as the analysis below will argue in detail.

Figure 3. The Berlin School of thermodynamics

The highlighted Schools are those identified with the founding of the entropy principle (Berlin [Clausius, Planck, Nernst, Caratheodory]) and of statistical mechanics (Berlin School, Vienna School, and Gibbs [Yale] School).

The contribution of the Edinburgh School (Maxwell) to statistical mechanics, though left out of the highlighted ones, is acknowledged by including it among the nine Schools of thermodynamics in our discussion, as well as for its role in the development of available energy in engineering thermodynamics, as shown in Figure 4.

3. Theory of Equilibrium Thermodynamics

Between 1865 and 1887, Clausius and Planck of the Berlin School introduced the concept of entropy and the entropy principle. An important feature of the development was that entropy change was defined in terms of reversible processes, an invention of Carnot.

The formulation of the entropy principle was a watershed moment leading to the creation of a scientific stream of thermodynamics, equilibrium thermodynamics (see Figure 3), branched off from thermodynamics’ source stream, engineering thermodynamics of Ecole Polytechnique and the Glasgow School. In quick succession, as depicted in Figure 3, the entropy principle was supported by the new statistical mechanics formulated by the Vienna School and the Gibbsian (Yale) School, and the principle and its statistical mechanics foundation served as the bedrock for equilibrium thermodynamics and physical chemistry (Gibbsian School and the Lewis [Berkeley] School).

Equilibrium thermodynamics (or the “Classical thermodynamics formalism,” as it was referred to in [6]) solved one puzzle: since reversible processes are a theoretical construct of Carnot and no real process can be made into a true reversible-limit idealization, how can entropy change, defined in terms of reversible processes, be empirically determined? Development in equilibrium thermodynamics came up with the suggestion that reversible processes could be replaced with quasi-static processes, defined by Callen as “a dense succession of equilibrium states” [12]. The term "quasi-static process" was proposed in 1909 by C. Caratheodory. The replacement made possible the determination of all thermodynamic state variables (thermodynamic properties), without which the application of thermodynamics is not possible in physics/chemistry and in engineering. But an important clarification is required.

A dense succession of equilibrium states is not necessarily a slow succession of equilibrium states. The analysis in Ref. [6] concluded that the applicability of quasi-static work and quasi-static heat, $\delta W=p d V$ and $\delta Q=T d S$, depends on the quasi-static processes being both dense and slow. The key to being “slow” is not slowness per se but that the changes in the succession of states are driven by nearly balanced forces, which results in processes that are “internally reversible”—rather than equilibrium states merely being close together due to the denseness of stoppers/constraints. That is, the definition of entropy depends on quasi-static changes that meet the condition of internal reversibility [6]; internal reversibility is the necessary and sufficient condition for the definition of entropy, $d S \equiv \delta Q / T$.
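As an illustration of why internal reversibility matters for evaluating entropy change, consider a standard textbook case (a sketch assuming an ideal gas; the example is not taken from A Treatise): in a free expansion the actual path supplies no heat, yet the entropy change is obtained by integrating $\delta Q / T$ along an internally reversible path connecting the same end states.

```python
import math

# Entropy change of an ideal gas doubling its volume at constant temperature.
# Along the actual free (irreversible) expansion, Q = 0, so integrating
# deltaQ/T over the actual path gives zero; the entropy change must instead
# be evaluated along an internally reversible (quasi-static, nearly balanced)
# path connecting the same end states:
#   dS = deltaQ_rev / T  =>  Delta_S = n * R * ln(V2/V1) for an isothermal ideal gas.

R = 8.314  # J/(mol K), universal gas constant

def delta_S_isothermal_ideal_gas(n, V1, V2):
    """Entropy change evaluated along an internally reversible isothermal path."""
    return n * R * math.log(V2 / V1)

print(delta_S_isothermal_ideal_gas(n=1.0, V1=1.0, V2=2.0))  # ~ +5.76 J/K
```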

These wholesale confusions with regard to quasi-static heat and quasi-static work, sown by the Classical thermodynamics formalism, have persisted for one important reason: as Callen’s postulational treatment of equilibrium thermodynamics [12] makes clear, equilibrium thermodynamics, unlike the classical MTH or the Glasgow School thermodynamics, does not in fact include heat and work within its scope of content. Therefore, neither the first law nor the second law in its full version is a required core foundation of the theory of equilibrium thermodynamics. Only the extreme principle derived from the second law is required as the postulational basis for postulational treatments such as Callen’s.

Figure 3 makes it clear that the scientific stream of thermodynamics—though emerging from the classical MTH source as a branch-off stream—for all intents and purposes took flight from the Berlin School as an “independent” theory of equilibrium thermodynamics. It, together with statistical mechanics, became an elegant and powerful branch of physics, but it was no longer the MTH of the Glasgow School that dealt with heat and mechanical work.

4. The Modern Mechanical Theory of Heat: The Entropy Principle as Selection Principle

The story of heat, thus, had to return to MTH. In comparison with equilibrium thermodynamics, the assimilation of the entropy principle into the classical MTH was a painfully slow process. The founders of the entropy principle did not formulate the principle for re-examining the classical MTH’s premise of the production of mechanical work as reverse energy conversion of heat to work. It was left to engineers themselves to find the relevance of the entropy principle in engineering thermodynamics.

The principle’s impact on engineering thermodynamics did not emerge until well into the 20th century. As depicted in Figure 4, the only connection between the two branches, equilibrium thermodynamics and engineering thermodynamics, was found in Gibbs’ scientific work, one part of which was the introduction of Gibbs free energy. The assimilation then took shape in what we now call the theory of exergy, resulting from Keenan of MIT introducing the concept of Gibbs free energy into the thermodynamic analysis of engineering problems. Ideally, thereafter, engineering analysis should be based on both energy analysis and exergy analysis. In the thermodynamic literature, the idea of free energy, or exergy, or available energy, is widely attributed to Gibbs.

But in fact, Gibbs’ important contribution was the formulation of one of the most important examples of available energy for direct energy conversion (while the Carnot heat engine is an example for indirect energy conversion), not the general concept of available energy. The origination of the concept should be attributed to Thomson (Kelvin), as was pointed out in Ref. [6: Sect. 4.7]. This was clearly stated by Maxwell who, in his review of Tait’s Thermodynamics [13], noted, “Thomson, the last but not the least of the three great founders [Clausius, Rankine, and Thomson], does not even consecrate a symbol to denote the entropy, but he was the first to clearly define the intrinsic energy of a body, and to him alone are due the ideas and definitions of the available energy and the dissipation of energy.” The idea of available energy was then transmitted by Maxwell (the Edinburgh school, see Figure 4) to Gibbs, as noted by Daub, “Although Gibbs never once mentioned Thomson in his work, he was indebted, I believe, to Thomson’s concept of dissipation of energy via the good offices of Maxwell and his Theory of Heat. Maxwell, in turn, was indebted to Gibbs in the revision of his treatment of available and unavailable energy in his Theory of Heat, thereby uniting the two traditions of entropy and dissipation” [14].

Figure 4. Schools of engineering thermodynamics of modern mechanical theory of heat (modern MTH)

The red triangle represents the interaction among Kelvin, Maxwell, and Gibbs with regard to the evolution of the concept of available energy into Gibbs free energy (see discussion in the text in Sect. 4).

In Figure 4, therefore, a dotted line is added to the original Figure 1 to represent the linkage between Thomson (Kelvin), credited with the introduction of the general concept of available energy, and Gibbs, credited with the introduction of the specific example of Gibbs free energy. The intersection among Thomson, Maxwell, and Gibbs is represented by the red triangle.

We call this development the modern MTH of the MIT School. The highlighted in Figure 4 shows modern MTH’s lineage of Ecole Polytechnique (Carnot), Glasgow (Black, Watt, James Thomson, William Thomson, and Rankine), Yale (Gibbs, and indirectly through Gibbs: Clausius and Planck of the Berlin school), and MIT (Keenan, Hatsopoulos, Tisza, Callen, Tribus, and Bejan).

By clearly identifying this lineage, we point out that the core principle of modern MTH is William Thomson’s important, but at the same time problematic, self-evident energy principle. That is, modern MTH, in accepting the entropy principle developed by the Berlin School, did not question the self-evident energy principle. As a result, in its assimilation of the entropy principle, engineering thermodynamics never freed itself from a fundamentally energetic approach, in which the second law serves only as a selection principle of nature [4]. Most importantly, in the application of the principle (as a selection principle), the selection mechanism is understood to be efficient causation or physical necessity. Therefore, in the reverse energy conversion of heat to work, the amount of heat that can be converted to mechanical work is subject to strict limitation according to the selection principle.

The second law as a selection principle is, however, not how Carnot saw his theory: “Heat alone is not sufficient to give birth to the impelling power: it is necessary that there should also be cold; without it, the heat would be useless” [15]. He saw in the process of “heat transfer from hot body to cold body,” rather than heat itself or the “substance” of energy, a causal-necessary principle of nature.

Modern MTH demonstrated that there are two thermodynamics: the engineering thermodynamics of the Glasgow School/MIT School and the equilibrium thermodynamics of the Berlin/Gibbs/Lewis Schools. The analysis in this section also exposed that the shortcoming in the Glasgow School synthesis of Carnot’s theory remained unresolved by the MIT School, because the MIT School stopped short of fully assimilating the entropy principle by cleansing the new principle of the energy principle (see also Ref. [5] on the necessity of cleansing the first law of the energy principle so that the first law and the second law are clearly demarcated). Carnot had hinted that the resolution requires appreciating the causal nature of the entropy principle.

5. The Dual Nature of the Entropy Principle: Publication of A Treatise of Heat and Energy

First of all, Planck noted that the energy principle was not a universal principle because Kelvin’s general conclusion 2 should be rejected. In Sect. 5.10 of A Treatise [6], a detailed demonstration of the falsity of general conclusion 2 was worked out (see also Sect. 5.3 below). That is, whereas the growth of entropy is universal, the degradation of mechanical energy is spontaneous but not universal. The energy principle is subsumed under the entropy principle, which is the universal principle.

5.1 Two Ecole-Polytechnicians: From Carnot to Poincare

In addition to the aforementioned rejection of the energy principle, the starting point for the full assimilation of the entropy principle in A Treatise was an insight of Poincare. Poincare, the mathematician par excellence, physicist, polymath, and another famous Ecole-Polytechnician, noted on the meaning of the two laws of thermodynamics: [These thermodynamic laws] can have only one significance, which is that there is a property common to all possibilities; but in the deterministic hypothesis there is only a single possibility, and the laws no longer have any meaning. In the indeterministic hypothesis, on the other hand, they would have meaning, even if they were taken in an absolute sense; they would appear as a limitation imposed upon freedom. (Poincaré, 1913, pp. 122-123 [16]; also see [6])

With the subsumption of the energy principle under the entropy principle, this insight provided by Poincare added the crucial element in how Carnot’s challenge to Newtonian metaphysic can be carried out.

Newtonian metaphysic rested on five presuppositions, as summarized by Depew and Weber [17],

  1. Closure [i.e., physical necessity as manifested in efficient causation]—Only material and mechanical causes are operant in nature.

  2. Atomism—Systems can be taken apart and the pieces studied individually. The behavior of the ensemble is the sum of the behaviors of the individual parts.

  3. Reversibility—The laws of nature are reversible. They appear the same whether time is played forward or backward.

  4. Determinism—Given some small tolerance, ε, the behavior of a system can be predicted to within some corresponding tolerance, δ.

  5. Universality—The laws of nature are valid at all temporal and spatial scales.

The Newtonian world was reversible. Carnot, by inventing the reversibility idealization (a theoretical construct in which reversibility is invoked not in the sense that all processes are reversible according to the laws of physics but in the sense of an “engineered” reversible event), disclosed that the Carnotein world is irreversible, thus directly challenging presupposition 3.

What Poincare added was this: in deterministic dynamical systems, including those of the statistical kind, physical necessity [18] prevails and, as a result, there is only a single possible event. In the context of statistical mechanics and equilibrium thermodynamics, these events of single possibility are, in each case, events approaching equilibrium. Accordingly, the laws of thermodynamics, the first law and the second law in its full version, have no meaning; only the extreme principles derived from the second law were required as the postulational basis of equilibrium thermodynamics, e.g., in Callen’s treatment [12]. The crucial suggestion by Poincare was the rejection of determinism, thus rejecting presupposition 4—his innovation here was in relating the issue of determinism vs. indeterminism to the issue of multiple possibilities (in fact, infinite possibilities) in how a non-equilibrium system approaches equilibrium. That is, he related the issue of determinism vs. indeterminism, presupposition 4, to the issue of what kind of causes operant in nature limit outcomes to a single possibility vs. give rise to multiple possibilities, presupposition 1. A new kind of cause was hinted at by the suggestion of “a property common…” to all these possibilities.

Failure to see multiple possibilities in the Newtonian world also results from the lack of heterogeneity in physical laws. Elsasser, the physicist, was interested in carrying the lessons of physics’ success over to biology. He noted, “The logic of the two sciences [physics and biology] is different: physics uses homogeneous classes (which lend themselves readily to mathematical formulations [and their efficient causality]), whereas the preferred tools of the biologist are heterogeneous, and finite, classes” [19]. Heterogeneity in biology leads to an immense number of combinatoric possible outcomes. As a result, “there can be billions or trillions of combinations of a given heterogeneous system that are capable of satisfying exactly…the fundamental laws [of physics]. The laws are not violated and they continue to constrain possibilities, but they cannot discriminate among a plurality of system configuration…” noted Ulanowicz [20]. Similar kinds of “heterogeneity” exist in engineered systems, in contrast to the homogeneity of the idealized physical objects/systems studied in physics.

5.2 Entropy growth potential (EGP)

Since Carnot, we have known that an irreversible spontaneous event can be made into a reversible event. The two book-end events—the single possibility of spontaneous event (in accordance with physical necessity) and the reversible event (in accordance with the second law)—define the range of possibilities. In view of the 1913 Poincare insight, we called the range of these possibilities the Poincare range [6].

Consider the generalization of Carnot’s “heat transfer from hot body to cold body” to the entropy growth of a system (if the system is an isolated system), or of a system and its interactive surroundings (if the system is in interaction with its interactive surroundings-reservoir). We denote the spontaneous entropy growth as $\left[\left(\Delta_{G} S\right)_{\text {universe }}\right]_{\text {spon }}$ in both cases, the case of isolated systems and the case of interactive system-surroundings. The conceptual advance made in A Treatise was the identification of this entropy growth as entropy growth potential, by defining

$\left[\left(\Delta_{G} S\right)_{\text {universe }}\right]_{\text {spon }}=\left(\Delta_{P} S\right)_{\text {universe }}$    (1)

in which $\left(\Delta_{P} S\right)_{\text {universe }}$ is called the entropy growth potential (EGP). EGP becomes the driver of every event in a given Poincare range [6, 21-23], in the following sense (see Sects. 5.3 and 5.4).

5.3 EGP-enabled extracted heat is always entirely converted to mechanical work

In terms of EGP, the driver of a Carnot cycle is $-\frac{Q_{H}}{T_{H}}+\frac{Q_{H}}{T_{L}}$. This EGP enables the extraction of heat from the heat reservoir at $T_{L}$ in the amount $T_{L} \cdot\left(-\frac{Q_{H}}{T_{H}}+\frac{Q_{H}}{T_{L}}\right)=T_{L} \cdot\left(\Delta_{P} S\right)_{\text {univ }}$, converting it to mechanical work, $W_{\text {rev }}$. Note that, in this case, EGP is dependent on the temperature, $T_{L}$, of the reservoir [6]. The Carnot cycle is an example of the case of interactive “system-$T_{0}$-surroundings” with the general reversible mechanical work formula,

$W_{\text {rev }}=T_{0}\left(\Delta_{P} S\right)_{\text {universe }}$    (2)
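A minimal numerical check (using the same illustrative values as the Carnot-Kelvin sketch in Sect. 2) that the entropic formula of Eq. (2) reproduces the energetic Carnot-Kelvin result:

```python
# Entropic (EGP) treatment of the Carnot cycle, Eq. (2): the spontaneous
# transfer of Q_H from T_H to T_L defines the entropy growth potential
# EGP = -Q_H/T_H + Q_H/T_L, and W_rev = T_L * EGP. Illustrative numbers only.

Q_H, T_H, T_L = 1000.0, 600.0, 300.0         # kJ, K, K

EGP = -Q_H / T_H + Q_H / T_L                 # (Delta_P S)_universe, kJ/K
W_rev_entropic = T_L * EGP                   # Eq. (2) with T_0 = T_L
W_rev_energetic = Q_H * (1.0 - T_L / T_H)    # Carnot-Kelvin formula

print(EGP, W_rev_entropic, W_rev_energetic)
# EGP ~ 1.667 kJ/K; both work expressions give 500.0 kJ
```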

This entropy-centric treatment has an advantage in accordance with the principle of parsimony: the same treatment applies to isolated systems. Indeed, while a Carnot cycle can be analyzed either in terms of the energetic treatment of Sect. 2 or in terms of the entropic treatment here, isolated systems can be analyzed only in terms of entropy. Such systems are overlooked in the theory of exergy because exergy is defined as “the energy that is available to be used.” Isolated systems, in their changes toward internal thermodynamic equilibrium, involve no change in energy. By “definition,” therefore, isolated systems are not the kind of systems of interest to the theory of exergy. This is, of course, a totally erroneous, energetic way of looking at the problem.

Given an isolated system, whether it is an example of free expansion [5, 6, 22], a pure diffusion process [6], or heat transfer between internal components of a thermal composite system [5, 6, 22], reversible work can be derived from its EGP, in accordance with the principle of increase of entropy, in the amount

$W_{\text {rev }}=T_{\text {reservoir }}\left(\Delta_{\boldsymbol{P}} S\right)_{\text {universe }}$    (3A)

In these cases, EGP is independent of the reservoir temperature $T_{\text {reservoir }}$; so, the reversible work can be derived from any available heat reservoir of arbitrary temperature.

In every isolated system, its EGP enables the reversible extraction of heat, of an amount equal to the product of EGP and $T_{\text {reservoir }}$ (of an available reservoir), converting it entirely (100%) to mechanical work. That is, every isolated system offers evidence of the monstrous misconception in GS-4b.
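The following sketch applies Eq. (3A) to one such isolated system, a thermal composite of two identical finite bodies; the heat capacity, temperatures, and reservoir temperature are invented for illustration only.

```python
import math

# Thermal composite as an isolated system: two identical blocks of heat
# capacity C at T1 and T2. Spontaneous internal heat transfer brings both
# to Tf = (T1 + T2)/2, producing the entropy growth potential (EGP)
#   EGP = C*ln(Tf/T1) + C*ln(Tf/T2).
# Following Eq. (3A), W_rev = T_reservoir * EGP may be obtained, with the
# extracted heat converted entirely to work. Numbers are illustrative.

def egp_thermal_composite(C, T1, T2):
    Tf = 0.5 * (T1 + T2)                      # final common temperature
    return C * math.log(Tf / T1) + C * math.log(Tf / T2)

C = 10.0                                      # kJ/K (illustrative)
T1, T2 = 600.0, 300.0                         # K
T_reservoir = 290.0                           # K, any available reservoir

EGP = egp_thermal_composite(C, T1, T2)        # > 0 always, by the principle
                                              # of increase of entropy
W_rev = T_reservoir * EGP                     # Eq. (3A)
print(f"EGP = {EGP:.3f} kJ/K, W_rev = {W_rev:.1f} kJ")
# -> EGP = 1.178 kJ/K, W_rev = 341.6 kJ
```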

5.4 Common property of a Poincare range, and the dual nature of the entropy principle

Each event in a Poincare range results in a different entropy growth [6], but all events in the range are driven by the same EGP. That is, EGP is the common property of all events in a Poincare range, and it is in this sense that EGP is the driver.

We can at this point conclude that the Berlin School’s formulation of the entropy principle as a selection principle, which led to the misconception of GS-4b, is an incomplete formulation. GS-4b is a misapplication of the entropy principle (see Ref. [5]), but an entropy principle that is correctly formulated can and should avoid such a misapplication. In the context of engineering thermodynamics, therefore, the entropy principle is to be understood by its dual nature as entropy-growth-selection principle of physical necessity as well as entropy-growth-potential principle of causal necessity.

Causal necessity is defined in Glossary of Ref. [6] as:

The idea that agents can start new causal chains that are not pre-determined by the events of the immediate or distant past in accordance with physical necessity. That is, causation is not limited to efficient causation alone; nor is necessity limited to physical necessity. Causal necessity, therefore, is a new metaphysical presupposition breaking away from the metaphysical presupposition of physical necessity.

We are not prepared to discuss what “agents” are. Instead, we simply note thermodynamics’ uniquely anthropomorphic human origin [1], and argue that, in view of that undeniable fact, thermodynamics needs a new set of metaphysical presuppositions (see [5]) including the presupposition of causal necessity—without them the laws have no meaning and the theory is incoherent. A coherent theory of engineering thermodynamics is to be formulated based on the new Carnot-Poincare-Ulanowicz (CPU) presuppositions:

  1. Closure is rejected—Both material and mechanical causes and efficacious causes are operant in nature.
  2. Atomism—Feynman is right: it is still “the most powerful assumption of all” [24]; but atomism is not reductionism, being limited to reductionism is a mistake, and we need to accept a systems/network approach as well (see Sect. 6).
  3. Reversibility in the Universe is rejected.
  4. Determinism is rejected, not just on probability grounds but because of efficacious causation.
  5. Universality—The laws of nature are valid at all temporal scales—true, but the fact that laws cannot be violated does not mean that laws govern deterministically (see Ulanowicz’s comment in Sect. 5.1 above, and Sect. 6.4 below).

Based on the set of CPU presuppositions, a new theory of engineering thermodynamics will adopt the reformulated first law and second law of thermodynamics as its fundamental premises:

  1. the first law is the energy conservation principle that manifests universal connection rather than universal interconversion [5];
  2. the second law is the entropy growth selection principle as well as the EGP causal principle (A Treatise [6]).

The significance of the dual nature of the entropy principle is found in the understanding that when the principle is considered solely as a selection principle, the selection mechanism is understood to be efficient causation or physical necessity. Whereas, when the principle is considered to be both a selection principle and a causal principle, the selection, quoting a reviewer of an earlier submission of this paper, is “implemented by local (non-universal) laws that operate on historical contingencies.” Contingency is the operative word; in contrast, under physical necessity the outcome is necessary and determined rather than contingent. Elsasser also stressed selection as the key issue in biology: “We introduce a principle of selection. It asserts in essence that living things become defined only by a selection being made by nature whereby the actually occurring states are distinguished from the immense multitude of possible ones” [19].

6. Engineering and Biology: Causation in Maximum Entropy Production Principle (MEPP) vs. Causation in Theory of “Engineering” Thermodynamics and Process Ecology

This section considers the wider implication of the proposed “engineering” thermodynamics. Traditional physics is based on the metaphysical presupposition of physical necessity and efficient causation. The argument that thermodynamics, i.e., engineering thermodynamics in contrast with equilibrium thermodynamics, needs a new set of presuppositions on necessity and causation shines a light on its possibly wider implications for fields outside technology.

Figure 5. The Brussels School: Thermodynamic theories of self-organized complex systems

The dotted red line asks the rhetorical question of whether biology/ecology, as spontaneous self-organization phenomena, in fact share a stronger affinity with engineering than with physics. Note that both MEPP and process ecology are much more recent developments than 1934; they obviously took place after “Prigogine 1967-77”.

It is perhaps not out of place to include in this context a discussion of biology and ecology, which may also need a new set of presuppositions on necessity and causation. We see similarity in these needs between engineering and biology. We are also motivated by the longstanding enigma of how the phenomenon of life is possible in view of the entropy principle as a selection principle, which is conventionally viewed through the lens of the inevitable creation of disorder. Life vs. entropy has fascinated scientists and theologians around the world ever since Clausius made the bold claim, “The entropy of the universe tends to a maximum.” There has been almost universal acceptance of the inevitable production of entropy as synonymous with the inevitable creation of disorder.

Boltzmann and Lotka were the first two to bring up this discussion of life phenomena in terms of energy and entropy. Boltzmann saw the potential connection of his scientific work in physics with biology and evolution theory [25]. But it was Schrodinger, another prominent figure from the Berlin School, who made the actual connection in terms of two necessary conditions for life: order from order and order from disorder [26]. Suggestion of the first necessary condition, order from order for the propagation of life, was followed up by the discovery of DNA and the development of molecular biology, a breathtakingly prescient suggestion in linking biology with physics.

We want to address Schrodinger’s second suggestion, that of order from disorder, which also linked biology with physics, and that suggestion’s aftermath. On the face of it, it was an audacious (but captivating) idea to say that the principle proclaiming that things fall apart can be the foundation for explaining the emergence of order, especially when that principle is taken to be a selection principle imposing strict limits on what is possible. Schrodinger made the captivating argument, however, that self-organization, the local emergence and maintenance of order, is possible by “exporting” local systems’ entropy (resulting from their internal entropy production) to the external environment, leading to greater global disorder. He did not provide a detailed explanation of the connection between the exporting steps and the development of self-organization.

6.1 From dissipative structures to maximum entropy production principle (MEPP)

A concrete link of that kind, offering a systematic explanation, was provided by Prigogine and co-workers of the Brussels School (see Figure 5) [27, 28]. They called the systematic explanation the theory of dissipative structures, which describes how systems far from their thermodynamic equilibrium states develop organization or order spontaneously.

A paradigmatic dissipative structure is Bénard convection, an experiment involving a shallow vessel of fluid with a source of heat below it. Initially the fluid and vessel are isothermal, at the same temperature as the surroundings, with the source of heat being infinitesimal; that is, the fluid system is infinitesimally near thermodynamic equilibrium. The experiment proceeds by gradually increasing the temperature of the heat source, inducing a temperature gradient vertically across the system. This temperature gradient drives the system away from equilibrium. At low gradients, energy is transferred/dissipated by conduction heat transport. Microscopic motion increases at the bottom of the fluid, and this increased motion is transmitted upward through the fluid, eventually heating the air above it. During the conduction phase, the fluid's viscosity confines each molecule to its layer, keeping the fluid macroscopically stationary. As the temperature of the heat source, and thus the temperature gradient across the fluid, continues to rise, a critical threshold of gradient is crossed beyond which the conduction steady state becomes unstable (the viscous force is overcome) and a new mode of energy transport emerges: convection. The form convection takes depends upon the boundary conditions of the system. In the most celebrated version of Bénard's experiment the vessel is cylindrical and open at the top. In this case, the convective structure that emerges is a lattice of hexagonal convection cells.

The “dissipative structure” of Bénard convection is a self-organized complex pattern developed in a system in association with prodigiously moving energy through the system. The theory of dissipative structures thus offers an explanation of the maintenance of local order or organization by prodigiously transporting energy and entropy through the local system, thereby producing a greater entropy gain in the system’s external surroundings—i.e., local order at the expense of global order. Even so, this scenario begs the question of how local orders can prosper in the face of increasing global disorder. Nonetheless, Prigogine’s examples of the emergence of order, even if they were only local orders, became widely popular, and the idea of dissipative structures has been embraced in many disciplines [29, 30], especially in the climate sciences and ecology literature [31].

6.2 Maximum entropy production principle (MEPP) in climate and other physical sciences

In the climate sciences, the approach found success by viewing the manner of increasing overall entropy production as taking the form of the maximum entropy production principle (MEPP): non-equilibrium thermodynamic systems are organized in steady state such that the rate of entropy production is maximized. It was first hypothesized by the climate scientist Paltridge [32, 33]. Lorenz, a planetary scientist, applied MEPP to Saturn’s moon Titan and to Mars, “successfully predict[ing] the heat flows and zonal temperatures of Mars and Titan” [34].

Since these pioneering works, R. Dewar has proposed a theoretical foundation for MEPP based on the Jaynes informational approach to statistical mechanics [35, 36]. Questions have been raised about the theoretical justification of MEPP [30, 37, 38]. The general nature of the principle and its justification are put by Martyushev this way:

“Note first that a principle like MEPP cannot be proved. Examples of its successful applications for description of observed phenomena just support this principle, while experimental results (if they appear) contradicting the principle will just point to the region of its actual applicability. The balance of the positive and negative experience will eventually lead to the consensus of opinion on the true versatility or a limited nature of MEPP” [39].

The success of MEPP in the climate sciences and other physical sciences [30] has led to the suggestion that the principle can provide the organizing principle that potentially unifies the biological and physical sciences [29, 30, 39, 40].

It is important to note that the ability of “exporting their entropy as a result of their internal entropy production to the external environment” is, according to Schrödinger, endowed only to the animate; the inanimate has no such ability. He added, “new laws to be expected in the organism.” In refining Schrodinger’s proposal, therefore, Prigogine made a significant change by removing the animate/inanimate distinction, making it possible for dissipative structures, and correspondingly MEPP, to be the framework for the emergence of both physical and biological orders. Whereas, by saying “new laws to be expected in the organism,” Schrodinger might be interpreted as allowing the possibility of new necessity and causation, Prigogine took the physicalist position that the only necessity and causation are physical necessity and efficient causation.

We have already made the case for broadening the metaphysical foundation of engineering thermodynamics. In Subsects. 6.3 and 6.4, we suggest the same broadening for biology and ecology.

6.3 Order from disorder, taking the road less travelled

The fundamental mystery is the emergence of biological, ecological, and social orders: while MEPP explains the emergence of local order, the unanswered question remaining is how local orders can flourish in the environment of global chaos that their existence creates.

This part of the discussion is limited to ecological systems. Our view on causation and the emergence of order is taken from our experience as engineers witnessing the emergence of social and infrastructural/technological orders. The author has no expertise in biology and ecology and, on ecological matters, this discussion is based on the article by Meysman and Bruers [41].

M and B investigated the application of Schrodinger’s and Prigogine’s ideas, in three specific versions, to ecosystems and performed empirical testing of the three versions (which they correctly called hypotheses). The three versions are:

Hypo-1. The idea of ‘increased entropy production as a sign of life’—the hypothesis of Schrodinger (1944), as reformulated and sharpened by Ulanowicz & Hannon [42].

Hypo-2. MEPP—The standard version, state selection principle, details how the system will behave under constant external boundary conditions. When a system can attain multiple steady states, the stable state will be the one that shows the highest entropy production rate.

Hypo-3. MEPP—The gradient response principle [43, 44] details how the system will behave when the external boundary conditions are changed. When the thermodynamic gradient increases, the system’s new stable state should be accompanied by a higher entropy production rate.

Only Hypo-2 and Hypo-3 are MEPP hypotheses, which are physicalist hypotheses; Hypo-1 is not.

The findings of M and B, as reported in Summary of [41], are as follows (quoted nearly in full):

Overall, from our analysis, we conclude that Schneider & Kay (1994) have forwarded a too simplistic analogy between the thermodynamic operation of ecosystems and Rayleigh–Bénard convection. The consequence of this is that state selection and gradient response principles are not generally applicable to ecosystems. Because of trophic interactions across more than one level, the stable state of the ecosystem is not necessarily the one that has the highest entropy production rate, thus, invalidating the state-selection hypothesis…Similarly, the total entropy production does not necessarily increase when the primary thermodynamic gradient increases, thus invalidating the gradient response hypothesis. From an ecological point of view, this implies that a more complex ecosystem (defined as having more trophic levels) must not necessarily be associated with an increased entropy production rate. However, the hypothesis of Schrodinger (1944), as reformulated and sharpened by Ulanowicz & Hannon (1987), which states that living communities augment the rate of entropy production over what would be found in the absence of biota, holds for all the food webs tested here [41].

In view of Martyushev’s aforementioned comment, “experimental results (if they appear) contradicting the principle will just point to the region of its actual applicability” [39], these M&B findings at least open the possibility that MEPP falls short as a universal physicalist organizing principle. Claims that MEPP is such an organizing principle have often cited Ulanowicz (1987 [42]) as supporting evidence [40, 45]. This is problematic since Ulanowicz takes a clear stand against physicalist philosophy. In the following, Ulanowicz’s own post-1987, non-physicalist treatment of ecological systems is summarized.

Ulanowicz began his treatment with the Boltzmann–Gibbs formulation of statistical entropy [8],

$H=k \sum_{i} p_{i}\left(-\ln \left[p_{i}\right]\right)$    (3B)

where H is the statistical representation of entropy, $p_i$ is the probability of event i, properly normalized to fall between zero and one, and k is a scalar constant. The formula has been grouped in this particular fashion to emphasize that the statistical entropy is the average value of the term in parentheses, $s_{i}=k\left(-\ln \left[p_{i}\right]\right)$, where $s_i$ has been called the ‘surprisal’ of possible outcome i. That is, the statistical entropy is,

$H=\sum_{i} p_{i} s_{i}=\sum_{i} h_{i}$    (4)

where $h_{i}=p_{i} s_{i}$, which may be referred to as the ‘specific’ statistical entropy.

Because $-\ln \left[p_{i}\right]$ increases monotonically as its argument $p_{i}$ decreases, the surprisal $s_{i}$ becomes a measure of event i’s nonexistence; i.e., a very large $s_{i}$ means that event i does not occur most of the time $\left(s_{i} \rightarrow \infty\right.$ as $\left.p_{i} \rightarrow 0\right)$. Note further that in (4), where $h_{i}=p_{i} s_{i}$, the probable presence of an event, $p_{i}$, is multiplied by the corresponding measure of its nonexistence, $s_{i}$. Therefore, $h_{i}$ becomes, de facto, a gauge of that event’s indeterminacy.

“To see this, one notes that when $p_{i}$ ≈ 1, the event is almost certain, and $h_{i}$ ≈ 0; when $p_{i}$ ≈ 0, the event is almost surely absent, and again $h_{i}$ ≈ 0. It is only for intermediate, less determinate values of $p_{i}$ that $h_{i}$ becomes appreciable, achieving its maximum value at $p_{i}$ = (1/e). It should not, therefore, be too surprising that Eqn. (4) is germane to change and evolution” [8]. The specific statistical entropies ($h_{i}$’s) are measures of potential for change.
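A minimal numerical check of this behavior of the specific statistical entropy (with k set to 1 for illustration):

```python
import math

# Specific statistical entropy h(p) = -k * p * ln(p): the single-outcome term
# h_i = p_i * s_i from Eqs. (3B)-(4), with k set to 1 for illustration.
# h vanishes as p -> 0 and as p -> 1, and peaks at p = 1/e.

def h(p, k=1.0):
    return -k * p * math.log(p)

for p in (0.001, 1.0 / math.e, 0.999):
    print(f"p = {p:.3f}, h = {h(p):.4f}")
# p = 0.001 -> h ~ 0.0069; p = 0.368 (1/e) -> h ~ 0.3679 (maximum); p = 0.999 -> h ~ 0.0010
```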

When consideration is limited to the distributions of atoms or molecules of an ideal gas, i.e., a gas consisting of point-sized particles having no interactions with one another, Boltzmann of course showed that the change, beginning at an initial state of low statistical entropy, is unidirectional towards increasing entropy corresponding to maximum dispersion or maximum disorder—as the sole endpoint.

With the consideration of conditional probability and joint probability, $p\left(a_{i} \mid b_{j}\right)$ and $p\left(a_{i}, b_{j}\right)$, Eq. (3B) can be written as,

$H=-k \sum_{i} p\left(a_{i}\right) \log \left[p\left(a_{i}\right)\right]$

$=k \sum_{i} \sum_{j} p\left(a_{i}, b_{j}\right) \log \left[\frac{p\left(a_{i} \mid b_{j}\right)}{p\left(a_{i}\right)}\right]-k \sum_{i} \sum_{j} p\left(a_{i}, b_{j}\right) \log \left[p\left(a_{i} \mid b_{j}\right)\right]$    (5)

Ulanowicz then made the crucial point that if the statistical entropy, (3B), (4) and (5), is instead applied to a system whose elements have the capacity to interact with each other, its measure of potential for change may manifest other possible endpoint(s). Introduce, henceforth, a joint-probability-based statistical entropy, H’, i.e., the statistical entropy of a collection of processes,

$H^{\prime}=-\frac{k}{2} \sum_{i} \sum_{j}  p\left(a_{i}, b_{j}\right) \log \left[p\left(a_{i}, b_{j}\right)\right]$    (6)

The decomposition corresponding to (5) may be applied to (6), resulting in

$H^{\prime}=I+\Phi$    (7)

where,

$I=\frac{k}{2} \sum_{i} \sum_{j} p\left(a_{i}, b_{j}\right) \ln \left[\frac{p\left(a_{i}, b_{j}\right)}{p\left(a_{i}\right) p\left(b_{j}\right)}\right]$    (7A)

$\Phi=-\frac{k}{2} \sum_{i} \sum_{j} p\left(a_{i}, b_{j}\right) \ln \left[\frac{p\left(a_{i}, b_{j}\right)^{2}}{p\left(a_{i}\right) p\left(b_{j}\right)}\right]$    (7B)

Note that for independent elements or events the first part on the RHS of (5) becomes zero and the second part reduces to the Boltzmann/Gibbs statistical entropy. Ulanowicz called I in Eqn. (7) “mutual information,” and Φ “conditional entropy.” Again, for independent elements or events, i.e., $p\left(a_{i}, b_{j}\right)=p\left(b_{j}\right) p\left(a_{i} \mid b_{j}\right)=p\left(a_{i}\right) p\left(b_{j}\right)$, the mutual information I, corresponding to the first part of the RHS of (5), becomes zero, and the conditional entropy Φ, corresponding to the second part of the RHS of (5), reduces to the Boltzmann/Gibbs statistical entropy.

In general, the mutual information I is non-zero, providing a measure of the mutual constraint within interactive network systems (autocatalytic configurations of processes). An autocatalytic configuration of processes that induces the attraction of materials and resources into its circuit is said to have the property or function of centripetality. Centripetality is a manifestation of causal necessity and efficacious causation, and it is a critical function for organisms. At this point, Ulanowicz introduced the definition of the “degree of order” of an autocatalytic configuration, α, as

$\alpha=I / H^{\prime}$    (8)

as well as the somewhat arbitrary definition of “robustness” in terms of the degree of order,

$R=-\alpha \ln (\alpha)$    (9)
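The following computational sketch works through Eqs. (6)-(9) for an invented 3×3 joint-probability matrix standing in for a small normalized flow network; the numerical values are hypothetical, chosen only to illustrate how H’, I, Φ, α, and R are computed and how the identity H’ = I + Φ holds.

```python
import math
import numpy as np

# Sketch of Eqs. (6)-(9) for a small, hypothetical joint-probability matrix
# p(a_i, b_j) (invented values standing in for a normalized flow network).

p = np.array([[0.20, 0.05, 0.00],
              [0.05, 0.25, 0.05],
              [0.00, 0.05, 0.35]])
assert abs(p.sum() - 1.0) < 1e-9

pa = p.sum(axis=1)   # marginal probabilities p(a_i)
pb = p.sum(axis=0)   # marginal probabilities p(b_j)
k = 1.0

H_prime = I = Phi = 0.0
for i in range(p.shape[0]):
    for j in range(p.shape[1]):
        if p[i, j] > 0:
            H_prime += -(k / 2) * p[i, j] * math.log(p[i, j])                     # Eq. (6)
            I += (k / 2) * p[i, j] * math.log(p[i, j] / (pa[i] * pb[j]))          # Eq. (7A)
            Phi += -(k / 2) * p[i, j] * math.log(p[i, j] ** 2 / (pa[i] * pb[j]))  # Eq. (7B)

alpha = I / H_prime              # degree of order, Eq. (8)
R = -alpha * math.log(alpha)     # robustness, Eq. (9)

print(f"H' = {H_prime:.4f}, I + Phi = {I + Phi:.4f}")   # identical, by Eq. (7)
print(f"alpha = {alpha:.3f}, R = {R:.3f}")
```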

As an example of the application of α, Eq. (8), to data on real ecosystems, Zorach and Ulanowicz [46] collected weighted networks of trophic exchanges in 48 ecological communities. They calculated values of α and R for these systems and plotted them against each other. The plot is reproduced here as Figure 6—that the points in the figure coincide precisely with the curve is merely a matter of identity (9). The meaning of the empirical results is found in the empirical range of the degree of order and the fact that the degrees of order are not clustered around α=0 or $\alpha=1$. Instead, the $\alpha^{\prime} s$ are found at middling values of the theoretical range (0,1).

Figure 6. Reproduced Figure 4 [8]—The degrees of order and their corresponding magnitudes of robustness for the 48 sample-weighted ecosystem flow-networks reported by Zorach and Ulanowicz [46]

Figure 7. What do the degrees of order of the 48 samples tell us?

α=0 corresponds, theoretically, to ecosystems in total dissolution and α=1 to perfectly efficient ecosystems. The fact that real ecosystems as a group lie away from the α=0 endpoint means that ecological systems exhibit a degree of order higher than that of MEPP orders.

Significantly, there is a limit to moving towards perfect order: “systems too near α=1 are so fully constrained that they no longer can evolve” [8]; thus, they lose redundancy (as measured by conditional entropy) and resilience.

Centripetality is obviously a critical function. If external materials and resources are removed from the system (the system is deprived of its centripetality), it will devolve towards one of the polar extremes, the equilibrium of α=0 or the equilibrium of α=1. Under Boltzmann's assumption, the only possible end-state or endpoint is α=0. Under Ulanowicz's assumption, as shown in Figure 6, however, besides the endpoint of the “heat death” of thermodynamic equilibrium there is another theoretical endpoint, α=1, at which a system survives with perfect efficiency. This is reminiscent of the perfectly reversible Carnot heat engine/heat pump.

Of course, real engines are not perfectly reversible, nor are real organisms perfectly efficient. Efficiency and redundancy for resilience are both crucial requirements (see Figure 7) for organisms and ecosystems. Here, biology has a lesson for engineering and economics (including the operation of health/medical institutions during normal times as well as during the COVID-19 pandemic), for which efficiency and resilience are both indispensable.

6.4 MEPP vs. process ecology

The second law asserts the unidirectional nature of entropy growth but makes no demand on the rate of that growth: the rate of entropy growth can speed up but it can also slow down.

By asserting that the rate of entropy growth always increases towards a maximum, MEPP makes a bold new assertion. Being a variational principle in the best tradition of physics, the principle is potentially a landmark discovery. Commenting on the nature of the principle, Martyushev added, “Other principles, such as laws of thermodynamics and Newton's law, developed along similar lines” [39]. But there is an important difference between MEPP and those truly universal laws of physics: those laws are universal precisely because they do not govern deterministically in all domains, i.e., they govern deterministically only in the domain of homogeneous classes of physical systems but allow centripetality to function in the domain of heterogeneous classes. MEPP, by contrast, assumes causal power over domains of all classes.

We have a contradiction: spontaneous self-organization is associated with the idea of physical necessity, implying no efficacious causation; yet, through its association with the entropy principle, MEPP claims to possess the principle's causal power. Claiming causal power makes MEPP more powerful, but at the same time it makes it impossible for MEPP to be a universal principle such as Newton's law.

It is suggested that we avoid using the term spontaneous self-organization for autocatalytic configurations in biology and ecology because of the term's association with physical necessity in homogeneous classes, which tends toward maximum dispersion or maximum disorder. Both Elsasser and Ulanowicz argue for selection to be the discriminating cause of complex configurations of heterogeneous classes: “there can be billions or trillions of combinations of a given heterogeneous system that are capable of satisfying exactly…the fundamental laws [of physics]. The laws are not violated and they continue to constrain possibilities, but they cannot discriminate among a plurality of system configuration…” as Ulanowicz noted, as reported in Sect. 5.1. In this section, the paper has reported how Ulanowicz formulated a quantitative measure of orders [8] and how his ecosystem findings [46] confirmed the emergence of orders away from maximum dispersion and disorder.

In the left corner of Figure 7, the triangles shown within the MEPP circle represent the kind of orders derived from MEPP spontaneous self-organization. In the Boltzmann-Gibbs treatment, the arrow represents the tendency towards the endpoint of zero degree of order (α=0). In the MEPP treatment, the systems gain orders spontaneously, as shown schematically by the triangles, but these spontaneous orders remain within the circle of orders of limited degree.

These are not the kind of biological/ecological orders represented by the 48 squares in Figure 7, which show the possibility of moving away from the MEPP circle towards greater degrees of order. Again, there is a limit to moving towards perfect order: “systems too near α=1 are so fully constrained that they no longer can evolve” [8]; thus, they lose redundancy (as measured by conditional entropy) and resilience. Like the perfect reversibility of a Carnot cycle, perfect perpetual harmony is an ideal, impractical and beyond reach.

7. Conclusions

Thermodynamics began as Carnot's theory of heat (1824). Three decades later, Kelvin discovered the self-evident energy principle, the idea of universal degradation of energy, the less-rigorous precedent of the entropy principle. Clausius and Planck established the entropy principle, the principle of the inevitable growth of entropy, under which the energy principle is to be subsumed. But the Berlin School of thermodynamics, in establishing the entropy principle, pivoted the theory of thermodynamics into a new scientific stream away from its original theory of engineering thermodynamics. Significantly, in that pivot, thermodynamics, which had originated as an engineering subject, was captured by mechanical philosophy. With the available-energy (availability) concept of Kelvin and Gibbs, the MIT School of thermodynamics pivoted back to engineering thermodynamics, restoring the fundamental role of the two laws in the theory of engineering thermodynamics. But the new engineering thermodynamics, the modern MTH (modern mechanical theory of heat), accepted the Berlin School's take on the entropy principle as a selection principle without challenging the mechanical philosophy of objective science that views the selection mechanism through physical necessity solely. The result was a deeply flawed theory of thermodynamics, with the understanding that the entropy principle implies the inevitable growth of chaos and disorder.

The thesis of the paper is that the inference of the inevitable growth of chaos and disorder as a corollary of the inevitable growth of entropy is a consequence of the metaphysics of physical necessity. Therefore, abandonment of that metaphysics dissolves the inference, overturning the reading of the entropy principle as the principle of the degradation of orders [47].

The first step against the metaphysics of physical necessity was taken by Poincare, who pointed out that the meaning of the two laws is found only if we abandon the determinism of physical necessity; that is, only if we embrace both physical necessity and causal necessity. Following Poincare's insight, the entropy principle was reformulated in A Treatise [6] as the principle of entropy growth selection and entropy growth potential, restoring the dual nature of entropy that was implicit in Carnot's theory of heat. Free from the metaphysics of physical necessity as the sole necessity, the new theory-system rejects the inference “inevitable entropy growth” (universally true) → “inevitable growth of disorder” (not true), as well as the inference of inevitable entropy growth → inevitable accumulation of heat (see paper [5]). In fact, entropy growth potential (EGP), associated with the “inevitable entropy growth,” is the universal cause, with selection based on physical necessity and causal necessity, of the emergence of orders, including biological order.

The analysis in the paper was further applied to an audacious development of the entropy principle, the assertion that the rate of entropy growth also increases in biological processes, known as the maximum entropy production principle (MEPP). The new principle, which was discovered in climate science, was inspired by Schrodinger's 1944 What is Life? and Prigogine's (the Brussels School's) theory of dissipative structures. There is in fact a distinction between Schrodinger, who allows a distinction between the animate and the inanimate, and Prigogine, who does not. Our analysis suggests that MEPP, in the universality asserted by its practitioners, is a physicalist principle based on mechanical philosophy that owes closer lineage to the Brussels School than to Schrodinger's idea, which allows non-physicalist thinking.

The analysis suggests that the orders explained by MEPP are not biological/ecological orders, and it favors the theory of process ecology [8, 20, 42, 46] as an explanatory theory for ecological orders. In formulating this argument, it is noted that process ecology finds greater affinity with engineering than with Cartesian mechanism, in the sense that the true key to biological organization is its causal necessity (centripetality), shared with engineering (efficiency and resilience), rather than its spontaneity (spontaneous self-organization), shared with physics (efficient causation).

Nomenclature

EGP    entropy growth potential
H    statistical entropy, J∙K$^{-1}$
H’    joint-probability based statistical entropy, J∙K$^{-1}$
I    “mutual information”, J∙K$^{-1}$
k    Boltzmann constant, J∙K$^{-1}$
p    pressure, kPa
$p_{i}$    probability of event i
Q    heat flow, kJ
R    “robustness”
S    entropy, kJ∙K$^{-1}$
T    temperature, K
V    volume, $m^{3}$
W    work, kJ

Greek symbols

$\alpha$    degree of order
Δ    ΔS is the entropy growth of an event
Φ    “conditional entropy,” J∙K$^{-1}$

Subscripts

0    heat reservoir a system interacts with
G    growth
P    potential
reservoir    condition of a heat reservoir that an isolated system is in interaction with during the reversible event

  References

[1] Bridgman, P.W. (1961). The Nature of Thermodynamics. Harvard University Press.

[2] Thims, L. (2020). Twelve Founding Schools of Thermodynamics. Accessed on February 5, 2020 from http://www.eoht.info/m/page/Schools+of+thermodynamics. In it, Libb Thims lists twelve founding schools of thermodynamics, including the Glasgow and Berlin schools.

[3] Smith, C., Wise, M.N. (1989). Energy and Empire: A Biographical Study of Lord Kelvin. Cambridge University Press.

[4] Ebeling, W., Hoffman, D. (1991). The Berlin school of thermodynamics founded by Helmholtz and Clausius. European Journal of Physics, 12(1): 1. https://doi.org/10.1088/0143-0807/12/1/001

[5] Wang, L.S., Shi, P. (2020). Against the energy-conversion-doctrine: Why energy conservation manifests universal connection rather than universal interconversion? Renewable Energy—EnerarXiv.

[6] Wang, L.S. (2019). A Treatise of Heat and Energy (Cham, Switzerland: Springer-Mechanical Engineering series). 

[7] Coopersmith, J. (2010). Energy, the Subtle Concept. Oxford University Press.

[8] Ulanowicz, R.E. (2009). Increasing entropy: Heat death or perpetual harmonies? International Journal of Design & Nature and Ecodynamics, 4(2): 83-96. https://doi.org/10.2495/DNE-V4-N2-83-96

[9] Thomson, W. (1852). On a universal tendency in nature to the dissipation of mechanical energy. Mathematical and Physical Papers of William Thomson, 4(25): 304-306. Cambridge: Cambridge University Press.

[10] Uffink, J. (2001). Bluff your way in the second law of thermodynamics. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 32(3): 305-394. https://doi.org/10.1016/S1355-2198(01)00016-8

[11] Planck, M. (1969). Treatise on Thermodynamics, 3rd edition. New York: Dover Publications.

[12] Callen, H. (1985). Thermodynamics and an Introduction to Thermostatistics, 2nd edition. Wiley.

[13] Maxwell, J.C. (1878). Tait's “Thermodynamics” II. Nature, 17(432): 278-280. https://doi.org/10.1038/017278a0

[14] Daub, E.E. (1970). Entropy and dissipation. Historical Studies in the Physical Sciences, 2: 321-354. https://doi.org/10.2307/27757310

[15] Carnot, S. (1960). Reflections on the Motive Power of Fire. Reprinted in E. Mendoza (Ed.), Reflections on the Motive Power of Fire and Other Papers. New York: Dover Publications.

[16] Poincare, H. (1913). Science and Hypothesis. The Science Press, Lancaster, PA.

[17] Depew, D.J., Weber, B.H. (1995). Darwinism evolving: systems dynamics and the genealogy of natural selection. Cambridge, Massachusetts: MIT Press.

[18] Curd, M., Cover, J. (1998). Philosophy of Science: The Central Issues. W.W. Norton & Company.

[19] Elsasser, W.M. (1981). A form of logic suited for biology? In R. Rosen (Ed.), Progress in theoretical biology, 6: 23-62. 

[20] Ulanowicz, R.E. (2016). Process ecology: Making room for creation. Sophia, 55(3): 357-380. https://doi.org/10.1007/s11841-016-0529-x

[21] Wang, L.S. (2013). Exergy or the entropic drive: Waste heat and free heat. International Journal of Exergy, 12(4): 491-521. https://doi.org/10.1504/IJEX.2013.055077

[22] Wang, L.S. (2014). Entropy growth is the manifestation of spontaneity. Journal of Thermodynamics, 2014: 387698. https://doi.org/10.1155/2014/387698

[23] Wang, L.S. (2017). The second law: From Carnot to Thomson-Clausius, to the theory of exergy, and to the entropy-growth potential principle. Entropy, 19(2): 57. https://doi.org/10.3390/e19020057

[24] Toker, D. (2015). The Most Important Sentence. The Berkeley Science Review.

[25] Francis, M.R. (2016). The Hidden Connections Between Darwin and the Physicist Who Championed Entropy. SMITHSONIANMAG.COM (Dec. 15, 2016).

[26] Schrodinger, E. (1944). What is Life? http://www.whatislife.ie/downloads/What-is-Life.pdf.

[27] Prigogine, I. (1967). Introduction to the Thermodynamics of Irreversible Processes. New York, NY: John Wiley. 

[28] Nicolis, G., Prigogine, I. (1977). Self-organization in Nonequilibrium Systems. New York, John Wiley & Sons. 

[29] Kleidon, A., Lorenz, R.D. (2004). Non-equilibrium thermodynamics and the production of entropy: Life, earth, and beyond. Springer Science & Business Media.

[30] Martyushev, L.M., Seleznev, V.D. (2006). Maximum entropy production principle in physics, chemistry and biology. Physics Reports, 426(1): 1-45. https://doi.org/10.1016/j.physrep.2005.12.001

[31] Kleidon, A., Malhi, Y., Cox, P.M. (2010). Maximum entropy production in environmental and ecological systems. Philosophical Transactions of the Royal Society B: Biological Sciences, 365: 1297-1302. https://doi.org/10.1098/rstb.2010.0018

[32] Paltridge, G.W. (1975). Global dynamics and climate‐A system of minimum entropy exchange. Quarterly Journal of the Royal Meteorological Society, 101(429): 475-484. https://doi.org/10.1002/qj.49710142906

[33] Paltridge, G.W. (1979). Climate and thermodynamic systems of maximum dissipation. Nature, 279(5714): 630-631. https://doi.org/10.1038/279630a0

[34] Lorenz, R.D., Lunine, J.I., Withers, P.G., McKay, C.P. (2001). Titan, Mars and Earth: Entropy production by latitudinal heat transport. Geophysical Research Letters, 28(3): 415-418. https://doi.org/10.1029/2000GL012336

[35] Dewar, R. (2003). Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states. Journal of Physics A: Mathematical and General, 36(3): 631. https://doi.org/10.1088/0305-4470/36/3/303

[36] Dewar, R.C. (2005). Maximum entropy production and the fluctuation theorem. Journal of Physics A: Mathematical and General, 38(21): L371. https://doi.org/10.1088/0305-4470/38/21/L01

[37] Bruers, S. (2007). A discussion on maximum entropy production and information theory. Journal of Physics A: Mathematical and Theoretical, 40(27): 7441. https://doi.org/10.1088/1751-8113/40/27/003

[38] Grinstein, G., Linsker, R. (2007). Comments on a derivation and application of the ‘maximum entropy production’ principle. Journal of Physics A: Mathematical and Theoretical, 40(31): 9717. https://doi.org/10.1088/1751-8113/40/31/N01

[39] Martyushev, L.M. (2010). The maximum entropy production principle: two basic questions. Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1545): 1333-1334. https://doi.org/10.1098/rstb.2009.0295

[40] Skene, K.R. (2017). Thermodynamics, ecology and evolutionary biology: A bridge over troubled water or common ground? Acta Oecologica, 85: 116-125. https://doi.org/10.1016/j.actao.2017.10.010

[41] Meysman, F.J., Bruers, S. (2010). Ecosystem functioning and maximum entropy production: A quantitative test of hypotheses. Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1545): 1405-1416. https://doi.org/10.1098/rstb.2009.0300

[42] Ulanowicz, R.E., Hannon, B.M. (1987). Life and the production of entropy. Proceedings of the Royal Society of London. Series B. Biological Sciences, 232(1267): 181-192. https://doi.org/10.1098/rspb.1987.0067

[43] Schneider, E.D., Kay, J.J. (1994). Life as a manifestation of the second law of thermodynamics. Mathematical and Computer Modelling, 19(6-8): 25-48. https://doi.org/10.1016/0895-7177(94)90188-0

[44] Schneider, E.D., Sagan, D. (2005). Into the Cool: Energy Flow, Thermodynamics, and Life. University of Chicago Press.

[45] Martyushev, L.M. (2013). Entropy and entropy production: Old misconceptions and new breakthroughs. Entropy, 15(4): 1152-1170. https://doi.org/10.3390/e15041152

[46] Zorach, A.C., Ulanowicz, R.E. (2003). Quantifying the complexity of flow networks: how many roles are there? Complexity, 8(3): 68-76. https://doi.org/10.1002/cplx.10075

[47] Hirsh, A. (2020). The Idea of Entropy Has Led Us Astray. Nautilus, issue 86. http://nautil.us/issue/86/energy/the-idea-of-entropy-has-led-us-astray