
Entropy as a moose test
Objectivistic physics in danger of skidding

Contents
1. On objectivism in the natural sciences
   a) The non-objectivistic understanding of science
   b) The crisis of objectivism
2. Entropy in phenomenological thermodynamics
   a) Entropy and the heat engine
   b) Entropy as a thing
3. Entropy in statistical physics
   a) Entropy and probability
   b) Information-theoretical entropy and subjectivism
   c) Entropy and the second law of thermodynamics
4. Philosophical implications of the concept of entropy
   a) Entropy and time
   b) Entropy and order

1. On objectivism in the natural sciences

Entropy is a fancy term that is used in many contexts of discussion and that comes easily to many people's lips. In truth, it is an extremely unwieldy physical concept. In order to grasp and fathom its actual meaning, our thinking has to push to the utmost the strategy that it has devised and perfected in the course of the development of modern natural science. Using the example of the efforts to interpret the concept of entropy correctly, one can therefore not only observe how this strategy works, but also recognize where its limits lie and where thinking that does not reflect enough on these limits repeatedly goes into a skid.

If one wants to characterize the strategy just mentioned with a single word, one can call it objectivism. The majority of natural scientists thought and still think objectivistically insofar as, for a long time, all attention was focused on the objects to be analyzed, while their references to the scientific subject and its actions were ignored. When this subject was finally taken up, with great delay, one wanted to see it only as a special form of the object. This view has been maintained to this day, because the strategy of objectivism has proven its worth for centuries by enabling a clear definition and delimitation of research topics and thereby ensuring a radical simplification of highly complex problem areas in the run-up to the actual research process. For several reasons, however, objectivism fell into a deep crisis in the course of the twentieth century, a crisis that is currently more likely to intensify than to ease.

a) The non-objectivistic understanding of science

Before I go into the crisis of objectivism in more detail, I would like to sketch briefly the outlines of a non-objectivistic understanding of science that takes into account the role of the subject in the cognitive process. This approach to natural science, which is linked to transcendental philosophy, pragmatism and critical theory, among other things, rests on the following three assumptions:[1]

(1) The meaning and truth of all scientific statements about a particular object can only be clarified against the background of their contribution to orienting, and securing the success of, the action concerned with that object.
(2) The real meaning of a scientific statement about a particular object lies in the possibilities of action related to that object which can be derived from the statement.
(3) Its truth, on the other hand, is measured by the chances of success of the action that relies on the options for action derivable from it.

Accordingly, only those statements are true that prove themselves in this action, while all judgments that lead to the failure of the practice based on them must be considered refuted.

[1] My three-volume series of studies on "Epistemological Foundations of Physics" provides detailed explanations of the aforementioned understanding of science, including the reasons for the underlying initial assumptions; see (2), (3) and (4). A short version of the philosophical position expressed in these explanations can be found in section 1.b of "The facts of life".
In contrast to the mainstream of research, which concerns itself too little with the question, prior to all empiricism, of what is actually to be understood by an object, the non-objectivist epistemological critique proceeding from the assumptions outlined above places this very question at the center of its reflections. This critique leads to a look behind the object facade of the research objects, and that look reveals something astonishing. The relevant analyses of the methodology of scientific concept formation and theory formation show that scientific statements always model the events in the respective subject area according to the example of human interactions. In this sense, every object is viewed from the outset as a virtual actor, and all the forces and laws that determine the behavior of this virtual actor can then be understood as analogies to human motives for action and to the social laws that govern them. Furthermore, those investigations make it clear that in its analogizing concept formation, physics also reproduces important changes in the social structure that serves as its model.[2]

Of course, one should not imagine that such changes in the conceptual framework of research are made with full intent and in a targeted manner. In physics, as in any other natural science, the transition to new terms and paradigms is motivated by problems internal to the discipline's own logic and must also prove itself in this context. On the other hand, it is anything but coincidental that, when dealing with certain empirical and theoretical problems, one chooses from a large number of conceivable concepts precisely those that are compatible with the respective social environment. In this sense the object of classical mechanics was still an isolated thing called a 'body', which as such was modeled on the equally isolated commodity producer who determined what happened in the markets in the early phases of capitalism. When capitalism then created increasingly complex networked social structures, with the formation of controlled and power-structured markets as well as the development of state intervention and security systems, physics followed suit with a corresponding change in terminology. Its virtual subject projected onto nature is no longer the isolated body but the 'system', which is networked with other systems and within which the individual body functions only as one of many elements. Despite this further development of the concept, which began in the nineteenth century, the tendency to understand all research objects on the pattern of tangible things determined the thinking of physicists far into the twentieth century. This tendency towards reification, the 'crudest' form of objectivism so to speak, leads physicists to see things where it would be better to use other models.
Against the background of the practical relevance of all scientific statements established above, it is easy to understand why natural science has to form analogies of this kind, and why it is also bound to register profound social change in its concept formation: the objects are stylized as virtual actors so that actors who orient themselves by scientific statements can apply, in their dealings with the nature around them, the interaction patterns they have practiced within society. If those social interaction patterns change, it is therefore of great advantage if the behavior of the virtual subjects projected onto nature changes accordingly as well. A second reason for the undeniable correspondence between social structures and the images of nature that emerge in their context lies in the basic pattern of all cognition: cognition is never anything other than the tracing back of the unknown to the known. And what we know best is always our own interactions.

Objectivistic thinking overlooks the constitutive references of all scientific paradigms and axioms to social action just outlined and lulls itself into the illusion that they are direct images of corresponding physical or logical-mathematical structures of the object domain. The non-objectivistic philosophy of science committed to the transcendental-philosophical tradition, by contrast, has recognized that in the constitution of our image of nature the givens of the object domain, which are independent of us, are inextricably interwoven with the patterns of social action accessible to our epistemological self-reflection, which is why it is in principle impossible to know how those givens may be constituted in themselves, that is, independently of our cognitive activities. In essence, the non-objectivist position merely updates the age-old Socratic criticism of the everyday understanding of self and world: its representatives believe they know, but their critics know that they only believe.

The illusion of direct depiction just mentioned is both central and difficult to see through for a science caught in the objectivistic worldview. I would therefore like to present another aspect of this self-deception in a little more detail. Whoever is caught in that illusion is subject to the error that knowledge is only loosely linked to action. He thinks that the latter can rely on particular findings only once these are already available. In fact, however, action is much more closely intertwined with knowledge, for it already inserts itself between the subject and its respective object in the execution of every perception, and not only in the form of the already mentioned analogies between all our models of nature and the patterns of interpersonal interaction. To make it understandable by way of an example what this is about, I invite the reader to carry out the following thought experiment: Imagine a person who has placed their hands on the side of an object and now feels pressure on their palms. The person concerned will only be able to interpret this experience of pressure meaningfully if they pay attention not only to the object in question but also to how they themselves approach it: If they actively press their hand against the object, they will interpret the pressure perceived on their palm as the result of the resistance of the object's inertial mass to the force they themselves exert.
If, on the other hand, their hand merely touches the object in a passively feeling way, they will interpret the experienced pressure as the result of a force emanating from the object and consequently assume that it is slowly moving towards them. The present example points to a fundamental complementarity of the process of experience.[3] It forces the subject to keep an eye, in all its perceptions of objects, on the mode of its own action towards the object in question, whereby all knowledge of nature undergoes an inner 'refraction': it never simply represents nature as such, but always also certain aspects of our actions in relation to it.

[3] For a detailed analysis of the complementarity of our experience of force and matter demonstrated in this example, see Chapters 4 ("Force and Matter") and 5 ("The Development of the Force-Matter Paradigm") in (2).

b) The crisis of objectivism

The crisis of objectivism mentioned at the beginning has two main reasons. Once they are named, it becomes immediately clear why it is so important today to think about alternative strategies of knowledge. The first of these reasons is that, since the beginning of the twentieth century, science has been applying the methodology developed in its examination of nature also to its newly beginning concern with the previously hidden subject, that is, with human beings and their society. In doing so, it succeeds in drawing an ever more detailed and precise picture of the biological, psychological and social structures of the human condition. The structure of this picture, however, in accordance with the objectivist method of its creation, is largely determined by pure factual laws and thus fundamentally misses what it means to be an individual subject and, as such, to function as a member of a collective that is likewise capable of being a subject. From a sociopolitical point of view, this has extremely worrying consequences, because a science of human beings and society which presents everything that happens as the working out of factual laws ultimately leads to the self-disempowerment of people and thus cements the end of utopias.

Missing the subject is not only reflected in the relatively young sciences of man and society,[4] but also hinders the further development of natural science. This points to the second of the two main reasons mentioned above for the worsening of the crisis of objectivism. It concerns above all physics and consists in the fact that this science, in the wake of the development of ever more precise instruments of observation, has groped its way into those micro-regions of our world in which the energies mobilized by the activity of observation are of a similar order of magnitude as the energies involved in the observed processes themselves. The interaction between the observer and his object, which is in principle present in every process of observation, gains so much in importance in this micro-domain that it is reflected in the result of observation in a way that cannot be eliminated. In quantum theory, the deficits in reflecting on this new quality of the subject-object relationship in the cognitive process combine with deficiencies, persisting to this day, in the understanding of the principle of complementarity.

[4] The criticism of the lack of the subject has accompanied the social sciences since their inception. A classic in this regard is Marx's mockery of the commodity fetishism of the bourgeois economists: "The crude materialism of the economists, who regard the social relations of production and the determinations things receive ... as natural properties of things, is just as great an idealism, indeed fetishism, which ascribes social relationships to things as immanent determinations and thus mystifies them." (50) p. 579; emphasis by K. Marx.
The problems with quantum-physical complementarity are ultimately rooted in the fact that one already overlooks the complementarity of perception and action, mentioned at the end of the previous section, which shapes our understanding of the macro-world. All these deficits of reflection and understanding subsequently result in philosophical paradoxes such as the notion of a "dual being" of matter (as wave and particle) and of the existence of "smeared" states of being.[5] In the emergence of these inconsistencies, the aforementioned tendency towards reification plays an important role in addition to the deficits just mentioned. It leads quantum physicists to believe that their objects are basically always particles, but that before they come into contact with the subject they are in a smeared state of being from which they are only released by the subject. For the subject first makes them into observed waves or particles by choosing a certain method of registration, and in the second case ends their smeared being by assigning them a definite place in the act of observation. All these assumptions cause physicists to overlook the much broader and paradox-avoiding hypothesis that we are only ever dealing with energy waves, which appear to their observer as particles only under very specific conditions of observation.

[5] A detailed description of the problems mentioned here only briefly, and my suggestions for their mitigation or avoidance, can be found in (4). A brief summary of the considerations in this regard can be found in (5) on my website.

The premature reification of the quantum objects is in turn closely related to the reification of an object that had moved into the focus of physicists' attention even before they turned to the particle world. What is meant is energy, which physics had already made an issue of in the course of the nineteenth century. Like all fundamental concepts of physics, the concept of energy arose through the formation of an analogy to a social phenomenon belonging to our societal horizon of experience.[6] What is projected onto nature in the case of energy is the commodity value of the products traded within market-based relations of production. The classical economists up to Marx understood this value as the abstract labor stored in the goods. Correspondingly, the physicists defined energy as the work stored in a natural system, and they equated this stored work with the ability of that system to do work in turn, which means something like the transfer of that stored energy to another system. A network of such natural systems that is closed to the outside thus becomes the perfect image of a market economy persisting in a state of simple reproduction: in such a physical system context, energy circulates without being lost or multiplied, just as labor value does in the simply reproducing market economy. This analogy between the energy that constitutes natural systems and the social labor that creates commodities implies a reification of energy: a capacity for interaction is turned into a storable thing.

[6] Cf. on the following (7) p. 117.
And if one then speaks of this thing being delivered or transmitted in contact with other systems, just as one delivers a package to its recipient, one reduces the variety of concrete and living work processes to the mere handing over of that energy-thing. Such reification on the model of a thing brings important advantages in dealing with energy and work, because it makes both easily calculable and simplifies the pictorial representation of a multitude of different processes. But it harbors treacherous traps if one forgets that one is dealing with a model constructed by humans and not with a simple decal of what actually happens. The stylization of energy into a thing is carried to extremes by the fact that many physicists tie their conception of energy not to the analogy with abstract labor just sketched but to money, that is, to labor value objectified as an independent commodity. In this sense, for Einstein, when calculating the path of a moving body, the total energy consisting of kinetic and potential energy is comparable to an amount of money "whose value is always kept at the same level, but which, at a well-calculated exchange rate, is constantly converted from one currency into the other, let us say from dollars into pounds sterling and vice versa."[7]

[7] (21) p. 64.

And that brings me to the concept of entropy, which is closely related to the concept of energy. As we shall see, problematic implications of the reifying conception of energy arise not only in the aforementioned interpretation of the relation between wave and particle but also in the interpretation of the concept of entropy. In order to rule out a possible misunderstanding regarding the basic concern of the following criticism of these implications, I would like to emphasize that I am not after a wholesale rejection of the cognitive strategy developed by objectivism. Rather, my reflections pursue the following two goals. On the one hand, they want to draw attention to how extensively this strategy is used, since many physicists are not aware of this because they practice it as a matter of routine. On the other hand, the following considerations want to point out where the objectivistic cognitive strategy reaches its limits, because it leads to inconsistencies and/or restricts the freedom of thought in the construction of theories and models.

2. Entropy in phenomenological thermodynamics

In its beginnings, going back to the 17th century, thermodynamics was concerned with those processes of heat flow in gases and liquids that were accessible to direct observation. From the second half of the 19th century onward, attempts were made to explain these phenomenologically describable processes by the underlying motion of particles (atoms, molecules). In contrast to Newton's classical mechanics, the focus was not on laws describing the motion of the individual particle; instead, one looked for statistical laws concerning the totality of all particles present in a fluid (gas or liquid). A distinction is therefore made between phenomenological thermodynamics, which deals only with macro-processes, and statistical thermodynamics, which also looks at the particle level. Insofar as the latter is primarily based on the motion of gas particles, one also speaks of the kinetic theory of gases. The concept of entropy was already developed by phenomenological thermodynamics.
However, the statistical-kinetic approach subsequently succeeded in significantly deepening the understanding of this important concept.

a) Entropy and the heat engine

The development of phenomenological thermodynamics was driven by the practical concern of better understanding the physical process taking place in the steam engine in order to be able to optimize it. Entropy is one of the key concepts developed in this effort. In order to explain its significance, it is first necessary to outline the problem that thermodynamics faced in its attempt to understand the steam engine physically.

Building on earlier models by Denis Papin and Thomas Savery, Thomas Newcomen installed the first steam engine in a mine in 1712 to pump out the pit water. Although the machine did its job, it required a great deal of fuel and was therefore improved by James Watt some 50 years later to the point where it was more than twice as efficient. This efficiency, designated in physics by the symbol η (eta) and defined as the ratio between the work done W and the heat Q supplied, was still only a little over 1% even for Watt's steam engine. In concrete terms this means that of 100 units of thermal energy supplied, the machine converts only slightly more than one unit into the work intended by its designer, while most of the rest is somehow lost.[8] The gradual increase of this still extremely low efficiency to up to 10%, and the construction of other types of heat engines with efficiencies of at least 45%,[9] was only possible because physics subsequently succeeded in forming an exact picture of the process taking place in heat engines.

[9] This value applies to steam turbines. Otto and diesel engines have efficiencies of up to 35% and 40% respectively; see (148).

The main reason why these efficiencies remain quite low even after all improvements is that the process is a cyclical one in which the machine keeps returning to its starting position. In order to achieve this, it has to perform internal work on itself in addition to the external work on the equipment connected to it. Even in the ideal case in which all friction processes are eliminated, this would have to lead to a continuous rise in its operating temperature if a considerable part of the heat introduced were not removed again by continuous cooling. This can be made clear, without recourse to the concept of entropy, by means of a simple thought experiment in which all external work of the machine is disregarded. For this purpose, let us imagine a horizontal cylinder filled with gas, in the middle of which there is a sliding piston. The cylinder is thermally insulated, but heat can be supplied to it in a targeted way. It is an ideal construction in which there are no friction processes whatsoever.

- In the first step we add heat to the gas on the left-hand side of the piston, whereupon it expands. It pushes the piston ahead of it, which in turn compresses the gas to its right. The temperature of the compressed gas rises, while the temperature of the expanding gas remains constant, since it converts all the heat supplied to it into work on the gas to the right of the piston.
- In the second step we stop the supply of heat. Now the gas to the right of the piston expands again until the piston returns to its initial central position. The temperature of the gas to the left of the piston rises, because it is now being compressed, while the temperature of the gas to the right of the piston drops until temperature, and thus pressure, are the same on both sides of the piston. But because the temperature of the gas to the left of the piston is now higher than before its expansion, the temperature in both halves of the cylinder is now higher than before the supply of heat.

The supplied thermal energy has moved the piston back and forth once, each of its two partial movements being the result of the system's work on itself. The first step of this internal work was carried out by the working gas to the left of the piston and resulted in a significant increase in the energy content of the working gas to the right of the piston. The latter then carried out the second step of internal work, consuming part of that additional energy and in doing so raising the energy content of the working gas on the left side as well. The overall result of the machine's internal work on itself is an equally strong increase in the energy content of the working gas on both sides of the piston. This additional thermal energy now stored in the machine cannot be used for its further work: the system is now at a higher energy level but in thermal equilibrium, so that a further work cycle could only be triggered by supplying yet more thermal energy.

Science and technology devised ways and means to quantify precisely this unavoidable loss of energy to the machine's internal work on itself, and were ultimately able to draw up an exact energy balance for heat engines as the basis for optimizing the process taking place in them. The most important prerequisite for such an accounting was the development of a schematic model of that process. It was published in 1824 by the French physicist Nicolas Léonard Sadi Carnot, which is why it has been referred to ever since as the Carnot cycle (or Carnot process). The Carnot cycle describes what happens in the heat engine as a cycle divided into four stages, in the course of which a gas is alternately in contact with a heat reservoir of constant high temperature (for absorbing heat) and a cold reservoir of constant lower temperature (for cooling by giving off heat), and is alternately compressed by the application of mechanical work and expanded again with the release of mechanical work. Carnot saw in the function of heat in this process an analogy to the role of flowing water in the operation of water wheels. From his point of view, the working capacity of the heat introduced into the machine increased with the height of the gradient between the inlet temperature at which it was fed to the working gas and the outlet temperature at which it was disposed of in the course of cooling.

The precise mathematical formulation of this relationship between the working capacity of the thermal energy used and the ratio of the input and output temperatures was only achieved by the German physicist Rudolf Julius Emanuel Clausius in 1865, by creating the variable 'entropy' with the symbol 'S'. This variable was to take account of the fact that in the cyclical heat-power process only part of the thermal energy supplied to the working gas can be converted into external work.
The rest of the energy flows into the internal work of the system on itself, as the thought experiment above showed, and in doing so increases its energy content. This thermal energy accumulating in the system has lost its usefulness for the performance of external work and is therefore, for Clausius, something like transformed energy. While Clausius calls the work performed by the machine on external equipment the "external work", the work done by the system on itself in the course of each cycle is for him the "internal work". The thermal energy transformed into internal work he calls the "work content". Entropy, finally, is the measure of the extent of the continuous conversion of the supplied thermal energy into unwanted additional heat of the machine. For Clausius this measure represents the "transformation content" of the working gas.[10] This explains the word 'entropy', borrowed from the Greek: it means nothing other than 'transformation content'.

[10] Cf. (15) § 14.

Controlling the unwanted conversion of energy is a matter of keeping the operating temperature of the machine constant. Clausius therefore constructs his variable S, which indicates the extent of that transformation, as a relation between the amount Q of heat supplied or removed and the temperature T at which the heat transfer takes place. He determines this relation as the quotient Q/T and thus defines: entropy = S = Q/T, where Q is measured in joules, the unit common to all forms of energy, while T is the absolute temperature, measured from absolute zero (−273.15 °C). The magnitude of the entropy is therefore measured in joules per kelvin (J/K).[11]

[11] Cf. (113).

I will explain the deeper meaning of the quotient Q/T only in section 2.b below. Here we just have to make the following clear: If a small amount of heat ΔQ is added to the system, the value of the variable S increases by the comparatively small amount ΔS = ΔQ/T; if a small amount of heat is removed through work or cooling, S decreases accordingly. If the operating temperature of the machine at the end of a work cycle is to be the same as at the beginning, then all increases and decreases of S that take place in the course of that cycle must compensate one another, so that the initial value of S is reached again at the start of the next cycle.

In the course of a work cycle, the steam engine is first supplied with an amount of heat Q1 at a high operating temperature TH. The operating temperature then drops, because the system now gives off part of its thermal energy in the form of external work W. The removal of the amount of heat Q2 in the course of the final cooling therefore takes place at a significantly lower temperature TN. Clausius recognized that in an ideal heat engine working without friction-related heat losses, as modeled by the Carnot process, the variable S assumes the same value for the heat supplied as for the heat removed in the course of cooling. For such an ideal machine the following holds: SH = Q1/TH = Q2/TN = SN.
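To make this bookkeeping concrete, here is a minimal numerical sketch (my own illustration, not part of the original text; all numbers are assumed): each small heat transfer ΔQ at absolute temperature T changes S by ΔS = ΔQ/T, and in the ideal, friction-free cycle the increase at the high temperature TH is exactly compensated by the decrease at the lower temperature TN.

```python
def delta_S(delta_Q, T):
    """Entropy change dS = dQ / T for a small heat transfer.
    delta_Q: heat in joules (positive = supplied, negative = removed).
    T: absolute temperature in kelvin at which the transfer takes place."""
    return delta_Q / T

# Assumed example values for one ideal (frictionless) cycle:
TH, TN = 500.0, 300.0        # supply and cooling temperatures in K
Q1 = 1000.0                  # heat supplied at TH, in J
Q2 = Q1 * TN / TH            # heat that must be removed at TN (here 600 J)

S_up   = delta_S(+Q1, TH)    # +2.0 J/K gained while heat flows in
S_down = delta_S(-Q2, TN)    # -2.0 J/K lost while heat flows out

# The increases and decreases of S compensate each other, so the machine
# returns to its initial operating temperature at the start of the next cycle.
print(S_up, S_down, S_up + S_down)   # 2.0 -2.0 0.0
```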
Why this must be so is easiest to understand against the background of a central insight of the kinetic theory of gases, which will not be discussed in more detail until Chapter 3. It says that the temperature of a fluid corresponds to the mean kinetic energy per particle, while the total content of kinetic energy is equal to the product of this mean energy per particle and the number of particles N. If the temperature of a gas corresponds to the mean kinetic energy of its particles, and this differs from the gas's total content of kinetic energy merely by the factor N, then the mean kinetic energy of the working gas (the number of its particles remaining the same) must decrease in the course of the machine's external work to the same extent as the total amount of its kinetic energy. This means that TN relates to TH exactly as Q2 relates to Q1. And that in turn means that Q1/TH = Q2/TN.

In practice, this equality of the values of SH and SN means
- that in principle a large amount of heat supplied at a high operating temperature can be compensated for by a smaller amount of heat to be removed,
- and that this difference becomes greater, the more the operating temperature is lowered by the external work taking place between the supply and the removal of heat. If TN is much lower than TH, then Q2/TN will equal Q1/TH even if Q2 is very small.

On the basis of the equality of SH and SN just explained for the frictionless heat engine, it now became possible to calculate with mathematical precision how the efficiency of such a machine is related to the ratio of the two temperature levels between which it works. This step of the argument is again easy to follow:
- The maximum achievable useful work W of the frictionless machine is as large as the difference between the heat energy supplied and removed: W = Q1 - Q2.
- Since SH equals SN in this case, the amount of heat to be removed Q2 can be determined from the equation Q1/TH = Q2/TN: Q2 = Q1 · (TN/TH).
- This yields the maximum achievable useful work W: W = Q1 - Q2 = Q1 - Q1 · (TN/TH) = Q1 · (1 - TN/TH).
- For the efficiency of this machine (η = W/Q1) it follows that: η = 1 - TN/TH.

For all real machines the efficiency must lie below this maximum value, since more or less large friction losses have to be taken into account. But exactly in accordance with Carnot's conjecture, it will be the higher, the greater the difference between the inlet and outlet temperature of the heat, since the quotient TN/TH becomes smaller and smaller as this difference grows. In practical terms this means that a heat engine works the more efficiently, the higher the input temperature of the heat supplied. With regard to the degree of conversion of a given amount of thermal energy into the internal work of the machine itself, which limits the efficiency, Clausius was thus able to use the newly formed variable 'entropy' to show that this degree of conversion of the supplied thermal energy becomes smaller, the higher the temperature at which the heat enters the machine.
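The little derivation above can be condensed into a few lines of code. The following sketch is my own illustration (temperatures and heat amounts are assumed example values, not taken from the text); it computes the ideal efficiency η = 1 - TN/TH and the corresponding maximum useful work.

```python
def carnot_efficiency(T_hot, T_cold):
    """Ideal (friction-free) efficiency eta = 1 - TN/TH; temperatures in kelvin."""
    return 1.0 - T_cold / T_hot

def max_useful_work(Q1, T_hot, T_cold):
    """Maximum external work obtainable from heat Q1 supplied at T_hot
    when the waste heat Q2 = Q1 * (T_cold / T_hot) is removed at T_cold."""
    Q2 = Q1 * (T_cold / T_hot)
    return Q1 - Q2

Q1 = 1000.0  # J of heat supplied (assumed value)
print(carnot_efficiency(800.0, 300.0))     # 0.625
print(max_useful_work(Q1, 800.0, 300.0))   # 625.0 J
# Raising the inlet temperature raises the ideal efficiency,
# just as the text concludes:
print(carnot_efficiency(1200.0, 300.0))    # 0.75
```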
b) Entropy as a thing

However well the variable 'entropy' newly formed by Clausius served the mathematical specification of the efficiency of heat engines, it remained difficult to assign a clear meaning to it. The real cause of this problem only becomes apparent when one considers the place of entropy in the overall network of variables that are relevant for the analysis of the Carnot process.

These variables fall into two main groups. The first main group comprises the so-called state variables. They describe the current state of the working gas at a specific moment of the cycle, regardless of the way in which the state in question came about. Important examples are the pressure of the gas (p), its volume (V), its temperature (T) and its internal energy (U). The second main group of variables describes the processes that lead to changes in the state of the working gas. The variables in question are called process variables. These are the two quantities already mentioned above in the context of the definition of the efficiency, namely the work performed by the system (W) and the incoming or outgoing heat (Q). The process variables characterize the respective process step between two states of the system. They therefore do not refer to properties of the system itself, but describe an interaction between two systems or between a system and its environment.

So-called equations of state are formed in order to be able to describe the state of the machine comprehensively at any point of its process. One of the state quantities takes the position of a state function, while the others, on which it depends, appear as state variables. In addition to these interrelationships between the various state variables of the Carnot process, it is also important to keep an eye on the relationships between state and process variables. For the process variable 'work' (W), the following relationship with the state variables V (volume) and p (pressure) plays an important role: at a given pressure p, the small amount of work dW performed by or on a gas corresponds to its change in volume dV. The following applies: dW = p · dV. This shows that at a given pressure the machine's ability to do work depends on the volume of the working gas.

When it comes to cooling the steam engine, however, what matters is not the work increment dW and its relationship to a given pressure p, but the heat increment dQ and its effect on the temperature T of the system, which is to be kept constant over the cycle. It was therefore necessary to find a state variable which, in the relationship between dQ and T, takes on exactly the mediating function that dV plays between dW and p. The problem was that there is no directly observable state variable that can play this role. In this situation Clausius created, with 'entropy', an 'artificial' variable,[12] as it were, which takes on the aforementioned mediating function. It has no direct counterpart in observation but is defined purely mathematically as the quotient of the two quantities to be mediated, dQ and T: dS = dQ/T. This procedure is possible because for any two variables A and B the tautological equation A = (A/B) · B can be formed, from which the definition of a new mediating variable (V = A/B) follows that connects A with B. In this sense, when defining the variable 'entropy', the tautological equation Q = (Q/T) · T is first formed from the variables Q and T, and the mediating variable S = Q/T is then 'distilled' from this tautology.

[12] See above and below (92).
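To illustrate why the quotient dQ/T, summed along a reversible path, indeed behaves like a state variable while Q and W remain process variables, here is a small sketch of my own. It uses the standard ideal-gas relations (monatomic gas, Cv = 3/2 R), which go beyond what the text itself states, and compares two different reversible routes between the same two states; all example values are assumed.

```python
import math

R  = 8.314      # J/(mol K), universal gas constant
Cv = 1.5 * R    # molar heat capacity of a monatomic ideal gas (assumption)

def dS_isothermal(V1, V2, n=1.0):
    """Reversible isothermal step: summing dQ/T gives n*R*ln(V2/V1)."""
    return n * R * math.log(V2 / V1)

def dS_isochoric(T1, T2, n=1.0):
    """Reversible heating at constant volume: summing dQ/T gives n*Cv*ln(T2/T1)."""
    return n * Cv * math.log(T2 / T1)

def W_isothermal(T, V1, V2, n=1.0):
    """Work done by the gas in the isothermal step: integral of p dV = n*R*T*ln(V2/V1)."""
    return n * R * T * math.log(V2 / V1)

# Two reversible routes from state (300 K, 0.02 m^3) to state (600 K, 0.04 m^3):
# Route A: heat at constant volume first, then expand isothermally at 600 K.
S_A = dS_isochoric(300.0, 600.0) + dS_isothermal(0.02, 0.04)
W_A = W_isothermal(600.0, 0.02, 0.04)
# Route B: expand isothermally at 300 K first, then heat at constant volume.
S_B = dS_isothermal(0.02, 0.04) + dS_isochoric(300.0, 600.0)
W_B = W_isothermal(300.0, 0.02, 0.04)

print(round(S_A, 2), round(S_B, 2))   # ~14.41 ~14.41 -> same: S behaves as a state variable
print(round(W_A, 1), round(W_B, 1))   # ~3457.7 ~1728.9 -> different: W is a process variable
```

The entropy change comes out the same on both routes, whereas the work does not; this is the sense in which S, unlike W and Q, qualifies as a state variable.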
The difficulties with the quantity 'entropy' stem from the fact that, in contrast to the other state variables, it is not based on a qualitative intuition of the kind that normally precedes all physical investigation, but merely fills a functional blank in the mathematical model of the Carnot process. For every other state variable, a qualitative intuition precedes both its quantification and the variable defined within the theoretical model, and gives that quantification a clear meaning. In the present case, the opposite route had to be taken. The starting point here was the variable defined as the quotient Q/T within the Carnot model, which then makes possible the systematic observation and technical control of the development of that quotient. The observation of the development of a quotient of two theoretically defined quantities, however, can never replace the meaning provided by the qualitative intuition on which the other state variables rest, and so the understanding of the variable 'entropy' is always impaired by a deficit of intuition that cannot be eliminated in principle.

The present text now wants to point out that additional problems of understanding arise, making access to this quantity, difficult enough as it is, even more difficult, if one fills the gap just mentioned with inappropriate images that stem from the physicists' objectivistic approach to their subject area. Chapter 1 already showed how dubious it is to turn energy, a capacity for interaction, into a mere thing that can be absorbed and passed on. The analogous procedure in the case of entropy is even more problematic. For entropy, in contrast to energy, is not a process variable, for which the images of taking up and giving off are meaning-reducing but not completely wrong. It is, rather, a state variable that is defined as the quotient of two quantities. And the conception of such a relation as a thing must almost inevitably lead thinking into worlds of ideas that create problems rather than solve them.

The first false trail of the reification of this relation was laid by Clausius himself when, as mentioned above, he called entropy the 'transformation content' of the respective fluid. With this terminological decision he consciously and deliberately turned what is merely a relation into a content, and thus into a material counterpart of the fluid's internal energy U, which had already been reified beforehand.[13] Since Clausius it has therefore seemed clear that entropy is something that is absorbed or given off together with heat.[14] In fact, however, this is in no way the case in the processes involved. All changes that take place here concern exclusively the relation, measured by the entropy, between the amount of heat involved and the respective system temperature. If heat is given off, the value of the relevant quotient decreases; if heat is absorbed, it increases.

[13] "If one looks for a descriptive name for S, one could say, similarly to the way it is said of the quantity U that it is the heat and work content of the body, that the quantity S is the transformation content of the body." (15) Section 14; italics by R. Clausius.
[14] Physicists indulge their urge to reification most unrestrainedly when they want to make themselves intelligible to a lay audience. Then the heat supplied to the heat engine quickly becomes a lot of large red balls, while the entropy flows into the machine in the form of a lot of small blue balls. See (84).

The thing-likeness that Clausius thus brought into the world not only clings to entropy itself but also radiates back onto the thermal energy associated with it, which is a pseudo-thing quite independently of entropy. Owing to the close relationship with entropy, however, its thing-like character now becomes even more pronounced. Because entropy is not just any content of the fluid but the content of transformation, it acquires the ability to transform the energy-thing connected with it: it turns part of it into so-called anergy, which is useless for further work, while the remaining energy becomes exergy, which is capable of work. In the end the energy-thing, like any other orderly thing, consists of different components.
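As a point of reference, the following sketch (my own, with assumed example values) shows the conventional exergy/anergy bookkeeping that this talk of 'transformation' alludes to: heat Q available at temperature T, with the surroundings at temperature T0, is customarily split into a usable share Q·(1 - T0/T) and an unusable remainder. The passage that follows criticizes precisely the temptation to read this functional bookkeeping as a change in the energy itself.

```python
def split_heat(Q, T, T0):
    """Conventional exergy/anergy bookkeeping for heat Q available at
    temperature T with surroundings at T0 (temperatures in kelvin).
    Exergy: the share an ideal engine could convert into work.
    Anergy: the share that can only be discharged to the surroundings."""
    exergy = Q * (1.0 - T0 / T)
    anergy = Q * (T0 / T)
    return exergy, anergy

# Assumed example: 1000 J of heat at 600 K, surroundings at 300 K.
print(split_heat(1000.0, 600.0, 300.0))   # (500.0, 500.0)

# The same 1000 J counted at a lower temperature yields 'less exergy',
# although the energy itself is unchanged: the split expresses a functional
# status relative to T0, not a property of the energy.
print(split_heat(1000.0, 400.0, 300.0))   # (250.0, 750.0)
```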
The part of the energy that is supposedly transformed into anergy is the energy that remains in the system, as a result of its internal work on itself, in the form of an increased operating temperature. In fact, of course, that part of the internal energy of the system has not changed in any way. Energy remains energy. What has changed is only its functional status with regard to the system in question: the system can no longer perform external work with it. The work potential of the energy 'transformed' into anergy is in no way affected by this and could be activated again at any time if it were fed to other systems.

Now back to entropy, to which further wondrous properties accrue through its reification. Reification turns it into a highly mysterious entity, and that in turn has a side effect that is not entirely unwelcome to the physicist concerned with entropy: as the expert familiar with the secrets of this supernatural being, it lends him, in the eyes of an astonished lay audience, something of the aura of a magician. The properties that make up the actual magic of the thing 'entropy' are constituted against the background of an aspect of the Carnot process that now moves to the center of our attention. As mentioned, the Carnot process is a mere ideal model of what happens in the heat engine, one that disregards all possible friction processes. This idealization implies the assumption, not yet explicitly mentioned, that apart from the inflow and outflow of heat all processes taking place in the machine proceed 'reversibly', that is, in a manner that can be reversed. This means that, in contrast to 'irreversible', non-reversible processes, they progress extremely slowly and involve minimal force effects. It is clear that everything that happens on earth, and especially the behavior of the gases and liquids investigated by thermodynamics, deviates from this ideal model of a process. The relevant real processes are therefore without exception irreversible, or at best approximately reversible. And because that is so, no one has ever seen, except when watching a film run backwards, someone bring the water spilled out of a glass back into the glass along the same path by which it left it.

The fact that the laws of classical mechanics aim only at reversible ideal processes, in which only a few bodies are involved and all their interactions and movements are easily surveyed, became an issue for physicists only as a result of their preoccupation with the problems of thermodynamics. For only now did they have to deal with problems involving an immensely large number of tiny particles, which in detail are subject to completely unsurveyable interactions and accordingly perform unsurveyable movements.
The idea of a reversible process, by contrast, presupposes that all of its components are, at least in principle, technically controllable, and under these new conditions it therefore became the practically irrelevant ideal of an absolute limiting case.

Important examples of the irreversible changes of state that now move to the center of attention are
- the heat flow within or between bodies, liquids or gases that occurs as a result of a temperature difference,
- the mixing of gases or liquids that occurs when differences in partial pressure are equalized,
- the conversion of kinetic energy into heat that occurs through friction.

Since processes of this kind are ultimately involved in all natural processes taking place on earth, physics came one step closer to everyday reality through its preoccupation with thermodynamic problems than in the initial phase of classical mechanics, which was shaped by astronomical questions. With regard to entropy, the question immediately arose whether this variable develops differently in irreversible real processes than in their reversible idealization. The considerations in this regard started from a thermally insulated system that is in thermal equilibrium. This means that there is neither an external inflow of heat nor internal pressure and temperature differences with the resulting internal flows of matter and thermal energy. In the absence of flowing heat, the entropy, defined as the quotient of inflowing heat and temperature, cannot change under these conditions; here dS = 0. In all reversible processes within such a system, the entropy likewise remains unchanged, because these processes ideally run so 'gently' that the system undergoes only 'quasi-static' changes of state, 'sliding' from one state of equilibrium to the next.

It is quite different with irreversible processes. For such processes to take place in a thermally insulated system, there must be internal temperature differences that lead to flows of heat and matter between different parts of the system. The entropy quotients of all these subsystems change: the entropy decreases where heat flows away, while it increases where heat flows in. The crucial point is that with each of these internal heat flows, the decrease in entropy of the heat-emitting subsystem is smaller than the increase in entropy of the heat-absorbing subsystem, because the former had a higher temperature at the start of the heat flow than the latter. The quotient Q/T is therefore always lower for the heat-emitting subsystem than for the heat-absorbing one. With each of the internal heat flows, the entropy of the overall system therefore also increases, and this process only comes to a standstill when all internal temperature differences are balanced out. The entire system is then in thermal equilibrium and its entropy has reached its maximum value. For irreversible processes in thermally insulated systems, the following therefore holds in general: dS > 0.
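A minimal sketch of this argument in code (my own illustration; the amount of heat and the temperatures are assumed): when heat Q passes from a hotter to a colder subsystem inside an insulated system, the giver loses Q/T_hot of entropy and the receiver gains Q/T_cold, so the total change is positive as long as the temperatures differ.

```python
def entropy_change_of_internal_heat_flow(Q, T_hot, T_cold):
    """Entropy bookkeeping for a small amount of heat Q that flows
    irreversibly from a hotter subsystem (T_hot) to a colder one (T_cold)
    inside a thermally insulated system. Temperatures in kelvin."""
    dS_hot  = -Q / T_hot           # the heat-emitting part loses entropy
    dS_cold = +Q / T_cold          # the heat-absorbing part gains more
    return dS_hot, dS_cold, dS_hot + dS_cold

# Assumed example: 100 J flow from a part at 400 K to a part at 300 K.
print(entropy_change_of_internal_heat_flow(100.0, 400.0, 300.0))
# -> (-0.25, 0.333..., 0.0833...): the entropy of the overall system grows,
#    and dS only reaches 0 once the temperatures have equalized, i.e. once
#    thermal equilibrium (maximum entropy) has been attained.
```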