Foreword: matter, states and scales.
I. Scales: thermodynamics.
II. Transition to quantum scale: ultraviolet catastrophe.
III. Transition to gravitational scale: thermodynamics of black holes.
IV. Ages and symmetries: state matter.
Thermodynamics studies the human ∆º scale of physical systems. As such, once the philosophical errors of describing reality with a single arrow of time, entropy, are solved, the laws of epistemology dictate that, being the closest and hence best observed scale of physical systems, its laws have the highest accuracy of all physical descriptions of reality, and should be the example with which to model other systems, notably the quantum systems of the lower scales, where perception is limited and errors of a philosophical nature are many.
In that regard, perhaps the clearest expression of this ‘epistemological law’ was a comment by Einstein, who considered that as human knowledge of those scales increases, the homology between thermodynamics and quantum physics will become evident:
“The statistical quantum theory would … take an approximately analogous position to the statistical mechanics within the framework of classical mechanics”. Einstein
This is indeed the case, provided we correct the main errors derived from using a single time arrow and a single space-time continuum, and account for the variations that each ‘scale’ of the fractal universe experiences in its ‘evolution of reality’:
In the graph, the absolute, lineal arrow of time is social evolution, which stretches upwards into wholes and downwards into faster, more informative part-systems of Nature. The relative arrow of time is made of the sequential order of the three arrows into a world cycle.
Once those corrections are introduced into physical systems, thermodynamics will become the clearest expression of the laws of physics for any species of Nature. And two disciplines will fit immediately as partial views of the two arrows of timespace:
- ∆: Statistical mechanics will be concerned with the relationship between the ∆-1, quantum scale of atoms and molecules, and the ∆º human scale of temperature parameters, offering many insights into the general relationship between both scales.
- ST: Physics of States will deal with the three ages/states of matter: the gas=entropy state, the liquid=reproductive state and the solid=informative state. The 3 temporal dimensions or ‘states’ of matter – Spe: gaseous, energetic states; ST: liquid, balanced states; and Tƒ: informative, solid states – form together the 3rd isomorphism of molecular matter in its ∆+1 social scale. In the graph, again we can see how the introduction of scales and time ages solves the third main error of physics, one for each scale; in this case, our thermodynamic ∆º scale is wrongly considered to have only the entropic-death-weapon state, which as the graph shows is the entry to creation and death from the lower ∆-1 scale of plasma states, in which particles are not organically put into a single ‘entity’ as they are in the solid states.
- •: Mathematical physics and philosophy of science regarding the three arrows of time, will ‘encase’ all those events and forms within the mathematical, logic mirror, upgraded by ¬Æ vital, i-logic mathematics to ‘make sense’ of the ‘magic’ enclosed into the mathematical mirror, while philosophy will correct the entropy-only error of dying Universe.
So as we do for each fundamental ∆ºst system of the Universe, and each ‘scalar stience’ of the 5th dimension we can sub-divide the discipline in an ∆§±1 scalar analysis and an S-ST-T analysis of the three ages of time of the system and its symmetry with the topological forms of the supœrganism in space; and express it in the most proper ‘language’ of perception of the scale, which is vital, i-logic mathematics.
RECAP. The human scale of physics, the scale of matter, nicely fits into two subdisciplines, ∆º±1: Thermodynamics PROPER and S-gas>st-liquid>Tiƒ: solid-state physics.
They are part of what we term in colloquial language ‘chemistry’, as they deal with the ‘molecular’ scale of the Universe (∆±2) and its social aggregations, or ‘Geology’. So obviously the theme is as extensive as the planet of matter we live in, which is the only entity that has all the information about itself. I am though winding down my ambitions of doing an extensive and intensive blog on all stiences, as I have been unable to gather a team of specialists to help me in the task, so we shall just reduce our analysis to the fundamental first principles of thermodynamics and state physics. And, time permitting – life duration always short – we will expand some themes on the 4th line.
Introduction. The usual book of physics starts its description of physical systems historically in mechanics, which is the lower ∆i scale of the gravitational, galactic world that affects humans externally. It is more precise to start as in all other stiences by the scale in which the human inner world co-exists as an ∆ºst being; which in the case of physical phenomena is the thermodynamic, heat related scale that coordinates the atomic, molecular and cellular level of the human being and its matter environment.
According to the isomorphic method of stience, we can observe more information in the closer range of thermodynamic effects, which is the scale, in the ∆º±1 ternary symmetry, of human ‘momentum and energy’ (as the quantum ∆-1 scale will be that of information, and the ∆+1 gravitational scale that of pure entropic motions).
Indeed, we are ‘hot’ when we are ‘activated’, and our ‘actions’ are described not so much by their ‘weight’ (though we use those verbal homologies, especially when matching the external nature of gravitational forces on us), but in terms of heat (the internal scale).
The much-despised verbal thought, as in in-form-ation, is often far more telling of the fractal organon we exist within than the arid maths of it.
So what is the essence of ‘simplex physics’ with its perfect maths and faulty concept regarding thermodynamics?
The fascinating maths of heat – the first to be understood in terms of ∆nalysis (Fourier) – which is thermodynamics at the ∆+1 scale, and its relationship with the maths of entropy, ∆º, the atomic scale: the worst understood concept of physics, a cultural hang-up of the germ(anic) cult(ure) of weapons and lineal swords, origin of the faulty philosophy of physics (big bang, death of the Universe, etc.).
In that regard, the fundamental themes of thermodynamics as in all other stiences deal with the 5 elements-dimensions of reality:
- S: Space; T: Time; st: spacetime; ∆: scales, º: mind-singularities across scales.
So traditionally there have been inroads of thermodynamics into all those parts, from which the key concepts we can extract are:
- State physics dealing with the ternary space-time ages/topologies: S-gas<St-liquid>T-solid/crystal, which we shall study in depth in the 4th line, ‘3 ages subposts’ of molecular, matter and geological scales (∆-1, ∆º, ∆+1)
- The laws of heat and entropy, which deal with the relationship between the ∆-1, statistical mechanics scale and the ∆º, thermodynamic heat scale, and which we shall study in this post.
- The always ‘esoteric’ ∆º level, for the anthropomorphic human, who denies ∆º minds to all systems of nature except I, me and myself – which however do exist in thermodynamics in a factual sense: in the study of crystals and how they ‘reverse the time entropy of systems’, starting to build information, from the work of Mehaute (‘l’espace-temps brisé’), which proves in chemical systems that when motion stops with cold, fractal order starts; to the myths of Arab bedouins, who rightly consider the core of dunes, quartz crystals, to be the soul of the dune; to any other crystal that stores its memorial information in the ‘veins’ (nervous paths) of the electromagnetic quantum ordering of its atomic networks; to the ‘Maxwell’s demons’, which embody the concept of order – to be found in crystals, NOT in gaseous, disordered states – blown up to cosmic proportions by ‘entropy-only philosophers of science’.
So as usual we find we can encase all the sub-disciplines of the ∆º scale in the ∆ºST 5 elements of all systems. Let us then consider the most important element besides S-gas, ST-liquid, T-solid physics: classic thermodynamics. For state physics is quite correct – no need to be a rocket scientist, or rather, you just need to be a rocket scientist to understand it. But classic entropy has enormous conceptual errors and fascinating maths.
I. The concept of Entropy: thermodynamic parameters, order and emergence
The relationship between ∆-1, the molecular state and ∆º, the temperature state uncovers basic relationships and parameters of ∆ºst molecular, matter systems of physics.
In the translation of sciences to stiences, we always depart from a theoretical minimum of GST knowledge of the ternary ∆ºst±1 symmetries of the being and its parts, which is what we shall find described in a non-orderly way in science. So it happens in thermodynamics, which describes the ternary parts of physical systems in both levels through…
Boltzmann’s principle and the concept of entropy.
In Boltzmann’s definition, entropy is a measure of the number of possible microscopic states (or ∆-1: molecular microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties (or ∆º Temperature macrostate).
To understand what microstates and macrostates are, consider the example of a gas in a container.
At a microscopic level, the gas consists of a vast number of freely moving atoms, which occasionally collide with one another and with the walls of the container.
Then the ∆-1 microstate of the system is a description of its Γenerator equation in terms of:
∆-1: Spe: positions in the field of the system & momenta≈ St-wave>T-particle parameters of all its ∆º=∑∆-1 atoms.
In principle, all the physical properties of the system are determined by its microstate, because physicists, without knowing it, are indeed using the position in the entropic field and the momenta (st: body-wave > tiƒ: particle-head), which covers the whole of the ternary elements of each molecular ‘unit’ of the ∑∆-1:∆º atomic ensemble.
So what thermodynamics will do is to ‘translate’ the parameters of ∆-1 microstates into the parameters of ∆º macrostates, to be useful for the human-scale ab=use of the happy, free atoms in their chaotic microstate.
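That ‘translation’ from many microstate numbers to a single macrostate number can be sketched in a few lines (a toy one-dimensional gas; all names and parameters here are illustrative assumptions, not from the text):

```python
import random

# Toy model: a one-dimensional "gas" of N point particles.
# The microstate is the full list of (position, momentum) pairs; the
# macrostate compresses all of it into one temperature-like number.
random.seed(0)

N = 10_000     # number of particles (illustrative)
MASS = 1.0     # arbitrary units
K_B = 1.0      # Boltzmann constant set to 1 in the toy units

# Microstate: one (position, momentum) pair per particle.
microstate = [(random.uniform(0.0, 1.0), random.gauss(0.0, 1.0))
              for _ in range(N)]

# Macrostate: mean kinetic energy read as a temperature via equipartition,
# <p^2 / 2m> = (1/2) k_B T for one degree of freedom.
mean_ke = sum(p * p / (2.0 * MASS) for _, p in microstate) / N
temperature = 2.0 * mean_ke / K_B

print(f"{2 * N} microstate numbers -> 1 macrostate number: T ~ {temperature:.3f}")
```

With momenta drawn from a unit Gaussian the temperature comes out close to 1 in these units; the point is only the compression from 20,000 numbers down to one.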
And here, humans will find a natural resistance of ∆º atomic entities to being ‘herded, ordered and stripped of their vital energy’ to do ‘work’ for human ginormous a$$holes – to put some irony on the human inverse pov. Atoms do NOT want to enslave, behave, give us all their energy and drop dead; alas! that is a sure sign that the Universe is imperfect, entropic and will die.
Not so – atoms will merely try to conserve, as humans under an orderly dictator do, a minimum of freedom, and just get ‘hot’ and disordered as long as they can, while humans will try to do as all farmers do: limit the entropy of their herd, encircling it with an external Spe-membrane that puts some pressure on them to become ordered. And this game of entropy at the micro-state vs. pressure/encircling order at the macro-state is what thermodynamics studies (of course without any vital, organic Maxwellian demons involved).
Now, because the number of atoms in any ∆-1 ensemble of finitesimals is so large, for the herding to work the ∆º being will consider all its individual finitesimals indistinguishable, which is a pre-requisite to organising huge herds as generic ‘fields’ susceptible of being ordered by the ∆º-mind being with ‘huge’, simple changes in the S, St, T parameters (entropy, energy, form) of the herded mass.
In brief, the ∆º element orders wholesale, with general, homogeneous changes of the S, T and ST parameters that affect the whole. We can observe this in history, in the 800-year cycles of nomadic, entropic weapon-makers destroying fertile crescent cultures, when the Earth – which might be macro-managing its evolution into the age of metals or mechanocene – produces huge heat changes of climate that affect the whole homogeneously; and it is how physical systems manage herds of atoms, with changes in magnetic domain ‘walls’ that influence ensembles of a million atoms.
So the first element for the herder whole-singularities is TO encircle the ensemble with a closed wall – the EARTH, we forecast (and it has recently been found), manages with its singularity core and ‘chimneys’ to the surface the weather cycles and glaciations that define its evolution; the electric charge singularity controls with its magnetic encircling walls the structure of its atomic ensembles; and humans control heat-energy with pressure walls that encircle an ensemble of atoms. And voila! suddenly we have the ‘self-similar’ ∆º,¹ macro-state equivalents to the ST>t momenta and Spe: position parameters, to do ‘equations of thermodynamics’, which will use the maths of ∆nalysis (diffusion equations from the ∆º perspective, entropy equations from the ∆-1 perspective) to compare both scales.
And what makes this fascinating for ∆ºst systems of any kind is that we can extract by the isomorphic method general laws of GST from thermodynamics, as it is mathematically the most profound analysis of an ∆º±1 exchange of flows of Sp-entropy, ST-energy and T-form once we correct the conceptual understanding of its equations of micro and macro states.
The macro-state parameters.
It follows from all of this that, for the ginormous man or human mechanism trapping little darling atoms, the details of the motion of those individual atoms are mostly irrelevant to the behavior of the system as a whole… provided the system is in thermodynamic equilibrium; that is, the behaviour of each atom is rather indistinguishable, with a similar mean ‘energy=temperature’ in its body-wave actions.
Then the system can be adequately described by a handful of macroscopic quantities, called “thermodynamic variables”:
The total energy E=st, which will be equivalent to the product of its volume V=vital space and the pressure P of the vital space on its outer membrane… and finally its temperature, T. So what do those 4 elements mean in ∆ºst terms? Remember we need only three to describe a being in a space-time ternary symmetry, as long as we keep it ‘simple’ in a single plane.
So indeed, the first three elements define what ‘ENERGY body-waves’ ARE always: the vital space or open ball (topologically speaking) of the system, which does NOT include the singularity that ‘traverses’ the system along ∆±1 co-existing scales as the soul-brain that ‘perceives’ it all in scales, as a ‘scalar’ wholeness; nor the encircling Spe-membrane.
Yet we do measure ENERGY = PRESSURE × VOLUME, because the volume is the vital space of the system and the pressure is the manifestation of that energy on the herder’s wall. So he can extract work from the vital energy pressing on the wall (not included, so we are still measuring energy, the vital space, but absorbing that pressure). Thus we have successfully parametrised the microstate into a macrostate in ‘equilibrium’ with the wall of the herder through pressure.
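As a quick sanity check of the ‘pressure × volume’ reading of energy (standard ideal-gas relations with round, illustrative numbers, not specific to this text): PV has the units of energy and equals NkT for an ideal gas, while the internal energy of a monatomic gas is U = (3/2)NkT.

```python
# Standard ideal-gas bookkeeping (illustrative numbers).
K_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)

N = 6.022e23         # roughly one mole of atoms
T = 300.0            # kelvin
V = 0.0224           # m^3, about a mole's volume at standard conditions

P = N * K_B * T / V                   # ideal-gas pressure on the 'wall', pascals
pv_energy = P * V                     # pressure x vital space: an energy, joules
internal_energy = 1.5 * N * K_B * T   # monatomic internal energy

print(f"P ~ {P:.0f} Pa, PV ~ {pv_energy:.0f} J, U ~ {internal_energy:.0f} J")
```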
And so it only remains to describe the ‘y’ singularity, which connects the ∆-1 and ∆º scales as a whole, and so must be a ‘scalar parameter’ or ‘quantitative value’ that explains the whole. And alas! this is indeed the scalar fourth parameter: ‘temperature’.
The macrostate of the system is then a description of its four thermodynamic variables.
As we said, we can apply those crystal-clear (: now 🙂 concepts to other, less ‘observable’ scales; and we can indeed apply them to mechanics, where the ∆º±1 scalar parameter will be mass, and its geometric point-locus will be the centre of mass, which maintains its fixed-singularity stable position – equivalent to the concept of thermodynamic equilibrium (measuring the same temperature in the whole ensemble) – through any motion, including rotary motions of the whole; and as long as it is in balance (centre of gravity below torque), the whole system will be in gravitational equilibrium – themes retaken in the ∆+1 post on mechanics and gravitation.
All that we have said also applies to the quantum state, regarding the ‘full description’ we can make of a quantum system by considering its position and momenta, which encloses (with minor corrections to abstract quantum Copenhagen bullshit) all the information about the being (position = field; momenta: wave-particle duality). And so does the equivalent ‘four-vector’ formalism, a modern, homologous way to describe all kinds of physical systems, first born in Relativity:
In the graph, a 4 vector: In relativity, space-time coordinates (external field position) and the energy/momentum of a wave-particle (internal com≈position) are often expressed in four-vector form. They are defined so that the length of a four-vector is invariant=in equilibrium under a coordinate transformation.
Ultimately all those different formalisms of physics are homologous and always describe the 3 ST or 4 ∆ºST±1 elements of a being: its field position, wave-particle duality and scalar ∆º ‘wholeness that balances the being in equilibrium across ∆±1 scales’ (scalar parameter). The 4-vector is just the ‘geometric’ version, according to the mathematical duality of temporal, numerical, algebraic solutions with symmetric spatial, topological ones.
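The invariance the graph refers to is easy to verify directly (standard special relativity with c = 1; the event coordinates below are arbitrary): the Minkowski ‘length’ t² − x² − y² − z² of a four-vector is unchanged by a Lorentz boost.

```python
import math

def boost_x(vec, v):
    """Boost a four-vector (t, x, y, z) along x with velocity v (|v| < 1, c = 1)."""
    t, x, y, z = vec
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return (gamma * (t - v * x), gamma * (x - v * t), y, z)

def minkowski_norm2(vec):
    """Invariant 'length' squared: t^2 - x^2 - y^2 - z^2."""
    t, x, y, z = vec
    return t * t - x * x - y * y - z * z

event = (5.0, 3.0, 1.0, 2.0)      # arbitrary spacetime coordinates
boosted = boost_x(event, 0.6)     # the same event seen from a moving frame

print(minkowski_norm2(event), minkowski_norm2(boosted))  # both ~11.0
```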
Back to thermodynamics out of the isomorphic, homologic method.
Let us stress again that the ∆-1 (position-momenta) > ∆º (E, P, V, T) equivalence simplifies the higher information (5D metric) of the ∆-1 microstate, for which we would need to write down an impractically long list of numbers, whereas specifying a macrostate requires only a few numbers (E, V, T, P), AS larger wholes paradoxically have less information (see the 5D graph above).
However, thermodynamic equations only describe the macrostate of a system adequately when the system is in equilibrium – when it has a mind-point = scalar parameter (temperature, centre of mass) that balances it. Non-equilibrium situations can generally not be described by a small number of variables. And this works for all systems of nature, which need a Tiƒ parameter of balance in order to organise themselves – yet entropy-only physicists consider thermal equilibrium the ‘death of the system’, from the point of view of the human observer, as obviously an ordered system cannot die=release entropy for a larger ∆º system to ab=use it. On the contrary, the existence of Tiƒ scalar, self-centred numbers/points proves the sentient, vital, orderly, fractal, organic nature of the Universe in all its scales, which always tend to balance the s, st, t parameters across all their citizens-cells-atoms (S, B, P systems).
As a simple example, consider adding a drop of food coloring to a glass of water. The food coloring diffuses in a complicated manner, which is in practice very difficult to predict precisely. However, after sufficient time has passed the system will reach a uniform color, which is much less complicated to describe. Actually, the macroscopic state of the system will be described by a small number of variables only if the system is at global thermodynamic equilibrium.
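The drop-of-coloring story can be mimicked with a toy simulation (an illustrative sketch, all parameters assumed): independent random walkers start in one cell and spread until every cell holds roughly the same number, at which point a single mean concentration describes the ‘macrostate’ well.

```python
import random

# Toy 1-D diffusion: walkers all start in the middle cell of a small box
# and take random +/-1 steps; a step out of the box leaves them in place.
random.seed(1)

CELLS = 10
STEPS = 1_000
walkers = [CELLS // 2] * 1_000       # the 'drop': everyone in one cell

for _ in range(STEPS):
    walkers = [min(CELLS - 1, max(0, w + random.choice((-1, 1))))
               for w in walkers]

counts = [walkers.count(c) for c in range(CELLS)]
print(counts)   # each cell ends near 1000/10 = 100 walkers
```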
Thirdly, because there is more information in ∆-1 complex assemblies of micro points, more than one microstate can correspond to a single macrostate.
In fact, for any given macrostate, there will be a huge number of microstates that are consistent with the given values of E, V, P, T.
But this again is a feature of all systems, whose languages, micro-points and ∆-1 states have more freedom and variations than the ∆+i states. And it is one of the most beautiful proofs of the existence of a scalar god, on top of the previous graph-pyramid of ∆-scales – about which I have been writing for 30 years, to the chagrin and mockery of scientists (when I cared to try to enlighten them 🙂 – as the 0-mind of the Universe; since in the end, in all systems there will be a ‘whole, Tiƒ’ with as little information as a scalar number/ratio/constant that resumes what all the other scales have in common – which ultimately is the GST of this ‘unification theory.com’.
Entropy in classic physics… is thus related only laterally to entropy = lineal, expansive motion = disorder, which is the GST definition (taken from the wider, vaguer concept of philosophical entropy of physicists and their arrow of time); but rather to the fascinating concept of ‘all the possibilities’ of evolution of a system into the future, which basically are 3±0, moving along the 3 arrows of s, st, t of a single plane, and/or the ∆±1 arrows of emergence and dissolution out of a given ∆º plane.
So when we put together the 2 concepts of entropy in mathematical physics – the entropy of thermodynamics and the entropy of theory of information – we realise that, in the midst of their philosophical mantras, physicists are tinkering with the ‘time garden of bifurcations’ (Borges’ beautiful tale) and/or possible choices of future, seeking to ‘eliminate’ entropy = find a deterministic future path for their inquiries.
Now, how physicists have come from this rather ‘scholastic’ concept – akin to the ‘heated’ arguments over how many angels fit dancing on a pin of the Sorbonne’s first scholar≈University dogmatism, which brought Middle Age Aristotelian Christian thinkers to sword-and-dagger debates – to enthusiastic battles of the absolute (big-bang entropy universes, disorder arrows, multiple quantum-path solutions of parallel Universes, etc.) is a theme which frankly does not interest me more than the ‘∞ angels’ that fit on the pin.
But those poor souls do not have much more to deal with in the fog of their misunderstanding of the thoughts of God. So we shall clarify their statements.
We are now ready to provide a definition of entropy in classic physics and properly interpret it. The entropy S – a MACROSTATE, ∆º parameter – is defined in terms of the micro-state parameters, as:
S=k ln Ω
where k is Boltzmann’s constant (never mind it was found by my admired colossus Mr. Planck) and Ω the number of microstates consistent with the given macrostate.
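The definition above can be played with on a toy system (my own minimal illustration, not from the text): N two-state ‘spins’, where the macrostate ‘n spins up’ is compatible with Ω = C(N, n) microstates.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def boltzmann_entropy(n_up, n_total):
    """S = k ln(Omega) for the macrostate 'n_up of n_total spins point up'."""
    omega = math.comb(n_total, n_up)   # microstates compatible with this macrostate
    return K_B * math.log(omega)

N = 100
entropies = [boltzmann_entropy(n, N) for n in range(N + 1)]

# Fully ordered macrostates (all up, all down) admit a single microstate,
# Omega = 1, hence S = 0; the half-and-half macrostate maximises Omega and S.
print(entropies[0], entropies[N], max(entropies) == entropies[N // 2])
```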
So the interest here in GST terms is the realisation ‘once more’ that we can either:
∆-1>∆: reduce the possible paths of future, of an ensemble of ∆-1 micro states to its smaller future whole information. As the set of sets is larger than the whole set (Cantor paradox homology).
THUS, a social group of ‘numbers=events’ (Ω) – which represent the paths of future of a series of ‘space-time quanta≈actions’ (k), and which have more spatial population=informative-time events in the ∆-1 scale than the whole – can be reduced to its ∆º states by means of a slow-growing logarithmic curve (ln), inverse to its ‘exponential function of growth’, which will be the opposite perspective, from ∆ to ∆º:
∆<∆-1: decay from wholeness into its parts is thus the famous inverse exponential decay equation, showing in this manner an essential symmetry between ∆-1 and ∆, parts and wholes, in terms of their ‘quantity of possible future formal paths and degrees of freedom’.
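The exp/ln pairing invoked here is the standard decay law (the parameters below are arbitrary illustrations): N(t) = N₀e^(−λt) going down from the whole into parts, with the inverse logarithm recovering the time from the count.

```python
import math

N0 = 1_000_000.0   # initial population (arbitrary)
LAM = 0.1          # decay constant (arbitrary units)

def remaining(t):
    """Exponential decay: N(t) = N0 * exp(-LAM * t)."""
    return N0 * math.exp(-LAM * t)

def time_to_reach(n):
    """Inverse (logarithmic) view: solve N(t) = n for t."""
    return math.log(N0 / n) / LAM

half_life = time_to_reach(N0 / 2)           # = ln(2) / LAM
print(half_life, remaining(half_life))      # ~6.93, ~500000.0
```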
So this is important and valuable for what it is (: more than the angels dancing on the pin, as this is somewhat more real 🙂
But all the rest of the hyperbolic philosophy of entropy sponsored by retarded (conceptually speaking) physicists is nonsense and should be erased, along with the big-bang theory, from textbooks.
Let us keep then the ‘bare-bone’ facts, which we find in any wikipedia-like text, and which are what matters about entropy (and inversely, about the theory of information).
Now the mathematical beauty of it is this: as we said, the paths of the future are ‘approximately’ 3 for a single plane; in fact they are e, and the ‘ignoramus’ little secret of the Euler number is that it is almost exactly 3=e+e/10!! (2 chess kudos for this serendipitous finding):
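For the record, the arithmetic of that remark checks out only approximately (standard double-precision value of e): the sum falls just short of 3.

```python
import math

# e + e/10 = 1.1 * e falls just short of 3.
value = math.e + math.e / 10
print(value)   # ~2.99011
```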
Alas, here we have true GST ∆-magic: e is 10 when we add to its 10 parts an 11th ‘hour/part’, as the Tiƒ element is both the 10th part of the tetraktys (the system in ∆) and the 1st of the ∆+1 wholeness; so the number e has an e/10 element ‘twice’, doubling as the ‘black ball’ soul of the system and its ‘wholeness’ single-unit ∆+1 existential form, as it co-exists, like your mind does, in the cellular and whole outer-world scales.
The graph shows what we mean: 10 is 11, as the ego-tiƒ doubles across the ∆º±1 scales. So the whole system is inversely 3 minus e/10 when we measure it with energy body-wave parameters, as the soul-tiƒ is sucking in one tenth of the vital form of the system, given to the ‘future’ ∆+1 whole state that ‘warps’, as charges and masses do, the lower field or body-wave, subtracting its entropy/energy to emerge in the upper being, as your mind sucks in your body’s energy.
Just get the feeling of it. The mathematics then are obvious: a system diminishes and grows in infinitesimal 1/n, 1/10 parts, increasing or decreasing in its wholeness and order, and the function that does it is the dual e/ln function of ‘motion from ∆ into ∆+1: exponential growth of information’ and the inverse logarithmic reduction of ∆+1 into ∆.
Thus it turns out that S is itself a thermodynamic property, just like E, P, T or V. Therefore, it acts as a link between the microscopic world and the macroscopic. AND SO NOW WE DO HAVE THE FIFTH ELEMENT for a full description of the thermodynamic system, as we can consider ‘entropy’ to be the ∆±1 ‘partner’ parameter of ‘temperature’, which gives to thermodynamics, as we said at the beginning of this post, the ‘wholeness’ of close range observation, with the full 5 parameters of the 5 relative dimensions of any ∆º±ST reality:
Volume(Spe), Energy (st), Pressure (Time closed cycle), Temperature (scalar parameter of whole order: ∆-1>∆) and entropy (scalar parameter of whole disorder, ∆-1<∆).
To note on the quantitative side: since Ω is a natural number (1, 2, 3, …), S is either zero or positive (ln 1 = 0, ln Ω ≥ 0).
FOGGY physicists, who don’t understand the meaning of ± numbers, tend to consider this a proof of the Universe’s growing disorder, as they don’t understand either entropy or the ternary, e-growing trifurcations of the future paths of any system. It really means only what we said: that the ∆-1 scale always has more information than the ∆-scale, and so if we subtract the final deterministic single path of the whole from the many entropic paths of the parts, we get a positive entropy number.
A second, more technical consideration, which we shall tackle whenever we widen this bare-bones article, concerns the 2 uses of entropy in thermodynamics, statistical and Boltzmann’s entropy, the latter considering a smaller number of ‘variations’ of ∆-1 microstates. Thus statistical entropy reduces to Boltzmann’s entropy when all the accessible microstates of the system are equally likely.
It is also the configuration corresponding to the maximum of a system’s entropy for a given set of accessible microstates, in other words the macroscopic configuration in which the lack of information about the future is maximal.
As such, according to the second law of thermodynamics, it is the equilibrium configuration of an isolated system. Boltzmann’s entropy is the expression of entropy at thermodynamic equilibrium in the canonical ensemble; which is so useful because all systems do tend to have an isomorphic, internally indistinguishable ensemble of ∆-1 cells/citizens/atoms, which the whole Tiƒ system treats wholesale with its ∆º parameters.
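The reduction just stated is easy to verify numerically (standard statistical mechanics, with the Boltzmann constant set to 1 for the comparison): the Gibbs entropy S = −k Σ pᵢ ln pᵢ collapses to Boltzmann’s S = k ln Ω exactly when every one of the Ω microstates is equally likely, pᵢ = 1/Ω.

```python
import math

# Gibbs entropy S = -k * sum(p_i * ln p_i) over microstate probabilities;
# with all Omega microstates equally likely it reduces to k * ln(Omega).
K_B = 1.0   # Boltzmann constant set to 1 for the comparison

def gibbs_entropy(probs):
    return -K_B * sum(p * math.log(p) for p in probs if p > 0.0)

OMEGA = 64
uniform = [1.0 / OMEGA] * OMEGA                      # equally likely microstates
peaked = [0.9] + [0.1 / (OMEGA - 1)] * (OMEGA - 1)   # one dominant microstate

print(gibbs_entropy(uniform), K_B * math.log(OMEGA))  # equal: ~4.159
print(gibbs_entropy(peaked))                          # smaller: more 'order'
```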
This postulate, known as Boltzmann’s principle, may be regarded as the foundation of statistical mechanics, which describes thermodynamic systems using the statistical behavior of their constituents. And so from here on, even if I – lazy cow – never return to this article, you can just understand the whole discipline; which, as all physics, is fascinating if physicists stick to their guns – mathematical physics – and ask humble advice of us, philosophers of science, regarding what they do (-: that would be the day 🙂
The laws of thermodynamics.
We can now tackle in reverse order the three laws of thermodynamics, as expressions of the three states of matter:
Tiƒ: Solid crystals. 0 entropy, pure still mind-information:
The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero, or 0 kelvin is zero. This means that in a perfect crystal, at 0 kelvin, nearly all molecular motion should cease in order to achieve ΔS=0. A perfect crystal is one in which the internal lattice structure is the same at all times; in other words, it is fixed and non-moving, and does not have rotational or vibrational energy. This means that there is only one way in which this order can be attained: when every particle of the structure is in its proper place.
The mind is indeed an o-mapping of all reality where motion is ‘expelled’ so that the form can be absolute and reflected in the ‘determined’ actions of the being, which sees its mind as the deterministic, still universe – which it is.
ST: Energy: The first law, we agree, states that present energy is conserved.
Spe: the Second Law of Thermodynamics, corresponds to the ‘entropy-disorder’ state, hence it is tautological:
The total entropy of a thermodynamic system tends to increase over time, approaching a maximum value. It is the law we just commented on, as it merely states that as time passes, the branching of the ternary e-states or paths of the future increases, and the information the ∆º holds on the system becomes more confused and less deterministic (in a ceteris paribus analysis that does not consider the order inflicted by the membrane and the singularity).
Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems – again a confusing term, which really means fields or open balls whose interaction with the central singularity of order, or with the enclosing membrane that constrains and further orders the system, is not taken into account. For example, the Earth is not an isolated system, because it constantly receives entropy>energy in the form of sunlight; it constantly receives order from the cycles of heat and cold weather and from the magnetic and continental-drift ‘programming’ of history and evolution; and further on, it receives order from all the mind-points of its neural network, aka the life-beings on its surface.
And in any case, what physicists display here is their confusion of order vs. disorder; that is, they should say: ‘A system tends to isomorphic equilibrium in its wave-entropy, St<S elements, as a pre-condition for the singularity-mind and its tiƒ upper, ∆º+1 wholeness scale to order them and extract its e/10 share, to emerge in the ∆+1 consciousness of the whole, unit-scale.’
Now, equipped with all those confused notions, the physicist comes as an amateur to the field of the philosophy of wholeness, and affirms that the universe may be considered an isolated system (why? does he have a ginormous, googolian size-number to see it all? or an infinitesimal smallness to perceive its potential tiƒ?)… and then he comes to state that its total entropy is constantly increasing… as he fits an ever-increasing number of angels on a pin.
The scalar magnitudes.
Time only passes towards the ‘future’ when things change, and change in the sense of a growth of information. THIS FOLLOWS immediately from our definition of the three arrows of time. In biology it means to get older and grow your information faster than your energy, once past the first age of entropy, after your seed of information emerges in the upper scale, ∆+1, of your world cycle of existence.
In physical systems it means that time clocks accelerate towards the future as they shrink in size and increase their attractive force (of all ∆+§). We can then understand the constant emergence of ‘new clocks of time’ of the three specific arrows (hence lineal time clocks, wave-like time clocks and curved, vortex-like time clocks towards the future). How can we trace this ‘evolution of timespace clocks’ of physical systems in its three arrows?
The study of how those first time clocks of the gravitational dark space and entropy (the limit of perception in the lower scales) evolve through all the scales of existence to become vortices of black holes (the limit of perception in the upper scales) could be considered the meaning of physics in GST; and in the process it follows the same laws and isomorphisms as all other species.
Of course in this adventure of ‘living physical systems’, performing its actions of existence in huge herds as they move from lower to future tighter scales, there are many deviations, contours and sub-species, which do not make it, elliptic clocks and functionals which in mathematical physics are described as herds of herds across several scales of the ∆-dimensions of the being.
But in all of them we shall find three arrows of time-space with their form and function, in a topological and bio-logical, organic description of phenomena, which when plugged into ‘vital ¬Æ mathematics’ further reveals many details and beautiful, darwinian processes among the quanta of space-energy (the whole being measured in instantaneous space as an energy amount; in detail, as a ternary topology with a finite time and space size connected by the 5D metric of the being):
All this said, an interesting element to explain before going further is the application/meaning of the 3 scalar magnitudes of physics: temperature in ∆º, frequency in ∆-1 and mass in ∆+1, as they must be judged to belong to the ternary elements and the ternary ∆º±1 scales:
-Temperature (∆º scale, ST-open ball) measures the ‘wave-body’ equilibrium of the thermodynamic human scale, and as such it happens in all the delocalised vital space of the wave body.
-Frequency (∆-1 scale, cyclical clock-like membrane) measures the smallish quantum wave frequency of the cyclical or sinusoidal ‘surface-membrane’ (electron, wave-packet envelope) of the smallish physical scale, inasmuch as we are the larger observer, so what we see best of the lower smallish being is its ‘outer, cyclical clock-like closed membrane’. So that is the scalar we measure.
– Mass (∆+1 gravitational scale, Tiƒ magnitude) is, inversely, the Tiƒ measure of the largest galactic or Earth system, inasmuch as it is the huge ‘being’ in which we are enclosed; and so we feel the mass curvature of our space-time as we are part of its ginormous system (and ultimately of the galaxy’s singularity black hole, which determines the G-constant of curvature for each specific galaxy, as Mach had it).
So the reader starts to see how we do fit and order things, departing from the ternary structure of reality in space-time (equivalent to the ternary ‘Cartesian’ res-extensa + vortex + mind structure) and in scales (equivalent to Leibniz’s triad of elements of reality: the tiƒ-monad; ST: ∑1/n finitesimal parts; and ∫∫ ds dt integrals/derivatives into wholes)… to quote two founding fathers of ‘serious’ philosophy of science that mirrored it.
Needless to say if we were an atom we would see other parameters from other ‘monad’s mind perspective’ (ab. Pov).
This said, what the equations of energy in those three scales mean is now clearer. As usual they will represent a function of present energy in each of those scales with:
-maximal detail (∆º thermodynamic scale) or
-limiting detail (c-limit of speed perception of information in ∆+1), or
– uncertainty, given the fact that we must ‘absorb’ some h-quanta (Heisenberg uncertainty) of angular momentum≈present information to ‘learn’ about the quantum observable (as e/10 is the toll we rest from 3 to emerge as an 11 tiƒ point in ∆+1 wholeness).
So as we have understood energy, common to the three, and the scalar, as in a puzzle we just explain the ‘third element’, the ratio constants, h, k and v or c (limit), with slightly different meaning to match the symmetry with the T, ƒ, m elements just described.
In the larger scale of gravitation, as mass is the ‘scalar’ tiƒ element (vortex of quarks, centre of gravity, black hole of stars) and we are in the middle of the system as the ‘momenta-energy’ element, it follows that v is the field/potential-related Spe-motion; and the second version, E=mc², must be read as the ‘entropy-disorder’ expansion and destruction of mass into entropy, at its maximal possible motion: that of the structural ∆-i final scale of the galactic substrata, light space-time.
In the lower scale, however we are talking of a complete ‘reversal’ of perspective at all levels, from topology – elliptic in gravitation and relativity, hyperbolic or lineal in quantum – to function/form (essential concept in GST, paradoxical always, coexisting in multiple elements, with an ∆º perspective that changes the parameters we measure; and certainly with a much more complex logic than the human obsession for absolute one-dimensional truths/perspectives).
So h is a constant of the external membrane (still view) or angular momentum (dynamic view) of the particle/wave we observe. It is not the unobservable tiƒ centre; and its interest lies in its multifunctional roles, a theme studied in our posts on quantum physics.
Now it is important also to bear in mind constantly the fact, so little understood even if it is accepted since Einstein and Planck, that the Universe is more about motion=events in time, not form in space.
So we are talking of actions, of ‘present momentum’, of world cycles, and merely state that H is the ‘minimal world cycle’ of ‘energy’ of the quantum scale.
Energy and entropy are so often confused in physics because they are relatively similar (energy is conserved; entropy tends to disorder). Energy, though (especially kinetic energy, the most used concept), has a tendency towards entropy more than towards form, which would be better represented as in-form-ation, a present-future state, vs. energy, a present-past state; both forming the dual components of the wave. So, for example, the magnetic and electric fields can be treated as the relative energy vs. information duality, married through the wave speed and its µ, k constants: c² = k (t-curvature) / µ (s-gravitomagnetic constant).
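As a numerical aside, the standard vacuum form of that ‘marriage’ of the electric and magnetic constants through the wave speed can be checked directly; a minimal sketch using the textbook relation c = 1/√(ε₀µ₀) with CODATA values, rather than the k/µ notation above:

```python
# Minimal check: the wave speed "marries" the electric and magnetic constants.
# Standard vacuum relation c = 1/sqrt(eps0 * mu0), CODATA values.
eps0 = 8.8541878128e-12   # F/m, vacuum permittivity (the 'electric' constant)
mu0 = 1.25663706212e-6    # N/A^2, vacuum permeability (the 'magnetic' constant)
c = (eps0 * mu0) ** -0.5
print(f"c = {c:.6e} m/s")  # ~2.998e8 m/s, the speed of light
```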
It is then a key concept of the Universe of multiple clocks of time that an energy-conservative world cycle, which represents the whole existence of an ∆-i scale, becomes for an ∆+i scale a ‘quanta of time’, perceived in a ‘synchronous moment(um) of space’, as fast cycles become ‘fixed forms of space’ for a slow observer (see the key article on synchronicities).
This is especially true of standing waves.
What this means, basically, is that the h-Planck event is absorbed as a quantum of spatial energy by a slow being such as we are. Energy in that sense is a ‘memorial tail’ of time world cycles, ‘frozen’ as a space piece of ‘planckton’, or an ‘entropy Bolt’, for an ∆+i slow informative being.
Yes the Universe has its complexity in its repetitions and synchronicities that transform space forms into time functions and vice versa.
But we can consider, among its many perspectives (quantum, as ∆-i, has more information, less perception, and so it is deservedly the most complex of all forms of human knowledge), its dimensions of angular momentum, which adds to the cyclical π motion of the particle’s momentum (p) the radius, r, or distance to the Tiƒ; hence it is the best way to observe, without ‘seeing it’, the coordinated relationship between the Tiƒ and the membrane, which are so often in constant relationship through invaginated paths (as in cells, where the DNA centre connects through the Golgi membranes with the external membrane).
So h, we might state, is an excellent key parameter, as it includes information on the Tiƒ x Spe membrane-informative nuclei of the quantum system. And so as S x T = St (meaning entropy x information = present energy), we really have in such a simple formula, E = hƒ = h/t, hence h = E x t, packed information on a system whose energy (the parameter that emerges in our scale of existence, which we have already related to the world cycle of existence of a being) is quantised, inasmuch as h represents the ‘world cycle of existence’ of light space-time at the minimal scale of the observable human Universe.
We could say that h is the minimal quanta of space-time, the minimal being, the ‘plankton’ of the gravitational sea, which is why in our texts we call it ‘planckton’: the minimal unit of life of the galactic, light space-time universe, encoding in its 3 parameters, r (ST) x m (tiƒ) x v (Spe), the needed ternary structural information of it, just as we can define a carbohydrate with the CNO(+H) elements. How many variations of h-species there are is the zoo description of quantum physicists, the details…
So, in brief: in the gravitational scale, m is the tiƒ fixed scalar and v the entropy element, with momentum being the present parameter of us, the beings within the open ball/vital space of the system; in the quantum scale we reverse the elements: now the fixed element is not the tiƒ but the membrane (as perceived by us in quantised h-quantities of angular momentum), and so the variation is not a v-lineal speed but a Tƒ frequency.
And by the same rules of ternary symmetries, in our ∆º intermediate scale the concept that shall dominate is neither of those Spe/tiƒ extremes outside our ‘equilibrium’, but the present wave, which appears in the thermodynamic equation, E = pV (resolved) = KnT, the true meaning of Temperature as the measure of a ‘heat wave’, specifically the amplitude of the vibration.
But for the understanding of it, as this is a post on ∆º thermodynamics, we can finish here the introduction and work with a bit more sophistication on the basic concepts we have learned.
∆º SCALES +ST LAWS: THERMAL EQUILIBRIUM
For a full understanding of the workings of matter systems we have, though, to combine scale laws and ternary S-ST-T laws and treat THE SYSTEM, regardless of size and complexity, when it is a ‘whole’, as a supœrganism which will balance the three parts of the system (S-past, operandi, T-future≈st-present), within the limits of the ∆º thermodynamic scale. And so we find, as the most important laws of thermodynamics, the…
Equipartition theorem on Thermodynamic equilibrium.
The name “equipartition” means “equal division,” in Latin.
The original concept of equipartition was that the total kinetic energy of a system is shared equally, on average, among all of its independent parts once the system has reached thermal equilibrium; as all supœrganisms are in energetic balance between their limbs/fields, body-waves and heads/particles, where the st-body ‘absorbs’ the energy that it will share in equal parts with the limb/field and body/head systems.
HENCE THE ENORMOUS RANGE OF EQUATIONS FOR ALL SYSTEMS OF THE FORM: S± T = ST, which in physics tend to be written in terms of potential Tƒ energy due to position/form and kinetic, moving energy.
This is then the origin of the virial equation, and in the quantum scale of the Schrödinger equation, basically an expression of ST (the WAVE, left side of the equation) equal to potential plus kinetic energy (right side).
The virial equation, however, refines the concept, establishing that the kinetic energy of the system (often ‘energy’ means, in the human conceptual fog, motion, so entropy, but that is irrelevant now) is 1/2 of the magnitude of the potential ‘tƒ’ energy, which comes to say that the head-potential system tends to hold twice the energy of the limb/field system: E (ð) = 2 E ($), a quantitative relationship that holds surprisingly well for many systems of the Universe in all its scales.
Since the virial theorem also holds in quantum mechanics, as first shown by Fock, giving further ground to Einstein’s dictum that both statistical mechanics and quantum mechanics are the same…
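The 2:1 virial balance can be sketched numerically in its best-known case, a circular gravitational (inverse-square) orbit, where 2⟨T⟩ + ⟨V⟩ = 0, so the potential energy is exactly twice the kinetic one in magnitude (the masses and radius below are illustrative values, not from the text):

```python
# Virial theorem check for a circular gravitational orbit:
# 2<T> + <V> = 0, i.e. |potential| = 2 x kinetic.
G = 6.674e-11                    # m^3 kg^-1 s^-2, gravitational constant
M, m, r = 5.97e24, 1.0, 7.0e6    # central mass, orbiter mass, radius (illustrative)
v = (G * M / r) ** 0.5           # circular-orbit speed from F = m v^2 / r
T_kin = 0.5 * m * v**2           # kinetic energy
V_pot = -G * M * m / r           # potential energy
print(2 * T_kin + V_pot)         # ~0: the virial balance holds exactly here
```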
An interesting isomorphism of scales: the virial theorem in the quantum, thermodynamic and human scales.
Let us consider this insight in more detail, as a key concept to fuse all scales of physics is to return to the Broglie->Einstein->Bohm realist formulation of quantum physics.
In the graph, when Bohm rewrote Schrödinger’s ‘present st-wave’ equation in polar coordinates, voilà! the particle appeared, feeding on a quantum potential field faster than light – the underlying gravitational field – dQ/dt (t), or guiding equation.
Yet since Schrödinger’s equation merely writes ST (the WAVE state)…
The left-hand side of this equation is just dQ/dt according to Heisenberg’s equation of motion. So the virial theorem of quantum physics tells us that the $-field, which guides and ‘feeds with entropic motions’ the complementary body-particle, kinetic-potential energy system, shares its extracted entropy between the kinetic body-wave of the system and the particle-head-potential form in the same proportion as all other systems of Nature: twice for motion, one for perception. Does this law rule also human metabolic systems? Does your brain use 1/2 of the energy of your body and limbs?
This of course might seem far-fetched, as the head/particle of the system is very small; but as it is faster in time-energy it uses much more of it, and as it is internally connected and synchronised (Broglie’s inner clocks in quantum physics, the nervous system in physiology) with the rest of the body-limbs it controls, it indeed uses a lot of energy.
It is indeed well established that the brain uses more energy than any other human organ, accounting for up to 20 percent of the body’s total haul.
So in a ‘classic’ human/animal being, which is ‘all the time running’, we can consider a simple proportion: Spe (40%) + ST (40%) + tƒ (20%), which will be the expression of the virial theorem for biological systems. Whereas the balance between limbs and bodies is obviously extremely variable, as it is conditioned by the external world and the actions of the being, which it cannot control, the 20% proportion of the brain system in fully developed species (no longer evolving towards higher information), such as humans, the ‘summit’ of life evolution before we transfer our information to robots, is stable, as it is used in internal energy tasks, homeostatic and relatively shielded from the external world by the equilibrium and membrane of the inner ∆-1 world of the mind.
Then we find that of that energy again a ‘classic dual or ternary quantitative equipartition’ takes place:
Until now, most scientists believed that it used the bulk of that energy to fuel electrical impulses that neurons employ to communicate with one another. Turns out, though, that two thirds of the brain’s energy is used to help neurons or nerve cells “fire” or send signals to control the body-limbs. So the equipartition here is even more precise as 1/3rd goes for the brain and then again ±1/3rd should go to the body and ±1/3rd to the limbs…
Equipartition laws are thus fundamental structural laws of ternary and dual nature, which establish the harmonious working together of the 3 ‘GENERATOR’ subsystems of any entity of the Universe, and as such are part of the ‘core’ equations of GST.
OF COURSE, All those elements of GSThermodynamics can be expressed as usual with different equations as EACH OF the ∆ºst perspectives have a ‘slightly biased’ form of expressing its laws. Hence the need to reference them to the ‘simpler, streamlined’ partial equations of the Generator.
So the extension of ∆º physics is immense, and we just shall consider a few samples of theorems and translate them to the laws of balance of GST. A very interesting part of it are the equations that combine ∆-issues of scaling and S-st-T balances and symmetries between lineal, cyclical and wave-like motions. They represent the laws of balance and harmony between the parts of a system applied to the molecular scale.
The first of those laws is the concept that a supœrganism tends to a homeostatic, ‘just’ distribution of its present energy among all the elements of the system. While the system might be in a predatory relationship with the external world, losing or gaining energy within it, it will tend to find a thermodynamic equilibrium, which is in thermodynamics the fundamental law that translates the balances between the three elements of the supœrganism.
The original idea of equipartition is thermal equilibrium: the energy (read motion and form, e x i) of the system is shared equally among all of its various forms. This means the average kinetic energy per degree of freedom in the translational motion of a molecule should equal that of its rotational motions, which is just an expression of the balance between the complementary ‘Tƒ-particle-rotary motion states’ ≈ ‘$p-field-lineal motion states’: |≈O.
The interest of those laws, all turning around thermal equilibrium, is that they combine ∆ and st elements, allowing quantitative predictions for all the ∆§cales of growth between the two limits of quantum and gravitation, where the laws break, as we enter a ‘Lorentzian’ discontinuum where the Sp x Tƒ = Konstant 5D metric reshuffles itself in two opposite paths (of larger information weight or larger motion). So the equilibrium breaks, to be restored under the slightly changed metric of ∆+1 gravitational systems or ∆-1 quantum ones.
The equipartition theorem is therefore very good to make detailed quantitative predictions, for each decametric ∆§cale and molecular system in the ‘lineal zone of balanced metric’.
Like the virial theorem, it gives us the total average kinetic and potential energies for a system at a given temperature, which will tend to become balanced, o≈|.
But, equipartition also gives the average values of individual components of the energy, such as the kinetic energy of a particular particle or the potential energy of a single spring. For example, it predicts that every atom in a monatomic ideal gas has an average kinetic energy of (3/2)kBT in thermal equilibrium, where kB is the Boltzmann constant and T is the (thermodynamic) temperature.
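The (3/2)kBT figure can be made concrete with a two-line computation (room temperature chosen as an illustrative value):

```python
# Average translational kinetic energy per atom of a monatomic ideal gas:
# (1/2) kB T per degree of freedom x 3 translational degrees = (3/2) kB T.
kB = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0           # K, an illustrative room temperature
E_avg = 1.5 * kB * T
print(f"{E_avg:.3e} J per atom")  # ~6.21e-21 J
```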
It follows from this fundamental Sp≈Tf balance that the equipartition theorem can be used to derive the ideal gas law, as both are expressions in the ∆-1 and ∆o (ideal gas law) of the same 5D metric balance.
How far it can be stretched in scaling of molecular systems – or in other terms how far the thermodynamic scale of ‘matter’ stretches is shown in the fact that the law can also be used to predict the properties of stars, even white dwarfs and neutron stars, since it holds even when relativistic effects are considered. It is precisely at those ‘upper, gravitational and lower, quantum’ levels when the transition between scales distorts the balances between ‘kinetic vs. potential, | vs. O parameters’, where we can observe the most interesting dynamic ‘events between two waters.’
So we observe as we move upwards in scaling and the system becomes ‘cooler’ (neutron stars) how the ‘upper scale’ of gravitation ‘sips in’, predates the thermal energy and finally cools down near 0, where the ‘lower part/scale of thermodynamics’ fades away, ‘encased’ in the gravitational tensor of energy-matter stress which becomes the new ‘parameters’ of the ∆+1 scale.
And vice versa: although the equipartition theorem makes very accurate predictions in certain conditions, it becomes inaccurate when quantum effects are significant, in the inverse ∆-1 plane, that is, at low temperatures, when thermodynamics fades away:
When the thermal energy k T is smaller than the quantum energy spacing in a particular degree of freedom, the average energy and heat capacity of this degree of freedom are less than the values predicted by equipartition.
Such a degree of freedom is said to be “frozen out” when the thermal energy is much smaller than this spacing. For example, the heat capacity of a solid decreases at low temperatures as various types of motion become frozen out, rather than remaining constant as predicted by equipartition.
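This ‘freezing out’ of a solid’s heat capacity can be sketched with the standard Einstein model of a solid: at high temperature it recovers the equipartition (Dulong–Petit) value of 3R per mole, and at low temperature the vibrational degrees of freedom freeze out toward zero. A minimal sketch, assuming an illustrative Einstein temperature of 300 K:

```python
import math

R = 8.314462618  # J/(mol K), gas constant

def einstein_heat_capacity(T, theta_E):
    """Molar heat capacity in the Einstein model of a solid."""
    x = theta_E / T  # ratio of the quantum energy spacing to the thermal energy kT
    return 3 * R * x**2 * math.exp(x) / (math.exp(x) - 1) ** 2

theta = 300.0  # K, illustrative Einstein temperature
print(einstein_heat_capacity(3000.0, theta))  # ~3R = 24.9: equipartition holds
print(einstein_heat_capacity(30.0, theta))    # ~0.1: the modes are "frozen out"
```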
Such decreases in heat capacity, and equipartition’s failure to model black-body radiation (the ultraviolet catastrophe), as we have seen, led Max Planck to propose quantum physics.
∆+1 enclosure, ∆-1, eaten inside out…
It IS for us more telling to express the duality of this ‘fading away’ of a thermodynamic state-being, when we move upwards or downwards into the gravitational or quantum scale in topological scalar terms:
When we grow in excess of size, the effects of ‘pressure’ due to gravitational attraction slow down the thermodynamic motions of the system, while the curved distortion of space-time finally ‘encases’ through ‘gravitomagnetic forces’, the orderly transformation of thermal energy into a vortex-like structure with angular momentum≈membranes that ‘freeze out’ expansive thermodynamic heat into its inverse implosive gravitational force. So we suffer an inversion of roles as we emerge upwards: ∆-1 thermodynamic $pe kinetic energy-> gravitational ‘potential energy’.
So the larger gravitational whole ‘extracts’ the thermal energy and transforms it through the dual singularity/curved membrane of a mass system.
The process of ‘freezing out’ and predating on thermal energy at the lower quantum scale, which we treated above when considering Planck’s analysis from an s-st-t perspective, appears now from the scalar view as an ‘inside-out’ emerging process, where the disorder of thermal energy and Kb quanta is now caused by the emergence, as chaotic, individual, free units, of the h-quanta of action, which are no longer a smooth, inner, continuous, ‘invisible’ support for the Kb quanta in harmonic collective behaviour but appear as distinguishable, quantifiable units, and hence SUPPRESS-DISORGANISE THE UPPER THERMAL SCALE, which simply disappears… as K=∑h and T=∑ƒ; that is, ‘Bolts’ (Boltzmann constants of action-entropy) become ‘Plancktons’ and temperatures become ‘frequencies’.
This, somehow, physicists intuitively understand, as they translate temperatures into frequencies, which is why they talk of thousands of millions of ‘impossible’ degrees in particle collisions (using the frequency translation).
In both cases, though, the result is the same: temperature is no longer the parameter that measures time frequencies (masses or wave frequencies are), and K-bolts stop being the measure of action (speed or h-plancktons are).
It must be noticed also that the Tƒ potential energy due to position or ‘informative energy’ will be easier to ‘transform’ into the lower scale of informative, angular momentum – Plancktons – than the kinetic energy, of motion, naturally related to the upper scale of higher entropy-motion.
So we observe that ‘mass feeds on motion’ and ‘quanta’ feeds on position, since from our human pov, according to 5D metric lower scales process more information and upper scales have more energy of motion (kinetic energy).
Hence the difference of both equations: kinetic energy is equal to 1/2(mass)(velocity)², whereas mass and velocity can be transferred into each other on the c-limit of scaling between both planes.
And since temperature is basically ‘motion of molecules’, we see the mechanism of transference of ‘temperature into mass’ at work in its simplest terms. I.e., when comparing two similar entities, the heavier atoms of the noble gas xenon have a lower average speed than the lighter atoms of the noble gas helium at the same temperature: mass is taking over speed, but it does so by quickly eliminating the thermal motion of the being, the key point being that the kinetic energy is quadratic in the velocity.
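The xenon/helium comparison is easy to quantify: at equal temperature both gases carry the same average kinetic energy (3/2)kBT per atom, so the rms speed scales as 1/√m. A short sketch with the standard molar masses:

```python
kB = 1.380649e-23    # J/K, Boltzmann constant
NA = 6.02214076e23   # 1/mol, Avogadro constant

def v_rms(T, molar_mass):
    """Root-mean-square speed sqrt(3 kB T / m), for molar mass in kg/mol."""
    m = molar_mass / NA  # mass of one atom
    return (3 * kB * T / m) ** 0.5

T = 300.0
v_he = v_rms(T, 4.003e-3)     # helium, ~1367 m/s
v_xe = v_rms(T, 131.29e-3)    # xenon, ~239 m/s: heavier, hence slower
print(v_he, v_xe, v_he / v_xe)  # speed ratio = sqrt(m_Xe / m_He), ~5.7
```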
So NOT all the motion of the being transfers upwards into mass (the loss of efficiency of thermal machines), just as not all the energy transfers to the lower ‘planckton’ scale (the third law of thermodynamics), ultimately meaning that neither motion nor the ∆-scales of reality ever disappear in the immortal Universe.
Now, the limits of the upper bound and lower bound should be obvious in their inverse nature to the reader, as physicists do translate the loss of temperature into an increase of frequency in the quantum realm, which goes up to infinity, while in the black hole realm temperature really does freeze to zero. It is obvious both processes are really inverse: the loss of the thermodynamic scale, which ‘feeds’ on the lower world of plancktons, means a liberation of the frequency≈temperature and the release of the stored capacities of quantum systems. At the other extreme, however, it is the thermal scale that dies away to feed the growth of the black hole.
Let us then consider the quantum case first – the loss of temperature related energy as it is transferred to the emerging h-quanta, and then the upper bound case – the thermodynamics of black holes, which also reduce to zero temperature, converting thermodynamic energy into gravitational one.
II. Transition to quantum scale
Radiation – Planck’s law
How the inverse processes happen is the law of radiation. As TEMPERATURE dissipates into radiation, it does so in a T⁴ power law, showing how the system ‘goes down’ two ∆-scales to die as entropy (E=hv radiation); a theme treated elsewhere. It is interesting to notice, though, that nature is always limited. Beyond 10,000 degrees, 10⁴, we talk really of plasma, an electromagnetic quantum phenomenon, hence no longer of temperature. Below 0 degrees we talk of mass… So temperature limits are also limits of the discontinuum between the scales. Zero temperature, as in black holes and quarks of perfect order, brings a still, tiƒ, mind-like parameter of ∆+1 mass which emerges to become protagonist. Beyond 10,000 degrees, in opposite fashion, we enter a regime of ∆-1 quantum-dominant electric/magnetic phenomena and temperature makes no sense (in fact physicists translate the ƒ parameter of E=hƒ into degrees; but curiously, at the other extreme, they do NOT use negative temperature numbers as a measure of mass-order, as negative and imaginary numbers that reverse scales or st-ages/topologies are not understood).
In the graph, Planck and Einstein, the 2 only colossi of XX c. physics – an overrated discipline.
All this brings us to a landmark of physics which GST can also enlighten a bit further in meaning: the colossal adventure of Mr. Planck, which shall connect further down (no longer as continuous scales, as we have done so far) the ∆-2 scale (relative to the molecular scale) of quantum radiation with the heat scale, and its reversed processes of death and growth, and the usual constants and exponential/logarithmic functions always present in transitions of scales.
We cannot, in this forcefully introductory site for all stiences, overextend in any discipline. So we shall comment only on the key element of Mr. Planck’s work on the radiation of black bodies: his masterly use of the key formulae explained above:
Now the equation is part of a ginormous number of equations ultimately related to the logistic curve.
What we witness indeed is a ratio, which means a ‘transfer’ of two parameters (one of entropy and one of energy) between two scales of reality, and this is treated as a ‘limit’, in the same way that the logistic curves ‘limit’ the growth in the ‘population’ of a system. So we can compare the equations:
The comparison is relevant in as much as the third equation is a population growth limited by a parameter (K, maximal population fit in a system), which cannot be overpassed.
Now, we know the origin of Planck’s law is a case of the Bose-Einstein distribution of indistinguishable particles. The quantum-theoretical explanation of Planck’s law views the radiation as a gas of massless, uncharged, bosonic particles, namely photons, in thermodynamic equilibrium.
Photons are viewed as the carriers of the electromagnetic interaction between electrically charged elementary particles. Photon numbers are not conserved. Photons are created or annihilated in the right numbers and with the right energies to fill the cavity with the Planck distribution.
The Bose–Einstein distribution is the energy distribution describing non-interactive bosons in thermodynamic equilibrium; it is simplified, unlike the case of distinguishable molecules, which interact with a chemical potential – a potential that is zero for the Bose-Einstein and Planck distributions.
So basically photons do not grow in number because, as in the case of any logistic curve of population, they are annihilated back into the gravitational quantum potential, the ∆-1 field, when they ‘saturate’ the ecosystem – the inner vacuum space-time which, in the refurbished model of Bohm’s pilot-wave theory, guides and feeds photons (and electrons), which DO feed on it:
In the ‘competitive’ environment of the black body, the saturated radiation dissolves back and so energy does NOT constantly increase but ENTROPY causes the death of light photons.
So, following the comparison, the ‘limiting carrying capacity’ of an environment, which is given by K, the saturation constant of the ecosystem, is given in the radiation equation by nhv/c²; which essentially means that for each dimensional sheet of light space-time, c², we can fit nhv ‘plancktons’ of energy, and no more. Beyond those limits the photons dissolve, in competition for the limited ‘neutrinos’ of the gravitational scale they are made of.
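A hedged sketch of the distribution being discussed, Planck’s law in its spectral-radiance form B_ν = (2hν³/c²)/(e^{hν/kT} − 1): at low frequencies (hν ≪ kT) it reduces to the classical Rayleigh–Jeans form, while the exponential ‘saturation’ term caps the high-frequency side:

```python
import math

h = 6.62607015e-34    # J s, Planck constant
c = 2.99792458e8      # m/s, speed of light
kB = 1.380649e-23     # J/K, Boltzmann constant

def planck_radiance(nu, T):
    """Black-body spectral radiance B_nu = (2 h nu^3 / c^2) / (exp(h nu / kB T) - 1)."""
    x = h * nu / (kB * T)  # the hv/kT ratio discussed in the text
    return (2 * h * nu**3 / c**2) / math.expm1(x)

nu, T = 1e9, 300.0  # a radio frequency at room temperature: hv << kT
rayleigh_jeans = 2 * nu**2 * kB * T / c**2   # classical limit, no quantum cap
print(planck_radiance(nu, T) / rayleigh_jeans)  # ~1: the two limits agree here
```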
There is, though, an inverse path, as there are always 3 paths of future, 3 solutions (seen normally as 2, since the conservation of present is the same state, often disregarded), which, if you start to be familiar with the ‘ternary method’, you must always seek in stience as the alternative path of future: the informative solution.
Alas, this is precisely the Bose-Einstein distribution: the indistinguishable particles become a much tighter, highly evolved ‘Bose-Einstein condensate’; but for that to happen, the thermodynamic equilibrium must expel the entropy of heat, hence it must go in the opposite direction of the ∆±1 variable which we have already defined: TEMPERATURE.
So you get, at frozen temperatures, the Bose condensate, which has this beautiful shape, which indeed looks exactly like the perfect tetraktys of the fifth dimension.
And what about hv/kT? Well, if you are following, it is self-evident: the speed or ratio of transformation of temperature ‘bolts’ into ∆-1 ‘planckton’ frequencies; actions of space-time≈events of the thermodynamic scale which become actions of space-time≈events of the quantum scale.
Of course the equation has in the 3 x3 + 0, multidimensional, multifunctional Universe, a few more perspectives, and encodes many other beautiful secrets in that first constant of Universal constants.
It includes, among other equations, one worth noticing for its beauty and simplicity:
The Stefan–Boltzmann law states that the total energy radiated per unit surface area of a black body across all wavelengths per unit time, j, is directly proportional to the fourth power of the black body’s thermodynamic temperature T:
j = σT⁴.
Here energy falls 2 ‘entropic’ scales, from atoms through particles into radiation: ∆º<<∆-2 – a fact which is derived from the derivative and integral operandi used to translate ‘quantities’ between scales of wholes and parts (as per the articles on the meaning of mathematical equations).
But even more beautiful is the constant of proportionality σ, called the Stefan–Boltzmann constant, which derives from other known constants of nature. The value of the constant is:
σ = 2π⁵k⁴ / (15c²h³) = 5.670373…×10⁻⁸ W·m⁻²·K⁻⁴
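The claim that σ derives from the other known constants can be verified numerically; a short sketch recomputing it from h, c and kB:

```python
import math

h = 6.62607015e-34    # J s, Planck constant
c = 2.99792458e8      # m/s, speed of light
kB = 1.380649e-23     # J/K, Boltzmann constant

# Stefan-Boltzmann constant from first principles: sigma = 2 pi^5 kB^4 / (15 h^3 c^2)
sigma = 2 * math.pi**5 * kB**4 / (15 * h**3 * c**2)
print(f"sigma = {sigma:.6e} W m^-2 K^-4")  # ~5.670374e-08

# Total radiated power per unit area at an illustrative T = 5772 K (solar surface):
j = sigma * 5772.0**4
```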
There you have all the main constants of nature again; and so we finish here, letting you wonder what the reason is that each one is dimensionally raised to a power from 2 to 5: c², h³, k⁴, π⁵…
III. Transition to gravitational scale
Now, because somehow physicists feel intuitively that with thermodynamics they are hitting the right spot≈scale when studying matter, they have used the theories of thermodynamics to expand their worldview to the entire Universe, with the faulty errors due to a single time arrow, that of death, and a single scale. The result is some of the most ‘in-famous’ mishaps of science in the last century. As we have dealt in cosmology with the big-bang error of an entropy-only, dying Universe, we shall here only consider a specific case of that error, which in view of the present experiments with black holes and big-bang replication can have dire consequences for mankind: we talk of the erroneous Hawking theory regarding…
∆+1. THE THERMODYNAMICS OF BLACK HOLES
I. WHY BLACK HOLES DON’T EVAPORATE, FOR LAYPEOPLE.
We study here yet another error of the conception of an entropy-only Universe; in this case the misuse of the concept of the thermodynamic single arrow of time (the 2nd law of thermodynamics) to define, out of nowhere, the thermodynamics of black holes. Indeed, we have already denied the universal validity of the second law, balanced by gravitation at the cosmic level and by cold crystals of solid order in the matter scale. It is then interesting that while physicists have expanded entropy well beyond its ‘realm’ – gaseous states, entropic processes and the ‘acceptable’ translation of similar events into ‘frequency≈temperature’ at the quantum scale (plasma, etc.) – they denied the law in a local effect in which it applies: the birth of ultra-hot black holes at the ∆-1 thermodynamic level, as seeds of a gravitational species, which will finally emerge into the gravitational scale once it cools down and feeds on the thermodynamic world.
The case is of enormous beauty and illustrates multiple laws of 5D metric, when properly interpreted. And yet we have instead a conceptual crass error by Mr. Hawking, who affirms the ultra-hot black hole keeps getting hotter, cooling down its surroundings and finally evaporates, braking locally – where it applies, the…
Heat. ‘Heat is energy that is transferred from one body to another as the result of a difference in temperature. If two bodies at different temperatures are brought together energy is transferred – i.e. heat flows – from the hotter body to the colder. The effect of this transfer of energy is an increase in the temperature of the colder body and a decrease in the temperature of the hotter body’.
Britannica, 1st paragraph, article on ‘Heat’, Macropaedia; Volume 8, page 701
‘We are just a mush on the surface of a rock lost in a corner of the Universe; departing from those facts, we can talk about man.’ Schopenhauer, father of modern philosophy.
I. WHY BLACK HOLES DO NOT EVAPORATE FOR NON-SCIENTISTS
The hard facts of known-known science.
The evaporation of black holes depends on this formula for the black hole’s temperature:
Temperature = hc³ / 8πG · Mass · k
As the formula is filled with Universal Constants (later understood in more depth), and has only two variables, we shall from now on write it in a simplified manner as:
±ΔMass ≤ Constant/±ΔTemperature.
Or, easier to understand, moving Temperature to the other side:
±ΔMass x ±ΔTemperature = Constant.
Simple, isn’t it? It is the formula that defines the changes in temperature and mass of black holes.
We know both can change but in which direction? If the black hole mass increases, temperature must diminish for the product to remain constant. If the black hole mass diminishes, then temperature must increase for the product to remain constant.
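The sign logic just described can be sketched in a few lines; a minimal numerical sketch, assuming only that the product M × T stays constant (the value K below is arbitrary, for illustration):

```python
# Sketch: if the product M * T is constant, the two variables must
# change in opposite directions. Values are arbitrary illustrative units.
K = 100.0  # the constant product (arbitrary units)

def temperature(mass):
    """Temperature forced by the constraint M * T = K."""
    return K / mass

# If mass increases, temperature must fall...
assert temperature(10.0) > temperature(20.0)
# ...and if mass decreases, temperature must rise.
assert temperature(5.0) > temperature(10.0)
# The product is conserved either way.
for m in (5.0, 10.0, 20.0):
    assert abs(m * temperature(m) - K) < 1e-9
```

The constraint alone fixes nothing about which variable grows; that choice of sign is exactly what the following paragraphs dispute.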
If we write as Hawking does:
-Δ Mass X +Δ Temperature = Constant…
Black holes increase temperature, getting hotter and diminishing mass: evaporating.
If we write:
+Δ Mass x -Δ Temperature = Constant
Hyperhot black holes will cool down, transferring heat to the surrounding star or planet, evaporating them, into a cosmic explosion, a Nova.
So what chooses those signs?
The Universe and its second law of thermodynamics, which merely states that a hot object cools down and transfers heat to a colder surrounding environment. Thus it is obvious that Mr. Hawking got his sign wrong, and arbitrarily decided that the second law of thermodynamics was wrong and the black hole would evaporate.
The problem is that the Universe and its fundamental laws, where they apply locally, as in the case of the laws of entropy and heat, imply exactly the opposite – and what we observe: an ultra-active, baby-born black hole, like all systems born in a ‘faster’ ∆-1 scale of the 5th dimension, has accelerated time clocks, which in this case translate into ‘faster dynamic temperature/frequency’. So it is the black hole that evaporates its cold surroundings, transferring their matter and energy (the bottom of the equation), through an intermediate state of pure entropic radiation, into the black hole, which absorbs it; and then inside, we theoretically assume, converts part into mass and part into ≥c dark energy shot through its poles.
Indeed, any ultra-hot object, such as a black hole born in a cold environment like the Earth, according to the laws of entropy cools down and transfers heat to the environment, evaporating us. And this is what we see happening in the Universe, always, when a black hole is born: it cools down and evaporates its surroundings into a big explosion, a Nova.
When you take an iron rod from the oven and put it in cold water, the water evaporates and the iron cools down, ALWAYS. And in the Universe, whenever we see the birth of an ultra-hot black hole, it evaporates its surrounding electromagnetic world (us) and gets colder, ALWAYS, till, as a mature huge black hole, it reaches a thermodynamic balance with the cold vacuum that surrounds it.
On this principle – that heat moves from the hot source to the cold one – are based all the laws of thermodynamics and all the machines of the planet. If this principle did not hold, you could make a perpetual-motion heat machine, the biggest hoax of science.
Now we stress: the 2nd law applies to heat processes, entropic processes, and those processes of the quantum realm where it is licit to translate temperature into frequency, as in this case. It does NOT apply globally to the entire Universe, nor in the upper, ‘colder’ scales of ‘zero temperature’ and ‘higher order’, the gravitational and dark-energy scales. So the example is also a good illustration for dealing with concepts such as what mass truly is, how energy and information transfer asymmetrically between ∆-scales, and so on.
Now, within the local symmetries where entropy applies, there is NOT a single exception to the laws of thermodynamics in the Universe. Everything in heat science and its frequency equivalents at quantum scales is based on this law.
So Mr. Hawking, an enfant terrible – a child of thought, to be more precise – with utter disregard for the known-known laws of the Universe, just chose to break the ‘law’ – one might imagine, to shock the audience – wrongly changed the arrow of time in one of the few cases where it did apply (: and wrote the inverse signs:
– Δ Mass X + Δ Temperature = Constant…
It is Hawking’s formula for the evaporation of black holes… and it defies all the laws of entropy, all the laws of time, all the laws of Einstein and all the laws of the 5th dimension, the expansion of Einstein’s relativity that I study.
And that is the problem.
What the equation means is easy to understand:
Hawking affirms that black holes do exactly the opposite of all other entities of the thermodynamic Universe:
They are born very hot, the hottest objects of the Universe (on this we all agree); but then, instead of cooling down in our cool Universe, burning us into hell, they will ‘magically’ absorb heat from the cold environment (+Δ), getting even hotter, breaking all the laws of entropy!
It is as if you threw a flame into water and the flame got hotter while the water turned to ice!
It is as if you held a cup of hot coffee and the cup kept getting hotter, ‘evaporating’, while it froze your hand!
This has never happened, and for a long time the mere idea was a laughing matter in science, since the laws of entropy are crystal clear: when a hot system is put beside a cold system, heat moves from the hot system, which cools (in this case the black hole), to the cold one, which heats up and evaporates.
But Hawking insisted for decades, with an obsessive mind and charm: when a black hole is born, hotter than its environment, instead of evaporating the environment as any hot object does, it will become hotter and evaporate!
How does he figure the black hole does that? Well, it can’t, according to the laws of time and entropy. So alas! He figured out that the black hole travels back into the past instead of traveling to the future, and that is why it evaporates. It is as if a baby traveled back into the past and entered the womb of its mother, evaporating. Easy.
Indeed, he also mused, after that astounding discovery, that he could enter a black hole, come out into the past and kill his grandfather. Seriously.
Of course, the error here is that he does not understand that time arrows are ALWAYS LOCAL: relative past entropy, relative future ∆-information and conserved relative momentum, all of them adding to a zero sum (as past entropy and future information cancel each other, leaving the integral of momentum, or ‘energy world cycle’, that elongates the eternal present of the Universe). Simple and beautiful:
∫p ±(Spe≈tiƒ) = Present Energy, which is a zero sum that conserves momenta and energy and cancels entropy and information.
It is worth repeating this blunder – one of the many we will expose here in simple terms, using the basic laws of ∆ºst, harmonising all sciences:
-Δ Mass x +Δ Temperature = Constant
It defines ‘imaginary black holes’ that break the laws of entropy and get hotter. Then, for the product to remain constant, their mass must diminish to balance the increase in heat.
But the Universe has never made this choice.
On the contrary, the temperature of a hot mass always diminishes in a cold environment.
So mass always increases. Let us then respect the laws of the Universe, proved ad nauseam in all systems, and write the signs correctly instead:
+Δ Mass x -Δ Temperature= Constant.
The hot black hole, like all systems of the Universe, will decrease its temperature, as it is born much hotter than our Universe; the heat will be transferred to our electromagnetic world and evaporate it, and the black hole will absorb it as energy at its event horizon, collapsing that energy into mass at the speed of light: M=E/c²
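The last step invoked here, M=E/c², is easy to put in numbers; a minimal sketch with the SI value of c (the example energy is arbitrary, chosen to be about the rest energy of one kilogram):

```python
# Minimal sketch: mass gained when an energy E is absorbed, M = E / c^2.
C = 2.998e8  # speed of light, m/s

def mass_from_energy(energy_joules):
    """Mass equivalent (kg) of an absorbed energy (J), via M = E / c^2."""
    return energy_joules / C**2

# Absorbing ~9e16 J of radiation adds about one kilogram of mass.
print(mass_from_energy(8.988e16))  # ≈ 1.0 kg
```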
This is what Einstein says, what we observe in the Universe, what every black hole born in that Universe proves:
The black hole is born very hot, evaporates the surroundings and absorbs it, exploding the world into a Nova at the speed of light.
Why then did he break the laws of thermodynamics, of Einstein’s gravitation – and the yet-to-be-known laws of the 5th dimension, the balances of the arrows of time, the considerations of momentum and energy… running into the information paradox, etc., etc.?
Obviously it cannot be for the sake of science and the delights of true knowledge, but for all the spurious, wrong, unscientific causes of our civilisation: fame, provocation, fake news, etc. What is far more worrisome is that science, after initial derision, accepted it, and now physicists are trying to make such black holes at CERN, with the obvious risk of one being born, growing fast, swallowing the Earth and killing us all.
The error of Hawking is like the tale of the emperor’s new clothes. One day an emperor forgot to dress and went into a parade, and no one would say anything till a child pointed out that the emperor was naked. His errors are so evident and absurd that nobody dares to contradict him. But the emperor walks naked, and black holes will cool down and swallow the Earth if they are born at the LHC.
And yet the entire situation is so absurd that nobody will shout, ‘cover the emperor with decent clothes’. The emperor is Mr. Hawking, invested with so much authority by the celebrity P.R.ess that only an intelligent child – that is, a newborn model such as 5D – dares to point out the error without fear of making a fool of himself.
Still, with the proper arrow the formula is a beautiful one, so we can now study what it means, once we have corrected it.
II. THE STRUCTURE OF NON-EVAPORATING BLACK HOLES & ITS TERNARY SYMMETRY.
Science is truth when 3 elements are met: experimental evidence, correspondence with previous theories known to be true, and only then mathematical analysis with the proper dynamic evolution towards the future.
Now, we observe that the smaller black holes are, the faster they grow, in a furious swallowing path. And yet the Fermi satellite, sent into orbit to find the signature of evaporating black holes, has NOT found the slightest signature of black hole evaporation.
So what to make of this? Obviously, the 3 elements of science are not met:
- Experimental evidence (none) -> Correspondence principle with previous theorems (Einstein’s holes always grow; the no-hair theorem gives no temperature to black holes) -> Mathematical equations (improper interpretation of the sign of time).
So both fundamental tests of scientific truth – falsification in the negative and experimental evidence in the positive – prove that baby black holes do not evaporate, breaking the essential laws of thermodynamics; Nature stubbornly imposes its cosmic order.
Today the fundamental theory of the Universe, the big bang, is based on running backwards in time a lineal equation similar to Hawking’s:
Hubble established a cosmological velocity-distance law: velocity = H₀ × distance. Here the variables are, instead of mass and temperature, speed and distance, and the constant is H₀.
According to this Hubble law, the greater the distance of a galaxy, the faster it recedes. Modern estimates place the value of H₀ around 22 km/s per million light-years, while the reciprocal of Hubble’s constant lies between 13 billion and 14 billion years; this cosmic time scale serves as an approximate measure of the moment of birth of the universe.
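The reciprocal quoted above can be checked directly; a short sketch assuming the figure of 22 km/s per million light-years given in the text:

```python
# Check: the reciprocal of the Hubble constant, 1/H0, in years.
KM_PER_LY = 9.461e12        # kilometres in one light-year
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

H0 = 22.0 / (1e6 * KM_PER_LY)   # 22 km/s per million ly, in units of 1/s
hubble_time_years = (1.0 / H0) / SECONDS_PER_YEAR

print(hubble_time_years / 1e9)  # ≈ 13.6 billion years
```

The result, about 13.6 billion years, falls inside the 13-14 billion year window quoted in the text.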
Thus the same procedure of running backwards in time the pattern of black hole mass growth would give us, travelling to the past, the point of birth of black holes at the Planck scale, at enormous temperature.
Does this mean Hawking’s work lacks scientific merit?
Not at all. If we respect the laws of thermodynamics, the mass-temperature ratio of black holes describes the birth of a small black hole, which, as all systems of Nature show, has an enormous activity and rate of growth in its initial stages, feeding on the energy that surrounds it – the placenta of our seminal seeds, the nest in which the parental system feeds them, the environment in which the new species becomes the dominant predator; in the case of the black hole, the rich field of electromagnetic energy that surrounds it in the stars where it is born by gravitational collapse and/or the planets against which it collides.
It is then easy to understand the importance of the formula for interpreting the genesis of black holes, the mechanisms of Nova explosions, and the way in which the energy and information of the lower scales of Nature (the quantum and thermodynamic scales) emerge as mass in the larger cosmological scale. And this is the second merit of Hawking’s equation, which stirred the imagination of scientists to the point of accepting his peculiar interpretation: the fact that it uses all the main constants of space and time of Nature, whose meaning is still not understood in theoretical physics.
And so it hinted at the solution of the fundamental pending question of physics: what is the relationship and meaning of those constants, and how can we unify mathematically the 3 fundamental scales of physical systems – the smaller quantum scale of electromagnetic charges, the human scale of thermodynamic molecules and the larger scale of cosmological masses? Now we have written the equation with the proper sign and, alas, it is a very important equation, as now IT FOLLOWS THE CORRESPONDENCE PRINCIPLE.
Why is that beautiful equation an equality?
Because the event horizon is a discontinuum, which acts as an osmotic membrane with 2 sides, which have the same ‘surface’ area, so to speak.
The event horizon has two sides, as all membranes do. On one side you are inside the black hole (8πGM), and there is NO temperature there (no-hair theorem: black holes are defined by only 3 parameters – angular momentum, mass and charge). So the black hole does NOT evaporate because it is NOT made of temperature. It is impossible for it to evaporate.
On the other side of the membrane there is temperature, though, because we ARE in the thermodynamic and quantum world, which the black hole swallows.
This duality of a membrane is perfectly understood in terms of topological laws. The fundamental theorem of topology states that a closed circle – any n-dimensional membrane – breaks the continuum into an internal world and an external Universe, with 2 different surfaces: an internal, elliptic, implosive, in-formative surface and an external, hyperbolic, expansive, entropic geometry.
And those are the 2 sides of the black hole: Internally the black hole creates pure information. Externally the black hole increases the entropy and disorder of our world just before it swallows it.
Consider any other ‘organic system’ absorbing energy and trans-forming it into its own substance. You eat, and first you disorder the food, increasing its entropy, down to amino acids. This is what the black hole does on the side of the quantum-thermodynamic scales of the Universe. But then your stomach evolves the amino acids back into the proteins of your ‘own DNA code’. This is what the black hole does on the inside of the membrane: it converts entropy into information.
But we can equate both sides, because they have the same surface: one side is a gravity surface, the other side is an entropy surface.
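The ‘same surface’ spoken of here is the area of the event horizon; a sketch of the standard Schwarzschild radius and horizon area, r_s = 2GM/c² and A = 4πr_s², evaluated for a solar-mass hole:

```python
import math

# Standard Schwarzschild horizon: r_s = 2GM/c^2, area A = 4*pi*r_s^2.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Horizon radius (m) of a non-rotating hole of the given mass."""
    return 2.0 * G * mass_kg / C**2

def horizon_area(mass_kg):
    """Area (m^2) of the membrane both sides of the equality share."""
    return 4.0 * math.pi * schwarzschild_radius(mass_kg)**2

print(schwarzschild_radius(M_SUN))  # ≈ 2.95e3 m (about 3 km)
```

Note that the area grows with mass squared, so any accretion strictly enlarges the membrane.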
This is the beauty of the equation of Hawking, when we respect the 2 ‘fundamental proofs’ of truth in science:
Experimental evidence (black holes always swallow our world in nova explosions that increase the entropy of our galaxies but absorb and grow in mass internally).
Correspondence principle (no hair theorem: black holes do not have temperature; they can be described with only 3 parameters, mass, angular momentum and in some cases charge, if they do have it; Einstein’s equations: black holes do always increase mass, do not evaporate, do not emit radiation).
Thus when we put the sign properly, ≥, black holes grow according to the beautiful formula of Hawking, which in this manner respects the 2 fundamental proofs of truth in science – experimental Nova evidence and the correspondence principle; further clarifies the process of creation of mass, in a more detailed version of E=Mc²; and explains why the Universe is fractal, with discontinuous membranes between the entropic side of the Universe (the electromagnetic-thermodynamic membrane) and the in-formative gravitational side that in-forms reality (the mass-black hole side).
And it explains why there are 2 ‘geometrical descriptions’ of space-time: the elliptic, curved space-time of Einstein’s gravitation, made of accelerated, vortex-like, informative mass clocks, and the hyperbolic, entropic, expansive description of quantum physics – which ARE 2 DIFFERENT, discontinuous sides of the fractal Universe that balance each other and MUST NOT be unified in simple terms, because BOTH ARE NEEDED to balance the Universe.
On one hand you have what the black hole eats, our space-time world, on the other the gravitational space-time world.
There are in fact 3 scales of space-time in reality and that is what the Universe shows.
On the side of mass, we have a beautiful expression, identical to the tensor of Einstein’s relativity, that describes a gravitational world: 8πG·Mass.
It must be noticed that, unlike Einstein’s tensor, here there is no energy-entropy, but only the other 2 elements of Einstein’s equation: 8πG, the curvature of the black hole, and Mass, the substance of gravitational space.
On the other side, we have the same equation above for quantum space-time, where h is the angular momentum of its clocks of information (so quantum systems code their informative spin and form with h-Planck quanta) and c is obviously the speed-distance of space, as we see light space. Space is made of light, as the impressionist painters realized, and as Einstein’s relativity principle – which cannot distinguish motions from distances – or the spatial expansion of intergalactic space, homologous to the red-shift elongation of light space, proves.
Our human electronic mind perceives light-space through its time clocks: Tƒ = h-Planck is the minimal quantum perceived by an electron, and Sp the speed of light; and h/c is so small that we have a mind that processes very little information and, in terms of Lobachevski’s parameter of geometry, displays a flat, Euclidean world:
In the graph you can see what I talk about. You see light, and light has 3 perpendicular Euclidean dimensions. That is why your mind is flat and Euclidean. This product, hc³, is thus the time-space clocks and volume of the quantum world, and its ratio Tƒ/Sp = h/c³ is the ‘ratio of information/space’ of the human mind, which in geometry is called the Gaussian/Lobachevski ratio of curvature that defines a flat mind. Beautiful, isn’t it? So many things hidden in an equation (-; Saper vedere.
On the other side, we have a ratio between the simplest, minimal quantum world and our intermediate thermodynamic scale, meaning that the black hole absorbs the quantum space-time (hc³) and first converts it into pure entropy and temperature (kT) – hence the ratio; and then it moves it to the other side of the membrane, converted into 8πG·Mass.
Now, a brief lesson on modern cosmology. The Universe has been shown to be fractal, structured in relative scales of size, with different time clocks of different speeds. This we have known since Einstein: we do NOT have mechanical clocks around the Universe measuring time. As obvious as this is, people do not seem to understand it.
And we co-exist in several scales, with membranes and discontinuities between them, such as the Lorentz transformations that define the transitions between the electromagnetic light and mass scales, and the quantum equations of Planck (violet catastrophe) that define the transition regions between the thermodynamic and radiation scales.
There are 3 fundamental such scales, which we show in the next graph: the larger gravitational scale, the human thermodynamic scale and the quantum scale, with different space quanta and time clocks:
In the graph we show them, with different space and time quanta. But all of them have a common co-invariant formula, which defines them as space-time planes of reality, with a ‘different time clock’ that measures the information of the system in the frequency and form of its cycles, and a given space quanta that measures its pieces of space.
And there are transfers of motion between them, called ‘angular momentum’ and ‘lineal momentum’, or ‘information’ and ‘space’. There are many formulae in physics to describe these space-time planes, and the easiest are those of energy and those that define its forces. They vary slightly (on the left side we show the formula for energy; slight variations give the formulae for momentum and forces in each scale).
This is the new stuff of physics, the formalism of the 5th dimension that this writer pioneered before this activism, as it explains the ‘whys’ of Universal constants, the structure between scales, and the reason for the exponential, discontinuous equations between scales. And Hawking’s equation is a fundamental equation that relates the 3 scales together. In fact it is the only equation that relates them, as the black hole swallows it all: it swallows both the quantum and electromagnetic worlds and converts them into mass; it is the true devouring monster of the Universe:
∆+1 (cosmological scale): M (8πG) ≥ ∆-1 (quantum scale): hc³ / ∆ (thermodynamic scale): kT
Now the formula has the 3 clocks of time (G-curvature, h-angular momentum, T-temperature) and the 3 space quanta of the 3 scales of the fractal Universe (c³, the volume of light space; M-ass; and k-entropy quanta).
So his work is essential to the structure of the fractal Universe and its 3 5D scales.
As such it is a ‘beauty’, and it is a pity that Mr. Hawking was not happy enough just to find it, and that he said philosophers of science envy him because they don’t know mathematics, and that is why they criticise the evaporation of black holes. Of course we do know mathematics; we invented mathematics. People like Descartes and Leibniz invented analytic geometry and calculus, the two main branches of modern mathematics, and they defined themselves as philosophers of science, NOT physicists. That is why we can have rigour with truths.
So, to the beauty of it. First we must ask: why are there two sides to that equality, which when properly written with the positive arrow of time is M ≥ k/T – that is, the black hole mass always grows?
Because this equation is taken at the event horizon, the membrane that separates the discontinuous fractal scale of gravitation – the inside of the hole, or M (8πG), which creates mass-information – from the outer region, the entropic, thermodynamic and quantum world, which the hole is swallowing.
So we have an equation that shows how ∆+1 gravitational mass-information is created by accretion of the lower scales of quantum light space-time (hc³) and thermodynamic space-time (kT).
And it is precisely the simplicity and perfection of the equation, which relates the time clocks and space quanta of the 3 scales of the Universe, that makes it so important. As it truly writes, in the formalism of the fractal Universe:
∆+1 gravitational scale: space quanta (M) × Tƒ curved clock (G) = ∆-1 quantum scale / ∆ thermodynamic scale: [Sp (c³) × Tƒ (h)] / [Sp (k) × Tƒ (T)]
That is, the mass quanta of gravitational space, whose frequency, or curvature, or attractive accelerated clock is given by G, is swallowing the space-time of the electromagnetic, quantum Universe – whose clocks of time are measured here by the angular momentum h, and its quanta of space by the volume of 3-dimensional Euclidean space – after first converting it into thermodynamic entropy, in the process of ‘raising it up’ and transforming it upwards through the scales of information of the 5th dimension, into thermodynamic space quanta, k-constants, and its time clocks, temperature.
This is the analytical, algebraic understanding of the accretion of black holes, which absorb the space-time, the c-h quanta of the electromagnetic scales, convert it by this ∆-1/∆ space-time ratio first into entropic heat, and then swallow it up into gravitational mass.
And the beauty of it is that we DO have here the equation in perfect form. We have, in the quantum scale on top, the h-angular momentum, the minimal clock of time-information of the quantum scale, multiplied by c³, the volume of space (as we are in a light space-time membrane, with the 3 Euclidean perpendicular coordinates of light: the electric, magnetic and c-speed arrows). And we have below the thermodynamic space constant, k-entropy, as entropy is an expansive space, and the time clock of the thermodynamic world, which is temperature.
So the equation, properly interpreted, allows us to understand for the first time the meaning of the 3 universal constants of space of the Universe in its 3 relative scales of size – Mass, k-entropy and c³ light space-time – and its 3 relative cyclical clocks of time – G-curvature, h-angular momentum and temperature:
This is therefore a fundamental equation when properly understood with the 2 arrows of the future, entropy and information, and the laws of the duality of the fractal Universe.
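Whatever sign one argues for, the numerical content of the mass-temperature relation is fixed by those constants; a sketch evaluating the standard form T = ħc³/(8πGMk_B) for a solar-mass hole, with SI values:

```python
import math

# The mass-temperature relation with all constants explicit:
# T = hbar * c^3 / (8 * pi * G * M * k_B), so M * T is a constant of nature.
HBAR = 1.055e-34  # reduced Planck constant, J s
C = 2.998e8       # speed of light, m/s
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
KB = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30  # solar mass, kg

def horizon_temperature(mass_kg):
    """Temperature (K) paired to a given mass by the relation above."""
    return HBAR * C**3 / (8.0 * math.pi * G * mass_kg * KB)

# A solar-mass hole sits at ~6e-8 K, far colder than any surroundings.
print(horizon_temperature(M_SUN))
# Doubling the mass halves the temperature: M * T is indeed constant.
p1 = M_SUN * horizon_temperature(M_SUN)
p2 = (2 * M_SUN) * horizon_temperature(2 * M_SUN)
assert abs(p1 - p2) / p1 < 1e-12
```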
We can now compare it with the previous equation of radiation to further illuminate it:
A simplified analysis shows both equations to have similar terms: above, hc³ and hc², and below, k_B·T. So without doing an exhaustive analysis of the two forms – one a mere ratio, the other a ratio ‘passed over an exponential decay’ – they confirm our earlier model of radiation in black bodies: the thermodynamic ∆-scale ‘evaporates’ into the lower ∆-1 quantum scale of frequency radiation.
The differences beyond the mathematical scripture, though, are relevant:
- In the black hole, the radiation is swallowed inward into Tiƒ-mass; in the black body it is emitted outwards as energy/entropy (B). And so we see once more the inverse relationship between inward masses and outward radiation.
- In the black hole, c is three-dimensional, meaning the whole volume of ternary space-time is swallowed. In the blackbody the form is bidimensional, which we shall constantly find explained by the holographic principle: bidimensional sheets of space-time, written c², are at the core of all formulae of light constants.
- Finally, related to the first point: in radiation, what disappears are the excessive photons of empty space, dissolved into the entropic arrow of the quantum field of non-local dark energy (Spe, ∆-4), while in the black hole they disappear into the evolved ∆+4 scale of top quark masses (the true content of those black holes).
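The ‘ratio passed over an exponential decay’ contrasted above is Planck’s black-body law; a sketch assuming its standard spectral form B(ν,T) = (2hν³/c²) / (e^{hν/kT} − 1):

```python
import math

# Planck's black-body law: the h*nu^3/c^2 ratio "passed over an
# exponential decay", B(nu, T) = (2 h nu^3 / c^2) / (exp(h nu / k T) - 1).
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(nu, temperature):
    """Spectral radiance (W sr^-1 m^-2 Hz^-1) at frequency nu and temp T."""
    return (2.0 * H * nu**3 / C**2) / (math.exp(H * nu / (KB * temperature)) - 1.0)

# The exponential term tames the high-frequency growth that the bare
# ratio alone would produce: no violet catastrophe.
assert planck_radiance(1e16, 5000.0) < planck_radiance(1e14, 5000.0)
```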
III. TOP QUARK BLACK HOLES
Yes, this fast-turning, heavier, more attractive particle that can be the atom of black holes, giving them substance, does exist: it is called the 3rd family of heavy quarks, which are amazingly heavier than our quarks – thousands of times heavier – and correspond to the exact parameters of a black atom.
Now, we shall elaborate on this with the new physics of duality and the fractal paradigm, to further understand why this is the most logical, symmetric, natural solution, according to the laws of the scientific method (economy, simplicity), to explain the structure of dark matter, the halo and the galaxy.
It is all there, again. Physicists ask for a new 5th dimension to make black holes, and we find it as a relativistic motion, which is proved experimentally (the fastest time clocks of the Universe are the bottom quarks, as top quarks have not yet been produced in numbers large enough to measure their no doubt even faster, more attractive rotational clock). And this superstring, supermassive quark is the cut-off substance of black holes.
In brief, if we define mass by the speed and frequency of its gravitational vortices, according to Einstein’s equivalence principle between gravitation and acceleration, those fast-turning particles are vortices of space-time, similar to attractive tornadoes (E=mc² + E=hν -> M=k·ƒ): the faster the particle, black hole or mass turns as a space-time vortex, the more it attracts:
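The chain quoted in parentheses, E=mc² and E=hν giving M=k·ƒ, can be put in numbers; a minimal sketch, where the proportionality constant the text calls k is h/c²:

```python
# The text's chain E = m c^2 and E = h v gives M = (h / c^2) * v,
# i.e. mass proportional to the frequency of the vortex clock.
H = 6.626e-34  # Planck constant, J s
C = 2.998e8    # speed of light, m/s

def mass_equivalent(frequency_hz):
    """Mass (kg) equivalent to a clock/wave of the given frequency."""
    return (H / C**2) * frequency_hz

# The faster the clock turns, the larger the equivalent mass:
assert abs(mass_equivalent(2e20) - 2 * mass_equivalent(1e20)) < 1e-60
```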
In the graph, the evolution of the concept of mass in relativity, from the initial image of an abstract substance in the center of an accelerated vortex of space-time, proper of the abstract, pre–world war age, when Einstein first published his work on gravitation, to the first pictures obtained in bubble chambers in the postwar age, to the realization that each mass is a fractal space-time made of smaller cyclic motions proper of twenty-first century.
In the graph, for the pedantic observer who rejects Newton as too simple, superseded by Einstein, a final note on the multiple perspectives we can have of any event of space-time. According to the ∆±1 or Sp, Tƒ, ST perspective we adopt, we classify the 4 standing models of gravitation as relative truths, belonging to the 4 perspectives of reality: Sp <=> Tƒ.
In the graph, the 4 obvious descriptions of mathematical physics regarding gravitation: It (Relativity) ≈ Tƒ (Newton) <-> Sp (Poisson) ≈ Es (Lagrange):
Tƒ: Newton’s is a moving, CLOCK-LIKE vortex of the same mass-motion regions.
Sp: Poisson’s is a static potential field of energy gradients.
Es: The Hamiltonian-Lagrangian is a dynamic description of the conservative energy of the system.
It: Einstein’s simultaneous measures in relativity are a still, formal description, in ‘present’, of the gravitational space-time.
And so all of them are equivalent; all are in fact derived one from another, as Einstein took his beginnings from Poisson, who elaborated on Newton – and the most important of them all is the Hamiltonian. Today relativity’s informative mappings of the galaxy’s space-time are transformed, in the ADM formalism, into a Lagrangian-Hamiltonian ‘bidimensional model’ for computer calculations, proving once more the bidimensional structure of space-time.
All this, of course, can be turned with algebra into very complicated equations that describe those vortices as masses. The graph shows 4 of those ‘formalisms’, which are equivalent, with more or less ‘finesse’ in their degree of detail.
The 5D model.
Now, the 5D model of the fractal, organic Universe does not deny Einstein; it merely expands his views and adds fractal, organic properties to the galaxy, explaining better the function of those black holes, their atomic substance and their workings according to the known-known facts of astrophysics, in which black holes have come to dominate most of the creative processes of the galaxy. In that regard, you can compare the galaxy to a cell on a much larger scale, with mitochondria-stars of light ud-matter, which end up being devoured and becoming the energy for the creation of black holes – the informative vortices, equivalent to the DNA, that swarm in huge numbers in the central nuclei, control its shape and provoke the reproduction of stars with their in-formative gravitational waves.
Einstein had asked for a cut-off substance, or ‘atom of black holes’, which he could not guess at the time as quarks had not been found, but now we hint that black holes are as he thought ‘frozen stars’ of the heaviest quark families (bct quarks), and that should be the ‘realist modeling’ of black holes. As those quarks will appear in increasing numbers in accelerators, which are crossing the dark matter barrier is very likely they will be produced on those accelerators, in any of its possible varieties of dark matter that range from strangelets (s-quarks) to toplets (TTT-quarks), through Higgs decay (H->Top and anti top).
Now, all this in mathematical physics implies a constant growth of the mass of the black hole along the previous equation, M=k/T: that is, as the black hole cools down, it converts, via the weak force, lighter matter into heavier quarks that increase its volume and hence its area.
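The inverse mass-temperature relation M=k/T cited above is Hawking’s; a minimal numerical sketch, assuming the standard formula T = ħc³/(8πGMk) and CODATA constants (which are not given in the text):

```python
import math

# Hawking's relation: a black hole's temperature is inversely
# proportional to its mass, so M = k/T as the text states.
# Constants are standard CODATA values (SI units), not from the text.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant
kB = 1.380649e-23        # Boltzmann constant, J/K

def hawking_temperature(mass_kg):
    """T = hbar*c^3 / (8*pi*G*M*kB) for a Schwarzschild black hole."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * kB)

M_sun = 1.989e30  # kg
T1 = hawking_temperature(M_sun)       # ~6e-8 K: colder than the CMB
T2 = hawking_temperature(10 * M_sun)  # ten times more massive...

# ...is ten times colder: T*M is a constant, i.e. M = k/T.
assert abs(T1 * M_sun - T2 * 10 * M_sun) / (T1 * M_sun) < 1e-12
```

So a stellar black hole today is far colder than its surroundings, which in this formalism means it keeps growing.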
The last great advances in classic black hole theory were done by Kerr, a New Zealander who defined rotary black holes with or without charge. Those would be the top quark frozen stars with positive charge and the same density at macro-scale as a black hole, acting as a relative ‘proton’ acts in an atom, in the center of the galaxy; while the halo should be made of strangelet quarks (negatively charged), acting, in this symmetry between the 3 families of mass and the 3 regions of the galaxy, as a relative negative ‘electron’ cover, with the ud-stars and planets in the middle, as seen in the next graph, greatly expanded in its detailed explanation in the post on 5D astrophysics:
In the graph, the ‘sane’ understanding of black holes, which are born exceedingly hot and active, as all ‘seminal species’ in a lower scale of size, on the Compton wavelength as a heavy quark particle, and the similar form of the halo of strange matter. This simple scheme follows Einstein’s search for a cut-off substance for black holes and Witten’s hypothesis of a halo made of strangelets, now again all the rage in astrophysics.
Now, once we had the ‘solutions’ (Kerr black holes), the study of those holes was limited by the c-speed turning event horizons, which ‘absorbed’ the light of electromagnetic matter after exploding it, and ‘digested’ it via the weak force, creating heavier particles. But the maths were worked out by Christodoulou:
‘He had shown that no process whose ultimate outcome is the capture of a particle by a Kerr black hole can result in the decrease of a certain quantity which he named the irreducible mass of the black hole, M. In fact, most processes result in an increase in M, with the exception of a very special class of limiting processes, called reversible processes, which leave M unchanged. It turns out that M is proportional to the square root of the black hole’s area.’
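Christodoulou’s square-root law quoted above is easy to check numerically; a minimal sketch, assuming the standard Schwarzschild area formula A = 16πG²M²/c⁴ (not stated in the text):

```python
import math

# Schwarzschild horizon area: A = 16*pi*G^2*M^2/c^4 (standard formula,
# not from the text); inverting it shows M_irr grows as sqrt(A).
G = 6.67430e-11
c = 2.99792458e8

def horizon_area(m_irr):
    return 16 * math.pi * (G * m_irr)**2 / c**4

def m_irr_from_area(area):
    # the irreducible mass is proportional to the square root of the area
    return c**2 * math.sqrt(area / (16 * math.pi)) / G

M = 1.989e30  # one solar mass, kg
A = horizon_area(M)
# round trip recovers the mass
assert abs(m_irr_from_area(A) - M) / M < 1e-9
# quadrupling the area doubles the irreducible mass (square-root law)
assert abs(m_irr_from_area(4 * A) - 2 * M) / (2 * M) < 1e-9
```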
So it is all clear and nice, regardless of the ‘language you prefer’: that of 4D Einstein, or the added scalar meaning of 5D for a proper, scientific, real understanding (as we do indeed have infinite proofs of the fractal structure, scales of size and different speeds of time clocks of physical systems, from galaxies to particles). All obeys the laws of symmetry and Relativity of the Universe (call it Einstein’s relativity or Absolute relativity, its expansion into several 5D planes).
BUT THOSE are themes expanded further in the posts on quark matter. So we stop here.
III. FREE ENERGY, ENTHALPY, TEMPERATURE AND ENTROPY EQUATIONS
Worldcycles of energy integrated in all its entropic future branching.
Now, returning to classic thermodynamics, we shall change our perspective from ∆±i to the S, T, ST elements of the ∆º scale, which as usual being the human scale is the one we are more interested in.
First we must reconsider how humans translate the different arrows of futures into a quantitative parameter. And for that aim we can consider the key equation that relates the 4 parameters of temperature, volume, entropy and energy.
In GST, entropy defines all the possible paths of future of a system, which develop in sequential processes, and increase in Tiƒ, solid states, whose smaller volume implies a faster speed of time cycles (∆ metric: Spe x Tiƒ = K). On the other hand energy integrates all the momenta from past to future, closing the entire world cycle of the being.
And so, since solid states have more modes of vibration, they store more energy, in a faster manner (the same can be found in the ∆-1 translation of energy, E = hν, which increases with the frequency of vibration).
So ultimately energy integrates both the time-events, or frequencies and temperatures, and the parallel splits of time into different futures, all of which happen to be accounted for in the energy volume. And the question we shall find once and again in different systems is whether they are ‘parallel universes’ (not really; that is just physicists’ imagination), or they happen in sequential time and become ‘stored’ as past tails (sometimes), or they branch out as the branches of a tree, simultaneously, and give birth to different ‘resonances’ of the same being. In brief, energy integrates all; but ‘all’ can be a split of populations in space, or an acceleration of frequencies and temperatures in time, and humans sometimes cannot distinguish both. I.e., are the three colours of gluons different gluons, or the same gluon evolving in time?
This said, in a general manner we can now understand one of the fundamental equations of thermodynamics, the so-called Maxwell relationship, which basically expresses the law of conservation of energy we have already analysed in terms of pressure and volume, now in terms of temperature and entropy:

T = (∂E/∂S)v

where E is the energy, S is the entropy, and the partial derivative is taken at constant volume. So we can also write T ∂S = ∂E, where the increases of entropy and energy are closely related, in as much as a larger entropy (more possible paths of future) implies a larger energy, since those paths of future must ALL take place; which can only be the case if there is either a consecutive causal time sequence of them (in a system faster in time than one which only stores a determined path without entropy), or a branching of futures.
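The relation T ∂S = ∂E (i.e. 1/T = ∂S/∂E at constant volume) can be verified numerically on a toy model; the monatomic ideal-gas entropy used here is a standard textbook choice, and the values of N and E are arbitrary picks of mine:

```python
import math

# Toy check of 1/T = dS/dE at constant volume, using the entropy of a
# monatomic ideal gas, S = (3N/2)*kB*ln(E) + const.
kB = 1.380649e-23
N = 1e22

def S(E):
    return 1.5 * N * kB * math.log(E)

E = 100.0                                   # internal energy, J
dE = 1e-6
dSdE = (S(E + dE) - S(E - dE)) / (2 * dE)   # numerical dS/dE = 1/T
T = 1.0 / dSdE

# for this gas E = (3/2)*N*kB*T, so T must equal 2E/(3*N*kB)
assert abs(T - 2 * E / (3 * N * kB)) / T < 1e-6
```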
This is a profound result with interesting consequences in all sciences, as in all of them there will be branching, giving birth to a ternary ‘being’ in space, or partition of a whole into different futures, as each chaotic group takes a different path.
In its most profound meaning, it means that the Universe, for any ‘partition of species’, will try all possible paths of future, regardless of the quantity of populations≈probabilities (a space-time symmetry, similar to the one just described) with which each one happens. In history it will mean there are as many fractal planets as possible histories: in some, humans survive; in most, robots and strangelets take over… In biology, that all mutations do happen in a chaotic way by ‘chance’, but then only certain paths survive.
So yes, once we understand with certain conceptual finesse the laws of physics, they give us interpretations for far more complex systems, in as much as their simplicity leaves the first principles of the homological Universe crystal clear (except for physicists, it seems 🙂
Schwarz’s identity, Maxwell relations, and the reversible symmetry of space anytime forms and functions.
Next, we can consider the relationships between mathematical physics and entropy, since the previous equation and the whole set of Maxwell relationships between thermodynamic parameters respond to a larger homological mathematical law, Schwarz’s theorem, which establishes, for an enormous range of functions, that if we differentiate in space and then in time, due to the symmetry of space forms and time functions, the result is equivalent to differentiating in time first and then in space. So we shall find this hidden jewel of ‘experimental mathematics’ in many different equations:
In the graph, Schwarz’s theorem expresses one of the fundamental laws of the Universe, the symmetry of processes which evolve together first in space and then in time or vice versa, as ∆nalysis shows a derivative to be an evolution/densification of a system of parts into a ‘whole’, whose ‘present parameter’, ∆s/∆t, is the derivative in time, and its inverse, ∆t/∆s, the derivative in space.
This, understood properly in terms of informative time, entropic space and their bidimensional combinations (holographic principle), implies that for systems of minimal order there is no distinction between spatial herds of populations (a mere slice of present in a flow of time) and equivalent stochastic processes in time with ‘no causal memory’ (entropic, ‘Markovian’ processes, as those involved in heat and thermodynamics, or in Brownian systems, where past events do not influence present ones), and time truly becomes a space dimension or vice versa.
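Schwarz’s theorem (equality of mixed partial derivatives) can be illustrated with any smooth function; the f(x, t) below is an arbitrary example of mine, with both first derivatives worked out by hand and cross-differentiated numerically:

```python
import math

# Schwarz's theorem: for a smooth f(x, t), differentiating in x then t
# gives the same result as differentiating in t then x.
# Example (mine): f(x, t) = sin(x*t) + x^2 * e^t

def dfdx(x, t):
    # first derivative in 'space', derived by hand
    return t * math.cos(x * t) + 2 * x * math.exp(t)

def dfdt(x, t):
    # first derivative in 'time', derived by hand
    return x * math.cos(x * t) + x * x * math.exp(t)

x, t, h = 0.7, 1.3, 1e-6
m1 = (dfdx(x, t + h) - dfdx(x, t - h)) / (2 * h)  # d/dt of df/dx
m2 = (dfdt(x + h, t) - dfdt(x - h, t)) / (2 * h)  # d/dx of df/dt
assert abs(m1 - m2) < 1e-6  # both orders agree
```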
Its application, once we understand its S≈T meaning, will guide the analysis of many stientific concepts and parameters not yet fully understood. In the case of thermodynamics, it is the origin of the Maxwell relations, which encode (as the Maxwell equations of electromagnetism encode a ternary system, with a magnetic membrane, a Tiƒ charge and its dual ST relationships) the basic structure of an ∆º thermodynamic system. In the next graph, we have arranged them roughly according to the fundamental law of GST:
Energy (ST) = Spe x Tiƒ… so the reader can easily assess both the cyclical nature of space-time processes and the exact matching of thermodynamic equations with the Generator of all ternary systems of reality.
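One of those Maxwell relations, (∂S/∂V)T = (∂P/∂T)V, can be checked on the ideal gas; the entropy and equation of state used are standard, and the particle number and state point are arbitrary choices of mine:

```python
import math

# One Maxwell relation, (dS/dV)_T = (dP/dT)_V, checked on the ideal gas.
# S is the Sackur-Tetrode entropy up to an additive constant; P = N*kB*T/V.
kB = 1.380649e-23
N = 1e22

def S(T, V):
    return N * kB * (math.log(V) + 1.5 * math.log(T))

def P(T, V):
    return N * kB * T / V

T0, V0 = 300.0, 1e-3
hV, hT = 1e-9, 1e-4
lhs = (S(T0, V0 + hV) - S(T0, V0 - hV)) / (2 * hV)  # (dS/dV)_T
rhs = (P(T0 + hT, V0) - P(T0 - hT, V0)) / (2 * hT)  # (dP/dT)_V
assert abs(lhs - rhs) / abs(lhs) < 1e-6
```

Both sides reduce analytically to N·kB/V, which is the Schwarz symmetry at work on the free energy.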
Reversible vs. irreversible processes: how to turn back the arrow of entropy into an orderly form.
Now, as always, we are here more interested in the philosophical, conceptual, organic, causal nature of time and space and its symmetries. So we are not going to copycat further the mathematical physics of entropy, dissecting all those equations. Of more interest to us is a HUGE QUESTION FOR PRAXIS in all sciences, ESPECIALLY TODAY as we enter in History the AGE OF ENTROPY and death of our civilisation: can the process be reverted?
So we shall consider another huge field of thermodynamics, which, as we said, is the queen of physics not because its second law is universal (it is not, and this was somehow recognised when the third law, zero entropy for a crystal mind, was accepted), but because the ∆º detail is maximal and the systems simple enough to deduce patterns in their most essential components, akin to mathematics and universal grammar in that sense.
So what we learn here will apply to all other sciences, whose errors should be corrected to match thermodynamic knowledge, especially in quantum physics, for which Gibbs canonical ensembles and statistical mechanics work fine. It is indeed mimetic (but quantum physicists love to ‘feel different’, so they stress only their errors as ‘novelties’, which they are not).
The same can be said of mechanics, where perception however is close enough to have little error. So in essence we can talk in terms of Lagrangians and Hamiltonians in Mechanics to understand reversible processes, akin to those concepts in thermodynamics (once we get rid of the bullshit of Copenhagen interpretations and understand the ‘slightly’ different views from the human ∆º of the three scales: given the asymmetry of the 5D part and whole arrows, a balanced view at the same ST level, a view of lesser information of the mechanical, whole ∆+1 level, and a massive amount of information coming from the faster, more abundant fractal scales of the parts in quantum physics).
So what is the key to a reversible process? There are in fact 2 keys, which can be expressed in a simple sentence:
“For a reversible process in time to happen, the level of control of time events and spatial individuals must be at ∆-1: infinitesimal parts and minimal-frequency micro-management.”
As a corollary, all this of course implies that the macro-being, the elephant in the room, shall NOT control BUT MERELY dissuade the micro-parts to take their decisions, since the infinite infinitesimal micro-events and time frequencies can only be managed internally. In praxis it means the system needs a Tiƒ internal knot of order, or mind, invaginated by nervous/informative fractal networks that touch all cells to create simultaneous behaviour; which only happens in crystal solids, through van der Waals and magnetic field control, or in biologic systems with nervous control of simultaneous cell motions; and, we imagine, in the less observed, invisible world of galaxies, by gravitational ‘DNA-like’ informative black holes micro-managing the evolution and feeding of their dark matter cells on the star mitochondria.
And this is expressed in mechanics by the Lagrangian function whose minimalist derivatives on time tend to zero (least time actions), and hence in the whole conservative energy world cycle integral of the being, the Hamiltonian becomes also a conserved zero-sum.
In thermodynamics, the same concept means a reversible process is a process whose direction can be “reversed” by inducing infinitesimal changes to some property of the system via its surroundings, while not increasing entropy.
Throughout the entire reversible process, the system is in thermodynamic equilibrium with its surroundings. Since it would take an infinite amount of time for the reversible process to finish, perfectly reversible processes are impossible. However, if the system undergoing the changes responds much faster than the applied change, the deviation from reversibility may be negligible. In a reversible cycle, a reversible process which is cyclic, the system and its surroundings will be returned to their original states if the forward cycle is followed by the reverse cycle.
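The point that reversibility is the limit of infinitesimal changes can be made quantitative: the more steps an isothermal compression is split into, the closer the work comes to the reversible minimum nRT·ln(V1/V2). A minimal sketch (the unit choice nRT = 1 is my simplification):

```python
import math

def work_stepwise(n_steps, V1=2.0, V2=1.0, nRT=1.0):
    """Isothermal compression of an ideal gas in n discrete steps.
    Each step pushes with the pressure the gas will have at the step's
    final volume, so the work done ON the gas is P_ext * (a - b)."""
    vols = [V1 + (V2 - V1) * k / n_steps for k in range(n_steps + 1)]
    W = 0.0
    for a, b in zip(vols, vols[1:]):   # a > b: compression
        P_ext = nRT / b                # external pressure for this step
        W += P_ext * (a - b)
    return W

W_rev = math.log(2.0)        # reversible limit: nRT * ln(V1/V2)
W_sudden = work_stepwise(1)  # one brutal push
W_gentle = work_stepwise(1000)
# the finer the steps, the less extra work is dissipated as entropy
assert W_sudden > W_gentle > W_rev
assert abs(W_gentle - W_rev) < 1e-3
```

The ‘huge, dull’ one-push compression wastes about 44% more work than the near-quasi-static one; that excess is exactly the entropy generated.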
So it is understood why humans, like elephants in a very subtle scalar Universe, cannot make most processes reversible with their huge, dull methods of control. Simply speaking, the ‘will of individual atoms’ refuses to yield to the bullies, and so heat and entropy ensue; as happens in brutish social dictatorships, as opposed to subtle placebo democracies, where the capitalist masters of the Financial-Media system merely ‘suggest’ to the mass, from their advantageous point of view, what to do; and the citizen-‘believer’ will therefore create order without understanding that the networks of money and mass-media have suggested it to him.
So the Universe reaches order and reversibility only when it builds a system with efficient, superorganic structures.
All other systems will have entropy as synchronicity is lacking.
IN THAT REGARD THE ENTIRE CONCEPT OF AN IRREVERSIBLE ENTROPY ARROW FOR THE WHOLE UNIVERSE, DOES NOT EVEN WORK ON MATTER SYSTEMS, AS IT IS MERELY YET ANOTHER EGO-TRIP OF THE HOMUNCULUS ON THE LOOSE, thinking that if he cannot ‘get it’ by himself, the universe is guilty and wrong- those damn atoms that misbehave (:
(to be cont’ed)
So that is a fast overview of the ∆º matter scale from the perspective of its ∆±1 relationships.
The other great field for a bulky description of any system being the three topological functions of space-time (S-st-T) symmetries (its world cycles in sequential 3 ages time and topological trinity organs in space), we shall now deal with an introduction to the so-called ‘state physics’ which studies those three ‘crystal clear’ formal arrows, S-gas, st-liquids and Tiƒ ‘clear crystal solid minds’ (:
IV: ST:ATE PHYSICS
∆-1: PLASMA>∆: S-GAS>ST-LIQUID>Tiƒ-CRYSTAL SOLID>>∆+1 ‘MASS’ VS. << BIG BANG.
State Physics… studies the 3 ‘Ages of matter systems’, between their i-1 plasmatic birth and their E=Mc² death.
We will study in this post the laws of ‘STe, past entropic gas’, ST, balanced, present liquids, and Tƒ, future informative solids, and their ‘empirical cases’. The i-1 plasma age of conception belongs to the quantum i-2 planes, as does the i<∑i-1 big-bang age of death.
The main article on the three states of time-matter will be the isomorphism of the generator equation, which better describes it, so we shall NOT consider here more than the basic symmetries of the three ages of time in matter.
The 3 ages of matter
In the graph, the STe gas, Str liquid and STo solid ages of matter and the dynamic:
i-Plasma <=> STe-Gas <=> STr-Liquid <=> STo-Solid
possible transformations between ages, which follow the multiplicity of time arrows happening in systems of minimal network organisation, which anchors a system made of multiple states into a steady state and makes impossible fast transformations between time ages. Matter, however, allows all possible changes between ages, with preference for the natural flow of the time-ages cycle: gas > liquid > solid. And this preference is observed in the minimal energy spent in those 2 fundamental time transformations of state.
State Physics can be further subdivided into a combined ‘scalar-temporal’ analysis:
-i: Plasma physics
STe-nergy: Gaseous states
STr-eproductive, balanced age: Liquid states
STo-informative age: Solid states
+i: Boson Physics.
The properties of state physics thus derive from the general properties of the 3±i dimensional ages of time.
3±i TIME STATES: ∆-GAS ∆—±1 FIELDS< ∆ LIQUID ∆±1 WAVES > ∆±1 PARTICLES ∆-CRYSTALS
In the graph, the worldcycle of matter. We can see how the ∆±1 birth and death phase (plasma, made of dissociated ions) is the same and closes the cycle.
The same cycle on the lower ∆-1 quantum scale, in which liquid waves, gaseous fields and solid particles play the same roles on quantum systems.
As those states happen in the ‘human scale’, ∆o±1, thermodynamics is the referential stience, albeit it must be fully corrected to avoid the traps of its ‘wrong laws’, which define a Universe with only a single arrow of entropy, as if it were merely a ‘gaseous Universe’, due to the complete misunderstanding of time arrows in classic physics.
The importance for humanity of thermodynamics, today somewhat relegated by the quanta world of electronic machines as humans and life become expendable to the new ‘robotic species’, is difficult to overstate.
We are NOT quantum beings as machines are, neither Gravitational, cosmological beings, as the galaxy is. We are thermodynamic beings, and this is the key science for the human system and the planet we live in.
Since in geological structures the interplay of gas, liquid and solid cycles creates the conditions for Gaia, to become a super organism and its ∆-1 life scale to flourish.
Thermodynamics (original texts from britannica cd) and its generator equations
ENERGY. As we have said, Energy is a present, Max. e x i function, where entropy, understood as motion, is dominant, but there is form, hence there is in-form-ation in the motion. So its equations can be resumed in terms of lineal, expansive motions with a minimal form.
WORK, therefore, which is essentially the measure of ‘active energy’, that is, ‘energy in time’, does not exist if there is not an open displacement, a lineal motion.
TEMPERATURE, on the other hand, is precisely a vibrational mode of energy, which closes its vibration in almost all the systems in which we measure it (vibrational solids, gases studied in volumes closed by pressure); so it plays the role of a ‘time clock’ in the ∆-human scale.
So finally we arrive at HEAT, a very anthropomorphic concept, as all those related to the present understanding of thermodynamics, but less important in ∆st. Heat is indeed just the entropic, scattering expansion of energy as entropy: ST (ENERGY) > S (HEAT); and so a form of entropy, which therefore allows us to write in simple terms the Generator ‘elements’ of thermodynamics:
Γ. Spe (Entropic Heat) < ST (Energy-Work) > Tiƒ (Temperature clocks)
The laws of thermodynamics.
What subtle correction must we then introduce into the laws of thermodynamics?
Obviously, as thermodynamics considers only a single plane of space-time and reduces the 3 arrows of time and its symmetric generator in space to a single entropic arrow, here is where there are huge misunderstandings; essentially the absurd idea that ‘the Universe is dying’ (Helmholtz).
Of course it would be, if it had only the negative time arrow of entropy and were a single plane; but what entropy ‘kills’, information resurrects, and the losses of ‘entropy’ in a given scale due to thermodynamic equilibrium are reversed by the order of Tiƒ systems, the ‘Demons of Maxwell’, which act creating organic order. Plus, the inverse arrows of the 5th dimension allow the entropy losses, when we try to ‘move’ the energy of an ∆-1 scale to an ∆-scale, to be compensated by the perfect synchronicity, without loss of energy, when we act from a whole onto a smaller part:
In the graph, thermodynamics changes completely in its interpretation when we consider multiple scales of time, as the loss of entropy in Simplex Physics (ab. Sx, or Æ) from ∆-1 to ∆ is offset by the inverse growth of in/form/ation from ∆ to ∆-1, since those motions do not have entropy (synchronicity of motion).
The most important laws of thermodynamics are thus transformed:
-0: The zeroth law of thermodynamics. When two systems are each in thermal equilibrium with a third system, the first two systems are in thermal equilibrium with each other. This property makes it meaningful to use thermometers as the “third system” and to define a temperature scale. And it is indeed true, but it does NOT mean that a system is dead because it is in thermal equilibrium. It is all more subtle. A system in thermal equilibrium has a higher degree of order. It becomes therefore able to organise itself better with information, as in life systems, which require homeostasis.
So we shall instead generalise it to all systems with a different name (the willy-nilly game of ‘numbers’ for the awe-inspiring-digital groupie does not apply to GST, as other scales of reality do have different languages and we are unifying them):
The Law of homeostasis, and state that:
“A system of fractal points joined by a present ∆-wave of ‘energy≈heat’ tends to distribute ‘democratically’ the energy and form of the wave to all its components, to ensure the internal balance of them’. And put as customary in the Homologic Method of GST, a minimum of 3 examples from physical, biological and social sciences:
In other terms, systems tend to establish a just, distributive balance between the points of the network, which will receive a minimal amount of energy (call it a blood network).
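The ‘democratic distribution’ reading of the zeroth law can be sketched as two coupled bodies relaxing to a common temperature; the linear Newton-style coupling below is my modeling assumption:

```python
def equilibrate(T_a, T_b, C_a, C_b, k=0.1, steps=10000):
    """Two bodies with heat capacities C_a, C_b exchanging heat at a
    rate proportional to their temperature difference (a Newton-style
    coupling; the linear law is a modeling assumption)."""
    for _ in range(steps):
        q = k * (T_a - T_b)   # heat flowing from a to b this step
        T_a -= q / C_a
        T_b += q / C_b
    return T_a, T_b

Ta, Tb = equilibrate(400.0, 300.0, C_a=2.0, C_b=1.0)
# both reach the capacity-weighted mean: energy is shared, none lost
assert abs(Ta - Tb) < 1e-6
assert abs(Ta - (2.0 * 400.0 + 1.0 * 300.0) / 3.0) < 1e-6
```

Note that the total energy C_a·T_a + C_b·T_b is conserved at every step: equilibrium here is redistribution, not loss.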
-1st: The first law of thermodynamics, or the law of conservation of energy.
The change in a system’s internal energy is equal to the difference between the heat added to the system from its surroundings and the work done by the system on its surroundings. It is a trivial law as it is explained today, without understanding what energy is; and the most important fact of it: that there is no work or energy expenditure in closed time-cycles. Essentially it means that Time does NOT work, meaning time cycles are closed cycles which do NOT spend work-energy; and this amazingly important fact, hardly understood philosophically by the wannabe gurus of the absolute, means ultimately that time is eternal, and so is the Universe, which has no manifest destiny, no lineal goal. Lineal time indeed would exhaust itself, as it would spend the energy of the Universe.
Again we must expand this law to include information and, at the same time, diminish its range by applying it only to exchanges within a single plane of existence; which, not being the general rule (as always there are leaks of energy and information up and down those scales), restricts the accuracy of our measures.
‘Energy never dies; it constantly transforms back and forth into information, through the repetition of body-waves across ∆±1 planes of existence: Spe < ExI > Tiƒ’
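The zero-sum character of closed cycles under the first law can be sketched in a few lines (the reservoir numbers are arbitrary illustration):

```python
def dU(Q_in, W_by_system):
    """First law: the change in internal energy is heat added minus
    work done by the system on its surroundings."""
    return Q_in - W_by_system

# Over a closed cycle the system returns to its initial state, so the
# net change in internal energy is zero: the heat absorbed, minus the
# heat rejected, reappears entirely as net work (numbers arbitrary).
Q_hot, Q_cold = 500.0, 300.0   # heat in from hot source, out to cold sink
W_net = Q_hot - Q_cold         # net work over the cycle
assert dU(Q_hot - Q_cold, W_net) == 0.0
```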
-2nd: The second law of thermodynamics. Heat does not flow spontaneously from a colder region to a hotter region, or, equivalently, heat at a given temperature cannot be converted entirely into work.
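The classical statement above can be expressed with Clausius’ entropy bookkeeping: heat Q leaving a reservoir at T_hot and entering one at T_cold changes total entropy by -Q/T_hot + Q/T_cold. A minimal sketch (numbers arbitrary):

```python
def total_entropy_change(Q, T_hot, T_cold):
    """Entropy bookkeeping when heat Q flows from T_hot to T_cold:
    the hot body loses Q/T_hot, the cold body gains Q/T_cold."""
    return -Q / T_hot + Q / T_cold

# hot -> cold: total entropy rises, so the flow is spontaneous
assert total_entropy_change(100.0, T_hot=400.0, T_cold=300.0) > 0
# the reverse flow would lower total entropy: forbidden by the 2nd law
assert total_entropy_change(-100.0, T_hot=400.0, T_cold=300.0) < 0
```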
Yet while this is true, it only applies to the transmission of energy from lower, ∆-1 ensembles into an ∆-whole, not vice versa: ∆-wholes synchronise the simultaneous motions of all their ∆-1 parts, achieving with their ‘lesser information’ and ‘simpler, larger motions’ a zero loss of entropy when they move all their parts according to the direction of future set up by the whole.
So the error of physicists is, as usual, to generalise a local phenomenon of the ∆±1 thermodynamic scale, and forget the balances obtained from non-entropic motions handled by the whole/mind, which restores the balance of the system.
By ignoring the ∆@ elements of reality, thermodynamics loses any value as a philosophy of general global laws.
Such laws apply only to the entropy of a closed, single ∆-scale system, which tends toward an equilibrium state in which entropy (the scattering and equality of heat among all its ∆-1 elements) is at a maximum and no energy is available to do useful work at the ∆-level.
This asymmetry between forward and backward processes gives rise to what is known as the “arrow of time” in classic thermodynamics, which as we said is a simplification of the 3 arrows of time, due to the error of a single space-time continuum, converted into a huge global error by extending it to every system.
Indeed, ‘a single entropic arrow of time’ for all phenomena, deduced from the study of expansive heat=entropy in steam machines, is a very local, reductionist, simplex analysis of time arrows. And to expand it to include all the planes of the Universe, all the beings, by reducing the motions of reality to ‘heat, entropic motions’, is plainly bogus. The law merely becomes a single arrow of dying entropic time when we eliminate all the other scales and the st, T elements of the system. And we will return to that.
Even when considering the single arrow between atoms and the heat-human scale, the comprehension is completely blurred by the language: what entropy measures is not motions backwards and forwards in ‘absolute time’, but motions in the 5th dimension of social, scalar evolution, and backwards in such dimension towards the dissolution of wholes and their order.
Such as when we want to use and exhaust the motion of ∆-1 systems which are NOT organised by complex networks, but merely, as humans do, with ‘heating machines’, entropic ‘fire’ and other brutish systems: obviously the molecules and atoms of the lower scale have as much interest in ordering themselves perfectly and giving up their motion to those brutish wholes as a mass of humans has in being herded by a military thug.
Entropy appears in processes of minimal organisation, such as heating, whose aim is to extract the motion of individual ∆-1 elements.
However, when a system is fully organised according to 5D organic laws, entropy greatly diminishes, as in crystals, which have basically zero entropy, or organisms, which increase the order of beings and diminish their entropy.
To deny the existence of fractal points that gauge information and order the Universe, and of gravitational forces that balance expansive, electromagnetic entropy, is just a dogma of astounding arrogance on the part of physicists, who just want to construct reality with their ‘restricted matter laws’ and will not accept there is something more out there.
So the law should be rephrased regarding the conservation of motion and information:
‘In the whole Universe entropy does not exist, as the Universe is made of motion and curvature, which balance each other through all its ∆-scales. So when a system becomes disordered and expands its entropy in an ∆-1 scale, the order is restored by the simultaneous, informative order of larger ∆+1 wholes, which contract and synchronise the ∆-systems.’
So the ∆-n quantum scales of physical, electromagnetic entropy are balanced by the ∆+n scales of gravitational, only-attractive information. And we have to assess the total order of a system studying at least its ∆±n scales together.
-The third law of thermodynamics. The entropy of a perfect crystal of an element in its most stable form tends to zero as the temperature approaches absolute zero. This allows an absolute scale for entropy to be established that, from a statistical point of view, determines the degree of randomness or disorder in a system.
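The vanishing of entropy at absolute zero follows, for a Debye solid, from the low-temperature heat capacity C = aT³; integrating C/T′ from 0 to T gives S = aT³/3. A minimal sketch (the coefficient a is an arbitrary choice of mine):

```python
def debye_entropy(T, a=1e-4):
    """Low-temperature Debye solid: C(T) = a*T^3, so integrating C/T'
    from 0 to T gives S(T) = a*T^3/3 (a is an arbitrary coefficient)."""
    return a * T**3 / 3.0

# entropy decreases monotonically toward zero with temperature...
assert debye_entropy(10.0) > debye_entropy(1.0) > debye_entropy(0.1)
# ...and vanishes exactly at absolute zero, as the third law states
assert debye_entropy(0.0) == 0.0
```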
So we come to the third law, which establishes absolute zero, where the thermodynamic effects of entropic disorder by heat no longer apply. We can say that ‘organic physical matter’ and its most important effects of order creation happen when thermodynamic entropy is minimal: phenomena such as superconductivity, superfluidity, bosons, etc. We talk then of zero as the relative ‘temperature’ of a mind, which creates a still map of reality that it will then project on that reality, diminishing its entropy. In other words:
‘The Universe is filled with Maxwell’s demons’
Zero crystal entropy tells us also several things: a crystal, or perceptive physical Tiƒ mind that maps out an ‘intelligible’ mirror image of the Universe inside itself, is cold, tends to total order and minimal motion; so it is the 3rd age of physical matter. But the crystal does move the Universe, as it becomes the focus of smaller ∆-2 pixels of information below it. So again, when we consider several scales, motion never stops.
Later on we shall therefore redefine those laws in terms of ∆@st, as the ∆@ elements do not exist in physics; they are the ∆ dual arrows of order of the previous pyramid and the ‘Maxwell demon @minds’ that tell the system where to be and go.
How then does including those ∆-scales beyond modify those laws? Obviously: by creating order. I.e., gravitation is a force of order, as it only attracts and ‘contracts’, in-form-ing a system; so it balances the tendency to entropy of ¥-rays.
So we must talk of a Universe which rescues the world from entropic death through ∆• elements.
So, because thermodynamics developed rapidly during the 19th century in response to the need to optimize the performance of steam engines, it is wrong to expand by dogma to the whole universe the laws of thermodynamics as previously written, which makes them applicable only to physical, molecular matter, the ∆±1 scales, and biological systems; without understanding how order is restored in biological systems by their minds, and in ∆±1 by crystals and future arrows of social evolution. The system either has motion or form, often switching between both states; a fact only recognised by the pioneers of fractal physico-chemistry such as Mr. Mehaute, which clearly proves that when a thermodynamic system stops moving externally, it then subtly starts to create further internal order.
So the ‘particular’ laws of thermodynamics give a complete description of all changes in the energy state of any system and its ability to perform useful work on its surroundings, but only when we make a ‘ceteris paribus’ analysis of that plane and its internal phenomena, disregarding the interchanges of energy and information with the ∆±1 scales.
In that sense thermodynamics also has a scalar structure, and so epistemologically we talk of 2 branches:
∆o: Thermodynamics or Classical thermodynamics which does not involve the consideration of individual atoms or molecules.
∆-1>∆: Such concerns are the focus of the branch of thermodynamics known as statistical thermodynamics, or statistical mechanics, which expresses macroscopic thermodynamic properties in terms of the behaviour of individual particles and their interactions.
It has its roots in the latter part of the 19th century, when atomic and molecular theories of matter began to be generally accepted. And so a clear way to study the ‘errors’ of a single space-time analysis of entropy, against one which includes at least a couple of ∆-scales, is to see the differences between both scalar approaches. Since the key to understanding thermodynamics properly is to analyse how disorder in one ∆-scale (heat) in fact merely means that the ∆-1 scale wishes to create its own order and motions.
So we need to include besides the study of open and closed states of a thermodynamic system in a single ∆-scale, the ‘whole picture’ by adding the study of ‘closed and opened’ systems in 3 ∆±1 scales, asking questions such as:
‘Does this event transfer energy or information to the ∆±1 scales of the system, and if so, which arrow is dominant: ∆+1>∆ order, or ∆-1<∆ entropy?’
The classic example being the beta decay equations of disorder of a particle, which seemed to miss spin (cyclical order) and entropy (energetic motions); and so the order was restored by including the ∆-1 neutrino quanta, which the process of death and devolution of the nucleon transfers, as all entropic phenomena do, to its lower, faster ∆-1 scale.
It is this type of balancing of the ‘books’, which physicists stubbornly refuse to do for entropic heat by adding to the mix gravitational in-form-ative forces and ‘@minds’, Maxwell demons, that explains the dying-universe errors of its philosophies of science.
Or in other words, what we call ‘heat death’ balance or thermodynamic equilibrium merely means that the ∆-1 scale ‘refuses’ to become ‘organised’ and synchronised by the ∆-scale, and prefers to remain in a herd, wave, ‘relatively free’ state. It is only entropy from the human point of view.
The application of thermodynamic principles begins by defining a system that is in some sense distinct from its surroundings. For example, the system could be a sample of gas inside a cylinder with a movable piston, an entire steam engine, a marathon runner, the planet Earth, a neutron star, a black hole, or even the entire universe. In general, systems are free to exchange heat, work, and other forms of energy with their surroundings.
However, this approach of classic thermodynamics is limited in its scope, as we must first define the system more precisely in its 3 x 3 + 0 dimensions, to know if it is a complete system (with 3 co-existing scales, 3 topologies, 3 time ages and 1 mind-point), in which case there will not be entropy but rather information-creation by the mind on a closed system, growing its information at the expense of the external world – the case of life systems (x, x, x in the next graph):
When the system loses the mind-point, there is no internal element of order, and so the system is ‘ready’ to give up part of its information to an external agent. Yet if the enclosing membrane remains (a fact not easily maintained through time, as it is managed and kept in isolation by the invagination and connection with its @mind-point), the system will remain in balance with no exchange of e or i (x,x,√) at the ∆+1 ‘mechanical’ level, leaking though disordered, entropic heat at the ∆-1 level. A similar case occurs when the system is only thermally isolated at the ∆-1 level: it can then be used to do work, as the mechanical, motion ∆-level is not isolated.
So what we know as classic thermodynamics applies to systems in which either or both of the ∆-1 quantum and ∆+1 mechanical scales are NOT isolated and a transfer of energy and motion in inverse fashion happens between them (open and closed systems); and to those systems the laws of thermodynamics apply when we do a ceteris paribus analysis from the point of view of a single scale – disregarding the order (∆) or entropy (∇) effects on those 2 other scales.
Only to such systems, which we shall call ‘thermodynamic engines’, do the classic laws apply. Those laws then refer basically to the conservation of the present energy of a system, where the present energy is given by the temperature, and its composite elements are volume, or expansive entropy, and pressure, or increasing order, giving us the 1st key equation of thermodynamics:
P (t) x V (s) = nK T (exi content)
For example, for a gas in a cylinder with a movable piston, the state of the system is identified by the temperature, pressure, and volume of the gas; where the volume is proportional to the spatial expansion OR ENTROPY PROPER, using the concept of ENTROPY as expansive motion defined in GST; the pressure is its inverse ‘pure form’ function, or arrow of order; and the nKT element is the present, energy x information parameter of the system, ‘its existential force’, calculated by multiplying its quanta of entropy-space, nK, by its speed of time, its ‘frequency of time vibration’, T.
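The state relation just given can be sanity-checked numerically in its conventional reading, PV = nkT. A minimal Python sketch (function name and figures are ours, for illustration only):

```python
# Ideal-gas law PV = nkT, with n the number of molecules
# and k the Boltzmann constant.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def pressure(n_molecules, volume_m3, temperature_K):
    """Pressure of an ideal gas from PV = nkT."""
    return n_molecules * k_B * temperature_K / volume_m3

# One mole (~6.022e23 molecules) at 273.15 K in 22.414 litres
# should give roughly one atmosphere (~101325 Pa).
p = pressure(6.02214076e23, 22.414e-3, 273.15)
```

Note that nK for one mole is just the gas constant R ≈ 8.314 J/(mol·K), so the same check can be phrased as PV = nRT.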
In such systems the fundamental law of ‘immortality of time’ also applies, which states that:
‘a closed path of time has done no work, hence spent no potential energy, when it returns to its initial space-time condition’
So P, V, and kT are characteristic parameters that have definite values at each state and are independent of the way in which the system arrived at that state. In other words, any change in value of a property depends only on the initial and final states of the system, not on the path followed by the system from one state to another. Such properties are called state functions.
In contrast, when we talk of an open system (√,√,√), the final state is not one of balance, since there is a gain or loss of energy and information, which the external system absorbs or gives. So the final work done as the piston moves and the gas expands, and the heat the gas absorbs from its surroundings, depend on the detailed way in which the expansion occurs.
The behaviour of a complex thermodynamic system, such as Earth’s atmosphere, can be understood by first applying the principles of states and properties to its component parts—in this case, water, water vapour, and the various gases making up the atmosphere. By isolating samples of material whose states and properties can be controlled and manipulated, properties and their interrelations can be studied as the system changes from state to state. It is, though, not accurate to use such ‘ceteris paribus’ analyses of the system to uphold the wrongly limited classic laws of entropy. I.e., when we consider the Earth-sun system we wrongly state that the order of the planet grows because the star becomes disordered, giving up ¥-energy. But this does NOT consider the ∆+3 scale of gravitational, increasing order, which in fact will finally dominate, collapsing the star. It is then that we could talk of the whole solar system in ‘e x i equilibrium’, which classic thermodynamics called a…
Classic thermodynamics is then concerned with the different states of disorder, when there is a dynamic transfer of Pt, Vs and nkT (ST) between systems, which changes the system’s parameters of e x i; or with the different paths a system can use to get from one state to another, changing only one of those parameters; analyses of value for all other studies of single space-time systems at all other scales.
In that regard, a particularly important concept is thermodynamic equilibrium, in which there is no tendency for the state of a system to change spontaneously. For example, the gas in a cylinder with a movable piston will be at equilibrium if the temperature and pressure inside are uniform and if the restraining force on the piston is just sufficient to keep it from moving. The system can then be made to change to a new state only by an externally imposed change in one of the state functions, such as the temperature by adding heat or the volume by moving the piston. A sequence of one or more such steps connecting different states of the system is called a process. In general, a system is not in equilibrium as it adjusts to an abrupt change in its environment. For example, when a balloon bursts, the compressed gas inside is suddenly far from equilibrium, and it rapidly expands until it reaches a new equilibrium state.
However, the same final state could be achieved by placing the same compressed gas in a cylinder with a movable piston and applying a sequence of many small increments in volume (and temperature), with the system being given time to come to equilibrium after each small increment.
Such a process is said to be reversible because the system is at (or near) equilibrium at each step along its path, and the direction of change could be reversed at any point.
This example illustrates how two different paths can connect the same initial and final states.
The first is irreversible (the balloon bursts), and the second is reversible. The concept of reversible processes is something like motion without friction in mechanics.
It represents an idealized limiting case that is very useful in discussing the properties of real systems. Many of the results of thermodynamics are derived from the properties of reversible processes. And the first conclusion we obtain from that duality is indeed a general law of all systems: when the change is very fast and the system cannot ‘reorganise’, step by step of its frequency motions, the simultaneous location of all its parts, the system becomes disordered. All entropic, death processes and big-bangs, ∆+1<<∆-1, are by definition fast transformations that erase the order of the system.
Yet when those changes are minimal, where the parameter of temporal order is higher (slow processes with minute changes) than the parameter of expansive space, the process is fully reversible with no entropy: ∆<∆-1>∆.
The concept of temperature is then fundamental to any discussion of thermodynamics, as it is the parameter of frequency in time, hence of the speed of the system in its time clocks, which will vary all the other parameters of the system that define its existential SxT force. In all systems that force grows when S-imultaneity in space and size grow (spatial force) or when the temporal frequency and timing of its actions-reactions grow (temperature speed), within the S≈T limits of internal balance allowed by the system (homeostasis), which for any system establishes an interval of temperature of maximal efficiency.
Thus temperature is a measure of the density of energy and information; of the frequency of motion; of the quantity of temporal form; of the existential momentum of a thermodynamic ensemble of waves, particles and fields of the ∆±1 planes of existence, with ∆º humans at its middle point. As such, temperature is the frequency of the activity of a certain environment in those planes, which increases as we increase the parameters that raise that density, such as Pt x Vs = n Ks Tt
Yet in any assembly of ∆-1 forms, the arrow of social evolution dominates the system. So when one ∆-1 element has a higher frequency of existence it will share it among SIMILAR entities by colliding with them and transferring energy to them. Without the existence of an attractor, @mind, or Maxwell’s demon in the system, the energy of the system will remain in a wave, steady-state equilibrium, provided the external membrane is closed; or else it will dissipate its entropy if the external ‘pressure membrane’ disappears.
Temperature is therefore a social form of energy that ∆-1 systems prefer to share, to equalise their parameters of energy and information with their close clone molecules.
So, when two objects are brought into thermal contact, heat will flow between them until they come into equilibrium with each other. When the flow of heat stops, they are said to be at the same temperature.
The zeroth law of thermodynamics formalizes this by asserting that if an object A is in simultaneous thermal equilibrium with two other objects B and C, then B and C will be in thermal equilibrium with each other if brought into thermal contact.
Object A can then play the role of a thermometer through some change in its physical properties with temperature, such as its volume or its electrical resistance.
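The approach to a common temperature that the zeroth law formalizes can be sketched as a capacity-weighted average. A minimal Python illustration (names and figures are ours; constant heat capacities are assumed):

```python
def equilibrium_temperature(c1, t1, c2, t2):
    """Common final temperature of two bodies in thermal contact.

    c1, c2: heat capacities (J/K); t1, t2: initial temperatures (K).
    Heat lost by the hotter body equals heat gained by the colder one,
    so the final temperature is the capacity-weighted average.
    """
    return (c1 * t1 + c2 * t2) / (c1 + c2)

# Equal heat capacities -> final temperature midway between the two.
t_eq = equilibrium_temperature(100.0, 350.0, 100.0, 300.0)  # -> 325.0
```

When the two bodies are already at the same temperature the function returns that temperature unchanged, i.e. no heat flows, which is the equilibrium condition of the text.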
With the definition of equality of temperature in hand, it is then possible to establish a temperature scale by assigning numerical values to certain easily reproducible fixed points. And to consider, in opposition to the classic thermodynamic laws, that:
‘When entropy increases in the system as a whole, an equal amount of order created by isotemperature equilibrium happens in the ∆-1 scale of the system’.
Work and energy
Energy in that sense has a precise meaning in classic single ∆-scale physics that does not correspond to the concept of GST.
The word is derived from the Greek word ergon, meaning work, but the term work itself acquired a technical meaning with the advent of Newtonian mechanics. For example, a man pushing on a car may feel that he is doing a lot of work, but no work is actually done unless the car moves. The work done is then the product of the force applied by the man multiplied by the distance through which the car moves. If there is no friction and the surface is level, then the car, once set in motion, will continue rolling indefinitely with constant speed. The rolling car has something that a stationary car does not have—it has kinetic energy of motion equal to the work required to achieve that state of motion.
The introduction of the concept of energy in this way is of great value in mechanics because, in the absence of friction, energy is never lost from the system, although it can be converted from one form to another. What this means in GST is that closed motions, which are time motions, do not spend work or energy, because the Universe never spends its immortal closed motions, which are ultimately the substance of which it is made. YOU LIVE in a Universe of time in which spaces are just partial parts, open motions that will invariably return as a zero sum, in spatial, topological form, ages, or scales.
For example, if a coasting car comes to a hill, it will roll some distance up the hill before coming to a temporary stop. At that moment its kinetic energy of motion has been converted into its potential energy of position, which is equal to the work required to lift the car through the same vertical distance. After coming to a stop, the car will then begin rolling back down the hill until it has completely recovered its kinetic energy of motion at the bottom. In the absence of friction, such systems are said to be conservative because at any given moment the total amount of energy (kinetic plus potential) remains equal to the initial work done to set the system in motion.
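The kinetic-potential exchange just described can be verified in a few lines of Python for the frictionless case (function names are illustrative):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def hill_height(speed):
    """Height at which a frictionless coasting car stops:
    kinetic energy (1/2) m v**2 converts fully to potential energy m g h."""
    return speed**2 / (2 * g)

def speed_at_bottom(height):
    """Speed recovered after rolling back down: the inverse conversion."""
    return math.sqrt(2 * g * height)

h = hill_height(10.0)    # ~5.10 m for an initial 10 m/s
v = speed_at_bottom(h)   # recovers the original 10 m/s exactly
```

The mass cancels in both directions, which is why the round trip is a zero sum: the recovered speed equals the initial one whatever the car weighs.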
As the science of physics expanded to cover an ever-wider range of phenomena, it became necessary to include additional forms of energy in order to keep the total amount of energy constant for all closed systems (or to account for changes in total energy for open systems).
For example, if work is done to accelerate charged particles, then some of the resultant energy will be stored in the form of electromagnetic fields and carried away from the system as radiation. In turn the electromagnetic energy can be picked up by a remote receiver (antenna) and converted back into an equivalent amount of work. With his theory of special relativity, Albert Einstein realized that energy (E) can also be stored as mass (m) and converted back into energy, as expressed by his famous equation E = mc², where c is the velocity of light. All of these systems are said to be conservative in the sense that energy can be freely converted from one form to another without limit. Each fundamental advance of physics into new realms has involved a similar extension to the list of the different forms of energy.
Yet because there was no concept of fractal scales, the relationship between those forms of energy and the relative present states of each scale, with its different time clocks and quanta of space, was not clearly defined.
Thermodynamics encompasses all of these forms of energy, with the further addition of heat to the list of different kinds of energy.
However, heat, a concept restricted to the subjective human desire to ‘be the focus and attention of all other scales’, is fundamentally different from the others in that the conversion of work (or other forms of energy) into heat is not completely reversible; since it is not the whole tale, but merely the duality of the human scales.
So this specific law of energy-information transfer merely MEANS that part of the motion from ∆+1 < ∑∆-1 becomes absorbed, stolen by the more informative ∆-1 entities, unless there is truly an organic network that establishes the simultaneity of all those motions (as in biological organisms):
∆ +1 ‹ Tiƒ ∆ + Spe ∆-1
That is, ∆+1 transfers part of its motion to the well-organised, synchronous ∆-particles of the lower scale, but also to its ∆-1 field, which won’t return it.
(nt.1: When we also use the classic algebraic symbols ≤ and ≥ for smaller and bigger, the GST symbols < (expansive entropy) and > (informative flow) are substituted by ‹ and ›, to avoid confusions.)
In the example of the rolling car, some of the work done to set the car in motion is inevitably lost as heat due to friction, and the car eventually comes to a stop on a level surface. Even if all the generated heat were collected and stored in some fashion, it could never be converted entirely back into mechanical energy of motion. This fundamental limitation is expressed quantitatively by the second law of thermodynamics (see below).
The role of friction in degrading the energy of mechanical systems may seem simple and obvious, but the quantitative connection between heat and work, as first discovered by Count Rumford, played a key role in understanding the operation of steam engines in the 19th century and similarly for all energy-conversion processes today.
Total internal energy
Although classical thermodynamics deals exclusively with the macroscopic properties of materials—such as temperature, pressure, and volume—thermal energy from the addition of heat can be understood at the microscopic level as an increase in the kinetic energy of motion of the molecules making up a substance.
This is then the proper way to define the system: not as one that loses energy but as one that transfers it to its ∆-1 scales.
For example, gas molecules have translational kinetic energy that is proportional to the temperature of the gas: the molecules can rotate about their centre of mass, and the constituent atoms can vibrate with respect to each other (like masses connected by springs). Additionally, chemical energy is stored in the bonds holding the molecules together, and weaker long-range interactions between the molecules involve yet more energy. The sum total of all these forms of energy constitutes the total internal energy of the substance in a given thermodynamic state. The total energy of a system includes its internal energy plus any other forms of energy, such as kinetic energy due to motion of the system as a whole (e.g., water flowing through a pipe) and gravitational potential energy due to its elevation.
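The translational part of that internal energy follows the standard kinetic-theory relation of (3/2)kT per molecule. A brief Python sketch (function names and the nitrogen mass figure are ours):

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_translational_ke(temperature_K):
    """Average translational kinetic energy per molecule, (3/2) k T."""
    return 1.5 * k_B * temperature_K

def rms_speed(temperature_K, molecule_mass_kg):
    """Root-mean-square molecular speed from (1/2) m v**2 = (3/2) k T."""
    return (3 * k_B * temperature_K / molecule_mass_kg) ** 0.5

# A nitrogen molecule (~4.65e-26 kg) at 300 K moves at about 517 m/s
# on average - the microscopic 'heat' behind the macroscopic temperature.
v = rms_speed(300.0, 4.65e-26)
```

Rotational, vibrational, chemical and long-range contributions add further terms on top of this translational piece, which is why the total internal energy depends on the substance and not just on T.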
It is then that the concept of a ‘dying entropic universe’ no longer makes sense, which is why the 2nd law should be abolished in its present format. Let us now study in more depth each of those laws with its added corrections.
The first law of thermodynamics
The laws of thermodynamics are deceptively simple to state, but they are far-reaching in their consequences. The first law asserts that the total energy of a system plus its surroundings is conserved; in other words, the total energy of the universe remains constant, as we keep extending a nested series of worlds in ∆, S and T scales and symmetries, to infinity.
Yet since energy, and its symmetric form, information, is a combination of entropy and form≈curvature, we deduce that the total curvature and entropy of the Universe are also conserved over the total cycle of times, in which a world cycle does no work. Work is non-existent in the long term in the Universe; work never happens, because all cycles are closed, so the total energy and information of the five-dimensional block of existences is completed. And yet it never ceases to create its details.
The first law is put into action by considering the flow of energy across the boundary separating a system from its surroundings. Consider the classic example of a gas enclosed in a cylinder with a movable piston. The walls of the cylinder act as the boundary separating the gas inside from the world outside, and the movable piston provides a mechanism for the gas to do work by expanding against the force holding the piston (assumed frictionless) in place. If the gas does work W as it expands, and/or absorbs heat Q from its surroundings through the walls of the cylinder, then this corresponds to a net flow of energy W − Q across the boundary to the surroundings. In order to conserve the total energy U, there must be a counterbalancing change
in the internal energy of the gas. The first law provides a kind of strict energy accounting system in which the change in the energy account (ΔU) equals the difference between deposits (Q) and withdrawals (W). There is an important distinction between the quantity ΔU and the related energy quantities Q and W. Since the internal energy U is characterized entirely by the quantities (or parameters) that uniquely determine the state of the system at equilibrium, it is said to be a state function such that any change in energy is determined entirely by the initial (i) and final (f) states of the system: ΔU = Uf − Ui. However, Q and W are not state functions. Just as in the example of a bursting balloon, the gas inside may do no work at all in reaching its final expanded state, or it could do maximum work by expanding inside a cylinder with a movable piston to reach the same final state. All that is required is that the change in energy (ΔU) remain the same. By analogy, the same change in one’s bank account could be achieved by many different combinations of deposits and withdrawals. Thus, Q and W are not state functions, because their values depend on the particular process (or path) connecting the same initial and final states. Just as it is only meaningful to speak of the balance in one’s bank account and not its deposit or withdrawal content, it is only meaningful to speak of the internal energy of a system and not its heat or work content.
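The accounting the first law imposes can be shown with a toy Python example (names and figures are illustrative): two different paths trade off Q against W, yet the state function ΔU comes out the same:

```python
def delta_u(q_absorbed, w_done):
    """First law of thermodynamics:
    change in internal energy = heat absorbed minus work done."""
    return q_absorbed - w_done

# Two different paths between the same initial and final states:
# Q and W differ path by path, but ΔU is path-independent.
path_a = delta_u(q_absorbed=500.0, w_done=200.0)  # -> 300.0 J
path_b = delta_u(q_absorbed=300.0, w_done=0.0)    # -> 300.0 J, free expansion
```

This mirrors the bank-account analogy: many deposit/withdrawal combinations (Q, W) produce the same change in balance (ΔU).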
From a formal mathematical point of view, the incremental change dU in the internal energy is an exact differential (see differential equation), while the corresponding incremental changes d′Q and d′W in heat and work are not, because the definite integrals of these quantities are path-dependent. These concepts can be used to great advantage in a precise mathematical formulation of thermodynamics (see below Thermodynamic properties and relations).
The classic example of a heat engine is a steam engine, although all modern engines follow the same principles. Steam engines operate in a cyclic fashion, with the piston moving up and down once for each cycle. Hot high-pressure steam is admitted to the cylinder in the first half of each cycle, and then it is allowed to escape again in the second half. The overall effect is to take heat Q1 generated by burning a fuel to make steam, convert part of it to do work, and exhaust the remaining heat Q2 to the environment at a lower temperature. The net heat energy absorbed is then Q = Q1 − Q2. Since the engine returns to its initial state, its internal energy U does not change (ΔU = 0). Thus, by the first law of thermodynamics, the work done for each complete cycle must be W = Q1 − Q2. In other words, the work done for each complete cycle is just the difference between the heat Q1 absorbed by the engine at a high temperature and the heat Q2 exhausted at a lower temperature. The power of thermodynamics is that this conclusion is completely independent of the detailed working mechanism of the engine. It relies only on the overall conservation of energy, with heat regarded as a form of energy.
In order to save money on fuel and avoid contaminating the environment with waste heat, engines are designed to maximize the conversion of absorbed heat Q1 into useful work and to minimize the waste heat Q2. The Carnot efficiency (η) of an engine is defined as the ratio W/Q1—i.e., the fraction of Q1 that is converted into work. Since W = Q1 − Q2, the efficiency also can be expressed in the form
η = W/Q1 = 1 − Q2/Q1 (2)
If there were no waste heat at all, then Q2 = 0 and η = 1, corresponding to 100 percent efficiency. While reducing friction in an engine decreases waste heat, it can never be eliminated; therefore, there is a limit on how small Q2 can be and thus on how large the efficiency can be. This limitation is a fundamental law of nature—in fact, the second law of thermodynamics (see below).
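The cycle bookkeeping above reduces to a one-line computation. A minimal Python sketch (names are illustrative):

```python
def efficiency(q_in, q_out):
    """Thermal efficiency of a cyclic heat engine.

    Over one full cycle ΔU = 0, so the work out is W = Q1 - Q2,
    and the efficiency is W/Q1 = 1 - Q2/Q1.
    """
    work = q_in - q_out
    return work / q_in

eta = efficiency(q_in=1000.0, q_out=600.0)  # -> 0.4 (40 percent)
```

Setting q_out to zero gives η = 1, the impossible 100 percent case the text goes on to rule out via the second law.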
Isothermal and adiabatic processes
Because heat engines may go through a complex sequence of steps, a simplified model is often used to illustrate the principles of thermodynamics. In particular, consider a gas that expands and contracts within a cylinder with a movable piston under a prescribed set of conditions. There are two particularly important sets of conditions. One condition, known as an isothermal expansion, involves keeping the gas at a constant temperature. As the gas does work against the restraining force of the piston, it must absorb heat in order to conserve energy. Otherwise, it would cool as it expands (or conversely heat as it is compressed). This is an example of a process in which the heat absorbed is converted entirely into work with 100 percent efficiency. The process does not violate fundamental limitations on efficiency, however, because a single expansion by itself is not a cyclic process.
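For the isothermal case, the work done (and hence the heat absorbed) by an ideal gas follows the standard result W = nRT ln(Vf/Vi). A short Python illustration (function name is ours):

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def isothermal_work(n_mol, temperature_K, v_initial, v_final):
    """Work done by an ideal gas in a reversible isothermal expansion,
    W = n R T ln(Vf/Vi). By the first law, at constant temperature the
    internal energy is unchanged, so the gas absorbs heat Q = W."""
    return n_mol * R * temperature_K * math.log(v_final / v_initial)

w = isothermal_work(1.0, 300.0, 1.0, 2.0)  # ~1729 J to double the volume
```

A compression (v_final < v_initial) gives a negative W: work is then done on the gas and an equal amount of heat is released.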
The second condition, known as an adiabatic expansion (from the Greek adiabatos, meaning “impassable”), is one in which the cylinder is assumed to be perfectly insulated so that no heat can flow into or out of the cylinder. In this case the gas cools as it expands, because, by the first law, the work done against the restraining force on the piston can only come from the internal energy of the gas. Thus, the change in the internal energy of the gas must be ΔU = −W, as manifested by a decrease in its temperature. The gas cools, even though there is no heat flow, because it is doing work at the expense of its own internal energy. The exact amount of cooling can be calculated from the heat capacity of the gas.
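The exact cooling mentioned can be computed from the standard reversible-adiabatic relation T V^(γ−1) = constant, where γ = Cp/Cv depends on the gas. A brief Python sketch (names are ours):

```python
def adiabatic_temperature(t_initial, v_initial, v_final, gamma=5/3):
    """Final temperature after a reversible adiabatic volume change,
    from T * V**(gamma - 1) = constant.
    gamma = Cp/Cv is 5/3 for a monatomic ideal gas."""
    return t_initial * (v_initial / v_final) ** (gamma - 1)

# Doubling the volume of a monatomic gas cools it by a factor 2**(2/3).
t = adiabatic_temperature(300.0, 1.0, 2.0)  # ~189 K
```

Reversing the volume change recovers the initial temperature exactly, consistent with the process being reversible.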
Many natural phenomena are effectively adiabatic because there is insufficient time for significant heat flow to occur. For example, when warm air rises in the atmosphere, it expands and cools as the pressure drops with altitude, but air is a good thermal insulator, and so there is no significant heat flow from the surrounding air. In this case the surrounding air plays the roles of both the insulated cylinder walls and the movable piston. The warm air does work against the pressure provided by the surrounding air as it expands, and so its temperature must drop. A more-detailed analysis of this adiabatic expansion explains most of the decrease of temperature with altitude, accounting for the familiar fact that it is colder at the top of a mountain than at its base.
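That atmospheric cooling rate follows from the standard dry adiabatic lapse rate, dT/dz = −g/cp. A minimal Python illustration (constants are textbook values for dry air; names are ours):

```python
g = 9.81      # gravitational acceleration, m/s^2
c_p = 1005.0  # specific heat of dry air at constant pressure, J/(kg K)

def dry_adiabatic_lapse_rate():
    """Temperature drop per metre of altitude for a parcel of dry air
    rising adiabatically: dT/dz = -g/c_p."""
    return g / c_p

cooling_per_km = dry_adiabatic_lapse_rate() * 1000.0  # ~9.8 K per km
```

The observed average lapse rate is somewhat smaller (roughly 6.5 K/km), mainly because condensing moisture releases latent heat as air rises.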
The second law of thermodynamics
The first law of thermodynamics asserts that energy must be conserved in any process involving the exchange of heat and work between a system and its surroundings.
A machine that violated the first law would be called a perpetual motion machine of the first kind because it would manufacture its own energy out of nothing and thereby run forever.
Such a machine would be impossible even in theory. However, this impossibility would not prevent the construction of a machine that could extract essentially limitless amounts of heat from its surroundings (earth, air, and sea) and convert it entirely into work. Although such a hypothetical machine would not violate conservation of energy, the total failure of inventors to build such a machine, known as a perpetual motion machine of the second kind, led to the discovery of the second law of thermodynamics. The second law of thermodynamics can be precisely stated in the following two forms, as originally formulated in the 19th century by the Scottish physicist William Thomson (Lord Kelvin) and the German physicist Rudolf Clausius, respectively:
A cyclic transformation whose only final result is to transform heat extracted from a source which is at the same temperature throughout into work is impossible.
A cyclic transformation whose only final result is to transfer heat from a body at a given temperature to a body at a higher temperature is impossible.
The two statements are in fact equivalent because, if the first were possible, then the work obtained could be used, for example, to generate electricity that could then be discharged through an electric heater installed in a body at a higher temperature. The net effect would be a flow of heat from a lower temperature to a higher temperature, thereby violating the second (Clausius) form of the second law. Conversely, if the second form were possible, then the heat transferred to the higher temperature could be used to run a heat engine that would convert part of the heat into work. The final result would be a conversion of heat into work at constant temperature—a violation of the first (Kelvin) form of the second law.
Central to the following discussion of entropy is the concept of a heat reservoir capable of providing essentially limitless amounts of heat at a fixed temperature. This is of course an idealization, but the temperature of a large body of water such as the Atlantic Ocean does not materially change if a small amount of heat is withdrawn to run a heat engine. The essential point is that the heat reservoir is assumed to have a well-defined temperature that does not change as a result of the process being considered.
Entropy and efficiency limits
The concept of entropy was first introduced in 1850 by Clausius as a precise mathematical way of testing whether the second law of thermodynamics is violated by a particular process. The test begins with the definition that if an amount of heat Q flows into a heat reservoir at constant temperature T, then its entropy S increases by ΔS = Q/T. (This equation in effect provides a thermodynamic definition of temperature that can be shown to be identical to the conventional thermometric one.) Assume now that there are two heat reservoirs R1 and R2 at temperatures T1 and T2. If an amount of heat Q flows from R1 to R2, then the net entropy change for the two reservoirs is
ΔS = Q/T2 − Q/T1 (3)
ΔS is positive, provided that T1 > T2. Thus, the observation that heat never flows spontaneously from a colder region to a hotter region (the Clausius form of the second law of thermodynamics) is equivalent to requiring the net entropy change to be positive for a spontaneous flow of heat. If T1 = T2, then the reservoirs are in equilibrium and ΔS = 0. The condition ΔS ≥ 0 determines the maximum possible efficiency of heat engines. Suppose that some system capable of doing work in a cyclic fashion (a heat engine) absorbs heat Q1 from R1 and exhausts heat Q2 to R2 for each complete cycle. Because the system returns to its original state at the end of a cycle, its energy does not change. Then, by conservation of energy, the work done per cycle is W = Q1 − Q2, and the net entropy change for the two reservoirs is
ΔS = Q2/T2 − Q1/T1 ≥ 0
This is the fundamental equation limiting the efficiency of all heat engines whose function is to convert heat into work (such as electric power generators). The actual efficiency is defined to be the fraction of Q1 that is converted to work (W/Q1), which is equivalent to equation (2). The maximum efficiency for a given T1 and T2 is thus
η = 1 − T2/T1
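Equation (3) and the Carnot limit it implies can be evaluated directly. A short Python sketch (function names are ours):

```python
def reservoir_entropy_change(q, t_hot, t_cold):
    """Net entropy change when heat q flows from a reservoir at t_hot
    to a reservoir at t_cold: ΔS = q/t_cold - q/t_hot (equation (3)).
    Positive whenever t_hot > t_cold, i.e. the flow is allowed."""
    return q / t_cold - q / t_hot

def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of any engine operating between the two
    reservoir temperatures (in kelvins): 1 - T2/T1."""
    return 1.0 - t_cold / t_hot

ds = reservoir_entropy_change(100.0, 400.0, 300.0)  # positive: allowed
eta_max = carnot_efficiency(400.0, 300.0)           # -> 0.25
```

Note that a real engine between 400 K and 300 K can do no better than 25 percent, however cleverly it is built; only raising T1 or lowering T2 raises the ceiling.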
Entropy and heat death
The example of a heat engine illustrates one of the many ways in which the second law of thermodynamics can be applied. One way to generalize the example is to consider the heat engine and its heat reservoir as parts of an isolated (or closed) system—i.e., one that does not exchange heat or work with its surroundings. For example, the heat engine and reservoir could be encased in a rigid container with insulating walls. In this case the second law of thermodynamics (in the simplified form presented here) says that no matter what process takes place inside the container, its entropy must increase or remain the same in the limit of a reversible process. Similarly, if the universe is an isolated system, then its entropy too must increase with time. Indeed, the implication is that the universe must ultimately suffer a “heat death” as its entropy progressively increases toward a maximum value and all parts come into thermal equilibrium at a uniform temperature. After that point, no further changes involving the conversion of heat into useful work would be possible. In general, the equilibrium state for an isolated system is precisely that state of maximum entropy. (This is equivalent to an alternate definition for the term entropy as a measure of the disorder of a system, such that a completely random dispersion of elements corresponds to maximum entropy, or minimum information. )
Entropy and the arrow of time
The inevitable increase of entropy with time for isolated systems plays a fundamental role in determining the direction of the “arrow of time.” Everyday life presents no difficulty in distinguishing the forward flow of time from its reverse. For example, if a film showed a glass of warm water spontaneously changing into hot water with ice floating on top, it would immediately be apparent that the film was running backward because the process of heat flowing from warm water to hot water would violate the second law of thermodynamics. However, this obvious asymmetry between the forward and reverse directions for the flow of time does not persist at the level of fundamental interactions. An observer watching a film showing two water molecules colliding would not be able to tell whether the film was running forward or backward.
So what exactly is the connection between entropy and the second law? Recall that heat at the molecular level is the random kinetic energy of motion of molecules, and collisions between molecules provide the microscopic mechanism for transporting heat energy from one place to another. Because individual collisions are unchanged by reversing the direction of time, heat can flow just as well in one direction as the other. Thus, from the point of view of fundamental interactions, there is nothing to prevent a chance event in which a number of slow-moving (cold) molecules happen to collect together in one place and form ice, while the surrounding water becomes hotter. Such chance events could be expected to occur from time to time in a vessel containing only a few water molecules. However, the same chance events are never observed in a full glass of water, not because they are impossible but because they are exceedingly improbable. This is because even a small glass of water contains an enormous number of interacting molecules (about 10²⁴), making it highly unlikely that, in the course of their random thermal motion, a significant fraction of cold molecules will collect together in one place. Although such a spontaneous violation of the second law of thermodynamics is not impossible, an extremely patient physicist would have to wait many times the age of the universe to see it happen.
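The improbability of such fluctuations can be made concrete with a toy model (an illustration, not part of the original argument): suppose each molecule sits in either half of its container with probability 1/2, and ask how likely it is that all of them gather on one side at once.

```python
def all_in_one_half(n_molecules: int) -> float:
    """Probability that every one of n independent molecules is found in the
    same half of the container: either half may be the one, so 2 * (1/2)**n."""
    return 2 * 0.5 ** n_molecules

# Plausible for a handful of molecules, hopeless for a macroscopic sample:
for n in (10, 100, 1000):
    print(n, all_in_one_half(n))
```

For 10 molecules the fluctuation happens about once in every 500 snapshots; for 1,000 it is already below 10⁻³⁰⁰, and for the ~10²⁴ molecules in a real glass it is, for all practical purposes, never.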
The foregoing demonstrates an important point: the second law of thermodynamics is statistical in nature. It has no meaning at the level of individual molecules, whereas the law becomes essentially exact for the description of large numbers of interacting molecules. In contrast, the first law of thermodynamics, which expresses conservation of energy, remains exactly true even at the molecular level.
The example of ice melting in a glass of hot water also demonstrates the other sense of the term entropy, as an increase in randomness and a parallel loss of information. Initially, the total thermal energy is partitioned in such a way that all of the slow-moving (cold) molecules are located in the ice and all of the fast-moving (hot) molecules are located in the water (or water vapour). After the ice has melted and the system has come to thermal equilibrium, the thermal energy is uniformly distributed throughout the system. The statistical approach provides a great deal of valuable insight into the meaning of the second law of thermodynamics, but, from the point of view of applications, the microscopic structure of matter becomes irrelevant. The great beauty and strength of classical thermodynamics are that its predictions are completely independent of the microscopic structure of matter.
Most real thermodynamic systems are open systems that exchange heat and work with their environment, rather than the closed systems described thus far. For example, living systems are clearly able to achieve a local reduction in their entropy as they grow and develop; they create structures of greater internal energy (i.e., they lower entropy) out of the nutrients they absorb. This does not represent a violation of the second law of thermodynamics, because a living organism does not constitute a closed system.
In order to simplify the application of the laws of thermodynamics to open systems, parameters with the dimensions of energy, known as thermodynamic potentials, are introduced to describe the system. The resulting formulas are expressed in terms of the Helmholtz free energy F and the Gibbs free energy G, named after the 19th-century German physiologist and physicist Hermann von Helmholtz and the contemporaneous American physicist Josiah Willard Gibbs. The key conceptual step is to separate a system from its heat reservoir. A system is thought of as being held at a constant temperature T by a heat reservoir (i.e., the environment), but the heat reservoir is no longer considered to be part of the system. Recall that the internal energy change (ΔU) of a system is given by

ΔU = Q − W,

where Q is the heat absorbed and W is the work done. In general, Q and W separately are not state functions, because they are path-dependent. However, if the path is specified to be any reversible isothermal process, then the heat associated with the maximum work (Wmax) is Qmax = TΔS. With this substitution the above equation can be rearranged as

−Wmax = ΔU − TΔS.
Note that here ΔS is the entropy change just of the system being held at constant temperature, such as a battery. Unlike the case of an isolated system as considered previously, it does not include the entropy change of the heat reservoir (i.e., the surroundings) required to keep the temperature constant. If this additional entropy change of the reservoir were included, the total entropy change would be zero, as in the case of an isolated system. Because the quantities U, T, and S on the right-hand side are all state functions, it follows that −Wmax must also be a state function. This leads to the definition of the Helmholtz free energy

F = U − TS,

such that, for any isothermal change of the system,

ΔF = ΔU − TΔS

is the negative of the maximum work that can be extracted from the system. The actual work extracted could be smaller than the ideal maximum, or even zero, which implies that W ≤ −ΔF, with equality applying in the ideal limiting case of a reversible process. When the Helmholtz free energy reaches its minimum value, the system has reached its equilibrium state, and no further work can be extracted from it. Thus, the equilibrium condition of maximum entropy for isolated systems becomes the condition of minimum Helmholtz free energy for open systems held at constant temperature. The one additional precaution required is that work done against the atmosphere be included if the system expands or contracts in the course of the process being considered. Typically, processes are specified as taking place at constant volume and temperature in order that no correction is needed.

Although the Helmholtz free energy is useful in describing processes that take place inside a container with rigid walls, most processes in the real world take place under constant pressure rather than constant volume. For example, chemical reactions in an open test tube—or in the growth of a tomato in a garden—take place under conditions of (nearly) constant atmospheric pressure. It is for the description of these cases that the Gibbs free energy was introduced. As previously established, the quantity
ΔU − TΔS

is a state function equal to the change in the Helmholtz free energy. Suppose that the process being considered involves a large change in volume (ΔV), such as happens when water boils to form steam. The work done by the expanding water vapour as it pushes back the surrounding air at pressure P is PΔV. This is the amount of work that is now split out from Wmax by writing it in the form

Wmax = W′max + PΔV,

where W′max is the maximum work that can be extracted from the process taking place at constant temperature T and pressure P, other than the atmospheric work (PΔV). Substituting this partition into the above equation for −Wmax and moving the PΔV term to the right-hand side then yields

−W′max = ΔU + PΔV − TΔS.

This leads to the definition of the Gibbs free energy

G = U + PV − TS,

such that, for any isothermal change of the system at constant pressure,

ΔG = ΔU + PΔV − TΔS

is the negative of the maximum work W′max that can be extracted from the system, other than atmospheric work. As before, the actual work extracted could be smaller than the ideal maximum, or even zero, which implies that W′ ≤ −ΔG, with equality applying in the ideal limiting case of a reversible process. As with the Helmholtz case, when the Gibbs free energy reaches its minimum value, the system has reached its equilibrium state, and no further work can be extracted from it. Thus, the equilibrium condition becomes the condition of minimum Gibbs free energy for open systems held at constant temperature and pressure, and the direction of spontaneous change is always toward a state of lower free energy for the system (like a ball rolling downhill into a valley). Notice in particular that the entropy can now spontaneously decrease (i.e., TΔS can be negative), provided that this decrease is more than offset by the ΔU + PΔV terms in the definition of ΔG. As further discussed below, a simple example is the spontaneous condensation of steam into water. Although the entropy of water is much less than the entropy of steam, the process occurs spontaneously provided that enough heat energy is taken away from the system to keep the temperature from rising as the steam condenses.

A familiar example of free energy changes is provided by an automobile battery. When the battery is fully charged, its Gibbs free energy is at a maximum, and when it is fully discharged (i.e., dead), its Gibbs free energy is at a minimum. The change between these two states is the maximum amount of electrical work that can be extracted from the battery at constant temperature and pressure. The amount of heat absorbed from the environment in order to keep the temperature of the battery constant (represented by the TΔS term) and any work done against the atmosphere (represented by the PΔV term) are automatically taken into account in the energy balance.
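These bookkeeping rules reduce to one line of arithmetic. The sketch below evaluates ΔG = ΔU + PΔV − TΔS and tests the sign that governs spontaneity; the numerical inputs are illustrative placeholders, not data from the text.

```python
def gibbs_change(dU, P, dV, T, dS):
    """ΔG = ΔU + PΔV − TΔS for a process at constant T and P
    (SI units: J, Pa, m^3, K, J/K)."""
    return dU + P * dV - T * dS

def is_spontaneous(dG):
    # At constant T and P, change proceeds spontaneously toward lower
    # Gibbs free energy, i.e. when ΔG < 0.
    return dG < 0

# Hypothetical discharge step of a cell: ΔU = −150 kJ, negligible volume
# change, ΔS = −50 J/K at T = 298 K and atmospheric pressure (illustrative).
dG = gibbs_change(-150e3, 101325, 0.0, 298, -50)
print(dG, is_spontaneous(dG))  # -135100.0 True
```

Note that the entropy of the system decreases here (ΔS < 0), yet the process is still spontaneous because ΔU is sufficiently negative; −ΔG = 135.1 kJ is the maximum non-atmospheric work available.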
Gibbs free energy and chemical reactions
All batteries depend on some chemical reaction of the form

reactants → products

for the generation of electricity or on the reverse reaction as the battery is recharged. The change in free energy (−ΔG) for a reaction could be determined by measuring directly the amount of electrical work that the battery could do and then using the equation W′max = −ΔG. However, the power of thermodynamics is that −ΔG can be calculated without having to build every possible battery and measure its performance. If the Gibbs free energies of the individual substances making up a battery are known, then the total free energies of the reactants can be subtracted from the total free energies of the products in order to find the change in Gibbs free energy for the reaction,

ΔG = Gproducts − Greactants.

Once the free energies are known for a wide variety of substances, the best candidates for actual batteries can be quickly discerned. In fact, a good part of the practice of thermodynamics is concerned with determining the free energies and other thermodynamic properties of individual substances in order that ΔG for reactions can be calculated under different conditions of temperature and pressure.

In the above discussion, the term reaction can be interpreted in the broadest possible sense as any transformation of matter from one form to another. In addition to chemical reactions, a reaction could be something as simple as ice (reactants) turning to liquid water (products), the nuclear reactions taking place in the interior of stars, or elementary particle reactions in the early universe. No matter what the process, the direction of spontaneous change (at constant temperature and pressure) is always in the direction of decreasing free energy.
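In code, this subtraction is a one-line fold over a table of per-substance free energies. The table entries below are hypothetical placeholders standing in for tabulated values; only the bookkeeping is the point.

```python
# Hypothetical Gibbs free energies (J/mol) for substances in a reaction
# A + B -> C; real work would use tabulated free energies of formation.
g = {"A": -50e3, "B": -30e3, "C": -120e3}

def delta_g(reactants, products, table):
    """ΔG(reaction) = Σ G(products) − Σ G(reactants)."""
    return sum(table[s] for s in products) - sum(table[s] for s in reactants)

dG = delta_g(["A", "B"], ["C"], g)
print(dG)  # -40000.0: negative, so the reaction proceeds spontaneously as written
```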
Enthalpy and the heat of reaction
As discussed above, the free energy change W′max = −ΔG corresponds to the maximum possible useful work that can be extracted from a reaction, such as in an electrochemical battery. This represents one extreme limit of a continuous range of possibilities. At the other extreme, for example, battery terminals can be connected directly by a wire and the reaction allowed to proceed freely without doing any useful work. In this case W′ = 0, and the first law of thermodynamics for the reaction becomes

ΔU = Q0 − PΔV,

where Q0 is the heat absorbed when the reaction does no useful work and, as before, PΔV is the atmospheric work term. The key point is that the quantities ΔU and PΔV are exactly the same as in the other limiting case, in which the reaction does maximum work. This follows because these quantities are state functions, which depend only on the initial and final states of a system and not on any path connecting the states. The amount of useful work done just represents different paths connecting the same initial and final states. This leads to the definition of enthalpy (H), or heat content, as

H = U + PV.

Its significance is that, for a reaction occurring freely (i.e., doing no useful work) at constant temperature and pressure, the heat absorbed is

Q0 = ΔH,

where ΔH is called the heat of reaction. The heat of reaction is easy to measure because it simply represents the amount of heat that is given off if the reactants are mixed together in a beaker and allowed to react freely without doing any useful work.

The above definition for enthalpy and its physical significance allow the equation for ΔG to be written in the particularly illuminating and instructive form

ΔG = ΔH − TΔS.
Both terms on the right-hand side represent heats of reaction but under different sets of circumstances. ΔH is the heat of reaction (i.e., the amount of heat absorbed from the surroundings in order to hold the temperature constant) when the reaction does no useful work, and TΔS is the heat of reaction when the reaction does maximum useful work in an electrochemical cell. The (negative) difference between these two heats is exactly the maximum useful work −ΔG that can be extracted from the reaction.
Thus, useful work can be obtained by contriving for a system to extract additional heat from the environment and convert it into work. The difference ΔH − TΔS represents the fundamental limitation imposed by the second law of thermodynamics on how much additional heat can be extracted from the environment and converted into useful work for a given reaction mechanism. An electrochemical cell (such as a car battery) is a contrivance by means of which a reaction can be made to do the maximum possible work against an opposing electromotive force, and hence the reaction literally becomes reversible in the sense that a slight increase in the opposing voltage will cause the direction of the reaction to reverse and the cell to start charging up instead of discharging.

As a simple example, consider a reaction in which water turns reversibly into steam by boiling. To make the reaction reversible, suppose that the mixture of water and steam is contained in a cylinder with a movable piston and held at the boiling point of 373 K (100 °C) at 1 atmosphere pressure by a heat reservoir. The enthalpy change is ΔH = 40.65 kilojoules per mole, which is the latent heat of vaporization. The entropy change is

ΔS = ΔH/T = 40,650/373 ≈ 109 joules per K,

representing the higher degree of disorder when water evaporates and turns to steam. The Gibbs free energy change is ΔG = ΔH − TΔS. In this case the Gibbs free energy change is zero because the water and steam are in equilibrium, and no useful work can be extracted from the system (other than work done against the atmosphere). In other words, the Gibbs free energy per molecule of water (also called the chemical potential) is the same for both liquid water and steam, and so water molecules can pass freely from one phase to the other with no change in the total free energy of the system.
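The numbers in this example can be verified directly from the latent heat and boiling temperature given in the text:

```python
# Values from the text: latent heat of vaporization of water at its boiling point.
T = 373.0            # K
dH = 40.65e3         # J/mol, enthalpy (latent heat) of vaporization
dS = dH / T          # J/(mol*K): entropy gained by a mole of water on evaporating
dG = dH - T * dS     # Gibbs free energy change at the boiling point

print(round(dS, 1))       # 109.0 J/(mol*K), as quoted
print(abs(dG) < 1e-9)     # True: ΔG vanishes, water and steam are in equilibrium
```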
Thermodynamic properties and relations
In order to carry through a program of finding the changes in the various thermodynamic functions that accompany reactions—such as entropy, enthalpy, and free energy—it is often useful to know these quantities separately for each of the materials entering into the reaction. For example, if the entropies are known separately for the reactants and products, then the entropy change for the reaction is just the difference

ΔS = Sproducts − Sreactants,
and similarly for the other thermodynamic functions. Furthermore, if the entropy change for a reaction is known under one set of conditions of temperature and pressure, it can be found under other sets of conditions by including the variation of entropy for the reactants and products with temperature or pressure as part of the overall process. For these reasons, scientists and engineers have developed extensive tables of thermodynamic properties for many common substances, together with their rates of change with state variables such as temperature and pressure.
The science of thermodynamics provides a rich variety of formulas and techniques that allow the maximum possible amount of information to be extracted from a limited number of laboratory measurements of the properties of materials. However, as the thermodynamic state of a system depends on several variables—such as temperature, pressure, and volume—in practice it is necessary first to decide how many of these are independent and then to specify what variables are allowed to change while others are held constant. For this reason, the mathematical language of partial differential equations is indispensable to the further elucidation of the subject of thermodynamics.
Of especially critical importance in the application of thermodynamics are the amounts of work required to make substances expand or contract and the amounts of heat required to change the temperature of substances. The first is determined by the equation of state of the substance and the second by its heat capacity. Once these physical properties have been fully characterized, they can be used to calculate other thermodynamic properties, such as the free energy of the substance under various conditions of temperature and pressure.
In what follows, it will often be necessary to consider infinitesimal changes in the parameters specifying the state of a system. The first law of thermodynamics then assumes the differential form dU = d′Q − d′W. Because U is a state function, the infinitesimal quantity dU must be an exact differential, which means that its definite integral depends only on the initial and final states of the system. In contrast, the quantities d′Q and d′W are not exact differentials, because their integrals can be evaluated only if the path connecting the initial and final states is specified. The examples to follow will illustrate these rather abstract concepts.
Work of expansion and contraction
The first task in carrying out the above program is to calculate the amount of work done by a single pure substance when it expands at constant temperature. Unlike the case of a chemical reaction, where the volume can change at constant temperature and pressure because of the liberation of gas, the volume of a single pure substance placed in a cylinder cannot change unless either the pressure or the temperature changes. To calculate the work, suppose that a piston moves by an infinitesimal amount dx. Because pressure is force per unit area, the total restraining force exerted by the piston on the gas is PA, where A is the cross-sectional area of the piston. Thus, the incremental amount of work done is d′W = PAdx.
However, Adx can also be identified as the incremental change in the volume (dV) swept out by the head of the piston as it moves. The result is the basic equation d′W = PdV for the incremental work done by a gas when it expands. For a finite change from an initial volume Vi to a final volume Vf, the total work done is given by the integral

W = ∫ PdV (from Vi to Vf). (22)
Equations of state
The equation of state for a substance provides the additional information required to calculate the amount of work that the substance does in making a transition from one equilibrium state to another along some specified path. The equation of state is expressed as a functional relationship connecting the various parameters needed to specify the state of the system. The basic concepts apply to all thermodynamic systems, but here, in order to make the discussion specific, a simple gas inside a cylinder with a movable piston will be considered.
The equation of state then takes the form of an equation relating P, V, and T, such that if any two are specified, the third is determined. In the limit of low pressures and high temperatures, where the molecules of the gas move almost independently of one another, all gases obey an equation of state known as the ideal gas law: PV = nRT, where n is the number of moles of the gas and R is the universal gas constant, 8.3145 joules per mole per K. In the International System of Units, energy is measured in joules, volume in cubic metres (m³), force in newtons (N), and pressure in pascals (Pa), where 1 Pa = 1 N/m². A force of one newton moving through a distance of one metre does one joule of work. Thus, both the products PV and RT have the dimensions of work (energy). A P–V diagram would show the equation of state in graphical form for several different temperatures.
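As a quick numerical illustration of the ideal gas law (the 24.8-litre volume below is an illustrative choice, picked because one mole at room temperature then sits near 1 atmosphere):

```python
R = 8.3145  # universal gas constant, J/(mol*K)

def pressure(n, T, V):
    """Ideal gas law solved for pressure: P = nRT/V (SI units)."""
    return n * R * T / V

# One mole at 298 K occupying 24.8 litres (0.0248 m^3):
P = pressure(n=1.0, T=298.0, V=0.0248)
print(P)  # roughly 1e5 Pa, i.e. about 1 atmosphere
```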
To illustrate the path-dependence of the work done, consider three processes connecting the same initial and final states. The temperature is the same for both states, but, in going from state i to state f, the gas expands from Vi to Vf (doing work), and the pressure falls from Pi to Pf. In process I the gas first expands at the constant initial pressure Pi and the pressure then falls to Pf at constant volume; in process III the pressure first falls to Pf at constant volume and the gas then expands at that constant pressure; in process II the gas expands isothermally, with the pressure falling continuously from Pi to Pf as the volume increases. According to the definition of the integral in equation (22), the work done is the area under the curve (or straight line) for each of the three processes. For processes I and III the areas are rectangles, and so the work done is

WI = Pi(Vf − Vi) and WIII = Pf(Vf − Vi),

respectively. Process II is more complicated because P changes continuously as V changes. However, T remains constant, and so one can use the equation of state to substitute P = nRT/V in equation (22) to obtain

WII = nRT ∫ dV/V = nRT ln(Vf/Vi)

for an (ideal gas) isothermal process. WII is thus the work done in the reversible isothermal expansion of an ideal gas. The amount of work is clearly different in each of the three cases. For a cyclic process the net work done equals the area enclosed by the complete cycle.
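The path dependence of the work is easy to confirm numerically. The sketch below takes an illustrative mole of ideal gas doubling its volume at 300 K and evaluates the work along the three paths; the endpoints are identical, the works are not.

```python
import math

R = 8.3145              # J/(mol*K)
n, T = 1.0, 300.0       # one mole of ideal gas at constant temperature
Vi, Vf = 0.010, 0.020   # m^3: the gas doubles its volume

Pi = n * R * T / Vi     # initial pressure
Pf = n * R * T / Vf     # final pressure

W_I = Pi * (Vf - Vi)                   # expand at Pi, then let P fall at constant V
W_III = Pf * (Vf - Vi)                 # let P fall first, then expand at Pf
W_II = n * R * T * math.log(Vf / Vi)   # reversible isothermal path

print(W_I, W_II, W_III)  # three different works between the same two states
```

As the areas under the three paths suggest, WIII < WII < WI.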
Heat capacity and specific heat
As shown originally by Count Rumford, there is an equivalence between heat (measured in calories) and mechanical work (measured in joules) with a definite conversion factor between the two. The conversion factor, known as the mechanical equivalent of heat, is 1 calorie = 4.184 joules. (There are several slightly different definitions in use for the calorie. The calorie used by nutritionists is actually a kilocalorie.) In order to have a consistent set of units, both heat and work will be expressed in the same units of joules.
The amount of heat that a substance absorbs is connected to its temperature change via its molar specific heat c, defined to be the amount of heat required to change the temperature of 1 mole of the substance by 1 K. In other words, c is the constant of proportionality relating the heat absorbed (d′Q) to the temperature change (dT) according to d′Q = ncdT, where n is the number of moles. For example, it takes approximately 1 calorie of heat to increase the temperature of 1 gram of water by 1 K. Since there are 18 grams of water in 1 mole, the molar heat capacity of water is 18 calories per K, or about 75 joules per K. The total heat capacity C for n moles is defined by C = nc.
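The unit conversions in the paragraph above can be made explicit in a few lines; the quantities are taken directly from the text (1 calorie per gram per kelvin for water, 18 grams per mole, 4.184 joules per calorie):

```python
CAL_TO_J = 4.184   # mechanical equivalent of heat, joules per calorie
M_WATER = 18.0     # grams per mole of water

# 1 calorie warms 1 gram of water by 1 K, so the molar heat capacity is:
c_water = M_WATER * 1.0 * CAL_TO_J  # J/(mol*K), about 75

def heat_required(n_moles, c_molar, dT):
    """Q = n c ΔT, for a temperature change small enough that c is constant."""
    return n_moles * c_molar * dT

print(c_water)
print(heat_required(2.0, c_water, 10.0))  # warming 2 moles (36 g) of water by 10 K
```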
However, since d′Q is not an exact differential, the heat absorbed is path-dependent and the path must be specified, especially for gases where the thermal expansion is significant. Two common ways of specifying the path are either the constant-pressure path or the constant-volume path. The two different kinds of specific heat are called cP and cV respectively, where the subscript denotes the quantity that is being held constant. It should not be surprising that cP is always greater than cV, because the substance must do work against the surrounding atmosphere as it expands upon heating at constant pressure but not at constant volume. In fact, this difference was used by the 19th-century German physicist Julius Robert von Mayer to estimate the mechanical equivalent of heat.
Heat capacity and internal energy
The goal in defining heat capacity is to relate changes in the internal energy to measured changes in the variables that characterize the states of the system. For a system consisting of a single pure substance, the only kind of work it can do is atmospheric work, and so the first law reduces to

dU = d′Q − PdV. (28)

Suppose now that U is regarded as being a function U(T, V) of the independent pair of variables T and V. The differential quantity dU can always be expanded in terms of its partial derivatives according to

dU = (∂U/∂T)VdT + (∂U/∂V)TdV, (29)

where the subscripts denote the quantity being held constant when calculating derivatives. Substituting this equation into dU = d′Q − PdV then yields the general expression

d′Q = (∂U/∂T)VdT + [(∂U/∂V)T + P]dV.

The above equation then gives immediately

CV = (∂U/∂T)V

for the heat capacity at constant volume, since the constant-volume path has dV = 0 and, by definition of heat capacity, d′Q = CVdT. For a temperature change at constant pressure, dP = 0, and, by definition of heat capacity, d′Q = CPdT, resulting in

CP − CV = [(∂U/∂V)T + P](∂V/∂T)P. (35)

The first term,

P(∂V/∂T)P,

represents the additional atmospheric work that the system does as it undergoes thermal expansion at constant pressure, and the second term involving

(∂U/∂V)T

represents the internal work that must be done to pull the system apart against the forces of attraction between the molecules of the substance (internal stickiness). Because there is no internal stickiness for an ideal gas, this term is zero, and, from the ideal gas law PV = nRT, the remaining partial derivative is

(∂V/∂T)P = nR/P.

Combining these results in equation (35) then yields

CP − CV = nR, or cP − cV = R,
for the molar specific heats. For example, for a monatomic ideal gas (such as helium), cV = 3R/2 and cP = 5R/2 to a good approximation. cVT represents the amount of translational kinetic energy possessed by the atoms of an ideal gas as they bounce around randomly inside their container. Diatomic molecules (such as oxygen) and polyatomic molecules (such as water) have additional rotational motions that also store thermal energy in their kinetic energy of rotation. Each additional degree of freedom contributes an additional amount R/2 to cV.
Because diatomic molecules can rotate about two axes and polyatomic molecules can rotate about three axes, the values of cV increase to 5R/2 and 3R respectively, and cP correspondingly increases to 7R/2 and 4R. (cV and cP increase still further at high temperatures because of vibrational degrees of freedom.) For a real gas such as water vapour, these values are only approximate, but they give the correct order of magnitude. For example, the correct values are cP = 37.468 joules per K (i.e., 4.5R) and cP − cV = 9.443 joules per K (i.e., 1.14R) for water vapour at 100 °C and 1 atmosphere pressure.
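A minimal equipartition sketch reproduces the molar specific heats just described (rigid molecules, vibrations neglected; the helper name molar_heats is illustrative):

```python
R = 8.3145  # J/(mol*K)

def molar_heats(translational=3, rotational=0):
    """Equipartition sketch: each quadratic degree of freedom contributes R/2
    to cV, and cP = cV + R for an ideal gas (vibrations neglected)."""
    cv = (translational + rotational) * R / 2
    return cv, cv + R

for name, rot in (("monatomic", 0), ("diatomic", 2), ("polyatomic", 3)):
    cv, cp = molar_heats(rotational=rot)
    print(name, round(cv / R, 2), round(cp / R, 2))  # cV/R, cP/R
```

The rigid-rotor prediction of 4R for the cP of water vapour undershoots the measured 4.5R quoted above, consistent with the caveat that these values are only approximate.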
Entropy as an exact differential
Because the quantity dS = d′Qmax/T is an exact differential, many other important relationships connecting the thermodynamic properties of substances can be derived. For example, with the substitutions d′Q = TdS and d′W = PdV, the differential form (dU = d′Q − d′W) of the first law of thermodynamics becomes (for a single pure substance)

dU = TdS − PdV.

The advantage gained by the above formula is that dU is now expressed entirely in terms of state functions, in place of the path-dependent quantities d′Q and d′W. This change has the very important mathematical implication that the appropriate independent variables for the internal energy are S and V, in place of T and V.
This replacement of T by S as the most appropriate independent variable for the internal energy of substances is the single most valuable insight provided by the combined first and second laws of thermodynamics. With U regarded as a function U(S, V), its differential dU is

dU = (∂U/∂S)VdS + (∂U/∂V)SdV.

A comparison with the preceding equation shows immediately that the partial derivatives are

T = (∂U/∂S)V and −P = (∂U/∂V)S. (41)

Furthermore, the cross partial derivatives

∂²U/∂S∂V and ∂²U/∂V∂S (42)

must be equal because the order of differentiation in calculating the second derivatives of U does not matter. Equating the right-hand sides of the above pair of equations then yields

(∂T/∂V)S = −(∂P/∂S)V.

The other three Maxwell relations follow by similarly considering the differential expressions for the thermodynamic potentials F(T, V), H(S, P), and G(T, P), with independent variables as indicated. The results are

(∂S/∂V)T = (∂P/∂T)V, (∂T/∂P)S = (∂V/∂S)P, and −(∂S/∂P)T = (∂V/∂T)P. (44)
As an example of the use of these equations, equation (35) for CP − CV contains the partial derivative

(∂U/∂V)T,

which vanishes for an ideal gas and is difficult to evaluate directly from experimental data for real substances. The general properties of partial derivatives can first be used to write it in the form

(∂U/∂V)T = (∂U/∂S)V(∂S/∂V)T + (∂U/∂V)S.

Combining this with equation (41) for the partial derivatives together with the first of the Maxwell equations from equation (44) then yields the desired result

(∂U/∂V)T = T(∂P/∂T)V − P. (46)

The quantity (∂P/∂T)V comes directly from differentiating the equation of state. For an ideal gas

P = nRT/V, (47)

and so

(∂U/∂V)T = T(nR/V) − P = P − P,

which is zero as expected. The departure of (∂U/∂V)T from zero for real substances is thus a direct measure of the forces of attraction between their molecules.
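Equation (46) can be checked numerically for an ideal gas. The sketch below differentiates the equation of state by a central difference (exact here, since the ideal-gas pressure is linear in T) and confirms that the "internal stickiness" term vanishes; internal_pressure is an illustrative name for (∂U/∂V)T:

```python
R = 8.3145  # J/(mol*K)
n = 1.0     # moles

def P_ideal(T, V):
    """Ideal gas equation of state solved for pressure."""
    return n * R * T / V

def internal_pressure(P, T, V, h=1.0):
    """(∂U/∂V)_T = T(∂P/∂T)_V − P, per equation (46), with the temperature
    derivative taken by central difference (exact for P linear in T)."""
    dPdT = (P(T + h, V) - P(T - h, V)) / (2 * h)
    return T * dPdT - P(T, V)

# For an ideal gas the internal-stickiness term vanishes:
print(internal_pressure(P_ideal, T=300.0, V=0.025))  # effectively zero
```

For a real-gas equation of state, the same function would return a nonzero value measuring the intermolecular attraction.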
The Clausius-Clapeyron equation
Phase changes, such as the conversion of liquid water to steam, provide an important example of a system in which there is a large change in internal energy with volume at constant temperature. Suppose that the cylinder contains both water and steam in equilibrium with each other at pressure P, and the cylinder is held at constant temperature T. The pressure remains equal to the vapour pressure Pvap as the piston moves up, as long as both phases remain present. All that happens is that more water turns to steam, and the heat reservoir must supply the latent heat of vaporization, λ = 40.65 kilojoules per mole, in order to keep the temperature constant.
The results of the preceding section can be applied now to find the variation of the boiling point of water with pressure. Suppose that as the piston moves up, 1 mole of water turns to steam. The change in volume inside the cylinder is then ΔV = Vgas − Vliquid, where Vgas = 30.143 litres is the volume of 1 mole of steam at 100 °C, and Vliquid = 0.0188 litre is the volume of 1 mole of water. By the first law of thermodynamics, the change in internal energy ΔU for the finite process at constant P and T is ΔU = λ − PΔV.
The variation of U with volume at constant T for the complete system of water plus steam is thus

(∂U/∂V)T = ΔU/ΔV = λ/ΔV − P.

A comparison with equation (46) then yields the equation

T(∂P/∂T)V − P = λ/ΔV − P. (49)

However, for the present problem, P is the vapour pressure Pvap, which depends only on T and is independent of V. The partial derivative is then identical to the total derivative

(∂P/∂T)V = dPvap/dT, (50)

giving the Clausius-Clapeyron equation

dPvap/dT = λ/(TΔV).

This equation is very useful because it gives the variation with temperature of the pressure at which water and steam are in equilibrium—i.e., the boiling temperature. An approximate but even more useful version of it can be obtained by neglecting Vliquid in comparison with Vgas and using

Vgas = RT/Pvap (52)

from the ideal gas law. The resulting differential equation can be integrated to give

ln(Pvap/P0) = (λ/R)(1/T0 − 1/T),

where P0 is the vapour pressure at a reference boiling temperature T0.
For example, at the top of Mount Everest, atmospheric pressure is about 30 percent of its value at sea level. Using the values R = 8.3145 joules per K and λ = 40.65 kilojoules per mole, the above equation gives T = 342 K (69 °C) for the boiling temperature of water, which is barely enough to make tea.
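The Everest figure can be reproduced by inverting the integrated equation for T, using only the values quoted in the text (λ = 40.65 kilojoules per mole, R = 8.3145 joules per mole per K, and a sea-level boiling point of 373.15 K):

```python
import math

R = 8.3145      # J/(mol*K)
lam = 40.65e3   # J/mol, latent heat of vaporization of water
T0 = 373.15     # K, boiling point at sea-level pressure P0

def boiling_temperature(pressure_ratio):
    """Invert ln(P/P0) = (lam/R)(1/T0 − 1/T) for the boiling temperature T."""
    return 1.0 / (1.0 / T0 - (R / lam) * math.log(pressure_ratio))

# Atmospheric pressure on Everest is roughly 30 percent of its sea-level value:
T = boiling_temperature(0.30)
print(round(T))  # 342
```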
The sweeping generality of the constraints imposed by the laws of thermodynamics makes the number of potential applications so large that it is impractical to catalog every possible formula that might come into use, even in detailed textbooks on the subject. For this reason, students and practitioners in the field must be proficient in mathematical manipulations involving partial derivatives and in understanding their physical content.
One of the great strengths of classical thermodynamics is that the predictions for the direction of spontaneous change are completely independent of the microscopic structure of matter, but this also represents a limitation in that no predictions are made about the rate at which a system approaches equilibrium. In fact, the rate can be exceedingly slow, such as the spontaneous transition of diamonds into graphite. Statistical thermodynamics provides information on the rates of processes, as well as important insights into the statistical nature of entropy and the second law of thermodynamics.
The 20th-century English scientist C.P. Snow summarized the three laws of thermodynamics, in order, as:
- You cannot win (i.e., one cannot get something for nothing, because of the conservation of matter and energy).
- You cannot break even (i.e., one cannot return to the same energy state, because entropy, or disorder, always increases).
- You cannot get out of the game (i.e., absolute zero is unattainable because no perfectly pure substance exists).
Now, if this were the whole truth, it would be a very badly rigged, dying Universe; but rule 2 is false. Yes, you cannot win: all is ultimately a zero sum, as energy returns and you die for others to live. Yes, you cannot get out of the game, as motion is eternal, and so there is always a remnant ‘yang’ in the ‘yin’… a seed of thermal energy which, when the quantum or mass state disorders, becomes the ‘reproductive seed’ for new ‘Boltzs’ of temperature to activate. But you can break even through present reproduction, as we all will be repeated again.