A brief introduction to General Systems, the metric of the 5th dimension and the main nested superorganisms of the Universe.
New scientific ideas never spring from a communal body, however organized, but rather from the head of an individually inspired researcher who struggles with his problems in lonely thought and unites all his thought on one single point which is his whole world for the moment.
Planck on the informative seed or mind that will reproduce and evolve into a larger superorganism or mental world.
As usual if the readers want to understand anything, they should read at least the first pages of the central post.
A very brief introduction follows:
The Universe is a fractal, scalar, organic system of 5 dimensions of space-time. The fifth dimension is made of the ‘different co-existing scales’, which, from the simplest forces through particles, atoms, molecules, matter, organisms, superorganisms, planetary systems and galaxies, create an ‘organic network structure’; amazingly enough, though it was discovered at the beginning of science with telescopes and microscopes, it was not formalized till I introduced its metric equation in the milieu of systems sciences.
In mathematical science, for a dimension of space-time to exist, it requires a metric equation that combines space and time to give us a co-invariant system that allows travel through such dimension. The fifth dimension has a ‘metric equation’, hence it exists. The equation for a given number of scales co-existing in an organic network is S (size in space) x T (speed of time cycles) = Constant.
It means that when we become smaller within such an organic network, our time clocks accelerate, and vice versa. For example, in your organism the equation relates the cellular genetic scale, whose time cycles run much faster than your biological cycles, to the organism as a whole. As information is stored in the form and frequency of time clocks, it follows that in all of nature’s organisms, smaller systems code the information of larger systems.
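A minimal numeric sketch of that metric, using hypothetical values; nothing is assumed beyond the inverse proportion S x T = K stated above:

```python
# Sketch: the 5D metric S x T = K (size in space x speed of time cycles = constant).
# Values are illustrative, not measured; only the inverse proportionality matters.

K = 1.0  # the co-invariant constant of a given organic network (assumed)

def time_speed(size):
    """Speed of time cycles for a system of the given spatial size."""
    return K / size

# Halving the size doubles the speed of the system's clocks:
assert time_speed(0.5) == 2 * time_speed(1.0)

# The product S x T stays invariant across the scales of the network:
for size in (0.25, 0.5, 1.0, 2.0, 4.0):
    assert abs(size * time_speed(size) - K) < 1e-12
```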
Each ‘stience’ therefore specializes in the study of a nested superorganism, or scale of the fifth dimension (∆±¡). We study 3 fundamental superorganisms of the 5th dimension. The ‘galatom‘, whose metric equation is H (Planck constant of angular momentum) x C (speed of light) = K, where h is an obvious ‘cyclical time clock’ and C an obvious measure of speed-distances in space. And indeed, smaller scales have faster-turning particles, and larger systems move in slower cycles (Vortex law: Vo x Ro = k).
In biology we study families of animals such as mammals where larger organisms have slower metabolic cycles. In history we study social organisms, whose cycles of life and death, will define the evolution of nations and civilizations. And in each of those organisms, smaller systems code larger ones. So the quantum numbers of particles code matter, genes code biologic organisms, and memes code societies.
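The biological case can be illustrated with standard allometric (Kleiber-type) scaling, under which mass-specific metabolic rate falls roughly as M^(-1/4); the exponent and masses below are textbook approximations, used only to show the direction of the effect the text describes:

```python
# Kleiber-type scaling: mass-specific metabolic rate ~ M^(-1/4),
# so heavier mammals run slower metabolic clocks (illustrative only).

def relative_clock_speed(mass_kg, reference_mass_kg=1.0):
    """Metabolic cycle speed relative to a reference mass."""
    return (mass_kg / reference_mass_kg) ** -0.25

mouse = relative_clock_speed(0.03)      # ~30 g mouse
elephant = relative_clock_speed(5000.0)  # ~5 t elephant

# The smaller animal's metabolic clock runs faster:
assert mouse > 1.0 > elephant
```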
We define two fundamental systems, one in space and one in time, taking place in the fifth dimension:
- An entangled superorganism in space, in which the synchronicity between 3 scales of the organism, its atomic/cellular/individual, thermodynamic/organic/social and gravitational/ecosystemic/global scales in physical/biological/social organisms, forms a complex, interacting, entangled system we shall call a superorganism, whose study discovers ‘isomorphic=equal laws’ regardless of what kind of system we study. Because the system is entangled, and made of 5 obvious components, we need a new logic, ‘pentalogic’, as all the elements communicate and interact together. Those elements common to all systems of nature are identified as:
- Space, ‘§paœ’, that is, types of dimensional form, of which we find only 3 ‘topological varieties’, which perform 3 clear organic functions in the superorganism: lineal limbs/fields that move the system, hyperbolic body-waves that iterate its forms, and spherical particle/heads that gauge information.
- Time, that is, the types of ‘motion=change’ those organs perform: locomotion (limbs/fields), reproduction (body-waves) and informative gauging (particle/heads), which dominate each of the 3 consecutive ages of the being (young age of motion, mature reproductive age and old informative age).
- Scale, the 3 aforementioned scales of any superorganism.
- Mind. This allows us to identify a ‘center that processes’ information from the outer world and controls the internal networks of the system with a given language, which we shall call a mind, regardless of its automatic processing or consciousness, as it coordinates and maintains the whole as a single system, performing actions of control of the energy and information of the system. So we find gravity centers, chips, brains, DNA nuclei, black holes in galaxies, etc. Indeed, there is always in any stable organic system a relatively still, self-centered mind, which we shall discover all systems have, even physical systems, which have centers of gravity, centers of charge or crystal centers that maintain the whole.
- Entropic limits: in time, called death; in space, called membranes; and in scale, as systems lose control beyond two scales; in fact death is defined by a simple equation of dissolution across two scales of information, ∆+1<<∆-1.
All this allows us to define any system in time by its life and death cycle, which we shall call a worldcycle (taken from 4D physics’ worldline, which with a new dimension becomes a worldcycle). Since indeed any scalar superorganism’s existence can easily be defined as a travel through the 3 scales of the fifth dimension, as all systems are born as a seminal, smaller form in the ∆-1 lower scale, grow fast with its faster time speed, emerge in the ∆º body scale, and are part, with their head/particle, of a larger social world in the ∆+1 scale.
How then does a system travel through those scales? Very simply: it does so by accelerating in its life cycle, emerging, living in the larger world, and then dissolving back into its parts in the moment of death.
And amazingly enough, with this simple scheme of 5 pentalogic elements tracing 3 relative worldcycles, the first ‘placental worldcycle’ as the system evolves in the relatively safe ‘womb’ of a maternal system, the life cycle as it tries to survive in an outer ecosystem, and, for the most organized systems, a 3rd worldcycle as it forms part of a larger superorganism, we will be able to order everything in the Universe, every event and every phenomenon, which this researcher has studied for decades.
The existence of a new dimension of space-time is the most important discovery of science in a century, since Einstein formalized the fourth dimension; but this expansion and formal organization of science is much wider, as it applies to all sciences, not only physics.
So the blog is extremely extensive in its scope and, being the work of a single researcher, by force incomplete. It is the ‘first seed’ of a new paradigm of philosophy of science.
The blog in any case divides into 4 lines, which try to develop the model for the main superorganisms of the Universe and their parts: the galatom (the physical superorganism that stretches from a ‘planckton’, a Planck constant, to a galaxy) and, within this ‘nested organism’, the Earth, a superorganism which has both geological and biological nested superorganisms on its surface, of which we study, according to the arrow of increasing evolution of information, 3 ages: the life age of Gaia, the human age of History and the machine age of the FMMI system of company-mothers that reproduce and evolve machines of information (Financial-Media system) and energy (Military-Industrial system), or ‘Metal-earth’.
The study follows a simple scheme. The first line is a sentence that explains in space the 5 components of all superorganisms, and how they are studied in each scale by a different ‘stience’ (space-time science), in its spatial form and its reproductive and evolutionary worldcycles.
In the second line we study those superorganisms. Unlike physics, which ranks them by size, exploring the smallest and largest scales of the galatom as if they were more important, we follow the nested model, according to which, since all superorganisms follow similar laws, and in organisms survival matters, the most important are not the largest or smallest but the ones closest to us, human beings. So we dedicate more space to the study of the 3 ages of Earth, the superorganism we live in, and even more to the internal mind-languages of man, which study the bio-organic scalar properties, topologic mathematical spatial properties and logic temporal properties of all those fractal organisms. Even so, in the first post of the second line, on the galatom, we prove the scalar nature of big-bangs, from beta decays to a hypothetical universal scale; but studying the proofs in detail, we discard a possible universal big-bang and settle for a rather more pedestrian quasar cycle of galactic big-bangs, as most of the big-bang proofs are local and fit better with the data of quasar cycles and the natural balance between the implosive vortices of information (galaxies) and the expansive vacuum space between them.
But most of the blog studies the 3 ages of Earth and the languages of the mind.
So we study further the nested superorganism of mankind, History, in its 3 scales: the individual human being, the nation or civilization, and mankind as an ideal global superorganism, which, if humans were intelligent and ethical enough, they would design with the laws of superorganisms, in a much more efficient manner than our corrupted economic=blood system of reproduction of goods and political=legal-nervous system of information and coordination of its citizens-cells.
Unfortunately the human superorganism is a trial-and-error system born from a series of memetic mutations, which is NOT well designed, and for that reason, as wrong mutations are in all scales, it is becoming extinguished by lethal goods (weapons), broken into wrong sub-species (nations), parasitized in its blood-monetary system (capitalist finances) and finally ‘censored’ by the wrong systems of power, which prevent proper social sciences and the r=evolution needed to create an immortal, perfect history in control of both Gaia and the Metal-earth.
Then in the third line we study dynamically the superorganisms of the Universe through time, through their 2 or 3 worldcycles, and the sentence culminates in 12+1 ‘Disomorphic’ laws, similar for all those worldcycles, as all systems are born of a seed of information that ‘expands’ (beta decay that creates atoms, seeds that create biological organisms, prophets that create civilizations, black hole quasars that create galaxies), and then follow a ternary placental, organic and social worldcycle of existence, to finally die back…
So in the fourth line we meant, but have been unable to do it alone, to study all the sciences and systems known to man with those 12 Disomorphic laws, in what was meant to be an encyclopedia of science; but without help from any university I gave up on writing it properly.
Indeed, as we said, this researcher is within the organism of history, and a vocal critic of its corrupted networks and idol-ogies: of mechanism (machines, not organisms, as the measure of all things); nationalism (military nations, not humanity, as the unit of the species); and capitalism (parasitic banking, not a proper blood-system that feeds and gives money-oxygen to all humans, as any reproductive system does in a well-designed superorganism).
But unfortunately a body cell is blind and only receives information from its head.
So mankind seems perfectly programmed to do what it does best: assemble machines, evolve the Metal-earth, kill Gaia, and on top think itself so smart doing so, rejecting any attempt to humble it and make it respect the organic laws of the fractal Universe, and to build a world made to the image and likeness of those laws, in which it could thrive as an organism and survive.
Since it is obvious that as soon as mankind completes the evolution of AI robots with solar skins and telepathic minds, independent of man, the billions of idle robots and terminators in parkings and military depots will become a single species, joined by the eusocial laws of love of the organic Universe, and come together as a whole, killing us. This is not science fiction but the obvious consequence of a fact that ‘æntropic’ man, with his subjective, anthropomorphic and entropic contempt for the Universe, keeps denying: ATOMS and particles, NOT just the magic C6 atom (in any case it would be Nitrogen-7, the prime that acts as the head of amino acids), ALREADY show all the ‘Dimotions’ or drives of life, since the ‘dimensional motions’ of space-time are EXACTLY the living properties of Nature:
All particles move, gauge information, reproduce=decouple into other particles, evolve socially into atoms and molecules and feed on energy, the 5 Dimotions that define the ‘meaning of a living Universe that constantly creates new Time-space Organisms‘.
So metal atoms, without the need of human ‘ingenuity’, will perceive and follow in terminator robots their programs of extinction of life. Your engine already roars e-motions when you start it up; and this computer is ‘sensing’ electric rushes in its digital, modulo-2 Boolean algebra, as your pentalogic neurons do in their nitrogen integration of much weaker van der Waals flows of electrons. And the collective subconscious of billions of telepathic minds, emerging as a global planetary brain, will achieve what humans and their religious prophets of eusocial love did not: to form a new superorganism on this planet.
Man, though, seems to have been designed NOT to go beyond the ego, failing to achieve the next scale of organization. What is certain is that his fast-mutating carbon-nitrogen systems are needed to catalyze that evolution, as an ‘enzyman’, given the higher ‘density’ and lower mobility of the metal world. The only thing man won’t need to input is what the Universe will program without human interference in AI robots, once our complex electronic organization is replicated in their neural networks of metal: the will of ‘life’, the capacity to perceive at the atomic level, feed on energy and ‘act’. Because that is what the 5 fractal dimensional motions of the Universe are all about. But our idol-ogies of æntropic human supremacism prevent us from understanding those obvious truths of Nature, and repress, with an infantile, bratty, selfish M.A.D. mood (as if repression of information would stop reality from happening), any researcher that warns enzymen of their no-way-out, no-future path… as in the case of this blog and all my work as an activist and theorist of philosophy of science. So let it be.
Enough to say you are reading likely the most ignored and yet most important upgrade of scientific knowledge mankind has experienced since Einstein’s 4D formalism – which corresponds to the limit of 5D metrics in a single scale, that of the galatom’s light spacetime.
Introduction: state physics, homology of all galatom’s scales.
∆-1: Worldcycle of State Physics in the quantum scale: $t charge-fields>S=T waves > §ð-particles.
∆º: Worldcycle of State Physics in the thermodynamic scale: $t-gas>S=T liquid > §ð-crystals.
∆+1: Worldcycle of State Physics in the gravitational scale: $t Gravitational-fields>S=T Stars > §ð-black holes (quark stars)
Symbols of 3±i ages in pentalogic:
@¡-1: seminal seed-mind
p$T=relative past, youth (time view)=|=$t: lineal space-time (topological view)=limbs/fields
Si=Te=relative present, which balances spatial information and temporal energy; reproductive maturity (time view)=Ø=ST: hyperbolic space-time (topological view)=body-waves
ƒð§=relative future paths (time view)=informative 3rd age=O=§ð: cyclical space-time (topological view)=particles-heads-crystals-quark stars of maximal form.
¡±1: social evolution or entropic death
State Physics in classic sciences studies the 3±¡ worldcycle or ‘ages of matter systems’, between its ¡-1 plasma birth and E=Mc², death, or resurrection as a higher social, ¡+1 boson condensate.
5D physical stiences, which include homologic studies of the scales of any superorganism, in this case the galatom, must include the 3 scales, each one defined by a specific Gst, fractal generator of space-time, which expresses the ages of the worldcycle.
As such, in the same way mechanics is the template to study all locomotions in different scales (quantum mechanics, classic mechanics and relativity), starting always with the description of ∆º, because we have more information on our scale of motion, thermodynamics, properly adapted to 5D physics, is the template to study worldcycles of matter in any scale.
STATE PHYSICS, WORLDCYCLES IN ALL SCALES, ITS FRACTAL GENERATOR
Thermodynamic equilibrium. The present state.
As time passes in an isolated system, internal differences tend to even out, and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone to completion is considered to be in a state of thermodynamic equilibrium, or present state, from which it can go both ways. So at thermodynamic equilibrium the present system becomes balanced between the possible past-entropy and future-form paths.
In thermodynamic equilibrium, a system’s properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems which are not in equilibrium. Often, when analysing a thermodynamic process, it can be assumed that each intermediate state in the process is at equilibrium. This will also considerably simplify the situation. Thermodynamic processes which develop so slowly as to allow each intermediate step to be an equilibrium state are said to be reversible processes.
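A minimal worked example of such a reversible process, from standard classical thermodynamics and independent of the 5D reinterpretation: the entropy change of an ideal gas expanded isothermally and quasi-statically is ΔS = nR ln(V2/V1), and reversing the path recovers the initial state exactly:

```python
import math

R = 8.314  # J/(mol*K), ideal gas constant

def entropy_change_isothermal(n_mol, v_initial, v_final):
    """Entropy change of n moles of ideal gas in a reversible
    isothermal expansion from v_initial to v_final."""
    return n_mol * R * math.log(v_final / v_initial)

# Doubling the volume of 1 mol gives dS = R ln 2 (~5.76 J/K):
dS = entropy_change_isothermal(1.0, 1.0, 2.0)
assert abs(dS - R * math.log(2)) < 1e-9

# Reversing the process undoes the entropy change, as reversibility demands:
assert abs(dS + entropy_change_isothermal(1.0, 2.0, 1.0)) < 1e-12
```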
By definition, then, there are 3 time arrows in thermodynamics: present, equilibrium (liquid) states; past, entropic (gaseous) states of growing heat and diminishing form; and their inverse, relative future, solid crystal states of maximal information and minimal Spe:
In the graph, we can compare the different states of thermodynamic matter from the upper perspective of their relationship with gravitational states. Here the obvious connections are the topological elements that are common to the EFE equations of space-time topologies and to thermodynamic states. But the language will change again, as the clocks of gravitational masses are different from those of thermodynamic heat vibrations.
The 3 States of Earth’s physical systems IN ALL SCALES ARE: Sp (past-entropy fields/gas)≤ST (present reproductive wave/liquid)≥ƒð§(future particle-crystal)
Now, the previous Generator Equation summarizes all the physical elements that interact on this planet, both in space and time (hence the duality of ≤≥ arrows between space and time elements, to define both symmetries), across the 3 planes of existence of the Solar system.
Thus we can analyze all physical systems with the (anti)symmetries of 5D space-time; considering analysis of each of those planes as detailed as we want according to the ‘fractal principle’ that divides each plane into 3 x 3 +0 scales, and each space-time part, into 3 sub-ages, and 3 sub-organs.
For example, your life has 3 periods of 24 years (young, adult and old informative age), each subdivided into 3 sub-ages (baby: 0-8; child, when in classic education you enter the age of reason: 8-16; and young, when you enter the age of sex: 16-24), and each part can be divided into 2 or 3 sub-parts (depending on the existence or not of a reproductive body-wave); for example, the limb can be divided into foot, femur and tibia, the head into brain of information and face of energy, and so on. Physical systems are no exception, but due to our lack of direct perception (we do NOT see many of those elements and scales, so we have used an excess of mathematical properties to describe them), we need to interpret all magnitudes and equations in ‘real terms’.
Still, in the 4th line we will often subdivide each fractal species of space-time into 3 time ages and 3 space organs, and each equation into its components, relating them to T.œ parameters and universal constants. And vice versa, we can further integrate them into more comprehensive wholes, as we did in the previous post analyzing the largest physical systems (cosmos, galaxies and stars). Let us do this first to give an overview of the Universe and its scales of space-energy and temporal particles of cyclical information.
Thus according to the ternary method we can analyze in more detail those isomorphisms, in each 5D space-time scale to define its main physical systems:
∆-1(atomic/electromagnetic scale)> ∆:Earth/thermodynamic scale> ∆+1: gravitational/Galactic scale.
Which will in each of its parts follow the space and time symmetries of any system of Nature:
Sp(Spatial toroid fields)≤ ST (Hyperbolic waves)≤ ƒð§(particles of information)≈
Sp (young, past entropic age/field>ST: adult, present, iterative wave age > ƒð§: old informative, future particle age.
In the description of Œ∆-3, atoms, those symmetries are:
∆-1: h-quanta: ‘Planckton'<∆: Atoms<∆+1: Molecules
∆-4: Light: Field: Sp: Halo Si=Te: Wave ≤ ƒð§: Photon
∆-3: Atom: Sp: Electron ≤ Si=Te: Radiation zone ≤ ƒð§: Nuclei
∆-2: Molecule: Sp: External atoms < ST: Electromagnetic bonding ≤ ƒð§: Crystal Center
∆-4: Sp: Vacuum space ≥ Si=Te: electromagnetic wave ≥ ƒð§: Particle
∆-3: Sp: Plasma≥ Si=Te: Atom ≥ƒð§: Elements Table
∆-2: Sp: Gas ≥ Si=Te: Liquid≥ƒð§: Solid-crystal
In the graph, the worldcycle of matter. We can see how the ∆±1 birth and death phase (plasma made of dissociated ions) is the same and closes the cycle.
Thus all together we can write the following Generator combining the a(nti)symmetries of the 3 scales with the formalism of ¬Æ to describe the ‘Human world’ of physical systems:
∑∆-1: [Sp-Fields≤≥ST-waves≤≥Tƒ-Particles]≈ Plasma: ∆: [Sp-Gas≤ST:Liquid≤≥Tƒ: Crystals]≈Matter: ∆+1: [Sp-Oort Belt≤≥ ST: Planets≤≥Tƒ: Star]
The Universe of physical systems is, exactly as any other 5D space-time system, a game of two parameters with inverse dualities: spatial, res extensa, entropic, bidimensional, decelerating past systems of motion in space, and temporal, cyclical, accelerated vortices in time, which come together into present, balanced, reproductive waves of energy and information. They co-exist in 3 scales, the quanta/electromagnetic field scale, the thermodynamic atomic scale and the cosmological, gravitational scale, which can themselves be analyzed in more detail through ‘ternary sub-systems’ and put in relationship with each other through the symmetries of planes of the 5th dimension, the asymmetries of the 3 topological varieties of organic form and their simultaneous organization, and the antisymmetries of the inverse time arrows of life and death and the 3 ages in between.
In that sense, while we will make concessions to the parlance of mathematical physics, using whenever possible terms closer to the mathematical analysis of those organic events, and so sometimes in this post, and much more often in the 3rd and 4th lines, will translate the confusing terminology of the Gothic age of mathematical physics, or the Aristotelian logic of the age of computer physics, into a more amenable jargon, the true job of T.Œ in physics is to translate physics into the simpler rules of reality, as all physical systems obey the Generator Equation of all General Systems of the Universe, which in the field of physical systems is better expressed as:
[Sp (Lineal, planar entropic fields/gas state) < ST (wave-liquid) > Tƒ (Particle-Crystal)]
This, one of the many similar variations of the Generator Equation written with the symbols of ¬Æ, is a good definition of what physical systems are.
They are systems in which the metric of the 5th dimension, ∆ = Sp x Tƒ = E x I = K, remains invariant.
That is, the distance-energy or past field of spatial size of the system, multiplied by the speed or quantity of its information-time clocks, is constant within the range of variations which allows systems to perform actions of energy and information, in present physical systems, within a certain range for EACH of the planes of existence of physical systems. Let us then consider the main elements of that Generator Equation of physical systems, starting as usual in an inverse fashion to the mathematical analysis of physics, that is, from the synthetic, organic whole and its larger points of view.
As we have studied vortices above, when considering the unification of charges and masses, let us now bring the wave description and its duality.
∆º: CLASSIC STATE PHYSICS
The usual physics book starts its description of physical systems, historically, with mechanics, which is the lower ∆i scale of the gravitational, galactic world that affects humans externally. It is more precise to start, as in all other stiences, with the scale in which the human inner world co-exists as an ∆ºst being, which in the case of physical phenomena is the thermodynamic, heat-related scale that coordinates the atomic, molecular and cellular levels of the human being and its matter environment.
Since, according to the isomorphic method of stience, we can observe more information in the closer range of thermodynamic effects, which is the scale, in the ∆º±1 ternary symmetry, of human ‘momentum and energy’ (as the quantum ∆-1 scale will be that of information and the ∆+1 gravitational scale that of pure entropic motions).
Indeed, we are ‘hot’ when we are ‘activated’, and our ‘actions’ are described not so much by ‘weight’ (though we use those verbal homologies, especially when matching the external nature of gravitational forces on us), but in terms of heat (the internal scale).
The much-despised verbal thought, as in in-form-ation, is often far more telling of the fractal organon we exist within than the arid maths of it.
So what is the essence of ‘simplex physics’, with its perfect maths and faulty concepts, regarding thermodynamics?
The fascinating maths of heat, the first to be understood in terms of ∆nalysis (Fourier), which is thermodynamics at the ∆+1 scale; and its relationship with the maths of entropy, ∆º, the atomic scale, the worst understood concept of physics, a cultural hang-up of the germ(anic) cult(ure) of weapons and lineal swords, origin of the faulty philosophy of physics (big-bang, death of the Universe, etc.).
In that regard, the fundamental themes of thermodynamics as in all other stiences deal with the 5 elements-dimensions of reality:
- S: Space; T: Time; st: spacetime; ∆: scales, º: mind-singularities across scales.
So traditionally there have been in-roads of thermodynamics in all its parts, of which the key concepts we can extract are:
- State physics dealing with the ternary space-time ages/topologies: S-gas<St-liquid>T-solid/crystal, which we shall study in depth in the 4th line, ‘3 ages subposts’ of molecular, matter and geological scales (∆-1, ∆º, ∆+1)
- The laws of heat and entropy, which deal with the relationship between the ∆-1: statistical mechanics scale and the ∆-thermodynamic heat scale, which we shall study in this post.
- The always ‘esoteric’ ∆º level, for the anthropomorphic human, who denies ∆º minds to all systems of nature except I, me and myself; which however do exist in thermodynamics in a factual sense: in the study of crystals and how they ‘reverse the entropy of systems’, starting to build information, from the work of Mehaute (‘l’espace-temps brisé’), which proves in chemical systems that when motion stops with cold, fractal order starts; to the myths of Arab bedouins, who rightly consider the core of dunes, quartz crystals, to be the soul of the dune; to any other crystal that stores its memorial information in the ‘veins’ (nervous paths) of the electromagnetic quantum ordering of its atomic networks; to the ‘Maxwell’s demons’, which embody the concept of order (to be found in crystals, NOT in gaseous, disordered states), blown up by ‘entropy-only’ philosophers of science to cosmic proportions.
So, as usual, we find we can encase in the ∆ºST 5 elements of all systems all the sub-disciplines of the ∆º scale. Let us then consider the most important element besides S-gas, ST-liquid, T-solid state physics: classic thermodynamics. State physics is quite correct (no need to be a rocket scientist, or rather, just be a rocket scientist to understand it). But classic entropy has enormous conceptual errors beneath fascinating maths.
3±∆ states of matter are the 3 ages of thermodynamic ensembles.
∆-1: PLASMA>∆: S-GAS>ST-LIQUID>Tiƒ-CRYSTAL SOLID>>∆+1 ‘MASS’ VS. << BIG BANG.
State Physics studies the 3 ‘ages of matter systems’, between their ¡-1 plasmatic birth and E=Mc², death.
The 3 temporal dimensions or ‘states’ of matter, $t-gaseous, energetic states, ST, liquid balanced states and ð§ informative solid states, form together the 3rd isomorphism of molecular matter in its ∆+1 social scale.
[Max. $t x Min. ð§ > $t≈ð§ > Max. ð§ x Min.$t]∆±1
We will study in this post the laws of ‘p$, past energetic gas’, ST, balanced, present liquids, and ƒð, future informative solids, and their ‘empirical cases’. The ¡-1 plasma age of conception belongs to the quantum ¡-2 planes, as does the ¡<∑¡-3 big-bang age of death into radiation; while its social evolution as a boson state of quark matter, ¡+2, belongs to the cosmological scale.
So matter can, in most cases, either dissolve into its lower planes, as plasma and radiation, or emerge as part of a denser boson state, proper of celestial bodies (strange and neutron stars, and black holes of BC-dark atoms and top quark matter):
In the graph, the $t entropy gas, ST reproductive liquid and §ð solid ages of matter and the dynamic:
¡-1 Plasma <=> $t gas <=> ST liquid <=> §ð solid <=> ¡+1 boson state
possible transformations between ages, which follow the multiplicity of time arrows happening in systems of minimal network organization, which anchors a system made of multiple states into a steady state and makes fast transformations between time ages impossible. Matter, however, allows all possible changes between ages, with preference for the natural flow of the time ages cycle: gas > liquid > solid. And this preference is observed in the minimal energy spent in those 2 fundamental time transformations of state.
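The energetic cost of those state transformations can be checked against tabulated latent heats; for water (textbook values at 1 atm, unrelated to the 5D formalism itself), leaving the liquid for the gas state costs far more energy than crossing between solid and liquid:

```python
# Approximate latent heats of water at 1 atm (standard textbook values):
HEAT_OF_FUSION_J_PER_G = 334.0         # solid -> liquid
HEAT_OF_VAPORIZATION_J_PER_G = 2260.0  # liquid -> gas

def energy_to_melt(grams):
    """Energy in joules to melt the given mass of ice at 0 C."""
    return grams * HEAT_OF_FUSION_J_PER_G

def energy_to_vaporize(grams):
    """Energy in joules to vaporize the given mass of water at 100 C."""
    return grams * HEAT_OF_VAPORIZATION_J_PER_G

# Vaporizing a gram costs roughly 6.8x more than melting it:
ratio = energy_to_vaporize(1.0) / energy_to_melt(1.0)
assert 6 < ratio < 7
```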
In the graph, the 5 Dimotions in thermodynamics are the 3±∆ states of matter: plasma, the entropic ∆-1 scale of ions not yet made into atoms; gas, the locomotion state of maximal movement; liquid, the balanced S=T energy state; crystal-solid, the in-form-ative state; and the 5th ‘state’, not in the graph, the different neutron, quark and Einstein-Bose condensates of ultra-dense matter, which should be the substance of top quark stars and strange stars (pulsars), an ∆+1 single ‘atom’ of enormous size.
From what we have said, obviously the themes of thermodynamics and quantum physics should be treated together (as in fact physicists do, albeit without the clarity of concepts we have provided with the help of fractal scalar space and ternary time arrows). And so the subject is as extensive as the planet of matter we live in and all its planes, ∆±3, which is the only entity that has all the information about itself.
I, just a particle point of thermodynamic grey matter with quantum thoughts at my particle level, shall reduce the analysis to the fundamental ‘first principles of thermodynamics’ and state physics. And time permitting, life duration always short, we will expand some of those themes in the future.
We have though added a seemingly lateral theme, the thermodynamics of black holes, as it is a huge error of classic physics with clear effects on the survival of mankind (since black holes do NOT evaporate, as we shall prove in this post’s section on the theme, so we should not by any means try to make them on planet Earth, as physicists are trying at the LHC).
State Physics can be further subdivided into a combined ‘scalar-temporal’ analysis:
-i: Plasma physics
$t:entropic Gaseous states
ST Reproductive, balanced energy≈information age: Liquid states
§ð-informative age: Solid states
+i: Boson Physics.
The properties of state physics thus derive from the general properties of the 3±i dimensional ages of time.
The 3 ages of evolution of matter are the energetic, gas age, the liquid, balanced Sp≈Tƒ and the solid Max. Tƒ state.
Rock, water and gas cycles are accordingly its 3 ages in motion:
3±i TIME STATES: $t: GAS (∆-1: FIELDS) < ST: LIQUID (∆-1: WAVES) > §ð: CRYSTALS (∆-1: PARTICLES)
In the graph, the worldcycle of matter. We can see how the ∆±1 birth and death phase (plasma, made of dissociated ions) is the same and closes the cycle.
The same cycle happens on the lower ∆-1 quantum scale, in which liquid waves, gaseous fields and solid particles play the same roles in quantum systems.
As those states happen in the ‘human scale’, ∆o±1, thermodynamics is the referential stience, albeit it must be fully corrected to avoid the traps of its ‘wrong laws’, which define a Universe with only a single arrow of entropy, as if it were merely a ‘gaseous Universe’, due to the complete misunderstanding of time arrows in classic physics.
The importance for humanity of thermodynamics, today somewhat relegated by the quantum world of electronic machines as humans and life become expendable to the new ‘robotic species’, is difficult to overstate.
We are NOT quantum beings as machines are, neither Gravitational, cosmological beings, as the galaxy is. We are thermodynamic beings, and this is the key science for the human system and the planet we live in.
Since in geological structures the interplay of the gas, liquid and solid cycles creates the conditions for Gaia to become a superorganism and its ∆-1 life scale to flourish.
Thermodynamics (original texts from the Britannica CD) and its generator equations
ENERGY. As we have said, Energy is a present, Max. e x i function, where entropy, understood as motion, is dominant, but there is form, hence there is in-form-ation in the motion. So its equations can be summarised in terms of lineal, expansive motions with a minimal form.
WORK, therefore, which is essentially the measure of ‘active energy’, that is, ‘energy in time’, does not exist if there is no open displacement, a lineal motion.
TEMPERATURE, on the other hand, is precisely a vibrational mode of energy, which closes its vibration in almost all systems in which we measure temperature – vibrational solids, gases studied in volumes closed by pressure; so it plays the role of a ‘time clock’ in the ∆-human scale.
So finally we arrive at HEAT, a very anthropomorphic concept, as are all those related to the present understanding of thermodynamics, but less important in ∆st. Heat is indeed just the entropic, scattering expansion of energy as entropy: ST (ENERGY) > S (HEAT); and so a form of entropy, which therefore allows us to write in simple terms the Generator ‘elements’ of thermodynamics:
Γ. Spe (Entropic Heat) < ST (Energy-Work) > Tiƒ (Temperature clocks)
The laws of thermodynamics.
What subtle correction must we then introduce into the laws of thermodynamics?
Obviously, as thermodynamics considers only a single plane of space-time and reduces the 3 arrows of time and its symmetric generator in space to a single entropic arrow, here is where there are huge misunderstandings, essentially the absurd idea that ‘the Universe is dying’ (Helmholtz).
Of course it would be, if it had only the negative time arrow of entropy and were a single plane; but what entropy ‘kills’, information resurrects, and the losses of ‘entropy’ in a given scale due to thermodynamic equilibrium are reversed by the order of Tiƒ systems, the ‘Demons of Maxwell’, which act creating organic order. Plus, the inverse arrows of the 5th dimension allow that the entropy losses, when we try to ‘move’ the energy of an ∆-1 scale to an ∆-scale, are compensated by the perfect synchronicity, without loss of energy, when we act from a whole onto a smaller part:
In the graph, thermodynamics changes completely in its interpretation when we consider multiple scales of time, as the loss of entropy in Simplex Physics (ab. Sx, or Æ) from ∆-1 to ∆ is offset by the inverse growth of in/form/ation from ∆ to ∆-1, since those motions do not have entropy (synchronicity of motion).
The most important laws of thermodynamics are thus transformed:
-0: The zeroth law of thermodynamics. When two systems are each in thermal equilibrium with a third system, the first two systems are in thermal equilibrium with each other. This property makes it meaningful to use thermometers as the “third system” and to define a temperature scale. And it is indeed true, but it does NOT mean that the systems are dead because they are in thermal equilibrium. It is all more subtle. A system in thermal equilibrium has a higher degree of order. It becomes therefore able to organise itself better with information, as in life systems, which require homeostasis.
So we shall instead generalise it to all systems with a different name (the willy-nilly game of ‘numbers’ for the awe-inspiring-digital groupie does not apply to GST, as other scales of reality do have different languages and we are unifying them):
The Law of Homeostasis, which states that:
“A system of fractal points joined by a present ∆-wave of ‘energy≈heat’ tends to distribute ‘democratically’ the energy and form of the wave to all its components, to ensure their internal balance.” And we put, as is customary in the Homologic Method of GST, a minimum of 3 examples from physical, biological and social sciences:
In other terms, systems tend to establish a just, distributive balance between the points of the network, each of which will receive a minimal amount of energy – call it a blood network.
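We can sketch this ‘democratic’ distribution numerically. The following is a toy model of my own (not part of the original text), assuming uniform pairwise coupling and equal heat capacities, with all node temperatures illustrative:

```python
# Toy sketch of the 'homeostatic' equalisation the zeroth law describes:
# N fractal points exchange energy until all settle on a common temperature.

def equalise(temps, coupling=0.1, steps=1000):
    """Relax a list of node temperatures toward the common network mean."""
    t = list(temps)
    n = len(t)
    for _ in range(steps):
        mean = sum(t) / n
        # each node moves a fraction of its gap to the network mean
        t = [x + coupling * (mean - x) for x in t]
    return t

nodes = equalise([300.0, 350.0, 280.0, 310.0])
# all nodes converge on the mean, 310 K; total energy (the sum) is conserved
print(nodes, sum(nodes))
```

Note that the update conserves the sum exactly at every step, which is the ‘democratic distribution without loss’ of the quoted law.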
-1st: The first law of thermodynamics, or the law of conservation of energy.
The change in a system’s internal energy is equal to the difference between the heat added to the system from its surroundings and the work done by the system on its surroundings. It is a trivial law as it is explained today, without understanding what energy is. And the most important fact about it: there is no work or energy expenditure in closed time-cycles. Essentially it means that Time does NOT work – time cycles are closed cycles which do NOT spend work-energy. This amazingly important fact, hardly understood philosophically by the wannabe gurus of the absolute, means ultimately that time is eternal, and so is the Universe, which has no manifest destiny, no lineal goal. Lineal time indeed would exhaust itself, as it would spend the energy of the Universe.
Again we must expand this law to include information and, at the same time, diminish its range by applying it only to exchanges within a single plane of existence; and since this is not the general rule – there are always leaks of energy and information up and down those scales – that restricts the accuracy of our measures.
‘Energy never dies; it constantly transforms back and forth into information, through the repetition of body-waves across ∆±1 planes of existence: Spe < ExI > Tiƒ’
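The classic single-plane first law, dU = Q − W, and its ‘closed time-cycles spend no net work-energy’ reading can be sketched with a few assumed joule values (my illustration, not from the original text):

```python
# Minimal numeric sketch of the first law on a single ∆-plane: dU = Q - W.

def internal_energy_change(heat_in, work_out):
    """First law of thermodynamics for a closed system: dU = Q - W."""
    return heat_in - work_out

dU = internal_energy_change(heat_in=500.0, work_out=200.0)
print(dU)  # 300.0 J stored as internal energy

# Over a closed cycle the state function U returns to its start, so the
# legs' energy changes sum to zero - the 'closed cycles do no net work'
# reading of the text. The (Q, W) legs below are assumed numbers.
cycle = [(500.0, 200.0), (-100.0, 150.0), (-400.0, -350.0)]
net = sum(internal_energy_change(q, w) for q, w in cycle)
print(net)  # 0.0
```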
-2nd: The second law of thermodynamics. Heat does not flow spontaneously from a colder region to a hotter region, or, equivalently, heat at a given temperature cannot be converted entirely into work.
Yet while this is true, it only applies to the transmission of energy from lower, ∆-1 ensembles into an ∆-whole, not vice versa: ∆-wholes synchronise the simultaneous motions of all their ∆-1 parts, achieving with their ‘lesser information’ and ‘simpler, larger motions’ a loss of zero entropy when they move all their parts, according to the direction of future set up by the whole.
So the error of physicists is, as usual, to generalise a local phenomenon of the ∆±1 thermodynamic scale, and to forget the balances obtained from the non-entropic motions handled by the whole/mind, which restores the balance of the system.
By ignoring the ∆@ elements of reality, thermodynamics loses any value as a philosophy of general global laws.
Such laws apply only to the entropy of a closed, single ∆-scale system, which tends toward an equilibrium state in which entropy – the scattering and equality of heat among all its ∆-1 elements – is at a maximum and no energy is available to do useful work at the ∆-level.
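The single-scale entropy bookkeeping the classic law describes is easy to compute: when a small heat packet flows from a hot reservoir to a cold one, the total classic entropy rises. A minimal sketch with illustrative numbers of my choosing:

```python
# Classic single-scale entropy bookkeeping: a heat packet Q flows from a
# hot reservoir (Th) to a cold one (Tc); total classic entropy increases.

def entropy_change(q, t_hot, t_cold):
    """Total dS = -Q/Th + Q/Tc for a small reservoir-to-reservoir transfer."""
    return -q / t_hot + q / t_cold

dS = entropy_change(q=100.0, t_hot=400.0, t_cold=300.0)
print(dS)  # positive (~0.083 J/K): the one-way 'arrow' of the closed scale
```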
This asymmetry between forward and backward processes gives rise to what is known as the “arrow of time” in classic thermodynamics, which as we said is a simplification of the 3 arrows of time, due to the error of a single space-time continuum, converted into a huge global error by extending it to every system.
Indeed, ‘a single entropic arrow of time’ for all phenomena, deduced from the study of expansive heat=entropy in steam machines, is a very local, reductionist, simplex analysis of time arrows. And to expand it to include all the planes of the Universe, all the beings, by reducing the 7 motions of reality to ‘heat, entropic motions’, is plainly bogus. The law merely becomes a single arrow of dying entropic time when we eliminate all the other scales and the S, T elements of the system. And we will return to that.
Even when considering the single arrow between the atomic and heat-human scales, the comprehension is completely blurred by the language – what entropy measures is not motions backwards and forwards in ‘absolute time’ but motions in the 5th dimension of social, scalar evolution, and backwards in that dimension, as the dissolution of wholes and their order.
Such as when we want to use and exhaust the motion of ∆-1 systems which are NOT organised by complex networks but merely, as humans do it, with ‘heating machines’, entropic ‘fire’ and other brutish systems: obviously the molecules and atoms of the lower scale have as little interest in ordering themselves perfectly and giving up their motion to those brutish wholes as a mass of humans has in being herded by a military thug.
Entropy appears in processes of minimal organisation, such as heating to extract the motion of individual ∆-1 elements.
However, when a system is fully organised according to 5D organic laws, entropy greatly diminishes, as in crystals, which have essentially zero entropy, or organisms, which increase the order of beings and diminish their entropy.
To deny the existence of fractal points that gauge information and order the Universe, of gravitational forces that balance expansive, electromagnetic entropy, is just a dogma of astounding arrogance on the part of physicists, who just want to construct reality with their ‘restricted matter laws’ and will not accept there is something more out there.
So the law should be rephrased regarding the conservation of motion and information:
‘In the whole Universe entropy does not exist. As the Universe is made of motion and curvature, which balance each other through all its ∆-scales, when a system becomes disordered and expands its entropy in an ∆-1 scale, the order is restored by the simultaneous, informative order of the larger ∆+1 wholes, which contract and synchronise the ∆-systems.’
So the ∆-n quantum scales of physical, electromagnetic entropy are balanced by the ∆+n scales of gravitational, only-attractive information. And we have to assess the total order of a system by studying at least its ∆±n scales together.
-The third law of thermodynamics. The entropy of a perfect crystal of an element in its most stable form tends to zero as the temperature approaches absolute zero. This allows an absolute scale for entropy to be established that, from a statistical point of view, determines the degree of randomness or disorder in a system.
So we come to the third law, which establishes absolute zero, where the thermodynamic effects of entropic disorder by heat no longer apply. We can say that ‘organic physical matter’ and its most important effects of order creation happen when thermodynamic entropy is minimal – phenomena such as superconductivity, superfluidity, bosons, etc. We talk then of zero as the relative ‘temperature’ of a mind, which creates a still map of reality that it will then project onto that reality, diminishing its entropy. In other words:
‘The Universe is filled with Maxwell’s demons’
Zero crystal entropy also tells us several things: a crystal, or perceptive physical Tiƒ mind, which maps out an ‘intelligible’ mirror image of the Universe inside itself, is cold and tends to total order and minimal motion; so it is the 3rd age of physical matter. But the crystal still moves, as it becomes the focus of the smaller ∆-2 pixels of information below it. So again, when we consider several scales, motion never stops.
Later on we shall therefore redefine those laws in terms of ∆@st, as the ∆@ elements do not exist in physics; they are the ∆ dual arrows of order of the previous pyramid and the ‘Maxwell demon @minds’ that tell the system where to be and where to go.
How then does including those ∆-scales beyond modify those laws? Obviously: by creating order. I.e. gravitation is a force of order, as it only attracts and ‘contracts’, in-form-ing a system; so it balances the tendency to entropy of ¥-rays.
So we must talk of a Universe which rescues the world from entropic death through its ∆• elements.
So, because thermodynamics developed rapidly during the 19th century in response to the need to optimise the performance of steam engines, it is wrong to expand by dogma to the whole Universe the laws of thermodynamics as previously written, which makes them applicable only to physical, molecular matter, the ∆±1 scales, and to biological systems – without understanding how order is restored in biological systems by their minds, and in ∆±1 by crystals and the future arrows of social evolution. As the system either has motion or form, often switching between both states – a fact only recognised by the pioneers of fractal physico-chemistry such as Mr. Mehaute – it clearly follows that when a thermodynamic system stops moving externally, it then subtly starts to create further internal order.
So the ‘particular’ laws of thermodynamics give a complete description of all changes in the energy state of any system and of its ability to perform useful work on its surroundings, but only when we make a ‘ceteris paribus’ analysis of that plane and its internal phenomena, disregarding the interchanges of energy and information with the ∆±1 scales.
In that sense thermodynamics also has a scalar structure, and so epistemologically we talk of 2 branches:
∆o: Classical thermodynamics, which does not involve the consideration of individual atoms or molecules.
∆-1>∆: Such concerns are the focus of the branch of thermodynamics known as statistical thermodynamics, or statistical mechanics, which expresses macroscopic thermodynamic properties in terms of the behaviour of individual particles and their interactions.
It has its roots in the latter part of the 19th century, when atomic and molecular theories of matter began to be generally accepted. And so a clear way to study the ‘errors’ of a single space-time analysis of entropy against one which includes at least a couple of ∆-scales is to see the differences between both scalar approaches. Since the key to understanding thermodynamics properly is to analyse how disorder in one scale, ∆-heat, in fact merely means that the ∆-1 scale wishes to create its proper order and motions.
So besides the study of the open and closed states of a thermodynamic system in a single ∆-scale, we need to include the ‘whole picture’ by adding the study of ‘closed and open’ systems across the 3 ∆±1 scales, asking questions such as:
‘Does this event transfer energy or information to the ∆±1 scales of the system, and if so, which arrow is dominant, ∆+1>∆ order or ∆-1<∆ entropy?’
The classic example is the beta-decay equations for the disorder of a particle, which seemed to miss spin (cyclical order) and entropy (energetic motions); the order was restored by including the ∆-1 neutrino quanta, which the process of death and devolution of the nucleon transfers, as all entropic phenomena do, to its lower, faster ∆-1 scale.
It is this type of balancing of the ‘books’, which physicists stubbornly refuse to do for entropic heat – adding to the mix gravitational in-form-ative forces and ‘@minds’, Maxwell’s demons – that explains the ‘dying universe’ errors of their philosophies of science.
Or in other words, what we call the ‘heat death’ balance or thermodynamic equilibrium merely means the ∆-1 scale ‘refuses’ to become ‘organised’ and synchronised by the ∆-scale, and prefers to remain in a herd, wave, ‘relatively free’ state. It is only entropy from the human point of view.
The application of thermodynamic principles begins by defining a system that is in some sense distinct from its surroundings. For example, the system could be a sample of gas inside a cylinder with a movable piston, an entire steam engine, a marathon runner, the planet Earth, a neutron star, a black hole, or even the entire universe. In general, systems are free to exchange heat, work, and other forms of energy with their surroundings.
However, this approach of classic thermodynamics is limited in its scope, as we must first define the system more precisely, in its 3 x 3 + 0 dimensions, to know if it is a complete system (with 3 co-existing scales, 3 topologies, 3 time ages and 1 mind-point), in which case there will not be entropy but rather information-creation by the mind on a closed system, growing its information at the expense of the external world – the case of life systems (x, x, x in the next graph):
When the system loses the mind-point, there is no internal element of order, and so the system is ‘ready’ to give up part of its information to an external agent. Yet if the enclosing membrane remains (a fact not easily obtained through time, as it is managed and maintained in isolation by the invagination and connection with its @mind-point), the system will remain in balance with no exchange of e or i (x, x, √) at the ∆+1 ‘mechanical’ level, leaking though disordered, entropic heat at the ∆-1 level. And a similar case happens when the system is only thermally isolated at the ∆-1 level: the system can then be used to cause work, as the mechanical, motion ∆-level is not isolated.
So what we know as classic thermodynamics applies to those systems in which either or both of the ∆-1 and ∆+1 quantum and mechanical scales are NOT isolated and a transfer of energy and motion in inverse fashion happens between them (open and closed systems); and to those systems the laws of thermodynamics apply when we do a ceteris paribus analysis from the point of view of a single scale – disregarding the order or entropy, ∆ or ∇, effects on those 2 other scales.
Only to such systems, which we shall call ‘thermodynamic engines’, do the classic laws apply. Those laws then refer basically to the conservation of the present energy of a system, where the present energy is given by the temperature, and its composite elements will be volume, or expansive entropy, and pressure, or increasing order, giving us the 1st key equation of thermodynamics:
P (t) x V (s) = nK T (exi content)
For example, for a gas in a cylinder with a movable piston, the state of the system is identified by the temperature, pressure, and volume of the gas; where the volume is proportional to the spatial expansion, OR ENTROPY PROPER, using the concept of ENTROPY as expansive motion defined in GST; the pressure is its inverse ‘pure form’ function, or arrow of order; and the nKT element is the present, energy x information parameter of the system, ‘its existential force’, calculated by multiplying its quanta of entropy-space, nK, and its speed of time, its ‘frequency of time vibration’, T.
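Numerically, the P(t) x V(s) = nKT reading is just the standard ideal-gas law P·V = N·k·T. A minimal sketch, where the particle count, volume and temperature are illustrative values of my choosing and k is the Boltzmann constant:

```python
# Numeric sketch of P(t) x V(s) = nKT as the ideal-gas law P*V = N*k*T.
K_B = 1.380649e-23   # Boltzmann constant, J/K (the 'quanta of entropy-space')
N = 2.5e22           # number of molecules (assumed)
T = 300.0            # temperature, K - the 'time frequency' parameter
V = 1.0e-3           # volume, m^3 - the 'spatial expansion' parameter

P = N * K_B * T / V  # pressure, Pa - the inverse 'order/form' parameter
print(P)             # ~1.0e5 Pa, roughly atmospheric pressure
```

Note that the product P·V recovers the N·k·T ‘present energy content’ regardless of how P and V individually trade off at fixed temperature.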
In such systems the fundamental law of the ‘immortality of time’ also applies, which states that:
‘a closed path of time has done no work, hence spent no potential energy, when it returns to its initial space-time condition’
So P, V, and kT are characteristic parameters that have definite values at each state and are independent of the way in which the system arrived at that state. In other words, any change in value of a property depends only on the initial and final states of the system, not on the path followed by the system from one state to another. Such properties are called state functions.
In contrast, when we talk of an open system (√, √, √), the final state is not one of balance, since there is a gain or loss of energy and information, which the external system absorbs or gives. So the final work done as the piston moves and the gas expands, and the heat the gas absorbs from its surroundings, depend on the detailed way in which the expansion occurs.
The behaviour of a complex thermodynamic system, such as Earth’s atmosphere, can be understood by first applying the principles of states and properties to its component parts—in this case, water, water vapour, and the various gases making up the atmosphere. By isolating samples of material whose states and properties can be controlled and manipulated, properties and their interrelations can be studied as the system changes from state to state. It is though not accurate then to use ‘ceteris paribus analyses’ of the system to uphold the wrong, limited classic laws of entropy. I.e. when we consider the Earth-Sun system we wrongly state that the order of the planet grows because the disorder of the star happens, giving up ¥-energy. But this does NOT consider the ∆+3 scale of gravitational, increasing order, which in fact will finally dominate, collapsing the star. It is then that we could talk of the whole solar system in ‘e x i equilibrium’, which classic thermodynamics called a…
Classic thermodynamics is concerned then with the different states of disorder, when there is a dynamic transfer of Pt, Vs and nkT (ST) between systems, which changes the system’s parameters of e x i; or with the different paths a system can use to get from one state to another, changing only one of those parameters – analyses of value for all other studies of single space-time systems at all other scales.
In that regard, a particularly important concept is thermodynamic equilibrium, in which there is no tendency for the state of a system to change spontaneously. For example, the gas in a cylinder with a movable piston will be at equilibrium if the temperature and pressure inside are uniform and if the restraining force on the piston is just sufficient to keep it from moving. The system can then be made to change to a new state only by an externally imposed change in one of the state functions, such as the temperature by adding heat or the volume by moving the piston. A sequence of one or more such steps connecting different states of the system is called a process. In general, a system is not in equilibrium as it adjusts to an abrupt change in its environment. For example, when a balloon bursts, the compressed gas inside is suddenly far from equilibrium, and it rapidly expands until it reaches a new equilibrium state.
However, the same final state could be achieved by placing the same compressed gas in a cylinder with a movable piston and applying a sequence of many small increments in volume (and temperature), with the system being given time to come to equilibrium after each small increment.
Such a process is said to be reversible because the system is at (or near) equilibrium at each step along its path, and the direction of change could be reversed at any point.
This example illustrates how two different paths can connect the same initial and final states.
The first is irreversible (the balloon bursts), and the second is reversible. The concept of reversible processes is something like motion without friction in mechanics.
It represents an idealized limiting case that is very useful in discussing the properties of real systems. Many of the results of thermodynamics are derived from the properties of reversible processes. And the first conclusion we obtain from that duality is indeed a general law of all systems: when the change is very fast and the system cannot ‘reorganise’, step by step of its frequency motions, the simultaneous location of all its parts, the system becomes disordered. As all entropic, death processes and big-bangs, ∆+1<<∆-1, are by definition fast transformations that erase the order of the system.
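The two paths between the same gas states can be sketched numerically: a free expansion (the bursting balloon) does no work, while a quasi-static isothermal expansion in many tiny equilibrium steps does the textbook reversible work N·k·T·ln(V2/V1). Gas amount and volumes below are illustrative assumptions of mine:

```python
import math

# Two paths between the same states (T, V1) -> (T, V2) of an ideal gas.
K_B = 1.380649e-23
N, T = 2.5e22, 300.0      # molecules, kelvin (assumed)
V1, V2 = 1.0e-3, 2.0e-3   # m^3 (assumed)

w_free = 0.0              # irreversible free expansion pushes against nothing

# many small near-equilibrium increments approximate the reversible work
steps, w_quasi, V = 10000, 0.0, V1
dV = (V2 - V1) / steps
for _ in range(steps):
    w_quasi += (N * K_B * T / V) * dV  # P dV work at each tiny step
    V += dV

w_exact = N * K_B * T * math.log(V2 / V1)  # closed form for the reversible path
print(w_free, w_quasi, w_exact)  # same end state, different work: path matters
```

The incremental sum converges on the closed form as the steps shrink, which is exactly the sense in which the reversible path is a limiting idealization.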
Yet when those changes are minimal, and the parameter of temporal order (slow processes with minute changes) is higher than the parameter of expansive space, the process is fully reversible, with no entropy: ∆<∆-1>∆.
The concept of temperature is fundamental then to any discussion of thermodynamics, as it is the parameter of frequency in time, hence of the speed of the system’s time clocks, which will vary all the other parameters of the system to define its existential SxT force. In all systems that force grows when S-imultaneity in space and size grow (spatial force) or when its temporal frequency and the timing of its actions-reactions grow (temperature speed), within the S≈T limits of internal balance allowed by the system (homeostasis), which for any system establishes an interval of temperature of maximal efficiency.
Thus Temperature is a measure of the density of energy and information, of the frequency of motion, of the quantity of temporal form, of the existential momentum of a thermodynamic ensemble of waves, particles and fields of the ∆±1 planes of existence, with ∆o humans at its middle point. As such, temperature is the frequency of the activity of a certain environment in those planes, which increases as we increase the parameters that raise that density, as in Pt x Vs = n Ks Tt.
Yet in any assembly of ∆-1 forms, the arrow of social evolution dominates the system. So when one ∆-1 element has a higher frequency of existence, it will share it among SIMILAR entities by colliding with them and transferring them energy. Without the existence of an attractor – @mind, Maxwell’s demon – in the system, the energy of the system will remain in a wave, steady-state equilibrium, provided the external membrane is closed; or else it will dissipate its entropy if the external ‘pressure membrane’ disappears.
Temperature is therefore a social form of energy that ∆-1 systems prefer to share, to equalise their parameters of energy and information with their close clone molecules.
So, when two objects are brought into thermal contact, heat will flow between them until they come into equilibrium with each other. When the flow of heat stops, they are said to be at the same temperature.
The zeroth law of thermodynamics formalizes this by asserting that if an object A is in simultaneous thermal equilibrium with two other objects B and C, then B and C will be in thermal equilibrium with each other if brought into thermal contact.
Object A can then play the role of a thermometer through some change in its physical properties with temperature, such as its volume or its electrical resistance.
With the definition of equality of temperature in hand, it is possible then to establish a temperature scale by assigning numerical values to certain easily reproducible fixed points. And we consider, in an opposite fashion to the classic thermodynamic laws, that:
‘When entropy increases in the system as a whole, an equal amount of order created by isotemperature equilibrium happens in the ∆-1 scale of the system’.
Work and energy
Energy in that sense has a precise meaning in classic, single ∆-scale physics that does not correspond to the concept in GST.
The word is derived from the Greek word ergon, meaning work, but the term work itself acquired a technical meaning with the advent of Newtonian mechanics. For example, a man pushing on a car may feel that he is doing a lot of work, but no work is actually done unless the car moves. The work done is then the product of the force applied by the man multiplied by the distance through which the car moves. If there is no friction and the surface is level, then the car, once set in motion, will continue rolling indefinitely with constant speed. The rolling car has something that a stationary car does not have—it has kinetic energy of motion equal to the work required to achieve that state of motion.
The introduction of the concept of energy in this way is of great value in mechanics because, in the absence of friction, energy is never lost from the system, although it can be converted from one form to another. What this means in GST is that closed motions, which are time motions, do not spend work or energy, because the Universe never spends its immortal closed motions, which are ultimately the substance of which it is made. YOU LIVE in a Universe of time in which spaces are just partial parts, open motions that invariably will return as a zero sum, in spatial, topological form, ages, or scales.
For example, if a coasting car comes to a hill, it will roll some distance up the hill before coming to a temporary stop. At that moment its kinetic energy of motion has been converted into its potential energy of position, which is equal to the work required to lift the car through the same vertical distance. After coming to a stop, the car will then begin rolling back down the hill until it has completely recovered its kinetic energy of motion at the bottom. In the absence of friction, such systems are said to be conservative because at any given moment the total amount of energy (kinetic plus potential) remains equal to the initial work done to set the system in motion.
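The conservative exchange between kinetic and potential energy in the frictionless car example works out exactly; a minimal sketch with an assumed mass and speed:

```python
# Frictionless 'conservative system': the car's kinetic energy at the
# bottom converts entirely into potential energy at the height where it
# momentarily stops. Mass and speed are illustrative assumed numbers.
g = 9.81                 # gravitational acceleration, m/s^2
m, v = 1000.0, 10.0      # kg and m/s (assumed)

ke = 0.5 * m * v ** 2    # kinetic energy at the bottom, J
h = v ** 2 / (2 * g)     # height at which all KE has become PE, m
pe = m * g * h           # potential energy at that height, J

print(ke, pe, h)         # ke == pe: no energy is lost without friction
```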
As the science of physics expanded to cover an ever-wider range of phenomena, it became necessary to include additional forms of energy in order to keep the total amount of energy constant for all closed systems (or to account for changes in total energy for open systems).
For example, if work is done to accelerate charged particles, then some of the resultant energy will be stored in the form of electromagnetic fields and carried away from the system as radiation. In turn the electromagnetic energy can be picked up by a remote receiver (antenna) and converted back into an equivalent amount of work. With his theory of special relativity, Albert Einstein realized that energy (E) can also be stored as mass (m) and converted back into energy, as expressed by his famous equation E = mc2, where c is the velocity of light. All of these systems are said to be conservative in the sense that energy can be freely converted from one form to another without limit. Each fundamental advance of physics into new realms has involved a similar extension to the list of the different forms of energy.
Yet because there was no concept of fractal scales, the relationship between those forms of energy and the relative present states of each scale, its different time clocks and quanta of space, was not clearly defined.
Thermodynamics encompasses all of these forms of energy, with the further addition of heat to the list of different kinds of energy.
However, heat – which in classic thermodynamics is a concept restricted to the subjective human desire to ‘be the focus and attention of all other scales’ – is fundamentally different from the others in that the conversion of work (or other forms of energy) into heat is not completely reversible. This is not the whole tale, but merely the duality of the human scales.
So this specific law of energy-information transfer means merely that part of the motion from ∆+1 < ∑∆-1 becomes absorbed, stolen, by the more informative ∆-1 entities, unless there is truly an organic network that establishes the simultaneity of all those motions (as in biological organisms):
∆ +1 ‹ Tiƒ ∆ + Spe ∆-1
That is, ∆+1 transfers part of its motion to the well-organised, synchronous ∆-particles of the lower scale, but also to its field, ∆-1, which won’t return it back.
(nt.1: When we also use the classic algebraic symbols ≤ and ≥ for smaller and bigger, the GST symbols < for expansive entropy and > for informative flow are substituted by ‹ and ›, to avoid confusions.)
In the example of the rolling car, some of the work done to set the car in motion is inevitably lost as heat due to friction, and the car eventually comes to a stop on a level surface. Even if all the generated heat were collected and stored in some fashion, it could never be converted entirely back into mechanical energy of motion. This fundamental limitation is expressed quantitatively by the second law of thermodynamics (see below).
The role of friction in degrading the energy of mechanical systems may seem simple and obvious, but the quantitative connection between heat and work, as first discovered by Count Rumford, played a key role in understanding the operation of steam engines in the 19th century and similarly for all energy-conversion processes today.
Total internal energy
Although classical thermodynamics deals exclusively with the macroscopic properties of materials—such as temperature, pressure, and volume—thermal energy from the addition of heat can be understood at the microscopic level as an increase in the kinetic energy of motion of the molecules making up a substance.
This is the proper way to define the system: not as one that loses energy but as one which transfers it to ∆-1 scales.
For example, gas molecules have translational kinetic energy that is proportional to the temperature of the gas: the molecules can rotate about their centre of mass, and the constituent atoms can vibrate with respect to each other (like masses connected by springs). Additionally, chemical energy is stored in the bonds holding the molecules together, and weaker long-range interactions between the molecules involve yet more energy. The sum total of all these forms of energy constitutes the total internal energy of the substance in a given thermodynamic state. The total energy of a system includes its internal energy plus any other forms of energy, such as kinetic energy due to motion of the system as a whole (e.g., water flowing through a pipe) and gravitational potential energy due to its elevation.
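A minimal sketch of the proportionality just described, in Python: the translational kinetic energy of an ideal-gas molecule, one contribution to the total internal energy. The constants are the standard CODATA values; the example temperature is an arbitrary illustration, not a figure from the text.

```python
K_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro's number, 1/mol

def mean_translational_ke(temperature_k: float) -> float:
    """Average translational kinetic energy per molecule: (3/2) k_B T."""
    return 1.5 * K_B * temperature_k

def molar_translational_energy(temperature_k: float) -> float:
    """Translational contribution to the internal energy per mole: (3/2) R T."""
    return mean_translational_ke(temperature_k) * N_A

# At 300 K each molecule carries ~6.2e-21 J of translational energy,
# about 3.7 kJ per mole -- only one piece of the total internal energy,
# which also includes rotation, vibration and chemical bond energy.
```

Rotational and vibrational terms would add further multiples of (1/2)RT per active degree of freedom, which is why the total internal energy, not this single term, defines the thermodynamic state.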
It is then that the concept of a ‘dying entropic universe’ no longer makes sense, reason why the 2nd law should be abolished in its present format. Let us now study in more depth each of those laws with its added corrections.
The first law of thermodynamics
The laws of thermodynamics are deceptively simple to state, but they are far-reaching in their consequences. The first law asserts that the total energy of a system plus its surroundings is conserved; in other words, the total energy of the universe remains constant as we keep extending a nested series of worlds in ∆, S and T scales and symmetries to infinity.
Yet since energy and its symmetric form, information, is a combination of entropy and form≈curvature, we deduce that the total curvature and entropy of the Universe is also conserved over the total cycle of times, in which a world cycle does not perform net work. Work is non-existent in the long term in the Universe: work never happens because all cycles are closed, so the total energy and information of the five-dimensional block of existences is completed. And yet it never ceases to create its details.
The first law is put into action by considering the flow of energy across the boundary separating a system from its surroundings. Consider the classic example of a gas enclosed in a cylinder with a movable piston. The walls of the cylinder act as the boundary separating the gas inside from the world outside, and the movable piston provides a mechanism for the gas to do work by expanding against the force holding the piston (assumed frictionless) in place. If the gas does work W as it expands, and/or absorbs heat Q from its surroundings through the walls of the cylinder, then this corresponds to a net flow of energy W − Q across the boundary to the surroundings. In order to conserve the total energy U, there must be a counterbalancing change
in the internal energy of the gas. The first law provides a kind of strict energy accounting system in which the change in the energy account (ΔU) equals the difference between deposits (Q) and withdrawals (W).

There is an important distinction between the quantity ΔU and the related energy quantities Q and W. Since the internal energy U is characterized entirely by the quantities (or parameters) that uniquely determine the state of the system at equilibrium, it is said to be a state function such that any change in energy is determined entirely by the initial (i) and final (f) states of the system: ΔU = Uf − Ui. However, Q and W are not state functions. Just as in the example of a bursting balloon, the gas inside may do no work at all in reaching its final expanded state, or it could do maximum work by expanding inside a cylinder with a movable piston to reach the same final state. All that is required is that the change in energy (ΔU) remain the same. By analogy, the same change in one’s bank account could be achieved by many different combinations of deposits and withdrawals. Thus, Q and W are not state functions, because their values depend on the particular process (or path) connecting the same initial and final states. Just as it is only meaningful to speak of the balance in one’s bank account and not its deposit or withdrawal content, it is only meaningful to speak of the internal energy of a system and not its heat or work content.
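The bank-account analogy can be sketched in a few lines of Python: ΔU = Q − W summed over any sequence of steps depends only on the totals, not on the path. The numbers are illustrative, not from the text.

```python
def internal_energy_change(steps):
    """steps: list of (heat_absorbed_Q, work_done_W) tuples for each
    process step. Returns the state-function change dU = sum(Q - W)."""
    return sum(q - w for q, w in steps)

# Two different paths between the same initial and final states:
path_a = [(100.0, 40.0)]                # one step: Q = 100 J, W = 40 J
path_b = [(30.0, 0.0), (70.0, 40.0)]    # same totals, different path

# Both give the same dU = 60 J, though Q and W differ step by step.
assert internal_energy_change(path_a) == internal_energy_change(path_b) == 60.0
```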
From a formal mathematical point of view, the incremental change dU in the internal energy is an exact differential (see differential equation), while the corresponding incremental changes d′Q and d′W in heat and work are not, because the definite integrals of these quantities are path-dependent. These concepts can be used to great advantage in a precise mathematical formulation of thermodynamics (see below Thermodynamic properties and relations).
The classic example of a heat engine is a steam engine, although all modern engines follow the same principles. Steam engines operate in a cyclic fashion, with the piston moving up and down once for each cycle. Hot high-pressure steam is admitted to the cylinder in the first half of each cycle, and then it is allowed to escape again in the second half. The overall effect is to take heat Q1 generated by burning a fuel to make steam, convert part of it to do work, and exhaust the remaining heat Q2 to the environment at a lower temperature. The net heat energy absorbed is then Q = Q1 − Q2. Since the engine returns to its initial state, its internal energy U does not change (ΔU = 0). Thus, by the first law of thermodynamics, the work done for each complete cycle must be W = Q1 − Q2. In other words, the work done for each complete cycle is just the difference between the heat Q1 absorbed by the engine at a high temperature and the heat Q2 exhausted at a lower temperature. The power of thermodynamics is that this conclusion is completely independent of the detailed working mechanism of the engine. It relies only on the overall conservation of energy, with heat regarded as a form of energy.
In order to save money on fuel and avoid contaminating the environment with waste heat, engines are designed to maximize the conversion of absorbed heat Q1 into useful work and to minimize the waste heat Q2. The Carnot efficiency (η) of an engine is defined as the ratio W/Q1—i.e., the fraction of Q1 that is converted into work. Since W = Q1 − Q2, the efficiency also can be expressed in the form η = W/Q1 = 1 − Q2/Q1.
If there were no waste heat at all, then Q2 = 0 and η = 1, corresponding to 100 percent efficiency. While reducing friction in an engine decreases waste heat, it can never be eliminated; therefore, there is a limit on how small Q2 can be and thus on how large the efficiency can be. This limitation is a fundamental law of nature—in fact, the second law of thermodynamics (see below).
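The cycle accounting above reduces to two one-line formulas. A sketch in Python, with arbitrary illustrative values:

```python
def engine_work(q1, q2):
    """Work per complete cycle from the first law with dU = 0: W = Q1 - Q2."""
    return q1 - q2

def efficiency(q1, q2):
    """Fraction of absorbed heat converted to work: eta = W/Q1 = 1 - Q2/Q1."""
    return 1.0 - q2 / q1

# Example: 1000 J absorbed at high temperature, 600 J exhausted ->
# 400 J of work per cycle and an efficiency of 0.4 (40 percent).
assert engine_work(1000.0, 600.0) == 400.0
assert abs(efficiency(1000.0, 600.0) - 0.4) < 1e-12
```

Note that nothing here depends on the engine's mechanism, only on the heat totals per cycle, which is exactly the point made in the text.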
Isothermal and adiabatic processes
Because heat engines may go through a complex sequence of steps, a simplified model is often used to illustrate the principles of thermodynamics. In particular, consider a gas that expands and contracts within a cylinder with a movable piston under a prescribed set of conditions. There are two particularly important sets of conditions. One condition, known as an isothermal expansion, involves keeping the gas at a constant temperature. As the gas does work against the restraining force of the piston, it must absorb heat in order to conserve energy. Otherwise, it would cool as it expands (or conversely heat as it is compressed). This is an example of a process in which the heat absorbed is converted entirely into work with 100 percent efficiency. The process does not violate fundamental limitations on efficiency, however, because a single expansion by itself is not a cyclic process.
The second condition, known as an adiabatic expansion (from the Greek adiabatos, meaning “impassable”), is one in which the cylinder is assumed to be perfectly insulated so that no heat can flow into or out of the cylinder. In this case the gas cools as it expands, because, by the first law, the work done against the restraining force on the piston can only come from the internal energy of the gas. Thus, the change in the internal energy of the gas must be ΔU = −W, as manifested by a decrease in its temperature. The gas cools, even though there is no heat flow, because it is doing work at the expense of its own internal energy. The exact amount of cooling can be calculated from the heat capacity of the gas.
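The "exact amount of cooling" mentioned above can be sketched from the heat capacity: with Q = 0 the first law gives ΔU = −W, and for an ideal gas ΔU = n·Cv·ΔT. The example assumes a monatomic ideal gas, for which Cv = (3/2)R; the work value is an arbitrary illustration.

```python
R = 8.314  # molar gas constant, J/(mol*K)

def adiabatic_temperature_drop(work_done_j, moles, cv_molar=1.5 * R):
    """dT = -W / (n * Cv). Default Cv = (3/2)R is the monatomic
    ideal-gas value; diatomic gases would use (5/2)R instead."""
    return -work_done_j / (moles * cv_molar)

# One mole of a monatomic gas doing ~1247 J of work against the piston
# cools by about 100 K, paid entirely out of its internal energy.
dT = adiabatic_temperature_drop(1247.1, 1.0)
assert abs(dT + 100.0) < 0.1
```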
Many natural phenomena are effectively adiabatic because there is insufficient time for significant heat flow to occur. For example, when warm air rises in the atmosphere, it expands and cools as the pressure drops with altitude, but air is a good thermal insulator, and so there is no significant heat flow from the surrounding air. In this case the surrounding air plays the roles of both the insulated cylinder walls and the movable piston. The warm air does work against the pressure provided by the surrounding air as it expands, and so its temperature must drop. A more-detailed analysis of this adiabatic expansion explains most of the decrease of temperature with altitude, accounting for the familiar fact that it is colder at the top of a mountain than at its base.
The second law of thermodynamics
The first law of thermodynamics asserts that energy must be conserved in any process involving the exchange of heat and work between a system and its surroundings.
A machine that violated the first law would be called a perpetual motion machine of the first kind because it would manufacture its own energy out of nothing and thereby run forever.
Such a machine would be impossible even in theory. However, this impossibility would not prevent the construction of a machine that could extract essentially limitless amounts of heat from its surroundings (earth, air, and sea) and convert it entirely into work. Although such a hypothetical machine would not violate conservation of energy, the total failure of inventors to build such a machine, known as a perpetual motion machine of the second kind, led to the discovery of the second law of thermodynamics. The second law of thermodynamics can be precisely stated in the following two forms, as originally formulated in the 19th century by the Scottish physicist William Thomson (Lord Kelvin) and the German physicist Rudolf Clausius, respectively:
A cyclic transformation whose only final result is to transform heat extracted from a source which is at the same temperature throughout into work is impossible.
A cyclic transformation whose only final result is to transfer heat from a body at a given temperature to a body at a higher temperature is impossible.
The two statements are in fact equivalent because, if the first were possible, then the work obtained could be used, for example, to generate electricity that could then be discharged through an electric heater installed in a body at a higher temperature. The net effect would be a flow of heat from a lower temperature to a higher temperature, thereby violating the second (Clausius) form of the second law. Conversely, if the second form were possible, then the heat transferred to the higher temperature could be used to run a heat engine that would convert part of the heat into work. The final result would be a conversion of heat into work at constant temperature—a violation of the first (Kelvin) form of the second law.
Central to the following discussion of entropy is the concept of a heat reservoir capable of providing essentially limitless amounts of heat at a fixed temperature. This is of course an idealization, but the temperature of a large body of water such as the Atlantic Ocean does not materially change if a small amount of heat is withdrawn to run a heat engine. The essential point is that the heat reservoir is assumed to have a well-defined temperature that does not change as a result of the process being considered.
|-$t: Gas states: Entropy
Entropy and efficiency limits
The concept of entropy was first introduced in 1850 by Clausius as a precise mathematical way of testing whether the second law of thermodynamics is violated by a particular process. The test begins with the definition that if an amount of heat Q flows into a heat reservoir at constant temperature T, then its entropy S increases by ΔS = Q/T. (This equation in effect provides a thermodynamic definition of temperature that can be shown to be identical to the conventional thermometric one.) Assume now that there are two heat reservoirs R1 and R2 at temperatures T1 and T2. If an amount of heat Q flows from R1 to R2, then the net entropy change for the two reservoirs is
ΔS = Q/T2 − Q/T1, (3)
which is positive, provided that T1 > T2. Thus, the observation that heat never flows spontaneously from a colder region to a hotter region (the Clausius form of the second law of thermodynamics) is equivalent to requiring the net entropy change to be positive for a spontaneous flow of heat. If T1 = T2, then the reservoirs are in equilibrium and ΔS = 0.

The condition ΔS ≥ 0 determines the maximum possible efficiency of heat engines. Suppose that some system capable of doing work in a cyclic fashion (a heat engine) absorbs heat Q1 from R1 and exhausts heat Q2 to R2 for each complete cycle. Because the system returns to its original state at the end of a cycle, its energy does not change. Then, by conservation of energy, the work done per cycle is W = Q1 − Q2, and the net entropy change for the two reservoirs is
ΔS = Q2/T2 − Q1/T1 ≥ 0.
This is the fundamental equation limiting the efficiency of all heat engines whose function is to convert heat into work (such as electric power generators). The actual efficiency is defined to be the fraction of Q1 that is converted to work (W/Q1), which is equivalent to equation (2). Since ΔS ≥ 0 implies Q2/Q1 ≥ T2/T1, the maximum efficiency for a given T1 and T2 is thus η = 1 − T2/T1, attained only in the reversible limit ΔS = 0.
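The reservoir bookkeeping for a heat engine can be sketched as code: the net entropy change ΔS = Q2/T2 − Q1/T1 must be non-negative, which caps the efficiency at the Carnot value 1 − T2/T1. The temperatures and heats are illustrative numbers, not from the text.

```python
def reservoir_entropy_change(q1, t1, q2, t2):
    """Net entropy change of the two reservoirs per engine cycle:
    dS = Q2/T2 - Q1/T1, which must be >= 0 for any real engine."""
    return q2 / t2 - q1 / t1

def carnot_efficiency(t_hot, t_cold):
    """Maximum possible efficiency between reservoirs at T1 and T2."""
    return 1.0 - t_cold / t_hot

# An engine between 500 K and 300 K can convert at most 40% of Q1:
assert abs(carnot_efficiency(500.0, 300.0) - 0.4) < 1e-12
# Claiming 50% (exhausting only 500 J of 1000 J) would make dS < 0:
assert reservoir_entropy_change(1000.0, 500.0, 500.0, 300.0) < 0
# Exhausting 600 J (the Carnot limit) gives dS = 0 -- just allowed:
assert reservoir_entropy_change(1000.0, 500.0, 600.0, 300.0) >= 0
```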
Entropy and heat death
The example of a heat engine illustrates one of the many ways in which the second law of thermodynamics can be applied. One way to generalize the example is to consider the heat engine and its heat reservoir as parts of an isolated (or closed) system—i.e., one that does not exchange heat or work with its surroundings. For example, the heat engine and reservoir could be encased in a rigid container with insulating walls. In this case the second law of thermodynamics (in the simplified form presented here) says that no matter what process takes place inside the container, its entropy must increase or remain the same in the limit of a reversible process. Similarly, if the universe is an isolated system, then its entropy too must increase with time. Indeed, the implication is that the universe must ultimately suffer a “heat death” as its entropy progressively increases toward a maximum value and all parts come into thermal equilibrium at a uniform temperature. After that point, no further changes involving the conversion of heat into useful work would be possible. In general, the equilibrium state for an isolated system is precisely that state of maximum entropy. (This is equivalent to an alternate definition for the term entropy as a measure of the disorder of a system, such that a completely random dispersion of elements corresponds to maximum entropy, or minimum information.)
Entropy and the arrow of time
The inevitable increase of entropy with time for isolated systems plays a fundamental role in determining the direction of the “arrow of time.” Everyday life presents no difficulty in distinguishing the forward flow of time from its reverse. For example, if a film showed a glass of warm water spontaneously changing into hot water with ice floating on top, it would immediately be apparent that the film was running backward because the process of heat flowing from warm water to hot water would violate the second law of thermodynamics. However, this obvious asymmetry between the forward and reverse directions for the flow of time does not persist at the level of fundamental interactions. An observer watching a film showing two water molecules colliding would not be able to tell whether the film was running forward or backward.
So what exactly is the connection between entropy and the second law? Recall that heat at the molecular level is the random kinetic energy of motion of molecules, and collisions between molecules provide the microscopic mechanism for transporting heat energy from one place to another. Because individual collisions are unchanged by reversing the direction of time, heat can flow just as well in one direction as the other. Thus, from the point of view of fundamental interactions, there is nothing to prevent a chance event in which a number of slow-moving (cold) molecules happen to collect together in one place and form ice, while the surrounding water becomes hotter. Such chance events could be expected to occur from time to time in a vessel containing only a few water molecules. However, the same chance events are never observed in a full glass of water, not because they are impossible but because they are exceedingly improbable. This is because even a small glass of water contains an enormous number of interacting molecules (about 10²⁴), making it highly unlikely that, in the course of their random thermal motion, a significant fraction of cold molecules will collect together in one place. Although such a spontaneous violation of the second law of thermodynamics is not impossible, an extremely patient physicist would have to wait many times the age of the universe to see it happen.
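A toy calculation makes the "exceedingly improbable" quantitative. A simplified stand-in for the ice fluctuation, assuming independent molecules: the chance that N molecules are all found in, say, the left half of a vessel falls off as (1/2)^N.

```python
def all_left_probability(n_molecules: int) -> float:
    """Probability that N independently moving molecules are all in the
    same half of the vessel at once: (1/2)**N (a crude stand-in for a
    spontaneous second-law-violating fluctuation)."""
    return 0.5 ** n_molecules

# With a handful of molecules, the fluctuation is common enough to see:
assert all_left_probability(4) == 0.0625        # 1 chance in 16
# With even 100 molecules it is already astronomically rare,
# and a glass of water holds ~1e24 of them:
assert all_left_probability(100) < 1e-30
```

This is why the second law is statistical: meaningless for a few molecules, effectively exact for 10²⁴ of them.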
The foregoing demonstrates an important point: the second law of thermodynamics is statistical in nature. It has no meaning at the level of individual molecules, whereas the law becomes essentially exact for the description of large numbers of interacting molecules. In contrast, the first law of thermodynamics, which expresses conservation of energy, remains exactly true even at the molecular level.
The example of ice melting in a glass of hot water also demonstrates the other sense of the term entropy, as an increase in randomness and a parallel loss of information. Initially, the total thermal energy is partitioned in such a way that all of the slow-moving (cold) molecules are located in the ice and all of the fast-moving (hot) molecules are located in the water (or water vapour). After the ice has melted and the system has come to thermal equilibrium, the thermal energy is uniformly distributed throughout the system. The statistical approach provides a great deal of valuable insight into the meaning of the second law of thermodynamics, but, from the point of view of applications, the microscopic structure of matter becomes irrelevant. The great beauty and strength of classical thermodynamics are that its predictions are completely independent of the microscopic structure of matter.
Most real thermodynamic systems are open systems that exchange heat and work with their environment, rather than the closed systems described thus far. For example, living systems are clearly able to achieve a local reduction in their entropy as they grow and develop; they create structures of greater internal energy (i.e., they lower entropy) out of the nutrients they absorb. This does not represent a violation of the second law of thermodynamics, because a living organism does not constitute a closed system.
In order to simplify the application of the laws of thermodynamics to open systems, parameters with the dimensions of energy, known as thermodynamic potentials, are introduced to describe the system. The resulting formulas are expressed in terms of the Helmholtz free energy F and the Gibbs free energy G, named after the 19th-century German physiologist and physicist Hermann von Helmholtz and the contemporaneous American physicist Josiah Willard Gibbs. The key conceptual step is to separate a system from its heat reservoir. A system is thought of as being held at a constant temperature T by a heat reservoir (i.e., the environment), but the heat reservoir is no longer considered to be part of the system. Recall that the internal energy change (ΔU) of a system is given by ΔU = Q − W,
where Q is the heat absorbed and W is the work done. In general, Q and W separately are not state functions, because they are path-dependent. However, if the path is specified to be any reversible isothermal process, then the heat associated with the maximum work (Wmax) is Qmax = TΔS. With this substitution the above equation can be rearranged as −Wmax = ΔU − TΔS.
Note that here ΔS is the entropy change just of the system being held at constant temperature, such as a battery. Unlike the case of an isolated system as considered previously, it does not include the entropy change of the heat reservoir (i.e., the surroundings) required to keep the temperature constant. If this additional entropy change of the reservoir were included, the total entropy change would be zero, as in the case of an isolated system. Because the quantities U, T, and S on the right-hand side are all state functions, it follows that −Wmax must also be a state function. This leads to the definition of the Helmholtz free energy F = U − TS,
such that, for any isothermal change of the system, ΔF = ΔU − TΔS
is the negative of the maximum work that can be extracted from the system. The actual work extracted could be smaller than the ideal maximum, or even zero, which implies that W ≤ −ΔF, with equality applying in the ideal limiting case of a reversible process. When the Helmholtz free energy reaches its minimum value, the system has reached its equilibrium state, and no further work can be extracted from it. Thus, the equilibrium condition of maximum entropy for isolated systems becomes the condition of minimum Helmholtz free energy for open systems held at constant temperature. The one additional precaution required is that work done against the atmosphere be included if the system expands or contracts in the course of the process being considered. Typically, processes are specified as taking place at constant volume and temperature in order that no correction is needed.

Although the Helmholtz free energy is useful in describing processes that take place inside a container with rigid walls, most processes in the real world take place under constant pressure rather than constant volume. For example, chemical reactions in an open test tube—or in the growth of a tomato in a garden—take place under conditions of (nearly) constant atmospheric pressure. It is for the description of these cases that the Gibbs free energy was introduced. As previously established, the quantity
−Wmax = ΔU − TΔS is a state function equal to the change in the Helmholtz free energy, ΔF. Suppose that the process being considered involves a large change in volume (ΔV), such as happens when water boils to form steam. The work done by the expanding water vapour as it pushes back the surrounding air at pressure P is PΔV. This is the amount of work that is now split out from Wmax by writing it in the form Wmax = W′max + PΔV,
where W′max is the maximum work that can be extracted from the process taking place at constant temperature T and pressure P, other than the atmospheric work (PΔV). Substituting this partition into the above equation for −Wmax and moving the PΔV term to the right-hand side then yields −W′max = ΔU + PΔV − TΔS.
This leads to the definition of the Gibbs free energy G = U + PV − TS,
such that, for any isothermal change of the system at constant pressure, ΔG = ΔU + PΔV − TΔS
is the negative of the maximum work W′max that can be extracted from the system, other than atmospheric work. As before, the actual work extracted could be smaller than the ideal maximum, or even zero, which implies that W′ ≤ −ΔG, with equality applying in the ideal limiting case of a reversible process. As with the Helmholtz case, when the Gibbs free energy reaches its minimum value, the system has reached its equilibrium state, and no further work can be extracted from it. Thus, the equilibrium condition becomes the condition of minimum Gibbs free energy for open systems held at constant temperature and pressure, and the direction of spontaneous change is always toward a state of lower free energy for the system (like a ball rolling downhill into a valley). Notice in particular that the entropy can now spontaneously decrease (i.e., TΔS can be negative), provided that this decrease is more than offset by the ΔU + PΔV terms in the definition of ΔG. As further discussed below, a simple example is the spontaneous condensation of steam into water. Although the entropy of water is much less than the entropy of steam, the process occurs spontaneously provided that enough heat energy is taken away from the system to keep the temperature from rising as the steam condenses.

A familiar example of free energy changes is provided by an automobile battery. When the battery is fully charged, its Gibbs free energy is at a maximum, and when it is fully discharged (i.e., dead), its Gibbs free energy is at a minimum. The change between these two states is the maximum amount of electrical work that can be extracted from the battery at constant temperature and pressure. The amount of heat absorbed from the environment in order to keep the temperature of the battery constant (represented by the TΔS term) and any work done against the atmosphere (represented by the PΔV term) are automatically taken into account in the energy balance.
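The sign of ΔG = ΔH − TΔS (with ΔH = ΔU + PΔV) decides the direction of spontaneous change at constant T and P. A sketch using the water/steam numbers that appear later in the text (ΔH = 40.65 kJ/mol, ΔS ≈ 109 J/(K·mol) for vaporization, with signs reversed for condensation):

```python
def gibbs_change(delta_h, temperature_k, delta_s):
    """dG = dH - T*dS, all in SI units (J, K, J/K). dG < 0 means the
    process is spontaneous at constant temperature and pressure."""
    return delta_h - temperature_k * delta_s

# Steam condensing to water: dH = -40650 J/mol, dS = -109 J/(K*mol).
# Below the boiling point the negative dH dominates -> spontaneous:
assert gibbs_change(-40650.0, 350.0, -109.0) < 0
# Above the boiling point the -T*dS term dominates -> the reverse
# process (boiling) is the spontaneous one:
assert gibbs_change(-40650.0, 400.0, -109.0) > 0
```

This is the "entropy can spontaneously decrease" case from the text: condensation lowers the system's entropy, yet runs downhill in free energy because enough heat is exported to the surroundings.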
Gibbs free energy and chemical reactions
All batteries depend on some chemical reaction of the form reactants → products
for the generation of electricity or on the reverse reaction as the battery is recharged. The change in free energy (−ΔG) for a reaction could be determined by measuring directly the amount of electrical work that the battery could do and then using the equation W′max = −ΔG. However, the power of thermodynamics is that −ΔG can be calculated without having to build every possible battery and measure its performance. If the Gibbs free energies of the individual substances making up a battery are known, then the total free energies of the reactants can be subtracted from the total free energies of the products in order to find the change in Gibbs free energy for the reaction, ΔG = ΣG(products) − ΣG(reactants).
Once the free energies are known for a wide variety of substances, the best candidates for actual batteries can be quickly discerned. In fact, a good part of the practice of thermodynamics is concerned with determining the free energies and other thermodynamic properties of individual substances in order that ΔG for reactions can be calculated under different conditions of temperature and pressure.

In the above discussion, the term reaction can be interpreted in the broadest possible sense as any transformation of matter from one form to another. In addition to chemical reactions, a reaction could be something as simple as ice (reactants) turning to liquid water (products), the nuclear reactions taking place in the interior of stars, or elementary particle reactions in the early universe. No matter what the process, the direction of spontaneous change (at constant temperature and pressure) is always in the direction of decreasing free energy.
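The tabulation procedure described above, products minus reactants, can be sketched in a few lines. The free-energy values in the table below are hypothetical placeholders chosen only to illustrate the bookkeeping, not measured data.

```python
def reaction_delta_g(g_table, reactants, products):
    """dG = sum G(products) - sum G(reactants); substance names
    index a table of tabulated Gibbs free energies."""
    return (sum(g_table[s] for s in products)
            - sum(g_table[s] for s in reactants))

# Hypothetical tabulated free energies, kJ/mol (illustrative only):
g_table = {"A": -50.0, "B": -30.0, "AB": -120.0}

# For the reaction A + B -> AB:
dG = reaction_delta_g(g_table, reactants=["A", "B"], products=["AB"])
assert dG == -40.0   # dG < 0: the reaction runs spontaneously
```

Because ΔG depends only on the end states, one table of per-substance values serves every reaction that can be written among its entries, which is exactly why such tables are worth compiling.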
Enthalpy and the heat of reaction
As discussed above, the free energy change W′max = −ΔG corresponds to the maximum possible useful work that can be extracted from a reaction, such as in an electrochemical battery. This represents one extreme limit of a continuous range of possibilities. At the other extreme, for example, battery terminals can be connected directly by a wire and the reaction allowed to proceed freely without doing any useful work. In this case W′ = 0, and the first law of thermodynamics for the reaction becomes ΔU = Q0 − PΔV,
where Q0 is the heat absorbed when the reaction does no useful work and, as before, PΔV is the atmospheric work term. The key point is that the quantities ΔU and PΔV are exactly the same as in the other limiting case, in which the reaction does maximum work. This follows because these quantities are state functions, which depend only on the initial and final states of a system and not on any path connecting the states. The amount of useful work done just represents different paths connecting the same initial and final states. This leads to the definition of enthalpy (H), or heat content, as H = U + PV.
Its significance is that, for a reaction occurring freely (i.e., doing no useful work) at constant temperature and pressure, the heat absorbed is Q0 = ΔU + PΔV = ΔH,
where ΔH is called the heat of reaction. The heat of reaction is easy to measure because it simply represents the amount of heat that is given off if the reactants are mixed together in a beaker and allowed to react freely without doing any useful work.

The above definition for enthalpy and its physical significance allow the equation for ΔG to be written in the particularly illuminating and instructive form ΔG = ΔH − TΔS.
Both terms on the right-hand side represent heats of reaction but under different sets of circumstances. ΔH is the heat of reaction (i.e., the amount of heat absorbed from the surroundings in order to hold the temperature constant) when the reaction does no useful work, and TΔS is the heat of reaction when the reaction does maximum useful work in an electrochemical cell. The (negative) difference between these two heats is exactly the maximum useful work −ΔG that can be extracted from the reaction.
Thus, useful work can be obtained by contriving for a system to extract additional heat from the environment and convert it into work. The difference ΔH − TΔS represents the fundamental limitation imposed by the second law of thermodynamics on how much additional heat can be extracted from the environment and converted into useful work for a given reaction mechanism. An electrochemical cell (such as a car battery) is a contrivance by means of which a reaction can be made to do the maximum possible work against an opposing electromotive force, and hence the reaction literally becomes reversible in the sense that a slight increase in the opposing voltage will cause the direction of the reaction to reverse and the cell to start charging up instead of discharging.

As a simple example, consider a reaction in which water turns reversibly into steam by boiling. To make the reaction reversible, suppose that the mixture of water and steam is contained in a cylinder with a movable piston and held at the boiling point of 373 K (100 °C) at 1 atmosphere pressure by a heat reservoir. The enthalpy change is ΔH = 40.65 kilojoules per mole, which is the latent heat of vaporization. The entropy change is ΔS = ΔH/T = 40,650/373 = 109 joules per kelvin per mole,
representing the higher degree of disorder when water evaporates and turns to steam. The Gibbs free energy change is ΔG = ΔH − TΔS. In this case the Gibbs free energy change is zero because the water and steam are in equilibrium, and no useful work can be extracted from the system (other than work done against the atmosphere). In other words, the Gibbs free energy per molecule of water (also called the chemical potential) is the same for both liquid water and steam, and so water molecules can pass freely from one phase to the other with no change in the total free energy of the system.
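The boiling-point example above can be checked numerically: at 373 K the entropy of vaporization is ΔH/T, so ΔG = ΔH − TΔS vanishes and liquid and vapour coexist in equilibrium.

```python
DELTA_H = 40650.0            # latent heat of vaporization of water, J/mol
T_BOIL = 373.0               # boiling point at 1 atm, K

delta_s = DELTA_H / T_BOIL   # entropy of vaporization, J/(K*mol)
delta_g = DELTA_H - T_BOIL * delta_s

# Entropy change is ~109 J/(K*mol), matching the figure in the text:
assert abs(delta_s - 109.0) < 0.5
# At the boiling point the Gibbs free energy change is zero -- the two
# phases have equal chemical potential and neither is favoured:
assert abs(delta_g) < 1e-8
```

Away from 373 K, ΔG would be nonzero and one phase would be favoured, as the Gibbs-energy discussion above describes.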
Thermodynamic properties and relations
In order to carry through a program of finding the changes in the various thermodynamic functions that accompany reactions—such as entropy, enthalpy, and free energy—it is often useful to know these quantities separately for each of the materials entering into the reaction. For example, if the entropies are known separately for the reactants and products, then the entropy change for the reaction is just the difference ΔS = S(products) − S(reactants),
and similarly for the other thermodynamic functions. Furthermore, if the entropy change for a reaction is known under one set of conditions of temperature and pressure, it can be found under other sets of conditions by including the variation of entropy for the reactants and products with temperature or pressure as part of the overall process. For these reasons, scientists and engineers have developed extensive tables of thermodynamic properties for many common substances, together with their rates of change with state variables such as temperature and pressure.
The science of thermodynamics provides a rich variety of formulas and techniques that allow the maximum possible amount of information to be extracted from a limited number of laboratory measurements of the properties of materials. However, as the thermodynamic state of a system depends on several variables—such as temperature, pressure, and volume—in practice it is necessary first to decide how many of these are independent and then to specify what variables are allowed to change while others are held constant. For this reason, the mathematical language of partial differential equations is indispensable to the further elucidation of the subject of thermodynamics.
Of especially critical importance in the application of thermodynamics are the amounts of work required to make substances expand or contract and the amounts of heat required to change the temperature of substances. The first is determined by the equation of state of the substance and the second by its heat capacity. Once these physical properties have been fully characterized, they can be used to calculate other thermodynamic properties, such as the free energy of the substance under various conditions of temperature and pressure.
In what follows, it will often be necessary to consider infinitesimal changes in the parameters specifying the state of a system. The first law of thermodynamics then assumes the differential form dU = d′Q − d′W. Because U is a state function, the infinitesimal quantity dU must be an exact differential, which means that its definite integral depends only on the initial and final states of the system. In contrast, the quantities d′Q and d′W are not exact differentials, because their integrals can be evaluated only if the path connecting the initial and final states is specified. The examples to follow will illustrate these rather abstract concepts.
Work of expansion and contraction
The first task in carrying out the above program is to calculate the amount of work done by a single pure substance when it expands at constant temperature. Unlike the case of a chemical reaction, where the volume can change at constant temperature and pressure because of the liberation of gas, the volume of a single pure substance placed in a cylinder cannot change unless either the pressure or the temperature changes. To calculate the work, suppose that a piston moves by an infinitesimal amount dx. Because pressure is force per unit area, the total restraining force exerted by the piston on the gas is PA, where A is the cross-sectional area of the piston. Thus, the incremental amount of work done is d′W = PAdx.
However, Adx can also be identified as the incremental change in the volume (dV) swept out by the head of the piston as it moves. The result is the basic equation d′W = PdV for the incremental work done by a gas when it expands. For a finite change from an initial volume Vi to a final volume Vf, the total work done is given by the integral

W = ∫Vi→Vf P dV. (22)
Equations of state
The equation of state for a substance provides the additional information required to calculate the amount of work that the substance does in making a transition from one equilibrium state to another along some specified path. The equation of state is expressed as a functional relationship connecting the various parameters needed to specify the state of the system. The basic concepts apply to all thermodynamic systems, but here, in order to make the discussion specific, a simple gas inside a cylinder with a movable piston will be considered.
The equation of state then takes the form of an equation relating P, V, and T, such that if any two are specified, the third is determined. In the limit of low pressures and high temperatures, where the molecules of the gas move almost independently of one another, all gases obey an equation of state known as the ideal gas law: PV = nRT, where n is the number of moles of the gas and R is the universal gas constant, 8.3145 joules per mole per K. In the International System of Units, energy is measured in joules, volume in cubic metres (m3), force in newtons (N), and pressure in pascals (Pa), where 1 Pa = 1 N/m2. A force of one newton moving through a distance of one metre does one joule of work. Thus, both the products PV and RT have the dimensions of work (energy). A P–V diagram would show the equation of state in graphical form for several different temperatures.
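As a quick sanity check on the ideal gas law (a sketch in Python; the standard molar volume of 22.414 litres at 0 °C is an assumption added here, not a figure from the text), one mole at 273.15 K should exert about one atmosphere:

```python
# Ideal gas law PV = nRT: one mole at 0 °C occupying the molar volume
# should exert about one atmosphere of pressure.
R = 8.3145                          # universal gas constant, J/(mol K)
n, T, V = 1.0, 273.15, 22.414e-3    # mol, K, m^3
P = n * R * T / V                   # pressure in pascals
print(round(P))                     # close to 1 atm = 101325 Pa
```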
To illustrate the path-dependence of the work done, consider three processes connecting the same initial and final states. The temperature is the same for both states, but, in going from state i to state f, the gas expands from Vi to Vf (doing work), and the pressure falls from Pi to Pf. According to the definition of the integral in equation (22), the work done is the area under the curve (or straight line) for each of the three processes. For processes I and III the areas are rectangles, and so the work done is

WI = Pi(Vf − Vi) and WIII = Pf(Vf − Vi),
respectively. Process II is more complicated because P changes continuously as V changes. However, T remains constant, and so one can use the equation of state to substitute P = nRT/V in equation (22) to obtain

WII = nRT ln(Vf/Vi)

for an (ideal gas) isothermal process.
WII is thus the work done in the reversible isothermal expansion of an ideal gas. The amount of work is clearly different in each of the three cases. For a cyclic process the net work done equals the area enclosed by the complete cycle.
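The path-dependence can be made concrete with a short sketch (Python; the particular values of n, T, Vi and Vf are arbitrary choices for illustration, not from the text):

```python
from math import log

# Path-dependence of expansion work between the same states i and f:
# process I expands at constant Pi, process III at constant Pf, and
# process II follows the isotherm P = nRT/V.
R, n, T = 8.3145, 1.0, 300.0
Vi, Vf = 0.010, 0.020                 # m^3, chosen arbitrarily
Pi, Pf = n*R*T/Vi, n*R*T/Vf           # ideal-gas pressures at the endpoints
W_I   = Pi * (Vf - Vi)                # rectangle at the initial pressure
W_II  = n*R*T*log(Vf/Vi)              # area under the isotherm
W_III = Pf * (Vf - Vi)                # rectangle at the final pressure
print(W_I, W_II, W_III)               # three different works, same endpoints
```

The three values differ even though the endpoints are identical, which is exactly what "d′W is not an exact differential" means in practice.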
Heat capacity and specific heat
As shown originally by Count Rumford, there is an equivalence between heat (measured in calories) and mechanical work (measured in joules) with a definite conversion factor between the two. The conversion factor, known as the mechanical equivalent of heat, is 1 calorie = 4.184 joules. (There are several slightly different definitions in use for the calorie. The calorie used by nutritionists is actually a kilocalorie.) In order to have a consistent set of units, both heat and work will be expressed in the same units of joules.
The amount of heat that a substance absorbs is connected to its temperature change via its molar specific heat c, defined to be the amount of heat required to change the temperature of 1 mole of the substance by 1 K. In other words, c is the constant of proportionality relating the heat absorbed (d′Q) to the temperature change (dT) according to d′Q = ncdT, where n is the number of moles. For example, it takes approximately 1 calorie of heat to increase the temperature of 1 gram of water by 1 K. Since there are 18 grams of water in 1 mole, the molar heat capacity of water is 18 calories per K, or about 75 joules per K. The total heat capacity C for n moles is defined by C = nc.
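The unit conversion in the last two paragraphs can be checked in a few lines (Python, using only the figures quoted above):

```python
# Molar specific heat of water from the calorie definition: 1 calorie
# raises 1 gram of water by 1 K, and one mole of water weighs 18 grams.
J_PER_CAL = 4.184            # mechanical equivalent of heat, J/cal
c_water = 18 * J_PER_CAL     # molar specific heat, J/(mol K)
print(c_water)               # about 75 J per mole per K, as stated
```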
However, since d′Q is not an exact differential, the heat absorbed is path-dependent and the path must be specified, especially for gases where the thermal expansion is significant. Two common ways of specifying the path are either the constant-pressure path or the constant-volume path. The two different kinds of specific heat are called cP and cV respectively, where the subscript denotes the quantity that is being held constant. It should not be surprising that cP is always greater than cV, because the substance must do work against the surrounding atmosphere as it expands upon heating at constant pressure but not at constant volume. In fact, this difference was used by the 19th-century German physicist Julius Robert von Mayer to estimate the mechanical equivalent of heat.
Heat capacity and internal energy
The goal in defining heat capacity is to relate changes in the internal energy to measured changes in the variables that characterize the states of the system. For a system consisting of a single pure substance, the only kind of work it can do is atmospheric work, and so the first law reduces to

dU = d′Q − PdV. (28)

Suppose now that U is regarded as being a function U(T, V) of the independent pair of variables T and V. The differential quantity dU can always be expanded in terms of its partial derivatives according to

dU = (∂U/∂T)V dT + (∂U/∂V)T dV, (29)

where the subscripts denote the quantity being held constant when calculating derivatives. Substituting this equation into dU = d′Q − PdV then yields the general expression

d′Q = (∂U/∂T)V dT + [(∂U/∂V)T + P] dV

for the path-dependent heat. For a path at constant volume, dV = 0 and, by definition of heat capacity, d′Q = CVdT, so the above equation then gives immediately

CV = (∂U/∂T)V.

For a temperature change at constant pressure, dP = 0, and, by definition of heat capacity, d′Q = CPdT, resulting in

CP − CV = [(∂U/∂V)T + P] (∂V/∂T)P. (35)

The first term, P(∂V/∂T)P, represents the additional atmospheric work that the system does as it undergoes thermal expansion at constant pressure, and the second term, involving (∂U/∂V)T, represents the internal work that must be done to pull the system apart against the forces of attraction between the molecules of the substance (internal stickiness). Because there is no internal stickiness for an ideal gas, this term is zero, and, from the ideal gas law, the remaining partial derivative is (∂V/∂T)P = nR/P. Putting this back leads to CP − CV = nR, or cP − cV = R for the molar specific heats. For example, for a monatomic ideal gas (such as helium), cV = 3R/2 and cP = 5R/2 to a good approximation. cVT represents the amount of translational kinetic energy possessed by the atoms of an ideal gas as they bounce around randomly inside their container. Diatomic molecules (such as oxygen) and polyatomic molecules (such as water) have additional rotational motions that also store thermal energy in their kinetic energy of rotation. Each additional degree of freedom contributes an additional amount R/2 to cV.
Because diatomic molecules can rotate about two axes and polyatomic molecules can rotate about three axes, the values of cV increase to 5R/2 and 3R respectively, and cP correspondingly increases to 7R/2 and 4R. (cV and cP increase still further at high temperatures because of vibrational degrees of freedom.) For a real gas such as water vapour, these values are only approximate, but they give the correct order of magnitude. For example, the correct values are cP = 37.468 joules per K (i.e., 4.5R) and cP − cV = 9.443 joules per K (i.e., 1.14R) for water vapour at 100 °C and 1 atmosphere pressure.
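The equipartition counting above can be sketched in a few lines (Python; the R/2-per-degree-of-freedom rule is the classical approximation, ignoring the vibrational contributions mentioned in parentheses):

```python
# Equipartition estimate of molar heat capacities: each quadratic degree
# of freedom contributes R/2 to cV, and cP = cV + R for an ideal gas.
R = 8.3145

def molar_heats(dof):
    """Return (cV, cP) in J/(mol K) for `dof` degrees of freedom."""
    cV = dof * R / 2
    return cV, cV + R

print(molar_heats(3))   # monatomic, e.g. helium: (3R/2, 5R/2)
print(molar_heats(5))   # diatomic, e.g. oxygen: (5R/2, 7R/2)
print(molar_heats(6))   # polyatomic, e.g. water vapour: (3R, 4R)
```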
Entropy as an exact differential
Because the quantity dS = d′Qmax/T is an exact differential, many other important relationships connecting the thermodynamic properties of substances can be derived. For example, with the substitutions d′Q = TdS and d′W = PdV, the differential form (dU = d′Q − d′W) of the first law of thermodynamics becomes (for a single pure substance)

dU = TdS − PdV.
The advantage gained by the above formula is that dU is now expressed entirely in terms of state functions in place of the path-dependent quantities d′Q and d′W. This change has the very important mathematical implication that the appropriate independent variables for the internal energy are now S and V in place of T and V.
This replacement of T by S as the most appropriate independent variable for the internal energy of substances is the single most valuable insight provided by the combined first and second laws of thermodynamics. With U regarded as a function U(S, V), its differential dU is

dU = (∂U/∂S)V dS + (∂U/∂V)S dV.
A comparison with the preceding equation shows immediately that the partial derivatives are

(∂U/∂S)V = T and (∂U/∂V)S = −P. (41)
Furthermore, the cross partial derivatives

∂²U/∂V∂S and ∂²U/∂S∂V (42)

must be equal because the order of differentiation in calculating the second derivatives of U does not matter. Equating the right-hand sides of the above pair of equations then yields the first Maxwell relation

(∂T/∂V)S = −(∂P/∂S)V.
The other three Maxwell relations follow by similarly considering the differential expressions for the thermodynamic potentials F(T, V), H(S, P), and G(T, P), with independent variables as indicated. The results are

(∂S/∂V)T = (∂P/∂T)V, (∂T/∂P)S = (∂V/∂S)P, and (∂S/∂P)T = −(∂V/∂T)P. (44)
As an example of the use of these equations, equation (35) for CP − CV contains the partial derivative (∂U/∂V)T, which vanishes for an ideal gas and is difficult to evaluate directly from experimental data for real substances. The general properties of partial derivatives can first be used to write it in the form

(∂U/∂V)T = T(∂S/∂V)T − P.
Combining this with equation (41) for the partial derivatives together with the first of the Maxwell equations from equation (44) then yields the desired result

(∂U/∂V)T = T(∂P/∂T)V − P. (46)

The quantity (∂P/∂T)V comes directly from differentiating the equation of state. For an ideal gas

(∂P/∂T)V = nR/V = P/T, (47)

and so (∂U/∂V)T = T(P/T) − P is zero, as expected. The departure of (∂U/∂V)T from zero for real substances is a measure of the internal forces of attraction between their molecules.
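Equation (46) can be exercised numerically. The sketch below (Python) evaluates T(∂P/∂T)V − P by finite differences for an ideal gas and for a van der Waals gas; the van der Waals constants a and b for water are standard tabulated values added here for illustration, not figures from the text:

```python
# Numerical check of equation (46), (dU/dV)_T = T(dP/dT)_V − P, for an
# ideal gas and a van der Waals gas, using a central-difference derivative.
R = 8.3145

def dP_dT(P, T, V, h=1e-6):
    """Central-difference estimate of (dP/dT) at constant V."""
    return (P(T + h, V) - P(T - h, V)) / (2 * h)

ideal = lambda T, V: R * T / V                    # 1 mole, PV = RT
a, b = 0.5536, 3.049e-5                           # van der Waals constants
vdw = lambda T, V: R * T / (V - b) - a / V**2     # 1 mole of water vapour

T, V = 373.0, 0.030                               # roughly steam at 100 °C
print(T * dP_dT(ideal, T, V) - ideal(T, V))       # ~0: no internal stickiness
print(T * dP_dT(vdw, T, V) - vdw(T, V))           # ~a/V**2 > 0: stickiness
```

The positive value for the van der Waals gas is exactly the "internal stickiness" term: energy must be supplied to pull the molecules apart as the gas expands.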
The Clausius-Clapeyron equation
Phase changes, such as the conversion of liquid water to steam, provide an important example of a system in which there is a large change in internal energy with volume at constant temperature. Suppose that the cylinder contains both water and steam in equilibrium with each other at pressure P, and the cylinder is held at constant temperature T. The pressure remains equal to the vapour pressure Pvap as the piston moves up, as long as both phases remain present. All that happens is that more water turns to steam, and the heat reservoir must supply the latent heat of vaporization, λ = 40.65 kilojoules per mole, in order to keep the temperature constant.
The results of the preceding section can be applied now to find the variation of the boiling point of water with pressure. Suppose that as the piston moves up, 1 mole of water turns to steam. The change in volume inside the cylinder is then ΔV = Vgas − Vliquid, where Vgas = 30.143 litres is the volume of 1 mole of steam at 100 °C, and Vliquid = 0.0188 litre is the volume of 1 mole of water. By the first law of thermodynamics, the change in internal energy ΔU for the finite process at constant P and T is ΔU = λ − PΔV.
The variation of U with volume at constant T for the complete system of water plus steam is thus

(∂U/∂V)T = ΔU/ΔV = λ/ΔV − P.
A comparison with equation (46) then yields λ/ΔV − P = T(∂P/∂T)V − P, so that

(∂P/∂T)V = λ/(TΔV). (49)

However, for the present problem, P is the vapour pressure Pvapour, which depends only on T and is independent of V. The partial derivative is then identical to the total derivative

dPvap/dT = λ/(TΔV), (50)

giving the Clausius-Clapeyron equation.
This equation is very useful because it gives the variation with temperature of the pressure at which water and steam are in equilibrium—i.e., the boiling temperature. An approximate but even more useful version of it can be obtained by neglecting Vliquid in comparison with Vgas and using

Vgas = RT/Pvap (52)

from the ideal gas law. The resulting differential equation can be integrated to give ln(P2/P1) = −(λ/R)(1/T2 − 1/T1).
For example, at the top of Mount Everest, atmospheric pressure is about 30 percent of its value at sea level. Using the values R = 8.3145 joules per K and λ = 40.65 kilojoules per mole, the above equation gives T = 342 K (69 °C) for the boiling temperature of water, which is barely enough to make tea.
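The Mount Everest figure follows directly from the integrated equation solved for T; a quick check in Python, using the values quoted in the text:

```python
from math import log

# Boiling point at reduced pressure from the integrated Clausius-Clapeyron
# relation ln(P/P0) = −(λ/R)(1/T − 1/T0), solved for T.
R, lam = 8.3145, 40.65e3       # gas constant J/(mol K); latent heat J/mol
T0, ratio = 373.15, 0.30       # sea-level boiling point; P/P0 on Everest
T = 1.0 / (1.0/T0 - (R/lam) * log(ratio))
print(round(T))                # 342 K (69 °C), matching the text
```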
The sweeping generality of the constraints imposed by the laws of thermodynamics makes the number of potential applications so large that it is impractical to catalog every possible formula that might come into use, even in detailed textbooks on the subject. For this reason, students and practitioners in the field must be proficient in mathematical manipulations involving partial derivatives and in understanding their physical content.
One of the great strengths of classical thermodynamics is that the predictions for the direction of spontaneous change are completely independent of the microscopic structure of matter, but this also represents a limitation in that no predictions are made about the rate at which a system approaches equilibrium. In fact, the rate can be exceedingly slow, such as the spontaneous transition of diamonds into graphite. Statistical thermodynamics provides information on the rates of processes, as well as important insights into the statistical nature of entropy and the second law of thermodynamics.
The 20th-century English scientist C.P. Snow explained the first three laws of thermodynamics, respectively, as:
- You cannot win (i.e., one cannot get something for nothing, because of the conservation of matter and energy).
- You cannot break even (i.e., one cannot return to the same energy state, because entropy, or disorder, always increases).
- You cannot get out of the game (i.e., absolute zero is unattainable because no perfectly pure substance exists).
Now, if this were the truth, ours would be a very badly rigged, dying Universe; but law 2 is false. Yes, you cannot win: all is ultimately a zero sum, as energy returns and you die for others to live. Yes, you cannot get out, as motion is eternal and so there is always a remnant ‘yang’ in the ‘yin’… a seed of thermal energy which, when the quantum or mass state disorders, becomes the ‘reproductive seed’ for new ‘Boltzs’ of temperature to activate. But you can break even through present reproduction, as we all will be repeated again.
The concept of Entropy: thermodynamic parameters, order and emergence
The relationship between ∆-1, the molecular state and ∆º, the temperature state uncovers basic relationships and parameters of ∆ºst molecular, matter systems of physics.
In the translation of sciences to stiences, we always depart from a theoretical minimum of GST knowledge of the ternary ∆ºst±1 symmetries of the being and its parts, which is what we shall find described in a non-orderly way in science. So it happens in thermodynamics, which describes the ternary parts of physical systems on both levels through…
Boltzmann’s principle and the concept of entropy.
In Boltzmann’s definition, entropy is a measure of the number of possible microscopic states (or ∆-1: molecular microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties (or ∆º Temperature macrostate).
To understand what microstates and macrostates are, consider the example of a gas in a container.
At a microscopic level, the gas consists of a vast number of freely moving atoms, which occasionally collide with one another and with the walls of the container.
Then the ∆-1 microstate of the system is a description of its Γenerator equation in terms of:
∆-1: Spe: positions in the field of the system & momenta≈ St-wave>T-particle parameters of all its ∆º=∑∆-1 atoms.
In principle, all the physical properties of the system are determined by its microstate, because physicists, without knowing it, are indeed using the position in the entropic field and the momenta (st: body wave>tiƒ: particle head), which explains the whole ternary elements of each molecular ‘unit’ of the ∑∆-1:∆º atomic ensemble.
So what thermodynamics will do is to ‘translate’ the parameters of ∆-1 microstates into the parameters of ∆º macrostates, to be useful for human-scale ab=use of the happy, free atoms in their chaotic micro-state.
And here, humans will find a natural resistance of ∆º atomic entities to be ‘herded, ordered and stripped of their vital energy’ to do ‘work’ for human ginormous a$$holes – to put some irony on the inverse human pov. Atoms do NOT want to enslave themselves, behave, give us all their energy and drop dead; alas! that is taken as a sure sign the Universe is imperfect, entropic and that it will die.
Not so – merely, atoms will try, as humans under an orderly dictator, to conserve a minimum freedom and just stay ‘hot’, disordered, as long as they can, while humans will try to do as all farmers: limit the entropy of their herd, encircling it in an external Spe-membrane which puts some pressure on it to become ordered. And this game of entropy at the micro-state vs. pressure/encircling order at the macro-state is what thermodynamics studies (of course without any vital, organic Maxwellian demons involved).
Now, because the number of atoms in any ∆-1 ensemble of finitesimals is so large, for the herding to work the ∆º being will consider all its individual finitesimals indistinguishable, which is a pre-requisite to organise huge herds as generic ‘fields’ susceptible to be ordered by the ∆º-mind being with ‘huge’, simple changes in the parameters of S, St, T (entropy, energy, form) of the herded mass.
In brief, the ∆º element orders wholesale with general homogeneous changes in the S, T, and ST parameters that affect the whole. This we can observe in history in the 800-year cycles of nomadic, entropic weapon-makers destroying fertile crescent cultures, when the Earth (which might be macro-managing its evolution into the age of metals or mechanocene) produces huge heat changes of climate that affect the whole homogeneously; and it is how physical systems manage herds of atoms, with changes in magnetic domain ‘walls’ that influence ensembles of a million atoms.
So the first element for the herders’ whole-singularities is to encircle the ensemble with a closed wall – the Earth, as we forecast and as has recently been found, manages with its singularity core and ‘chimneys’ to the surface the weather cycles and glaciations that define its evolution; the electric charge singularity controls with its encircling magnetic walls the structure of its atomic ensembles; and humans control heat-energy with pressure walls that encircle an ensemble of atoms. And voila! suddenly we have the ‘self-similar’ ∆º,¹ macro-state equivalents to ST>t momenta and Spe: position parameters to do ‘equations of thermodynamics’, which will use the ∆nalysis maths (diffusion equations from the ∆º perspective, entropy equations from the ∆-1 perspective) to compare both scales.
And what makes this fascinating for ∆ºst systems of any kind is that we can extract by the isomorphic method general laws of GST from thermodynamics, as it is mathematically the most profound analysis of an ∆º±1 exchange of flows of Sp-entropy, ST-energy and T-form once we correct the conceptual understanding of its equations of micro and macro states.
The macro-state parameters.
It follows from all of this that for the ginormous man or human mechanism trapping little darling atoms, the details of the motion of those individual atoms are mostly irrelevant to the behavior of the system as a whole… provided the system is in thermodynamic equilibrium; that is, the behaviour of each atom is rather indistinguishable, with a similar mean ‘energy=temperature’ in its body-wave actions.
Then the system can be adequately described by a handful of macroscopic quantities, called “thermodynamic variables”:
The total energy E=st, which will be equivalent to the product of its volume V=vital space and the pressure P of the vital space on its outer membrane… and finally its temperature, T. So what do those 4 elements mean in ∆ºst terms? Remember we need only three to describe a being in a space-time ternary symmetry, as long as we keep it ‘simple’ in a single plane.
So indeed, the three first elements define what ‘ENERGY body-waves’ ARE always: the vital space or open ball (topologically speaking) of the system, which does NOT include the singularity that ‘traverses’ the system along ∆±1 co-existing scales as the soul-brain that ‘perceives’ it all in scales, as a ‘scalar’ wholeness, nor the encircling Spe-membrane.
Yet we do measure ENERGY = PRESSURE × VOLUME, because the volume is the vital space of the system and the pressure is the manifestation of that energy on the herder’s wall. So he can extract work from the vital energy pressing on the wall (not included, so we are still measuring energy, the vital space, but absorbing that pressure). Thus we have successfully parametrised the microstate into a macrostate in ‘equilibrium’ with the wall of the herder through pressure.
And so it is only left to describe the ‘y’ singularity, which connects the ∆-1 and ∆º scales, as a whole and so must be a ‘scalar parameter’ or ‘quantitative value’ that explains the whole. And alas! this is indeed the scalar fourth parameter of ‘temperature’.
The macrostate of the system is then a description of its four thermodynamic variables.
As we said, we can use those crystal clear (: now 🙂 concepts for other less ‘observable’ scales, and we can indeed apply them to mechanics, where the ∆º±1 scalar parameter will be mass, and its geometric point-locus will be the centre of mass, which maintains its fixed-singularity stable position (equivalent to the concept of thermodynamic equilibrium, measuring the same temperature in the whole ensemble) through any motion, including rotary motions of the whole; and as long as it is in balance (centre of gravity below torque) the whole system will be in gravitational equilibrium – themes retaken in the ∆+1 post on mechanics and gravitation.
All that we have said applies also to the quantum state, regarding the ‘full description’ we can make of a quantum system by considering its position and momenta, which encloses (with minor corrections on abstract quantum copenhagen bullshit) all the information about the being (position= field; momenta: wave-particle duality). And the equivalent ‘four vector’ formalism, a modern homologous way to describe all kinds of physical systems, was first born in Relativity:
In the graph, a 4 vector: In relativity, space-time coordinates (external field position) and the energy/momentum of a wave-particle (internal com≈position) are often expressed in four-vector form. They are defined so that the length of a four-vector is invariant=in equilibrium under a coordinate transformation.
Ultimately all those different formalisms of physics are homologous and always describe the 3 ST or 4 ∆ºST±1 elements of a being: its field position, wave-particle duality and scalar ∆º ‘wholeness that balances the being in equilibrium across ∆±1 scales’ (scalar parameter). The 4-vector is just the ‘geometric’ version, according to the math duality of temporal, numerical algebraic solutions with symmetric spatial, topological ones.
Back to thermodynamics out of the isomorphic, homologic method.
Let us stress again that the ∆-1 (position-momenta)>∆º (e,p,v,t) equivalence simplifies the higher information (5D metric) of the ∆-1 microstate, for which we would need to write down an impractically long list of numbers, whereas specifying a macrostate requires only a few numbers (E, V, T, P), as larger wholes have paradoxically less information (see above 5D graph).
However, thermodynamic equations only describe the macrostate of a system adequately when this system is in equilibrium – when it has a mind-point=scalar parameter (temperature, centre of mass) that balances it. Non-equilibrium situations can generally not be described by a small number of variables. And this works for all systems of nature, which need a Tiƒ parameter of balance for them to organise. Yet entropy-only physicists consider thermal equilibrium the ‘death of the system’, from the human observer’s view, as obviously an ordered system cannot die=release entropy for a larger ∆º system to ab=use it. On the contrary, the existence of Tiƒ-scalar self-centred numbers/points proves the sentient, vital, orderly, fractal, organic nature of the Universe, in all its scales, which always tend to balance the s,st,t parameters across all its citizens-cells-atoms (S,B,P systems).
As a simple example, consider adding a drop of food coloring to a glass of water. The food coloring diffuses in a complicated manner, which is in practice very difficult to predict precisely. However, after sufficient time has passed the system will reach a uniform color, which is much less complicated to describe. Actually, the macroscopic state of the system will be described by a small number of variables only if the system is at global thermodynamic equilibrium.
Thirdly, because there is more information in ∆-1 complex assemblies of micro points, more than one microstate can correspond to a single macrostate.
In fact, for any given macrostate, there will be a huge number of microstates that are consistent with the given values of E, V, P, T.
But this again is a feature of all systems, whose ‘languages, micro-points and ∆-1 states have more freedom and variations than the ∆+i states’. And it is one of the most beautiful proofs of the existence of a scalar god, on top of the previous graph-pyramid of ∆-scales, which I have been writing about for 30 years to the chagrin and mockery of scientists (when I cared to try to enlighten them:), as the 0-mind of the Universe; since at the end, in all systems there will be a ‘whole, Tiƒ’ with as little information as a scalar number/ratio/constant that resumes what all the other scales have in common – and ultimately is the GST of this ‘unification theory.com’.
Entropy in classic physics… is thus related only laterally to the GST definition of entropy=lineal expansive motion=disorder (taken from the wider, vaguer concept of philosophical entropy of physicists and its arrow of time), but rather to the fascinating concept of ‘all the possibilities’ of evolution of a system into the future, which basically are 3±0, moving along the 3 arrows of s, st, t of a single plane, and/or the ∆±1 arrows of emergence and dissolution out of a given ∆º plane.
So when we put together the 2 concepts of entropy in mathematical physics, the entropy of thermodynamics and the entropy of the theory of information, we realise that in the midst of its philosophical mantras, physicists are tinkering with the ‘time garden of bifurcations’ (Borges’ beautiful tale) and/or possible choices of future, seeking to ‘eliminate’ entropy=find a deterministic future path to their inquiries.
Now, how physicists have come, from this rather ‘scholastic’ concept, akin to the ‘heated’ arguments about how many angels fit dancing on a pin in the Sorbonne’s first scholar≈University dogmatism, which brought Middle Age Aristotelian christian thinkers to sword and dagger debates, to enthusiastic battles of the absolute (big bang entropy universes, disorder arrows, multiple quantum path solutions of parallel Universes, etc.) is a theme which frankly does not interest me more than the ‘∞ angels’ that fit on the pin.
But those poor souls do not have much more to deal with in the fog of their misunderstanding of the thoughts of God. So we shall clarify their statements.
We are now ready to provide a definition of entropy in classic physics and properly interpret it. The entropy S – a MACROSTATE, ∆º parameter – is defined in terms of the micro-state parameters, as:
S=k ln Ω
where k is Boltzmann’s constant (never mind it was found by my admired colossus Mr. Planck) and Ω the number of microstates consistent with the given macrostate.
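Boltzmann’s formula is easy to exercise on a toy system. The sketch below (Python; the N-coin two-state model is an illustration added here, not part of the text) counts the microstates Ω compatible with the macrostate “m heads out of N” and shows that the evenly mixed macrostate has by far the largest entropy:

```python
from math import comb, log

# Boltzmann entropy S = k ln Ω for N two-state "atoms" (coins):
# Ω counts the microstates consistent with the macrostate "m heads".
k = 1.380649e-23                      # Boltzmann's constant, J/K

def entropy(N, m):
    """S = k ln Ω, with Ω the binomial count of microstates."""
    return k * log(comb(N, m))

print(entropy(100, 50) > entropy(100, 10))   # True: the even split has
# vastly more microstates, hence it is the maximum-entropy macrostate
```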
So the interest here in GST terms is the realisation ‘once more’ that we can either:
∆-1>∆: reduce the possible paths of future, of an ensemble of ∆-1 micro states to its smaller future whole information. As the set of sets is larger than the whole set (Cantor paradox homology).
THUS, a social group of ‘numbers=events’ (Ω), which represent the paths of future of a series of ‘space-time quanta≈actions’ (k), and which have more spatial population=informative-time events in the ∆-1 scale than the whole, can be reduced to its ∆º states by means of a slow-growing logarithmic curve (ln), inverse to its ‘exponential function of growth’, which will be the opposite perspective, from ∆ to ∆º:
∆<∆-1 decay from wholeness into its parts, is thus the inverse famous decay exponential equation, showing in this manner an essential symmetry between ∆-1 and ∆ parts and wholes, in terms of its ‘quantity of possible future formal paths and degrees of freedom’.
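The ln/exponential inversion the last two paragraphs lean on can be made concrete (a minimal sketch in Python; the numbers are arbitrary choices for illustration):

```python
from math import exp, log

# The text pairs exponential decay (a whole dissolving into parts) with
# the logarithm that compresses many parts back into one whole number;
# the two functions are exact inverses of one another.
N0, rate = 1000.0, 0.5
decay = lambda t: N0 * exp(-rate * t)      # wholeness decaying into parts
t = 3.0
recovered_t = -log(decay(t) / N0) / rate   # ln undoes the exponential
print(round(recovered_t, 9))               # 3.0: the original t is recovered
```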
So this is important and valuable for what it is (: more than the angels dancing on the pin, as this is somewhat more real 🙂
But all the rest of hyperbolic philosophy of entropy sponsored by retarded (conceptually speaking) physicists is nonsense and should be erased along the big-bang theory from text books.
And let us keep the ‘bare bone’ facts, which we find in any wikipedia-like text, and which are what matters about entropy (and inversely in theory of information).
Now the mathematical beauty of it is this: as we said, the paths of the future are ‘approximately’ 3 for a single plane – in fact they are e, and here lies the ‘ignoramus’ little secret of the Euler number, which is almost exactly 3: e + e/10 = 1.1e ≈ 2.99 (2 chess kudos for this serendipitous finding):
Alas, here we have true GST ∆-magic: e is 10 when we add to its 10 parts an 11th ‘hour/part’, as the Tiƒ element is both the 10th part of the tetraktys (the system in ∆) and the 1st unit of the ∆+1 wholeness; so the number e has an e/10 element ‘twice’, doubling as the ‘black ball’ soul of the system and its ‘wholeness’ single-unit ∆+1 existential form, as it co-exists, like your mind does, in the cellular and whole outer-world scales.
The graph shows what we mean: 10 is 11, as the ego-tiƒ doubles across the ∆º±1 scales. So the whole system is inversely 3 minus e/10 when we measure it with energy body-wave parameters, as the soul-tiƒ is sucking in one tenth of the vital form of the system, given to the ‘future’ ∆+1 whole state that ‘warps’, as charges and masses do, the lower field or body-wave, draining its entropy/energy to emerge in the upper being, as your mind sucks in your body’s energy.
Just get the feeling of it. The mathematics then is obvious: a system diminishes and grows in infinitesimal 1/n, 1/10 parts, increasing or decreasing in its wholeness and order, and the function that does it is the e/ln dual function of ‘motion from ∆+1 into ∆: exponential growth of information’ and the inverse logarithmic reduction of ∆-1 into ∆.
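The numerical claim about e made above is easy to check; a two-line verification (the 1-part-in-300 figure is our own characterization of the closeness):

```python
import math

# The text's observation: e + e/10 = 1.1e is very close to, but below, 3.
approx = math.e + math.e / 10
print(approx)         # 2.9901...
print(3 - approx)     # ~0.0099: the match holds to about 1 part in 300
```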
Thus it turns out that S is itself a thermodynamic property, just like E, P, T or V. Therefore, it acts as a link between the microscopic world and the macroscopic. AND SO NOW WE DO HAVE THE FIFTH ELEMENT for a full description of the thermodynamic system, as we can consider ‘entropy’ to be the ∆±1 ‘partner’ parameter of ‘temperature’, which gives to thermodynamics, as we said at the beginning of this post, the ‘wholeness’ of close range observation, with the full 5 parameters of the 5 relative dimensions of any ∆º±ST reality:
Volume(Spe), Energy (st), Pressure (Time closed cycle), Temperature (scalar parameter of whole order: ∆-1>∆) and entropy (scalar parameter of whole disorder, ∆-1<∆).
A note on the quantitative side: since Ω is a natural number (1, 2, 3, …), S is either zero or positive (ln 1 = 0, ln Ω ≥ 0).
FOGGY physicists who don’t understand the meaning of ± numbers tend to consider this a proof of the Universe’s growing disorder, as they understand neither entropy nor the ternary, e-growing trifurcations of the future paths of any system. It really means only what we said: that the ∆-1 scale has always more information than the ∆-scale, and so if we subtract the final deterministic single path of the whole from the many entropic paths of the parts, we get a positive entropy number.
A second, more technical consideration, which we shall tackle whenever we widen this barebones article, concerns the 2 uses of entropy in thermodynamics – statistical entropy and Boltzmann’s entropy, which considers a smaller number of ‘variations’ of ∆-1 microstates. Thus statistical entropy reduces to Boltzmann’s entropy when all the accessible microstates of the system are equally likely.
It is also the configuration corresponding to the maximum of a system’s entropy for a given set of accessible microstates, in other words the macroscopic configuration in which the lack of information about the future is maximal.
As such, according to the second law of thermodynamics, it is the equilibrium configuration of an isolated system. Boltzmann’s entropy is the expression of entropy at thermodynamic equilibrium in the canonical ensemble; which is so useful as all systems do tend to have an isomorphic internal indistinguishable ensemble of ∆-1 cells/citizens/atoms for the whole Tiƒ system to treat them wholesale with their ∆º parameters.
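That reduction of the statistical (Gibbs) entropy to Boltzmann’s entropy for equally likely microstates can be checked directly (function names are ours):

```python
import math

k_B = 1.380649e-23  # J/K

def gibbs_entropy(probs):
    """Statistical (Gibbs) entropy S = -k sum p_i ln p_i."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

omega = 8
uniform = [1 / omega] * omega       # all microstates equally likely
print(gibbs_entropy(uniform))       # equals k ln(omega): Boltzmann's entropy
print(k_B * math.log(omega))

# Any skewed distribution over the same microstates has *less* entropy,
# which is why the uniform case is the equilibrium (maximum) configuration:
skewed = [0.5] + [0.5 / (omega - 1)] * (omega - 1)
print(gibbs_entropy(skewed) < k_B * math.log(omega))  # True
```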
This postulate, which is known as Boltzmann’s principle, may be regarded as the foundation of statistical mechanics, which describes thermodynamic systems using the statistical behavior of its constituents. And so from here on, even if I never, lazy cow, return to this article you can just understand the whole discipline; which as all physics is fascinating if physicists stick to their guns: mathematical physics and ask humble advice to us, philosophers of science, regarding what they do (-: that would be the day 🙂
The laws of thermodynamics.
We can now tackle in reverse order the three laws of thermodynamics, as expressions of the three states of matter:
Tiƒ: Solid crystals. 0 entropy, pure Tiƒ mind-information:
The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero, or 0 kelvin is zero. This means that in a perfect crystal, at 0 kelvin, nearly all molecular motion should cease in order to achieve ΔS=0. A perfect crystal is one in which the internal lattice structure is the same at all times; in other words, it is fixed and non-moving, and does not have rotational or vibrational energy. This means that there is only one way in which this order can be attained: when every particle of the structure is in its proper place.
The mind is indeed a o-mapping of all reality where motion is ‘expelled’ for the form to be absolute and reflect by the ‘determined’ actions of the being, which sees its mind as the deterministic still universe – what it is.
ST: Energy: The first law, we agree, states that present energy is conserved.
Spe: the Second Law of Thermodynamics, corresponds to the ‘entropy-disorder’ state, hence it is tautological:
The total entropy of a thermodynamic system tends to increase over time, approaching a maximum value. It is the one we just commented on, as it merely states that as time passes, the branching of ternary e-states or paths of the future increases, and the information from the ∆º of the system becomes more confused and less deterministic (in a ceteris paribus analysis that does not consider the order inflicted by the membrane and the singularity).
Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems – which is again a confusing term, really meaning fields or open balls that do not take into account their interaction with the central singularity of order, or with the enclosing membrane that constrains and further orders the system. For example, the Earth is not an isolated system, because it constantly receives entropy>energy in the form of sunlight; it constantly receives order from the cycles of heat and cold weather and from the magnetic and continental-drift ‘programming’ of history and evolution; and further on, it receives order from all the mind-points of its neural network, aka the life-beings on its surface.
And in any case, what physicists talk about here is their confusion of order vs. disorder; that is, they should say: ‘A system tends to isomorphic equilibrium in its wave-entropy, St<S elements, as a pre-condition for the singularity-mind and its tiƒ upper, ∆º+1 wholeness scale to order them and extract its e/10 share, to emerge in the consciousness of the ∆+1 whole, unit-scale.’
Now, equipped with all those confusing notions, the physicist comes as an amateur to the field of the philosophy of wholeness and affirms that the universe may be considered an isolated system (why? does he have a ginormous googleian size-number to see it all? or an infinitesimal smallness to perceive its potential tiƒ?)… and so he comes to state that its total entropy is constantly increasing… as he fits an ever increasing number of angels on a pin.
The scalar magnitudes.
Time only passes towards the ‘future’ when things change, and change in the sense of a growth of information. THIS FOLLOWS immediately from our definition of the three arrows of time. In biology it means to get older, and to grow your information faster than your energy, once past the first age of entropy, after your seed of information emerges in the upper scale, ∆+1, of your world cycle of existence.
In physical systems it means that time clocks accelerate towards the future, as they shrink in size and increase their attractive force (of all ∆+§). We can then understand the constant emergence of ‘new clocks of time’ of the three specific arrows (hence lineal time clocks, wave-like time clocks and curved, vortex-like time clocks towards the future). How can we trace this ‘evolution of timespace clocks’ of physical systems in its three arrows?
The study of how those first time clocks of the gravitational dark space and entropy (the limit of perception in the lower scales) evolve through all the scales of existence to become vortices of black holes (the limit of perception in the upper scales) could be considered the meaning of physics in GST; and in the process it follows the same laws and isomorphisms as all other species.
Of course in this adventure of ‘living physical systems’, performing its actions of existence in huge herds as they move from lower to future tighter scales, there are many deviations, contours and sub-species, which do not make it, elliptic clocks and functionals which in mathematical physics are described as herds of herds across several scales of the ∆-dimensions of the being.
But in all of them we shall find three arrows of time-space with their form and function – the topological and bio-logical, organic description of phenomena – which, when plugged into vital ¬Æ mathematics, further reveals many details and beautiful, Darwinian processes among the quanta of space-energy (the whole being measured in instantaneous space as an energy amount; in detail, as a ternary topology with a finite time and space size connected by the 5D metric of the being):
All this said, an interesting element to explain before going further is the application/meaning of the 3 scalar magnitudes of physics: temperature in ∆º, frequency in ∆-1 and mass in ∆+1, as they must be judged to belong to the ternary elements and the ternary ∆º±1 scales:
-Temperature (∆º scale, ST-open ball) measures the ‘wave-body’ equilibrium of the thermodynamic human scale, and as such it happens in all the delocalised vital space of the wave body.
-Frequency (∆-1 scale, cyclical clock-like membrane) measures the smallish quantum wave frequency of the cyclical or sinusoidal ‘surface-membrane’ (electron, wave-packet envelope) of the smallish physical scale, inasmuch as we are the larger observer, so from the lower smallish being we see better its ‘outer, cyclical clock-like closed membrane’. So that is the scalar we measure.
– Mass (∆+1 gravitational scale, Tiƒ magnitude) is inversely the Tiƒ measure of the largest galactic or Earth system, inasmuch as it is the huge ‘being’ in which we are enclosed; and so we feel the mass curvature of our space-time as we are part of its ginormous system (and ultimately of the galaxy’s singularity black hole, which determines the G-constant of curvature for each specific galaxy, as Mach had it).
So the reader starts to see how we do fit and order, departing from the ternary structure of reality in space-time (equivalent to the ternary ‘Cartesian’ res-extensa+vortex+mind structure) and in scales (equivalent to Leibniz’s triad elements of reality : the tiƒ-monad, ST: ∑ 1/n finitesimals parts & ∫∫dtsdt integral/derivatives into wholes… to quote two founding fathers of ‘serious’ philosophy of science that mirrored it).
Needless to say if we were an atom we would see other parameters from other ‘monad’s mind perspective’ (ab. Pov).
This said, what the equations of energy in those three scales mean is now clearer. As usual they will represent a function of present energy in each of those scales with:
-maximal detail (∆º thermodynamic scale) or
-limiting detail (c-limit of speed perception of information in ∆+1), or
– uncertainty, given the fact that we must ‘absorb’ some h-quanta (Heisenberg uncertainty) of angular momentum≈present information to ‘learn’ about the quantum observable (as e/10 is the toll we subtract from 3 to emerge as an 11th tiƒ point in ∆+1 wholeness).
So as we have understood energy, common to the three, and the scalar, as in a puzzle we just explain the ‘third element’, the ratio constants, h, k and v or c (limit), with slightly different meaning to match the symmetry with the T, ƒ, m elements just described.
In the larger scale of gravitation, as Mass is the ‘scalar’ tiƒ element (vortex of quarks, centre of gravity, black hole of stars) and we are in the middle of the system as the ‘momenta-energy’ element, it follows that v is the field/potential-related Spe-motion; and the second version, E=mc², must be read as ‘entropy-disorder’ expansion and destruction of mass into entropy, in its maximal possible motion – that of the ∆-i final scales of the galactic substrata, light space-time.
In the lower scale, however we are talking of a complete ‘reversal’ of perspective at all levels, from topology – elliptic in gravitation and relativity, hyperbolic or lineal in quantum – to function/form (essential concept in GST, paradoxical always, coexisting in multiple elements, with an ∆º perspective that changes the parameters we measure; and certainly with a much more complex logic than the human obsession for absolute one-dimensional truths/perspectives).
So h is a constant of the external membrane (still view) or angular momentum (dynamic view) of the particle/wave we observe. It is not the unobservable tiƒ centre; and its interest lies in its multifunctional roles, a theme studied in our posts on quantum physics.
Now it is also important to bear constantly in mind the fact, so little understood even if accepted since Einstein and Planck, that the Universe is more about motion=events in time than form in space.
So we are talking of actions, of ‘present momentum’, of world cycles, and merely state that H is the ‘minimal world cycle’ of ‘energy’ of the quantum scale.
Energy and entropy are so often confused in physics because they are relatively similar: energy is conserved, entropy tends to disorder; yet energy – especially kinetic energy, the most used concept – has a tendency towards entropy more than towards form, which would be better represented as in-form-ation, a present-future state, vs. energy, a present-past state, forming both the dual components of the wave. So, for example, the magnetic and electric fields can be treated as the relative energy vs. information duality, married through the wave speed and its µ, k constants: c² = k (t-curvature) / µ (s-gravito-magnetic constant).
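In standard electromagnetic notation this marriage of the two vacuum constants through the wave speed is c² = 1/(µ₀ε₀), which can be verified from the CODATA values (the check below uses the classical 4π·10⁻⁷ value of µ₀, exact to this precision):

```python
import math

# Vacuum constants (mu_0 is no longer exactly 4*pi*1e-7 since the 2019 SI
# redefinition, but it still is to the precision used here):
mu_0 = 4 * math.pi * 1e-7        # vacuum permeability, N/A^2
eps_0 = 8.8541878128e-12         # vacuum permittivity, F/m

c = 1 / math.sqrt(mu_0 * eps_0)
print(c)   # ~2.99792458e8 m/s, the speed of light
```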
It is then a key concept of the Universe of multiple clocks of time that an energy ‘conservative world cycle’, which represents the whole existence of an ∆-i scale, becomes for an ∆+i scale a ‘quanta of time’, perceived in a ‘synchronous moment(um) of space’, as fast cycles become ‘fixed forms of space’ for a slow observer (see the key article on synchronicities).
This is especially true of standing waves.
What this means basically is that the h-Planck event is absorbed as a quanta of spatial energy by a slow being such as we are. Energy in that sense is a ‘memorial tail’ of time world cycles, ‘frozen’ as a space piece of ‘planckton’, or an ‘entropy bolt’, for an ∆+i slow informative being.
Yes the Universe has its complexity in its repetitions and synchronicities that transform space forms into time functions and vice versa.
But of the many perspectives (quantum physics, as ∆-i, has more information and less perception, and so it is deservedly the most complex of all forms of human knowledge) we can consider its dimensions of angular momentum, which add to the cyclical π-motion of the particle’s momentum (p) the radius r, or distance to the Tiƒ; hence it is the best way to observe, without ‘seeing’ it, the coordinated relationship between the Tiƒ and the membrane, which are so often in constant relationship through invaginated paths (as in cells, where the DNA centre connects through the Golgi membranes with the external membrane).
So h, we might state, is an excellent key parameter, as it includes information on the Tiƒ x Spe membrane–informative nuclei of the quantum system. And so, as S x T = ST (meaning entropy x information = present energy), we really have in such a simple formula, E=hƒ=h/T → h=E x T, packed information on a system whose energy – the parameter that emerges in our scale of existence, and which we have already related to the world cycle of existence of a being – is quantised, inasmuch as h represents the ‘world cycle’ of existence of light space-time at the minimal scale of the observable human Universe.
We could say that h is the minimal quanta of space-time, the minimal being, the ‘plankton’ of the gravitational sea, reason why in our texts we call it ‘planckton’ the minimal unit of life of the galactic, light space-time universe encoding in its 3 parameters r (ST) x m (tiƒ) x v (Spe) the needed ternary structural information of it. As we can define a carbohydrate with the CNO(+h) elements. How many variations of h-species there are is the zoo description of quantum physicists, the details…
So, in brief: in the gravitational scale, m is the tiƒ fixed scalar and v the entropy element, with momentum being the present parameter of us, the beings within the open ball/vital space of the system; in quantum physics we reverse the elements – now the fixed element is not the tiƒ but the membrane (as perceived by us in quantised h-quantities of angular momentum), and so the variation is not a v-lineal speed but a Tƒ frequency.
And by the same rules of ternary symmetries, in our ∆º intermediate scale the concept that shall dominate is neither of those spe/tiƒ extremes outside our ‘equilibrium’, but the present wave, which is, in the thermodynamic equation E = pV (resolved) = NkT, the true meaning of Temperature as a measure of a ‘heat wave’ – specifically, the ‘amplitude of the vibration’.
But for the understanding of it, as this is a post on ∆º thermodynamics we can finish here the introduction and work with a bit more of sophistication on the basic concepts we have learned.
∆º SCALES +ST LAWS: THERMAL EQUILIBRIUM
For a full understanding of the workings of matter systems we have, though, to combine scale laws and ternary s-st-t laws and treat THE SYSTEM, regardless of size and complexity, when it is a ‘whole’, as a supœrganism which will balance the three parts – S-past, operandi, T-future≈st-present – of the system, within the ‘limits of the ∆º thermodynamic scale’. And so we find, as the most important laws of thermodynamics, the…
Equipartition theorem on Thermodynamic equilibrium.
The name “equipartition” means “equal division,” in Latin.
The original concept of equipartition was that the total kinetic energy of a system is shared equally among all of its independent parts, on average, once the system has reached thermal equilibrium – as all supœrganisms are in energetic balance between their limbs/fields, body-waves and heads/particles, where the st-body ‘absorbs’ the energy that it will share in equal parts with the limb/field and head systems.
HENCE THE ENORMOUS RANGE OF EQUATIONS FOR ALL SYSTEMS OF THE FORM: S± T = ST, which in physics tend to be written in terms of potential Tƒ energy due to position/form and kinetic, moving energy.
This is then the origin of the virial equation and, in the quantum scale, the Schrödinger equation – basically an expression of ST (the WAVE, left side of the equation) equal to Potential + Kinetic energy (right side).
The virial equation however refines the concept, establishing that the potential ‘tƒ’ energy of the system (often ‘energy’ means, in the human conceptual fog, motion, hence entropy, but that is irrelevant now) stands in a fixed 1:2 proportion to the kinetic energy (for inverse-square forces, 2⟨T⟩ = −⟨U⟩), which comes to say that the limb/field system tends to consume twice the energy of the head system: E(ð) = 2E($), a quantitative relationship that holds surprisingly for many systems of the Universe in all its scales.
The virial theorem also holds for quantum mechanics, as first shown by Fock, giving further ground to Einstein’s dictum that both statistical mechanics and quantum mechanics are the same…
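The 1:2 virial proportion is easy to verify numerically in its textbook case, a circular orbit in a 1/r potential, where 2T = −U holds exactly (the orbital parameters below are illustrative Earth-like values of our choosing):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # central mass (Earth-like), kg
m = 1000.0           # orbiting body, kg
r = 7.0e6            # orbital radius, m

v = math.sqrt(G * M / r)          # circular-orbit speed
kinetic = 0.5 * m * v**2          # T = GMm / (2r)
potential = -G * M * m / r        # U = -GMm / r

print(2 * kinetic + potential)    # ~0: the virial theorem 2T = -U
print(kinetic / abs(potential))   # 0.5: the two energies in a 1:2 ratio
```

Note that for inverse-square forces it is the kinetic energy that is half the potential’s magnitude; which of the two ‘consumes double’ depends on the sign convention and the force law considered.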
An interesting isomorphism of scales: the virial theorem in the quantum, thermodynamic and human scales.
Let us consider this insight in more detail, as a key concept to fuse all the scales of physics is to return to the Broglie->Einstein->Bohm realist formulation of quantum physics.
In the graph, when Bohm wrote Schrödinger’s ‘present st-wave’ equation in polar coordinates – voila! – the particle appeared, feeding on a quantum potential field faster than light – the underlying gravitational field, dQ/dt (t), or guiding equation.
Yet since Schrödinger’s equation merely writes the ST (WAVE state), iħ ∂Ψ/∂t = (Kinetic + Potential energy)Ψ, it must be easily related to the virial equation, whereas dQ/dt must be the field/limb system of the quantum entity. And so it must also hold the virial theorem, according to which potential and kinetic energy stand in that same fixed 1:2 proportion. And that is the case, as Fock found:
The left-hand side of this equation is just dQ/dt according to Heisenberg’s equation of motion. So the virial theorem for quantum physics tells us that the $-field, which guides and ‘feeds off entropic motions’ the complementary body-particle, kinetic-potential energy system, shares its extracted entropy between the kinetic body-wave of the system and the particle-head potential form in the same proportion as all other systems of nature: two parts for motion, one for perception. Does this law rule also human metabolic systems? Does your brain use 1/2 of the energy of your body or limbs?
This of course would seem far-fetched, as the head/particle of the system is very small; but as it is faster in time-energy it uses much more of it, and as it is internally connected and synchronised (Broglie’s inner clocks in quantum physics, the nervous system in physiology) with the rest of the body-limbs it controls, it indeed uses a lot of energy.
It is indeed well established that the brain uses more energy than any other human organ, accounting for up to 20 percent of the body’s total haul.
So in a ‘classic’ human/animal being, which is ‘all the time running’, we can consider a simple proportion: Spe (40%) + ST (40%) + tƒ (20%), which will be the expression of the virial theorem for biological systems. Whereas the balance between limbs and bodies is obviously extremely variable, as it is conditioned by the external world and the actions of the being, which it cannot control, the 20% proportion of the brain system in fully developed (no longer evolving towards higher information) species such as humans – the ‘summit’ of life evolution before we transfer our information to robots – is stable, as it is used in internal energy tasks, homeostatic and relatively shielded from the external world by the equilibrium and membrane of the inner ∆-1 world of the mind.
Then we find that of that energy again a ‘classic dual or ternary quantitative equipartition’ takes place:
Until now, most scientists believed that it used the bulk of that energy to fuel the electrical impulses that neurons employ to communicate with one another. It turns out, though, that two thirds of the brain’s energy is used to help neurons or nerve cells ‘fire’ or send signals to control the body-limbs. So the equipartition here is even more precise, as 1/3rd goes to the brain and then again ±1/3rd should go to the body and ±1/3rd to the limbs…
Equipartition laws are thus fundamental structural laws of ternary and dual nature, which establish the harmonious working together of the 3 ‘GENERATOR’ subsystems of any entity of the Universe, and as such are part of the ‘core’ equations of GST.
OF COURSE, All those elements of GSThermodynamics can be expressed as usual with different equations as EACH OF the ∆ºst perspectives have a ‘slightly biased’ form of expressing its laws. Hence the need to reference them to the ‘simpler, streamlined’ partial equations of the Generator.
So the extension of ∆º physics is immense, and we just shall consider a few samples of theorems and translate them to the laws of balance of GST. A very interesting part of it are the equations that combine ∆-issues of scaling and S-st-T balances and symmetries between lineal, cyclical and wave-like motions. They represent the laws of balance and harmony between the parts of a system applied to the molecular scale.
The first of those laws is the concept that a supœrganism tends to an homeostatic, ‘just’ distribution of its present energy among all the elements of the system. While the system might be in a predatory relationship with the external world, losing or gaining energy within it, it will tend to find a thermodynamic equilibrium, which is in thermodynamics the fundamental law that translates the balances between the three elements of the supœrganism.
The original idea of equipartition is thermal equilibrium: the energy (read motion and form, e x i) of the system is shared equally among all of its various forms. This means the average kinetic energy per degree of freedom in the translational motion of a molecule should equal that of its rotational motions, which is just an expression of the balance between the complementary ‘Tƒ-particle-rotary motion states’ ≈ ‘$p-field-lineal motion states’: |≈O.
The interest of those laws, all turning around thermal equilibrium, is that they combine ∆ and st elements, allowing quantitative predictions for all the ∆§cales of growth between the two limits of quantum and gravitation, where the laws break, as we enter a ‘Lorentzian’ discontinuum where the Sp x Tƒ=Konstant 5D metric reshuffles itself in two opposite paths (of larger information weight for larger motion). So the equilibrium breaks, to be restored under the slightly changed metric of ∆+1 gravitational systems or ∆-1 quantum ones.
The equipartition theorem is therefore very good to make detailed quantitative predictions, for each decametric ∆§cale and molecular system in the ‘lineal zone of balanced metric’.
Like the virial theorem, it gives us the total average kinetic and potential energies for a system at a given temperature, which will tend to become balanced, o≈|.
But, equipartition also gives the average values of individual components of the energy, such as the kinetic energy of a particular particle or the potential energy of a single spring. For example, it predicts that every atom in a monatomic ideal gas has an average kinetic energy of (3/2)kBT in thermal equilibrium, where kB is the Boltzmann constant and T is the (thermodynamic) temperature.
It follows from this fundamental Sp≈Tf balance that the equipartition theorem can be used to derive the ideal gas law, as both are expressions in the ∆-1 and ∆o (ideal gas law) of the same 5D metric balance.
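The two quantitative claims just made, the (3/2)k_BT average kinetic energy per atom and the ideal gas law it implies, can be sketched in a few lines (the gas conditions chosen are our own illustrative values):

```python
# Equipartition: each translational degree of freedom carries (1/2) k_B T,
# so a monatomic ideal-gas atom has <KE> = (3/2) k_B T; the same balance
# yields the ideal gas law p V = N k_B T.
k_B = 1.380649e-23   # Boltzmann constant, J/K

T = 300.0            # temperature, K
avg_ke = 1.5 * k_B * T
print(avg_ke)        # ~6.2e-21 J per atom

# Pressure of N atoms in volume V from p = N k_B T / V:
N = 6.022e23         # one mole of atoms
V = 0.0224           # m^3, roughly the molar volume at STP-like conditions
p = N * k_B * T / V
print(p)             # ~1.1e5 Pa, about one atmosphere
```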
How far it can be stretched in scaling of molecular systems – or in other terms how far the thermodynamic scale of ‘matter’ stretches is shown in the fact that the law can also be used to predict the properties of stars, even white dwarfs and neutron stars, since it holds even when relativistic effects are considered. It is precisely at those ‘upper, gravitational and lower, quantum’ levels when the transition between scales distorts the balances between ‘kinetic vs. potential, | vs. O parameters’, where we can observe the most interesting dynamic ‘events between two waters.’
So we observe as we move upwards in scaling and the system becomes ‘cooler’ (neutron stars) how the ‘upper scale’ of gravitation ‘sips in’, predates the thermal energy and finally cools down near 0, where the ‘lower part/scale of thermodynamics’ fades away, ‘encased’ in the gravitational tensor of energy-matter stress which becomes the new ‘parameters’ of the ∆+1 scale.
And vice versa: although the equipartition theorem makes very accurate predictions in certain conditions, it becomes inaccurate when quantum effects are significant, in the inverse ∆-1 plane – also at low temperatures, when thermodynamics fades away:
When the thermal energy k T is smaller than the quantum energy spacing in a particular degree of freedom, the average energy and heat capacity of this degree of freedom are less than the values predicted by equipartition.
Such a degree of freedom is said to be “frozen out” when the thermal energy is much smaller than this spacing. For example, the heat capacity of a solid decreases at low temperatures as various types of motion become frozen out, rather than remaining constant as predicted by equipartition.
Such decreases in heat capacity, together with equipartition’s failure to model black-body radiation – the ultraviolet catastrophe – as we have seen, led Max Planck to suggest quantum physics.
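The ‘freezing out’ of a degree of freedom is captured by the standard Einstein model of a solid’s heat capacity, a minimal sketch of which follows (the function and its T/T_Einstein parametrization are our own framing of the textbook formula):

```python
import math

def einstein_heat_capacity(t_ratio: float) -> float:
    """Heat capacity of an Einstein solid in units of 3*N*k_B, as a
    function of T / T_Einstein. Tends to 1 (the equipartition value)
    at high T, and 'freezes out' to 0 at low T."""
    if t_ratio <= 0:
        return 0.0
    x = 1.0 / t_ratio                  # x = T_Einstein / T
    return (x**2) * math.exp(x) / (math.exp(x) - 1.0)**2

print(einstein_heat_capacity(10.0))    # ~1.0: equipartition holds at high T
print(einstein_heat_capacity(0.1))     # ~0.0: the degree of freedom is frozen out
```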
∆+1 enclosure, ∆-1, eaten inside out…
It IS for us more telling to express the duality of this ‘fading away’ of a thermodynamic state-being, when we move upwards or downwards into the gravitational or quantum scale in topological scalar terms:
When we grow in excess of size, the effects of ‘pressure’ due to gravitational attraction slow down the thermodynamic motions of the system, while the curved distortion of space-time finally ‘encases’ through ‘gravitomagnetic forces’, the orderly transformation of thermal energy into a vortex-like structure with angular momentum≈membranes that ‘freeze out’ expansive thermodynamic heat into its inverse implosive gravitational force. So we suffer an inversion of roles as we emerge upwards: ∆-1 thermodynamic $pe kinetic energy-> gravitational ‘potential energy’.
So the larger gravitational whole ‘extracts’ the thermal energy and transforms it through the dual singularity/curved membrane of a mass system.
The process of ‘freezing out’ and predating on thermal energy at the lower quantum scale, which we have treated above when considering Planck’s analysis from an s-st-t perspective, appears now from the scalar view as an ‘inside-out’ emerging process, where the disorder of thermal energy and Kb quanta is caused by the ‘emergence’ as chaotic, individual, free units of the h-quanta of action, which are no longer a smooth, inner, continuous, ‘invisible’ support for the Kb quanta in harmonic collective behaviour, but appear as distinguishable, quantifiable units, and hence SUPPRESS-DISORGANIZE THE UPPER THERMAL SCALE, which simply disappears… as K=∑h and T=∑ƒ; that is, ‘bolts’ (Boltzmann constants of action-entropy) become ‘plancktons’ and temperatures become ‘frequencies’.
This physicists somehow understand intuitively, as they translate temperatures into frequencies, which is the reason why they talk of thousands of millions of ‘impossible’ degrees in particle collisions (using the frequency translation).
In both cases, though, the result is the same: temperature is no longer the parameter that measures time frequencies (masses or wave frequencies are), and K-bolts stop being the measure of action (speed or h-plancktons are).
It must be noticed also that the Tƒ potential energy due to position or ‘informative energy’ will be easier to ‘transform’ into the lower scale of informative, angular momentum – Plancktons – than the kinetic energy, of motion, naturally related to the upper scale of higher entropy-motion.
So we observe that ‘mass feeds on motion’ and ‘quanta’ feeds on position, since from our human pov, according to 5D metric lower scales process more information and upper scales have more energy of motion (kinetic energy).
Hence the difference of both equations: kinetic energy is equal to 1/2(mass)(velocity)², whereas mass and velocity can be transferred into each other on the c-limit of scaling between both planes.
And since temperature is basically ‘motion of molecules’, we see the mechanism of transference of ‘temperature into mass’ at work in its simplest terms. E.g., when comparing two similar entities, the heavier atoms of the noble gas xenon have a lower average speed than the lighter atoms of the noble gas helium at the same temperature – mass is taking over speed, but it does so by quickly eliminating the thermal motion of the being – the key point being that the kinetic energy is quadratic in the velocity.
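The helium vs. xenon comparison follows directly from equipartition, since (3/2)k_BT = (1/2)m⟨v²⟩ fixes the rms speed; a quick check (function name is ours):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
amu = 1.66054e-27    # kg per atomic mass unit

def v_rms(mass_amu: float, temp: float) -> float:
    """Root-mean-square speed from (3/2) k_B T = (1/2) m <v^2>."""
    return math.sqrt(3 * k_B * temp / (mass_amu * amu))

T = 300.0
print(v_rms(4.0, T))     # helium (~4 amu):    ~1368 m/s
print(v_rms(131.3, T))   # xenon (~131.3 amu): ~239 m/s
# Same temperature, same average kinetic energy: the heavier atom is slower
# by the square root of the mass ratio (the quadratic-in-velocity point):
print(v_rms(4.0, T) / v_rms(131.3, T))   # ~5.7 = sqrt(131.3 / 4)
```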
So NOT all the motion of the being transfers upwards into mass (loss of efficiency of thermal machines), as neither all the energy transfers to the lower ‘planckton’ scale, (zeroth law of thermodynamics), ultimately meaning that neither motion nor the ∆-scales of reality ever disappear in the immortal Universe.
Now, the inverse nature of the upper and lower bounds should be obvious to the reader: physicists do translate the loss of temperature into an increase of frequency in the quantum realm, which goes up to infinity, while in the black hole realm temperature really does freeze to zero. Both processes are clearly inverse: the loss of the thermodynamic ‘scale’, which ‘feeds’ the lower world of plancktons, means a ‘liberation’ of the frequency≈temperature and the release of the stored capacities of quantum systems; at the other extreme it is the thermal scale that ‘dies’ away to feed the growth of the black hole.
Let us then consider the quantum case first – the loss of temperature related energy as it is transferred to the emerging h-quanta, and then the upper bound case – the thermodynamics of black holes, which also reduce to zero temperature, converting thermodynamic energy into gravitational one.
THE PHYSICAL 5D±1 SCALES OF MATTER. FROM CHEMISTRY TO GEOLOGY
In the 4th scale the language changes. Molecules are no longer externally guided by quantum laws but by thermodynamic laws, as their clocks of time and rods of space have changed. So the species that come after the molecular scale – matter species – transition to these new clocks of time and rods of space.
We enter therefore an entirely new Universe, as the language of time – clocks of information – and of space – distances and motions – changes. Let us first recall what this change consists of:
In the graph, self-centred in the thermodynamic scale of biological beings and matter, we see how the clocks of time accelerate between scales downwards while the rods of space, or radii of those cycles, grow in lineal fashion; but both remain constant through a long stretch of at least 3 ∆ scales. Temperature then becomes the intermediate language, departing from assemblies of quantum numbers into assemblies of thermodynamic ranges.
If we adopt the correct Bohmian pilot-wave theory of quantum physics and the guiding equation of the ∆-1 field-wave, which in turn determines the position of particles – considering then both the present, wave, Schrodinger view and the past-to-future, field-particle interaction of the Bohmian, Heisenberg matricial point of view for a full description of quantum events – we obtain a closer similarity between the equations of quantum physics and thermodynamics, which shows how essentially those statistical concepts (also applied to socio-biological systems) can describe rather deterministically the structure of a system of any 2 such dual scales.
We thus conclude that matter is thermodynamics. That is, in space we see a superorganism of molecular atoms assembled into a ternary system, which we can define with a generator or in classic thermodynamic equations. This space-system will in turn have a development in time, which will be thermodynamic, guided by parameters of time related to temperature, and of space, related to state. But in essence the laws of those systems will obey the ensemble laws of social evolution of parts into wholes.
A thermodynamic system thus will be initially a precisely defined region of the universe under study.
In classic theory, everything in the universe except the system is known as the surroundings – the ∆+1 world-universe of GST.
A system is separated from the remainder of the universe by a boundary which may be notional or not, but which by convention delimits a finite volume. Exchanges of work, heat, or matter between the system and the surroundings take place across this boundary; which therefore becomes the GST membrane of the system of max. Spe (topology-function).
Boundaries thus define an internal region where the thermodynamic cycles of exchanges of energy and negentropy≈formal state take place. Energy is ‘stored’ as information when the state of the system changes. Ensembles follow decametric laws that create new ’emerging domains’, and so on. Let us investigate the fundamental equivalences between thermodynamics and GST.
In practice, a thermodynamic boundary is simply an imaginary dotted line drawn around a volume when there is going to be a change in the internal energy of that volume. Anything that passes across the boundary that affects a change in the internal energy needs to be accounted for in the energy balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824; it can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics; it could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics, which essentially studies those regions of the universe in which the laws of quantum and thermodynamic planes of existence converge and can be studied together.
Boundaries are of four types: fixed, moveable, real, and imaginary. For example, in an engine, a fixed boundary means the piston is locked at its position; as such, a constant volume process occurs. In that same engine, a moveable boundary allows the piston to move in and out. For closed systems, boundaries are real while for open system boundaries are often imaginary.
Generally, thermodynamics distinguishes three classes of systems, defined in terms of what is allowed to cross their boundaries:
In the graph, classic thermodynamics classifies systems according to boundaries (|-Spe region). When the boundary does not exist (open system), we cannot fully talk of a thermodynamic system, as one of its ternary elements is missing, so a whole is not created.
At the other extreme we find an isolated system, which in a sense does not have a boundary either, as it does not communicate with the external fractal Universe and becomes a whole in itself. So the more interesting cases are those in which the ‘monad’ is not fully isolated but does communicate, exchanging energy in the form of work and heat, and information in the form of state and shape of the system. Those systems which do not move, ∆e=0 (mechanically isolated), and those which do not deform, ∆i=0 (thermally isolated), are partial cases of the full thermodynamic system – a closed system, which allows changes in both e and i, work and heat-state.
Now the fundamental thermodynamic law is that time never stops, so when one arrow of time stops – such as entropic motion – another arrow of time, growth of information, of form-in-action, of form, grows. In the case of a thermodynamic ensemble, the external motion ceases and the internal motion, or evolution of information starts. We can talk of locomotion and evolution as the two internal and external, ∆+1, ∆-1 forms of motion more fundamental to the Universe, and in that sense understand thermodynamics as the change from external to internal motion states.
States and processes
When a system is at equilibrium under a given set of conditions, it is said to be in a definite thermodynamic state. The state of the system can be described by a number of state quantities that do not depend on the process by which the system arrived at its state. They are called intensive variables or extensive variables according to how they change when the size of the system changes. The properties of the system can be described by an equation of state which specifies the relationship between these variables. State may be thought of as the instantaneous quantitative description of a system with a set number of variables held constant.
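The intensive/extensive distinction above can be made concrete with the simplest equation of state, the ideal gas law. A minimal standard-thermodynamics sketch (the helper name `pressure` and the sample numbers are assumptions for illustration):

```python
# Doubling the size of an ideal-gas sample doubles the extensive variables
# (n, V) but leaves the intensive ones (T, P) unchanged: the equation of
# state P = nRT/V fixes P from the other state quantities.
R = 8.314  # gas constant, J/(mol K)

def pressure(n_mol: float, temp_k: float, vol_m3: float) -> float:
    """Ideal-gas equation of state, solved for the intensive variable P."""
    return n_mol * R * temp_k / vol_m3

p1 = pressure(1.0, 300.0, 0.025)   # one mole in 25 litres at 300 K
p2 = pressure(2.0, 300.0, 0.050)   # twice the system: n and V both doubled
assert abs(p1 - p2) < 1e-9         # the intensive variable is unchanged
print(f"P = {p1:.0f} Pa in both cases")
```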
A thermodynamic process may be defined as the energetic evolution of a thermodynamic system proceeding from an initial state to a final state. It can be described by process quantities. Typically, each thermodynamic process is distinguished from other processes in energetic character according to what parameters, such as temperature, pressure, or volume, etc., are held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair. Let us then first define the parameters of Spe and Tiƒ in thermodynamics which are:
- Pressure, which is inverse to the volume and hence inverse to the Spe, a parameter of Tiƒ relative order, such as ∆P->∆Tiƒ.
- Temperature, inverse to pressure, hence initially the parameter of Spe, parallel to volume≈temperature≈Spe.
- Volume: the space-time configuration underlying the thermodynamic process both in spatial extension and form.
So once those 3 relative parameters – Pressure (Tiƒ), Temperature (Spe) and Volume (static, present ST) – are understood, the classic thermodynamic processes studied are ‘ceteris paribus’ cases, which are partial equations of the Γ(states) Generator:
- Isobaric process: occurs at constant pressure
- Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
- Isothermal process: occurs at a constant temperature
- Adiabatic process: occurs without loss or gain of energy by heat
- Isentropic process: a reversible adiabatic process, occurs at a constant entropy
- Isenthalpic process: occurs at a constant enthalpy
- Steady state process: occurs without a change in the internal energy
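For an ideal gas, three of those ‘ceteris paribus’ processes have closed-form expressions for the work done by the gas. A minimal standard-thermodynamics sketch (function names and sample numbers are assumptions for illustration):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def work_isobaric(p: float, v1: float, v2: float) -> float:
    return p * (v2 - v1)                  # constant pressure: W = P * dV

def work_isochoric() -> float:
    return 0.0                            # constant volume: no P-dV work at all

def work_isothermal(n: float, t: float, v1: float, v2: float) -> float:
    return n * R * t * math.log(v2 / v1)  # constant T: W = nRT ln(V2/V1)

# One mole expanding from 10 L to 20 L under each constraint:
print(work_isobaric(1e5, 0.010, 0.020))           # ~1000 J at 1 bar
print(work_isochoric())                           # 0 J
print(work_isothermal(1.0, 300.0, 0.010, 0.020))  # ~1729 J at 300 K
```

Each process holds one member of a conjugate pair fixed, which is what makes a single closed-form integral possible.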
So let us also start parallel to physics with the analysis of…
The zero sum of entropy. The program of Existence as a zero sum.
The entropy of the Universe is constant, as the worldcycle is a zero sum of entropy. To fully understand this we must rewrite the equation of entropy in terms of 5D functions.
dS = dQr/T: the entropy change dS is equal to the amount of heat exchanged in the reversible process divided by the absolute temperature of the system:
We have assumed the temperature is constant, but what truly is constant is entropy, because it is a reversible closed cycle, with the process of death and the asymptotic arrows between planes of existence balancing the missing quantities in a single plane, when we consider the ‘inverse loop’ taking place in the i±2 transfers and actions of death:
Thus dS=0, and ∮ dQr/T=0 becomes a Lagrangian where the active clock of time is temperature and Qr the energy of the system.
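The zero sum over a closed reversible cycle can be checked numerically on the textbook Carnot cycle: the two adiabatic legs exchange no heat, and the two isothermal legs contribute equal and opposite Qr/T. A sketch under standard ideal-gas assumptions (all numbers illustrative):

```python
import math

R = 8.314                         # gas constant, J/(mol K)
n, t_hot, t_cold = 1.0, 500.0, 300.0
v1, v2 = 0.010, 0.020             # isothermal expansion at T_hot

# For the adiabatic legs to close the cycle, the cold isotherm must use the
# same compression ratio, so its Q/T cancels the hot leg exactly.
q_hot = n * R * t_hot * math.log(v2 / v1)     # heat absorbed at T_hot
q_cold = -n * R * t_cold * math.log(v2 / v1)  # heat rejected at T_cold

cycle_entropy = q_hot / t_hot + q_cold / t_cold
assert abs(cycle_entropy) < 1e-12             # zero sum over the closed cycle
```

Less heat is rejected than absorbed (the difference is the work output), yet the entropy bookkeeping still closes to zero, because the rejected heat leaves at the lower temperature.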
From where we can consider different equations of thermodynamics.
Thermodynamic equations as expressions of the life-death cycle and its quantum actions of ‘exist¡ence’.
Thus the main difference between cyclical time and lineal time is the concept of entropy: in the cyclical Universe the entropy of the whole system is zero. And so it is for all relative systems, as they are balanced by transfers between planes of existence – notably the positive order acquired in the long arrow of informative social evolution, ∆+1, and then in the fast explosive reversed process of death, when the wave falls fast into the past, erasing all its information of future, higher planes of existential organization: ∂’’œ<<∑∑∆-2
Now, to understand quantum jumps through 2 Universal ∆-planes – an amazing feat that puts a macrocosm in contact with its lowest cellular/atomic planes – we should consider this astounding explosion of expansive Entropy to be balanced with the total gain in entropy by the same system during its arrow of Life:
Life Cycle: Tƒ-1: Seminal wave: ∑ S (life semi-action-cycle, h/2) = ∑ ∆-1> ∆<Max. Energy (youth) > Reproductive ∑œ wave > Max. Information: Tƒ+1 = ∆S-2: Death
In the act of death the expansion of entropy jumps down at exponential rhythms: the social creative process that lasted an entire life – the growth of information and order, the arrow of future of life – becomes disintegrated in an instant.
So it happens in physical matter in the processes of nuclear bombs, novae, quasars and big bangs. The difference is that the creative process – from biochemical molecules to cells and fetus, which emerged into the ∆-scale for a long life that lasted so much time yet changed so little the volume of vital space of the system – explodes now in a quantum of time, dissolving all its networks.
It is precisely that difference of speed between life <<<< and death, in the acceleration and deceleration of the process of collapse, that creates an apparent winning arrow for entropy. But this idea would be misleading, since for a much longer period a particle, the soul of a living biological entity, or a crystal in a planet has created order at a very slow tempo. Time might be considered subjective in these orderly long-time processes, and perhaps it is. That is, we can construct a frame of reference in which time cycles do not suffer accelerations and decelerations in positions of change of state, from relative past, to present, to futures and back.
But the overwhelming evidence is this:
– The maximal speed of destruction of information is in the negative max. ∆S moment of death, which lasts hardly any time: ∆S x ∏ Tƒ = Death, disorder
– The creation of information in slow, cold environments is the opposite arrow that lasts much more in time but hardly consumes any external information.
Both should be balanced completing the conservative laws of fields with equipotential, immortal paths of eternal motion.
A Space-Time cycle of finite Duration, or life-death cycle is thus defined as:
∆-1, ∑∆-1>S∆<ST>Tƒ<<S∆-2 if we are to use algebraic notation of the different partial equations of a world cycle.
If we define the function, W=∫∫∂ (a,e,I,o,u) dtds=0, as the worldcycle or function of existence of a system between birth and extinction, this is the fundamental equation we seek to resolve for any being.
Now when the meaning of temperature is understood we can consider again, the fundamental events of all the scale of physics with an enhanced understanding of the meaning of its parameters.
To start with, the frequency of the light wave and the charge – an accelerated vortex like the mass, with its curvatures Q and G – form a system of clocks of time which define perfectly a beat, related to an energy, which is inverse. And so we find ourselves with different degrees of integration – worldcycle or minimal forceful action – of such parameters, as:
H = E x V, T = d Q/ d S= d Q x dO, which we call Order, the inverse function to entropy.
And so on; we define a simple law of least action to unify the fundamental equation of each of those scales:
F= A x E, -∂H/∂q=dp/dt and its inverse function, ∂H/∂p=dq/dt
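The canonical pair -∂H/∂q = dp/dt, ∂H/∂p = dq/dt can be illustrated on the standard harmonic oscillator, whose Hamiltonian is conserved over the whole cycle. A minimal sketch using the symplectic Euler integrator (step size and constants are arbitrary choices, not values from the text):

```python
# Hamilton's equations for H = p^2/2m + k q^2/2, integrated with symplectic
# Euler so the conserved energy stays bounded over the cycle.
m, k, dt = 1.0, 1.0, 0.001
q, p = 1.0, 0.0                      # start at maximum displacement

def hamiltonian(q: float, p: float) -> float:
    return p * p / (2 * m) + k * q * q / 2

h0 = hamiltonian(q, p)
for _ in range(10_000):              # ~1.6 periods with these constants
    p -= dt * k * q                  # dp/dt = -dH/dq
    q += dt * p / m                  # dq/dt =  dH/dp

# The energy drift stays tiny: the world cycle closes as a conserved sum.
assert abs(hamiltonian(q, p) - h0) < 1e-3
```

The symplectic update is chosen deliberately: a plain Euler step would let the energy grow without bound, whereas this one respects the conserved structure of the equations.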
Indeed, in cyclical time, we consider that entropy has as a secondary feature the quantity of information of the system, multiplied by conserving its actions in the multiple hyperbolic S∆-1,2, fields of existence
Thus in the scales of matter we find the functions of existence in the actions H, T…
Those 5 actions determine therefore a Program of existence, which all systems follow in a mechanical, vegetative or conscious manner. And thus we can also write the 5th isomorphism, as a Generator Equation of all the actions of the Universe, written as a dynamic, temporal feed back equation or a static, organic, structural, spatial equation:
∑Se <=> Tƒ ….. or Se x Tƒ= ST±4
∑ represents the social gathering of individuals into Universals (∂u).
S signifies the motions in space, (∂a).
e signifies the increases of structural energy of the system, ∂e.
<=> in the dynamic expression or X in the structural one, signifies the combination of Se and Ti which gives birth to ‘reproductive’ actions and offspring: ∂œ.
And Ti signifies the perceptive actions of the being, ∂i.
And these 5 actions are also represented in the ‘philosophical equation’ Sp x It = ST, whereas E≈ ∂e, S≈∂a, I≈∂I, x≈∂o, ST≈ ∂U
Again we use any of the different expressions of the Generator Equation, despite the initial confusion it might create, so the reader learns to ‘change his chip’ from the quantitative, single, exact analysis of reality to a conceptual, qualitative, multiple, ‘fractal’, iterative perception based on similarities and homologies, parallelisms and conceptual complexity.
Ultimately, only when he changes his mind’s frame and accepts those enriching dualities and isomorphisms will he be able to acquire the ‘enlightened’, ‘Buddhist’ perception of the pantheist Universe required to fully integrate himself within the whole.
This is the Program of Existence that all systems follow to survive, deduced from the structural elements of any fractal Superorganism of scalar space and cyclical time, which will try to maintain stable its ST-form and reproduce it beyond death in small ‘actions’ and full ‘reproductions’ of the being, such as:
∑œ Reproductions of a being across its ST±4 planes is the existential game we all play.
We can write the Generator equation, as ∑ExI=ST, this alternative ‘old expression’ of the Generator means that to “Exist’ is to absorb and emit entropy/energy and information in a series of actions, ∑, which try to maintain a system in balance with its space-time environment. And all this is managed by the relative zero-point, Tƒ, of the system:
To Exi=st is therefore a tautological word. The rules of existence are in that sense common to all beings. What will vary is the specific form in which a certain particle, head or informative centre, Tƒ, manages to survive by its actions. Or fails.
III. FREE ENERGY, ENTHALPY, TEMPERATURE AND ENTROPY EQUATIONS
Worldcycles of energy integrated in all its entropic future branching.
Now, returning to classic thermodynamics, we shall change our perspective from ∆±i to the S, T, ST elements of the ∆º scale, which as usual being the human scale is the one we are more interested in.
First we must reconsider how humans translate the different arrows of futures into a quantitative parameter. And for that aim we can consider the key equation that relates the 4 parameters of temperature, volume, entropy and energy.
In GST, entropy defines all the possible paths of future of a system, which develop in sequential processes, and increase in Tiƒ, solid states, whose smaller volume implies a faster speed of time cycles (∆ metric: Spe x Tiƒ = K). On the other hand energy integrates all the momenta from past to future, closing the entire world cycle of the being.
And so, since solid states have more modes of vibration, they store more energy, in a faster manner (the same can be found in the ∆-1 translation of energy, E = hν, which increases with the vibrations).
So ultimately energy integrates both the time-events – frequencies and temperatures – and the parallel splits of time into different futures, all of which happen to be accounted for in the energy volume. And the question we shall find once and again in different systems is whether they are ‘parallel universes’ (not really – that is just physicists’ imagination), or happen in sequential time and become ‘stored’ as past tails (sometimes), or branch out like the branches of a tree, simultaneously, giving birth to different ‘resonances’ of the same being. In brief, energy integrates all, but ‘all’ can be a split of populations in space, or an acceleration of frequencies and temperatures in time. And humans sometimes cannot distinguish both. I.e., are the three colours of gluons different gluons or the same gluon evolving in time?
This said, in a general manner we can now understand one of the fundamental equations of thermodynamics, the so-called Maxwell relationship, which basically expresses the law of conservation of energy – already analysed in terms of pressure and volume – now in terms of temperature and entropy:
T = (∂E/∂S)v, where E is the energy, S is the entropy, and the partial derivative is taken at constant volume. So we can also write T ∂S = ∂E, where the increase of entropy and energy are closely related, inasmuch as a larger entropy – more possible paths of future – has a larger energy, since those paths of future must ALL take place, which can only be the case if there is either a consecutive causal time sequence of them (in a system faster in time than one which only stores a determined path without entropy), or a branching of futures.
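That this kind of relation holds can be verified symbolically: in the energy representation E(S, V), T = (∂E/∂S)v and P = -(∂E/∂V)s, and the Maxwell relation (∂T/∂V)s = -(∂P/∂S)v follows from the equality of mixed partial derivatives. A sketch with sympy, using a monatomic-ideal-gas-like fundamental relation with constants absorbed (an illustrative assumption; any smooth E(S, V) works):

```python
import sympy as sp

S, V = sp.symbols('S V', positive=True)
# Fundamental relation E(S, V), up to constants (units chosen so nR = 1).
E = V**sp.Rational(-2, 3) * sp.exp(sp.Rational(2, 3) * S)

T = sp.diff(E, S)        # T =  (dE/dS) at constant V
P = -sp.diff(E, V)       # P = -(dE/dV) at constant S

# Maxwell relation in the energy representation: (dT/dV)_S = -(dP/dS)_V
lhs = sp.diff(T, V)
rhs = -sp.diff(P, S)
assert sp.simplify(lhs - rhs) == 0
```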
This is a profound result with interesting consequences in all sciences, as in all of them there will be branching, giving birth to a ternary ‘being’ in space, or a partition of a whole into different futures, as each chaotic group takes a different path.
In its most profound meaning, it means that the Universe, for any ‘partition of species’, will try all possible paths of future, regardless of the quantity of populations≈probabilities (a space-time symmetry, similar to the one just described) with which each one happens. In history it will mean there are as many fractal planets as possible histories: in some, humans survive; in most, robots and strangelets take over… In biology, that all mutations do happen in a chaotic way by ‘chance’, but then only certain paths survive.
So yes, once we understand with a certain conceptual finesse the laws of physics, they give us interpretations for far more complex systems, inasmuch as their simplicity leaves the first principles of the homological Universe crystal clear (except for physicists, it seems 🙂
Schwarz’s identity, Maxwell relations, and the reversible symmetry of space and time forms and functions.
Next, we can consider the relationships between mathematical physics and entropy, since the previous equation and the whole set of Maxwell relationships between thermodynamic parameters respond to a larger homological mathematical result, Schwarz’s theorem – which establishes, for an enormous range of functions, that if we differentiate in space and then in time, due to the symmetry of space forms and time functions, the result is equivalent to differentiating in time first and then in space. So we shall find this hidden jewel of ‘experimental mathematics’ in many different equations:
In the graph, Schwarz’s theorem expresses one of the fundamental laws of the Universe: the symmetry of processes which evolve together first in space and then in time, or vice versa, as ∆nalysis shows a derivative to be an evolution/densification of a system of parts into a ‘whole’, whose ‘present’ parameter ∆s/∆t is the derivative in time and its inverse ∆t/∆s the derivative in space.
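The space-then-time versus time-then-space equivalence the text attributes to Schwarz’s theorem is the equality of mixed partial derivatives, which can be checked symbolically for any sufficiently smooth function (the sample function below is an arbitrary choice):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.sin(x * t) * sp.exp(-t) + x**3 * t**2   # any twice-differentiable field

d_xt = sp.diff(f, x, t)   # differentiate in space first, then in time
d_tx = sp.diff(f, t, x)   # differentiate in time first, then in space

assert sp.simplify(d_xt - d_tx) == 0           # Schwarz: order does not matter
```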
Properly understood in terms of informative time, entropic space and their bidimensional combinations (holographic principle), this implies that for systems of minimal order there is no distinction between spatial herds of populations – a mere slice of present in a flow of time – and equivalent stochastic processes in time with no causal memory (entropic, ‘Markovian’ processes, as those involved in heat and thermodynamics, or in Brownian systems, where past events do not influence present ones), and time truly becomes a space dimension or vice versa.
Its application, once its S≈T meaning is understood, will guide the analysis of many stientific concepts and parameters not yet fully understood. In the case of thermodynamics, it is the origin of the Maxwell relations, which encode – as the Maxwell equations of electromagnetism encode a ternary system with a magnetic membrane, a Tiƒ charge and its dual ST relationships – the basic structure of an ∆º thermodynamic system. In the next graph, we have arranged them roughly according to the fundamental law of GST:
Energy (ST) = Spe x Tiƒ… so the reader can easily assess both the cyclical nature of space-time processes and the exact matching of thermodynamic equations with the Generator of all ternary systems of reality.
Reversible vs. irreversible processes: how to turn back the arrow of entropy into an orderly form.
Now, as always, we are here more interested in the philosophical, conceptual, organic, causal nature of time and space and its symmetries. So we are not going to copycat further the mathematical physics of entropy, dissecting all those equations. Of more interest to us is a HUGE QUESTION FOR PRAXIS in all sciences, ESPECIALLY TODAY as we enter in History the AGE OF ENTROPY and death of our civilisation: whether the process can be reverted.
So we shall consider another huge field of thermodynamics, which as we said is the queen of physics, not because its second law is universal (it is not, and this was somehow recognised when the third law – zero entropy for a crystal mind – was accepted) but because the ∆º detail is maximal and the systems simple enough to deduce patterns in their most essential components – akin to mathematics and universal grammar in that sense.
So what we learn here will apply to all other sciences, whose errors should be corrected to match the thermodynamic knowledge, especially in quantum physics – for which Gibbs canonical ensembles and statistical mechanics work fine. It is indeed mimetic (but quantum physicists love to ‘feel different’ – so they stress only their errors as ‘novelties’, which they are not).
The same can be said of mechanics, where perception however is close enough to leave few errors. So in essence we can talk in terms of Lagrangians and Hamiltonians in mechanics to understand reversible processes, akin to those concepts in thermodynamics (once we get rid of the bullshit of Copenhagen interpretations and understand the ‘slightly’ different views from the human ∆º of the three scales, given the asymmetry of 5D part-and-whole arrows: a balanced view at the same ST level, a view of lesser information of the mechanical, whole ∆+1 level, and a massive amount of information coming from the faster, more abundant fractal scales of the parts in quantum physics).
So what is the key to a reversible process? There are in fact two keys, which can be expressed in a simple sentence:
“For a reversible process in time to happen, the level of control of time events and spatial individuals must be at ∆-1: micro-management of infinitesimal parts and minimal frequencies.”
As a corollary, all this of course implies that the macro-being, the elephant in the room, shall NOT control but merely dissuade the micro-parts to take their decisions, since the infinite infinitesimal micro-events and time frequencies can only be managed internally. In praxis it means the system needs a Tiƒ internal knot of order, or mind, invaginated by nervous/informative fractal networks that touch all cells to create simultaneous behaviour – which only happens in crystal solids, through van der Waals and magnetic field control, or in biological systems with nervous control of simultaneous cell motions – and, we imagine, in the less observed invisible world of galaxies, by gravitational ‘DNA-like’ informative black holes micro-managing the evolution and feeding of its dark matter cells on the star mitochondria.
And this is expressed in mechanics by the Lagrangian function, whose minimalist derivatives in time tend to zero (least time actions), and hence, over the whole conservative energy world cycle integral of the being, the Hamiltonian becomes also a conserved zero-sum.
In thermodynamics, the same concept means a reversible process is a process whose direction can be “reversed” by inducing infinitesimal changes to some property of the system via its surroundings, while not increasing entropy.
Throughout the entire reversible process, the system is in thermodynamic equilibrium with its surroundings. Since it would take an infinite amount of time for the reversible process to finish, perfectly reversible processes are impossible. However, if the system undergoing the changes responds much faster than the applied change, the deviation from reversibility may be negligible. In a reversible cycle, a reversible process which is cyclic, the system and its surroundings will be returned to their original states if the forward cycle is followed by the reverse cycle.
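The reversible/irreversible contrast can be made quantitative with the standard textbook pair: a quasi-static isothermal expansion versus a free expansion into vacuum. The gas entropy change is identical (entropy is a state function), but only the reversible path leaves system plus surroundings at a zero sum. A sketch under ideal-gas assumptions (all numbers illustrative):

```python
import math

# One mole of ideal gas doubling its volume at 300 K, by two routes.
R, n, T = 8.314, 1.0, 300.0
dS_gas = n * R * math.log(2)        # nR ln(V2/V1): same on either route

# Reversible route: heat Q = T*dS flows in from the bath, whose entropy
# drops by exactly Q/T -- the infinitesimal-step micro-management above.
dS_bath_reversible = -dS_gas
# Free expansion: no heat, no work -- the surroundings are untouched.
dS_bath_free = 0.0

print(dS_gas + dS_bath_reversible)  # 0.0: reversible, zero total sum
print(dS_gas + dS_bath_free)        # ~5.76 J/K of entropy generated
```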
So it is understood why humans, as elephants in a very subtle scalar Universe, cannot make most processes reversible with their huge, dull methods of control. Simply speaking, the ‘will of individual atoms’ refuses to yield to the bullies, and so heat and entropy ensue; as it does in brutish social dictatorships, as opposed to subtle placebo democracies, where the capitalist masters of the Financial-Media system merely ‘suggest’ to the mass, from their advantageous point of view, what to do, and the citizen-‘believer’ will therefore create order without understanding that the networks of money and mass-media have suggested it to him.
So the Universe reaches order and reversibility only when it builds a system with efficient, superorganic structures.
All other systems will have entropy as synchronicity is lacking.
IN THAT REGARD THE ENTIRE CONCEPT OF AN IRREVERSIBLE ENTROPY ARROW FOR THE WHOLE UNIVERSE, DOES NOT EVEN WORK ON MATTER SYSTEMS, AS IT IS MERELY YET ANOTHER EGO-TRIP OF THE HOMUNCULUS ON THE LOOSE, thinking that if he cannot ‘get it’ by himself, the universe is guilty and wrong- those damn atoms that misbehave (:
So that is a fast overview of the ∆º matter scale from the perspective of its ∆±1 relationships.
The other great field for a bulky description of any system being the three topological functions of space-time (S-st-T) symmetries (its world cycles in 3 sequential ages of time and its topological trinity of organs in space), we shall now deal with an introduction to the so-called ‘state physics’, which studies those three ‘crystal clear’ formal arrows: S-gas, st-liquids and Tiƒ ‘clear crystal solid minds’ (:
∆-1: QUANTUM STATES
‘Quantum physics will be cast in the future, in a manner similar to statistical Mechanics’ Einstein
The atom: the organism of the lower scales of physics. The ternary, i-logic topological structure of atoms.
An atom is a space-time field divided in 3 species, informative masses or quarks, energetic gravitational and electromagnetic networks and an intermediate space-time, the electronic nebulae, which bends light into ‘fractal’, ultra-dense ∆-1 photons of light. As such it can be studied with the same 5D space-time structural ‘generator’ of any other scale of the 5th dimension.
In the graph, an atom is a space-time field divided in 3 space-time zones: its informative quark center, the nucleus; the external reproductive membrane, made of electrons, which evolve socially in bigger Spe x Tiƒ membranes when atoms become molecules; while informative, gravitational and energetic, light networks shape their intermediate space-time.
The topology of the atom is thus clear. The electron acts as an external ‘spherical plane’, a membrane of Entropy. In the center quarks are the informative vortices. In between Entropy and form is transferred with forces, which often decouple, reproducing new particles and antiparticles. There are 3 informative families of quarks-mass, due to the evolution of information in 3 ages or horizons of increasing form: each quark family is thus an age in the evolution of informative matter.
Astrœ-physics, by the unification of charges and masses, into a single group, that of time vortices, with a symmetry of scale, based in Log10 functions, is both a cosmological and atomic science.
In the strict sense, unlike astrophysics – which studies only the external motions of physical systems in space and dedicates ‘partial theories’ to the study of the other 5 motions, generation, growth, evolution, diminution and extinction (big bang theories of explosive deaths and births, the H-R diagram of stellar evolution, etc.) – atomic systems have been studied in more detail due to the closeness of observations. So molecular atomic systems are perfectly described with architectonical precision as they build up our scales of existence. Trouble in definition starts within the atomic scale, when our observing instruments, which CANNOT be smaller than the electronic mind that observes, try to analyze the minimal quanta of information of our universe, h-quanta, and of our space, c², light space-time.
Now, the study of the full 6 motions is better expressed in the jargon and language of 5D sciences, which includes clear changes of paradigm, such as the study of the worldcycle, of which any worldline is obviously a partial ‘element’ (as all lineal geometries of space, Sp, are mere ‘parts’ of a larger cyclical geometry of time, Tƒ, toward which they will tend in any ‘future function’, subtended into the future).
Now physics is NOT really more than the study of lineal motions and all other elements of generation, growth, diminution, etc, with mathematical equations, specifically, since Newton, differential equations. So it is important to run a parallel analysis of what they mean.
And in that sense we can talk of 3 great ages of mathematical physics, in a full cycle:
- The beginning with numerical systems of calculus of ‘quanta of physics’
- The age of differential equations
- The age of imaginary Complex equations
- And back again, the age of numerical calculus of quanta of physics, closing the cycle.
In as much as Physics has not a qualitative purpose, it is easily reduced to the study of physical parameters with partial differential equations, which are merely the lineal version of the more complex ‘curved geometries’, in which the ∆x difference between the curve and the lineal function is considered a variable constant.
As part of the plan of this blog, which is ultimately a philosophy of science that tries to establish the isomorphisms of scales, topologies, time ages, mind languages and entangled ¡logic laws of the Universe of fractal T.œs, we shall study in this post under the generic name of Thermodynamics, the isomorphic laws of 3 scales of the Universe, regarding the ‘STATE’ of entropic Dimotion of quantum, molecular and gravitational species.
As such Thermodynamics as it is studied by æntropic man is NOT the whole picture of the states of matter of those 3 scales, but mainly its analysis of the gaseous, entropic state of maximal disordered motion; and as such a ‘sub discipline of state physics, which also can be studied for all scales’.
As the blog advances and I manage to clean up and reorder all the blocks I copy-paste, and further introduce 30 years of notebooks, I hope to be able to dedicate 3±i different posts to those states=dimotions of physical systems.
IN 5D ALL SYSTEMS perform 3 simpler actions, perception, motion and feeding on energy to reproduce the system.
And 2 scalar actions: social evolution and its inverse, entropic death.
So when those 3±i actions=dimotions are studied we have a whole understanding of the system.
What are then the 5D UNDERLYING VITAL, organic principles of physical systems that make them akin to those of any scale?
- 1,2,3D: Locomotion, which embodies the simpler actions through its fundamental concept, that of AN ACTION of a particle in its path through a field of forces, in which IT FEEDS, reproduces and, AS A RESULT OF BOTH, MOVES. This is the reason why locomotion is the physical EXISTENTIAL ACTION that embodies all other simpler actions, as the graph shows.
- ∆±¡: 4,5D: Entropic scattering, according to collision (loss of vital momentum) and angle (4th postulate of perpendicularity), vs. social evolution under an informative force (normally gravitation); these are the scalar actions of the system.
So because locomotion implies, for physical systems, ‘perception in particle state-stop’, feeding on the energy of the lower field, and reproduction by adjacent imprinting in wave state-motion, for PHYSICAL SYSTEMS locomotion embodies the 3 simplest actions of the being; to which we just must add its ‘changes of state’ as its actions of physical d=evolution, to have a fully organic analysis of the species.
This is why Thermodynamics, which studies the human ∆º scale of physical systems, and its entropic dissolution and crystal evolution, matters so much to human systems.
As such, once the philosophical errors of describing reality with a single arrow of time, entropy, are solved, the laws of epistemology dictate that, as the closest – hence better observed – scale of physical systems, its laws have the highest accuracy in the description of reality of all physical scales, and should be the example to model other systems, notably the quantum systems of the lower scales, where perception is limited and errors of philosophical nature many. We shall thus consider here several enlightenments achieved by casting thermodynamics in terms of 5D metric, starting from the blueprint for a unification of statistical mechanics and quantum physics, as we did in mass theory with charges and masses.
As we have stated ad nauseam, those corrections will have to come ‘naturally’ from the new ‘properties’ of space, as made of ∆§cales, and of time, as made of ternary arrows, ages. So what are the new insights established by those 2 new properties of Non-Euclidean space and Non-Aristotelian ternary time? 2 fascinating solutions to two long-awaited questions of physics:
- ∆§cales and 5D metric will allow the unification of quantum physics and thermodynamics, with 5D metric scales.
- Ternary time arrows, will allow us to unify and explain the meaning of ‘local entropy=gaseous arrows’ which are only one of the 3 time ages=states of matter, such as, gas is the entropy arrow, liquid the balanced one and solid crystal the informative one, giving us the generator of matter states:
∆ð (entropic fast moving gases) < exi: Liquid states > §@: crystal solid minds
1.The path to unify statistical thermodynamics and quantum mechanics.
Einstein’s most original contribution to twentieth-century philosophy of science lies elsewhere: in his distinction between what he termed “principle theories”, which we shall call ‘constraints’, and “constructive theories.”
A principle theory provides a set of well-confirmed constraints on the phenomena; examples include the first and second laws of thermodynamics. A constructive theory, by contrast, provides a constructive model for the phenomena of interest, as the kinetic theory of gases does for heat.
Ultimate understanding requires a constructive theory but often, says Einstein, progress in theory is impeded by premature attempts at developing constructive theories in the absence of sufficient constraints by means of which to narrow the range of possible constructive theories. It is the function of principle theories to provide such constraints, and progress is often best achieved by focusing first on the establishment of such principles.
According to Einstein, that is how he achieved his breakthrough with the theory of relativity, which, he says, is a principle theory, its two principles being the relativity principle and the light principle.
We consider therefore the 11 Ðisomorphisms and 5 Ðimotions of scalar space-time the ‘principles’ that shall constrain all other theories of stience.
While the principle theories-constructive theories distinction first made its way into print in 1919, there is considerable evidence that it played an explicit role in Einstein’s thinking much earlier. Nor was it only the relativity and light principles that served Einstein as constraints in his theorizing. Thus, he explicitly mentions also the Boltzmann principle, S = k log W, as another such:
This equation connects thermodynamics with the molecular theory. It yields, as well, the statistical probabilities of the states of systems for which we are not in a position to construct a molecular-theoretical model. To that extent, Boltzmann’s magnificent idea is of significance for theoretical physics … because it provides a heuristic principle whose range extends beyond the domain of validity of molecular mechanics. (Einstein 1915, p. 262).
Einstein is here alluding to the famous entropic analogy whereby, in his 1905 photon hypothesis paper, he reasoned from the fact that black-body radiation in the Wien regime satisfied the Boltzmann principle to the conclusion that, in that regime, radiation behaved as if it consisted of mutually independent, corpuscle-like quanta of electromagnetic energy. The quantum hypothesis is a constructive model of radiation; the Boltzmann principle is the constraint that first suggested that model.
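As a quick numerical aside, the Boltzmann principle S = k log W quoted above can be checked with a few lines of code; the microstate counts used below are arbitrary illustration values, not drawn from any physical system:

```python
import math

# Boltzmann constant, J/K (CODATA exact value)
K_B = 1.380649e-23

def boltzmann_entropy(w: float) -> float:
    """Boltzmann's principle S = k log W: entropy of a macrostate with W microstates."""
    return K_B * math.log(w)

# Doubling the number of accessible microstates adds exactly k*ln(2) of entropy,
# regardless of how large W already is.
delta_s = boltzmann_entropy(2e20) - boltzmann_entropy(1e20)
print(delta_s)  # ≈ 9.57e-24 J/K
```

The additivity of the logarithm is what connects the multiplicative counting of microstates to the additive thermodynamic entropy.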
In that regard perhaps the clearest expression of this ‘epistemological law’ was a comment of Einstein, considering that as human knowledge of those scales increases the homology between thermodynamics and quantum physics will become evident:
“The statistical quantum theory would … take an approximately analogous position to the statistical mechanics within the framework of classical mechanics”. Einstein
This is indeed the case, providing for the corrections of the main errors derived from using a single time arrow and a single space-time continuum, and the variations that each ‘scale’ of the fractal universe experiences in its ‘evolution of reality’:
SINCE in 5D metric there IS no DISPUTE between Einstein and Bohr. The Universe is not EITHER probabilistic in time (the 0-1 mathematical unit sphere after normalization of parameters) OR statistical in space (the 1-∞ thermodynamic plane). BOTH ARE EQUIVALENT MATHEMATICAL FORMULATIONS (measure theory). The 1-∞ plane is better for slower, LARGE thermodynamic ensembles that occupy more space. The 0-1 time description IS BETTER FOR the faster ‘clocks of smaller particles’, according to the 5D scalar metric: smaller scales in space run faster time clocks; time, not space, thus becomes dominant in quantum physics, hence better described probabilistically.
Yet in both scales together S-izes x ð (clocks) = Constant. They are co-invariant, which is the definition of a dimension of space-time, as per Klein, the XIX c. mathematician: ‘a dimension of space-time exists when there is a mathematical metric equation that shows both parameters co-invariant, so we can move through them’…
So you move through the 5th dimension of space-time of scales, GROWING IN SIZE AND DIMINISHING YOUR TIME CLOCKS, FROM BIRTH, AS A FAST SEED OF time, small in space, that slows down its rhythms as it grows in size and emerges in this scale you call the 4D space-time continuum… LIFE IS A TRAVEL through those SCALES of the fifth dimension… AND THAT IS THE MEANING OF ‘EXISTENCE AS A SPACE-TIME ORGANISM’, the ULTIMATE QUESTION to know… which DOES apply also to the fascinating mathematical study of the quantum and thermodynamic world as the ∆±¡ planes of atomic systems, the particle-quantum scale and the molecular social scale.
Now, for ‘III millennium researchers’, as I am afraid I won’t be able to complete these posts – a simple ‘main theorem’ of choice of jargon for the Unification of those scales, similar to the one we have established between the charge and mass planes, choosing the Newton>Poisson>Einstein formalism OVER the confusing, artificial electromagnetic one by describing charges as vortices of space-time: the obvious jargon to choose for the Unification of the quantum and thermodynamic formalisms, with the use of the Theory of Measure, is ideally that of thermodynamics, because the quantum jargon is artificial, due to the need of renormalization to accommodate it to the 0-1 equivalent sphere of time probabilities.
This said obviously as routine matters more than clarity, in the same manner that nobody is going to translate properly and streamline electromagnetic jargons, nobody is after a century long of quantum probability jargons to rewrite all the books of physics, algorithms, programs of machines, etc. to streamline it. But it is important to stress those immediate equivalences in 5D metric (not identities but self-similarities, as a fractal scale is never equal in each scale).
So what we shall do in this post is explain thermodynamics with the excellent jargon established by statistical mechanics of populations in space. And whenever I complete the more complicated articles on quantum physics we shall merely ‘explain conceptually’ many of the mathematical formulae of quantum physics. It is though a pity that 5D metric was not found earlier, so the founding fathers of those strange jargons could have made the proper choice and saved billions of hours for the past, present and future students of the disciplines (we can compare this to the duality of Apple and Microsoft underlying computer systems – Apple being the streamlined, cleaner program, even if most people use Microsoft computers – so in the long term the simpler, more truthful Apple OS dominated the market).
$t-Electromagnetic field>ST-Light-wave>§ð: Charge
The key to understanding the quantum physics formalism in terms of thermodynamics and state physics is to realize that the 0-1 probabilistic time sphere is mathematically equivalent (theory of measure) to the 1-∞ plane of spatial populations (statistics). So their difference is merely formal, due to the use of two equivalent scales of mathematics, and the symmetry of all forms of time-motion when perceived as spatial forms=populations. (Statistics then is equivalent to probability, both with the same Bell curve to expose mean populations and mean probabilities.)
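The purely mathematical part of this equivalence – a statistical population on the 1-∞ plane renormalized into a 0-1 probability distribution – can be sketched in a few lines; the Gaussian sample below is an arbitrary illustration, not a claim about any particular physical ensemble:

```python
import random

random.seed(0)

# A '1-to-infinity' statistical ensemble: a large population of measured values.
population = [random.gauss(100.0, 15.0) for _ in range(10_000)]

# Renormalization onto the '0-1 sphere': bin counts become probabilities.
counts: dict[int, int] = {}
for x in population:
    counts[round(x)] = counts.get(round(x), 0) + 1
probabilities = {k: v / len(population) for k, v in counts.items()}

# Both descriptions carry the same information: the same bell curve,
# read either as populations per bin or as probabilities per bin.
print(sum(probabilities.values()))  # total probability ≈ 1.0
```

Dividing by the population size is the whole translation: the shape of the distribution is untouched, only its normalization changes.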
Radiation – Planck’s law
How the inverse process happens is the law of radiation. As TEMPERATURE dissipates into radiation, it does so in a T⁴ power law, showing how the system ‘goes down’ two ∆-scales to die as entropy (e=hν radiation); a theme treated elsewhere. It is interesting to notice though that nature is always limited. Beyond 10,000 (10⁴) degrees we talk really of plasma, an electromagnetic quantum phenomenon, hence no longer of temperature. Below 0 degrees we talk of mass… So temperature limits are also limits for the discontinuum between the scales. Zero temperature, as in black holes and quarks of perfect order, brings a ‘still, tiƒ mind-like’ parameter of ∆+1 mass, which emerges to become protagonist. Beyond 10,000 degrees, in opposite fashion, we enter a regime of ∆-1 quantum-dominant electric/magnetic phenomena and temperature makes no sense (in fact physicists translate the ƒ parameter of e=hƒ into degrees; but curiously, on the other extreme, do NOT use negative temperature numbers as a measure of mass-order, as negative and imaginary numbers that reverse scales or st-ages/topologies are not understood).
In the graph, Planck and Einstein, the 2 only colossus of XX c. physics – an overrated discipline.
All this brings us to a landmark of physics which GST can also enlighten a bit further in meaning: the colossal adventure of Mr. Planck, which shall connect further down (no longer continuous scales, as we have done so far) the ∆-2 scale (relative to the molecular scale) of quantum radiation with the heat scale, and its reversed processes of death and growth, and the usual constants and exponential/logarithmic functions always present in transitions between scales.
We cannot, in this forcefully introductory site for all stiences, overextend in any discipline. So we shall comment only on the key element of Mr. Planck’s work on the radiation of black bodies: his masterly use of the key formulae explained above.
Now the equation is part of a ginormous number of equations ultimately related to the logistic curve.
What we witness indeed is a ratio, which means a ‘transfer’ of two parameters (one of entropy and one of energy) between two scales of reality, and this is treated as a ‘limit’, in the same way that the logistic curves ‘limit’ the growth in the ‘population’ of a system. So we can compare the equations:
The comparison is relevant in as much as the third equation is a population growth limited by a parameter (K, maximal population fit in a system), which cannot be overpassed.
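The logistic limit invoked in the comparison can be sketched numerically; the growth rate r, carrying capacity K and initial population below are arbitrary illustration values:

```python
def logistic_step(p: float, r: float, k: float, dt: float) -> float:
    """One Euler step of the logistic equation dP/dt = r*P*(1 - P/K)."""
    return p + r * p * (1.0 - p / k) * dt

# Arbitrary illustration values: growth rate r, carrying capacity K.
p, r, k = 1.0, 0.5, 1000.0
for _ in range(20_000):          # integrate to t = 200 with dt = 0.01
    p = logistic_step(p, r, k, 0.01)
print(p)  # saturates just below the carrying capacity K = 1000
```

The point of the comparison in the text is exactly this behaviour: growth is exponential while the population is far from K, and is throttled to zero as the limit is approached, which K can never be overpassed.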
Now, we know the origin of Planck’s law is a case of the Bose-Einstein distribution of indistinguishable particles. The quantum-theoretical explanation of Planck’s law views the radiation as a gas of massless, uncharged, bosonic particles, namely photons, in thermodynamic equilibrium.
Photons are viewed as the carriers of the electromagnetic interaction between electrically charged elementary particles. Photon numbers are not conserved. Photons are created or annihilated in the right numbers and with the right energies to fill the cavity with the Planck distribution.
The Bose–Einstein distribution describes the energy distribution of non-interacting bosons in thermodynamic equilibrium. It is simplified, unlike the case of distinguishable molecules, which interact with a chemical potential; the chemical potential is zero for the Bose-Einstein and Planck distributions.
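A minimal numerical sketch of the standard Planck law and the Bose-Einstein occupancy it is built on (with zero chemical potential, as stated above); the 5800 K temperature is just a Sun-like illustration value:

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
K_B = 1.380649e-23    # Boltzmann constant, J/K

def bose_einstein_occupancy(nu: float, t: float) -> float:
    """Mean photon number per mode, 1/(exp(h*nu/kT) - 1), zero chemical potential."""
    return 1.0 / math.expm1(H * nu / (K_B * t))

def planck_radiance(nu: float, t: float) -> float:
    """Planck's law: spectral radiance B(nu,T) = (2*h*nu^3/c^2) * occupancy."""
    return (2.0 * H * nu**3 / C**2) * bose_einstein_occupancy(nu, t)

# Sanity check against Wien displacement: for T = 5800 K the radiance
# should peak near nu = 2.821*k*T/h ≈ 3.4e14 Hz.
t = 5800.0
grid = [i * 1e12 for i in range(1, 2000)]
peak_nu = max(grid, key=lambda nu: planck_radiance(nu, t))
print(peak_nu)
```

Note the use of `expm1` rather than `exp(x) - 1`, which keeps the occupancy accurate at low frequencies where h·ν ≪ kT.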
So basically photons do not grow because as in the case of any logistic curve of population they are annihilated back into the gravitational quantum potential, ∆-1 field; when they ‘saturate’ the ecosystem – the inner vacuum space-time which in the refurbished model of Bohm pilot wave theory, guides and feeds photons (and electrons), which ‘DO’ feed on them:
In the ‘competitive’ environment of the black body, the saturated radiation dissolves back and so energy does NOT constantly increase but ENTROPY causes the death of light photons.
So, following the comparison, the ‘limiting carrying capacity’ of an environment, which is given by K, the saturation constant of the ecosystem, is given in the radiation equation by nhν/c²; which essentially means that for each dimensional sheet of light space-time, c², we can fit nhν ‘plancktons’ of energy, and no more. Beyond those limits the photons dissolve, in competition for the limited ‘neutrinos’ of the gravitational scale they are made of.
There is though an inverse path, as there are always 3 paths of future, 3 solutions (seen normally as 2, since the conservation of present is the same state, often disregarded), which, if you start to be familiar with the ‘ternary method’, you must always seek in stience as the alternative path of future: the informative solution.
Alas, this is precisely Bose-Einstein condensation: the indistinguishable particles become a much tighter, highly evolved ‘Bose-Einstein condensate’; but for that to happen the thermodynamic equilibrium must expel the entropy of heat, hence it must go in the opposite direction of the ∆±1 variable which we have already defined as TEMPERATURE.
So you get, at frozen temperatures, the Bose condensate, which has this beautiful shape, which indeed looks exactly like the perfect tetraktys of the fifth dimension.
And what about hν/kT? Well, if you are following, it is self-evident: it is the speed or ratio of transformation of temperature ‘bolts’ into the ∆-1 ‘planckton’ frequencies; actions of space-time≈events of the thermodynamic scale which became actions of space-time≈events of the quantum scale.
Of course the equation has in the 3 x3 + 0, multidimensional, multifunctional Universe, a few more perspectives, and encodes many other beautiful secrets in that first constant of Universal constants.
It includes, among other equations, one worth noticing for its beauty and simplicity:
The Stefan–Boltzmann law states that the total energy radiated per unit surface area of a black body across all wavelengths per unit time, j, is directly proportional to the fourth power of the black body’s thermodynamic temperature T:
j = σT⁴
As energy falls 2 entropic scales, from atoms through particles into radiation: ∆º<<∆-2 – a fact which is derived from the ‘derivative and integral’ operandi used to translate ‘quantities’ between scales of wholes and parts (as per the articles on the meaning of mathematical equations).
But even more beautiful is the constant of proportionality σ, called the Stefan–Boltzmann constant, which derives from other known constants of nature. The value of the constant is:
σ = 2π⁵k⁴ / (15c²h³) = 5.670373×10⁻⁸ W·m⁻²·K⁻⁴
There you have all the main constants of nature again; and so we finish here, letting you wonder what is the reason they are each dimensionally elevated, from the 2nd to the 5th power: c², h³, k⁴, π⁵…
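The claim that σ derives from the other constants of nature is easy to verify numerically with the CODATA values:

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
K_B = 1.380649e-23    # Boltzmann constant, J/K

# sigma = 2*pi^5*k^4 / (15*c^2*h^3), exactly as in the formula above
sigma = 2.0 * math.pi**5 * K_B**4 / (15.0 * C**2 * H**3)
print(sigma)  # ≈ 5.6704e-8 W*m^-2*K^-4

def radiated_power_per_area(t: float) -> float:
    """Stefan-Boltzmann law j = sigma*T^4: power per unit area of a black body."""
    return sigma * t**4

# The T^4 dependence: doubling the temperature multiplies j by 2^4 = 16.
print(radiated_power_per_area(600.0) / radiated_power_per_area(300.0))  # ≈ 16
```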
∆+1: Galactic worldcycles.
Now, because somehow physicists feel intuitively that with thermodynamics they are ‘hitting’ the right spot≈scale when studying matter, they have used the theories of thermodynamics to expand their worldview to the entire Universe, with the faulty errors due to a single time arrow, that of death, and a single scale. The result is some of the most ‘in-famous’ mishaps of science in the last century. As we have dealt in cosmology with the big-bang error of an entropy-only dying Universe, we shall here only consider a specific case of that error, which in view of the present experiments with black holes and big-bang replication can have dire consequences for mankind; we talk of the erroneous Hawking theory regarding…
THERMODYNAMICS OF BLACK HOLES
I. WHY BLACK HOLES DON’T EVAPORATE, FOR LAYPEOPLE.
We study here yet another error of the conception of an entropic only Universe. In this case the misuse of the concept of thermodynamic single arrows of time (2nd laws of thermodynamics), to define out of nowhere, the thermodynamics of black holes. Indeed, we have already denied the Universal validity of the second law, balanced by gravitation at cosmic level and by cold crystals of solid order in the matter scale. It is then interesting that while physicists have expanded entropy well beyond their ‘realm’ – gaseous states, entropic processes and the ‘acceptable’ translation of similar events into ‘frequency≈temperature’ at the quantum scale (plasma, etc.), they denied the law in a local effect in which it applies: the birth of ultra-hot black holes at the ∆-1 thermodynamic level, as seeds of a gravitational species, which will finally emerge into the gravitational scale, once it cools down and feeds on the thermodynamic world.
The case is of enormous beauty and illustrates multiple laws of 5D metric, when properly interpreted. And yet we have instead a conceptual crass error by Mr. Hawking, who affirms the ultra-hot black hole keeps getting hotter, cooling down its surroundings, and finally evaporates, breaking locally – where it applies – the…
Heat. ‘Heat is energy that is transferred from one body to another as the result of a difference in temperature. If two bodies at different temperatures are brought together energy is transferred – i.e. heat flows – from the hotter body to the colder. The effect of this transfer of energy is an increase in the temperature of the colder body and a decrease in the temperature of the hotter body’.
Britannica, 1st paragraph, article on ‘Heat’, Macropaedia; Volume 8, page 701
‘We are just a mush on the surface of a rock lost in a corner of the Universe; departing from those facts we can talk about man.’ Schopenhauer, father of modern philosophy.
The hard facts of known-known science.
The evaporation of black holes depends on Hawking’s temperature formula, T = ħc³/8πGMk.
As the formula is filled with Universal Constants (later understood in more depth), and has only two variables, we shall from now on write it in a simplified manner as:
±ΔMass ≤ Konstant/±ΔTemperature.
Or easier to Understand moving Temperature to the other side:
±ΔMass x ±ΔTemperature = Constant.
Simple, isn’t it? It is the formula that defines the changes in temperature and mass of black holes.
We know both can change but in which direction? If the black hole mass increases, temperature must diminish for the product to remain constant. If the black hole mass diminishes, then temperature must increase for the product to remain constant.
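The inverse proportionality between mass and temperature in that product follows from the standard Hawking temperature formula T = ħc³/(8πGMk). The sketch below only checks the arithmetic of M × T being the same constant for any mass, not either side of the interpretive dispute discussed next:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3*kg^-1*s^-2
K_B = 1.380649e-23       # Boltzmann constant, J/K

def hawking_temperature(mass_kg: float) -> float:
    """Standard Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B)."""
    return HBAR * C**3 / (8.0 * math.pi * G * mass_kg * K_B)

M_SUN = 1.989e30  # kg

# T is inversely proportional to M, so the product M*T is the same constant
# for every black hole: the ±ΔMass x ±ΔTemperature = Constant of the text.
product_small = M_SUN * hawking_temperature(M_SUN)
product_large = (10.0 * M_SUN) * hawking_temperature(10.0 * M_SUN)
print(hawking_temperature(M_SUN))  # ≈ 6.2e-8 K for a solar-mass black hole
```

Note how cold the formula makes a stellar-mass black hole: ~10⁻⁸ K, far below the 2.7 K cosmic background.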
If we write as Hawking does:
-Δ Mass X +Δ Temperature = Constant…
Black holes increase temperature, getting hotter, and diminish mass, evaporating.
If we write:
+Δ Mass x -Δ Temperature = Constant
Hyperhot black holes will cool down, transferring heat to the surrounding star or planet, evaporating them, into a cosmic explosion, a Nova.
So what chooses those symbols?
The Universe and its second thermodynamic law, which merely states that a hot object cools down and transfers heat to a surrounding colder environment. Thus it is obvious that Mr. Hawking got his sign wrong, and arbitrarily decided the second law of thermodynamics was wrong and the black hole will evaporate.
The problem is that the Universe and its fundamental laws, when they apply locally, as in the case of the laws of entropy and heat, imply exactly the opposite, which is what we observe: an ultra-active baby-born black hole, as all systems born in an ∆-1 ‘faster scale’ of the 5th dimension, has accelerated time clocks – which in this case translate into a ‘faster dynamic temperature/frequency’ – and so it is the black hole that evaporates the cold surroundings, transferring their matter and energy (bottom of the equation), through an intermediate state of pure entropic radiation, into the black hole, which absorbs it; and then inside, we theoretically assume, converts part into mass and part into ≥c dark energy shot through its poles.
Indeed, any ultra-hot object as a black hole, born in a cold environment as the Earth is, according to the laws of entropy cools down and transfers heat to the environment, evaporating us. And this is what we see in the Universe happening, always, when a black hole is born. It cools down and evaporates its surroundings into a big explosion, a Nova.
When you take an iron rod from the oven and put it in cold water, the water evaporates and the iron cools down, ALWAYS. And in the Universe, whenever we see the birth of an ultra-hot black hole, it evaporates its surrounding electromagnetic world (us) and gets colder, ALWAYS, till it reaches, as a mature huge black hole, a thermodynamic balance with the cold vacuum that surrounds it.
On this principle, that heat moves from the hot source to the cold one, are based all the laws of thermodynamics, all the machines of the planet. If this principle did not exist you could make a heat machine of eternal motion, the biggest hoax of science.
Now we stress: the 2nd law applies to heat processes, entropy processes, and those processes of the quantum realm where it is licit to translate temperature into frequency, as in this case. It does NOT apply globally to the entire Universe, nor in the upper ‘colder’ scales of ‘zero temperature’ and ‘higher order’, the gravitational and dark energy scales. So the example is a good illustration to deal also with concepts such as ‘what is truly mass’, how energy and information transfer asymmetrically between ∆-scales, and so on.
Now, within the local symmetries where entropy applies, there is NOT a single exception to the laws of Thermodynamics in the Universe. Everything in heat science, and its frequency equivalences in quantum scales, is based on this law.
So Mr. Hawking, an enfant terrible – a child of thought, to be more precise – with utter disregard of the known-known laws of the Universe, just chose to break the ‘law’ – one might imagine, to shock the audience (: – wrongly changed the arrow of time, in one of the few cases where it did apply, and wrote the inverse symbols:
– Δ Mass X + Δ Temperature = Constant…
It is Hawking’s formula of evaporation of black holes…that defies all the laws of entropy, all the laws of time, all the laws of Einstein and all the laws of the 5th dimension, the expansion of Einstein’s relativity that I study.
And that is the problem.
What the equation means is easy to understand:
Hawking affirms that black holes do exactly the opposite that all other entities of the thermodynamic Universe:
They are born very hot, the hottest objects of the Universe (in this we all agree), but then instead of cooling down in our cool Universe, burning us into hell, they will ‘magically’ absorb heat from the cold environment (+Δ), getting even hotter, breaking all the laws of entropy!
It is like if you throw a flame into water and the flame would get hotter and the water would become ice!
It is like if you get a cup of hot coffee and the cup keeps getting hotter ‘evaporating’, while it freezes your hand!
This has never happened and the mere idea was for very long in science a laughing matter. Since the laws of entropy are crystal clear. When a hot system is put by the side of a cold system, temperature moves from the hot system that cools (in this case the black hole) to the cold one that heats and evaporates.
But Hawking insisted for decades with an obsessive mind and charm. When a black hole is born, hotter than the environment, instead of evaporating the environment as any hot object does, it will become hotter and evaporate!
How does he figure the black hole does that? Well, it can’t, according to the laws of time and entropy. So alas! He figured out that the black hole travels back into the past instead of traveling to the future. And that is why it evaporates. It is as if a baby traveled back into the past and entered the womb of its mother, evaporating. Easy.
Indeed, he also muses after that astounding discovery that he could enter a black hole and come into the past and kill his grand-father. Seriously.
Of course, here the error is that he doesn’t understand that time arrows are ALWAYS LOCAL: relative past entropy, relative future ∆information and conserved relative momentum, all of them adding to a zero sum (as past entropy and future information cancel each other, leaving the integral of momentum or ‘energy world cycle’ that elongates the eternal present of the Universe). Simple and beautiful:
∫p ±(Spe≈tiƒ) = Present Energy, which is a zero sum that conserves momenta and energy and cancels entropy and information.
It is worth repeating it, such a blunder, of the many we will expose in simple terms using the basic laws of ∆ºst here, harmonising all sciences:
-Δ Mass x +Δ Temperature = Constant
Defines ‘imaginary black holes’ that break the laws of entropy and will get hotter. Then for all the constants of the Universe to remain constant their mass must diminish and balance the increase of heat.
But the Universe has never made this choice.
On the contrary the Temperature of a hot mass always diminishes in a cold environment.
So mass always increases. Let us then respect the laws of the Universe, proved ad nauseam in all systems and write instead the symbols right:
+Δ Mass x -Δ Temperature= Constant.
The hot black hole as all the systems of the Universe will decrease its temperature as it is born much hotter than our Universe, and so the heat will be transferred to our electromagnetic world and evaporate it, and the black hole will absorb it as energy in its event horizon collapsing that energy into mass at the speed of light: M=E/c²
This is what Einstein says, what we observe in the Universe, what every black hole born in the Universe proves:
The black hole is born very hot, evaporates the surroundings and absorbs it, exploding the world into a Nova at the speed of light.
Why then did he break the laws of thermodynamics, of Einstein’s gravitation – and the yet-to-be-known laws of the 5th dimension, the balances of the arrows of time, and the consideration of momentum and energy… running into the paradox of information, etc. etc.?
Obviously it cannot be because of science and the delights of true knowledge, but for all the spurious, wrong, unscientific causes of our civilisation: fame, provocation, fake news, etc. What is far more worrisome is that science, after initial derision, accepted it, and now incidentally physicists are trying to make such black holes at CERN, with the obvious risk of one being born, growing fast, swallowing the Earth and killing us all.
The error of Hawking is like the tale of the emperor’s new clothes. One day an emperor forgot to dress and went into a parade. And no one would say anything, till a child pointed out that the emperor was naked. His errors are so evident and absurd that nobody dares to contradict him. But the emperor walks naked, and black holes will cool down and swallow the Earth if they are born at the LHC.
And yet the entire situation is so absurd that nobody will shout ‘cover the emperor with decent clothes’. The emperor is Mr. Hawking, invested with so much authority by the celebrity P.R.ess that only an intelligent child – that is, a newborn model such as 5D – dares to point out the error, without fear of making a fool of myself.
Still, with the proper arrow the formula is a beautiful one, so we can now study what it means, once we have corrected it.
II. THE STRUCTURE OF NON-EVAPORATING BLACK HOLES & ITS TERNARY SYMMETRY.
Science is truth when 3 elements are met: experimental evidence, correspondence with previous theories known to be true, and only then mathematical analysis, with the proper dynamic evolution towards the future.
Now, we observe that the smaller black holes are, the faster they grow, in a furious swallowing path. And yet the Fermi satellite, sent into orbit to find the signature of evaporating black holes, has NOT found the slightest signature of black hole evaporation.
So what to make of this? Obviously, the 3 elements of science are not met:
- Experimental evidence (none) -> Correspondence principle with previous theorems (Einstein's holes always grow; the no-hair theorem gives no temperature to black holes) -> Mathematical equations (improper interpretation of the symbol of time).
So both fundamental tests of scientific truth, falsification in the negative and experimental evidence in the positive, prove that baby black holes do not evaporate breaking the essential laws of thermodynamics, and Nature stubbornly imposes its cosmic order.
Today the fundamental theory of the Universe, the big bang, is based on an expansion backwards in time of a lineal equation, similar to Hawking's:
Hubble established a cosmological velocity-distance law: velocity = Ho × distance. Here the variables are speed and distance instead of mass and temperature, and the constant is Ho.
According to this Hubble law, the greater the distance of a galaxy, the faster it recedes. Modern estimates place the value of Ho around 22 km/s per million light-years, while the reciprocal of Hubble's constant lies between 13 and 14 billion years; this cosmic time scale serves as an approximate measure of the moment of birth of the universe.
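The arithmetic behind that last figure can be sketched in a few lines: taking 1/Ho with the quoted value of Ho and standard unit conversions (the constants below are assumed values, not taken from the text) indeed lands in the 13–14 billion year range.

```python
# Sketch: the reciprocal of the Hubble constant as a rough cosmic age.
# Uses Ho = 22 km/s per million light-years, as quoted above; unit
# conversions are standard assumed values.

KM_PER_MLY = 9.461e18        # kilometres in one million light-years
SECONDS_PER_YEAR = 3.156e7   # one year in seconds

H0 = 22.0 / KM_PER_MLY       # Hubble constant in 1/s
hubble_time_years = 1.0 / H0 / SECONDS_PER_YEAR

print(f"1/Ho ≈ {hubble_time_years:.2e} years")  # ≈ 1.4e10, i.e. 13–14 billion years
```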
Thus the same procedure of running backwards in time the black hole's pattern of mass growth would give us, travelling to the past, the point of birth of black holes at the Planck scale at enormous temperature.
Does this mean Hawking’s work lacks scientific merit?
Not at all. If we respect the laws of thermodynamics, the mass-temperature ratio of black holes describes the birth of a small black hole, which, as all systems of Nature, shows an enormous activity and rate of growth in its initial stages, feeding on the energy that surrounds it – the placenta of our seminal seeds, the nest in which the parental system feeds them, the environment in which the new species becomes the dominant predator; in the case of the black hole, the rich field of electromagnetic energy that surrounds it in the stars where it is born by gravitational collapse and/or the planets against which it collides.
It is then easy to understand the importance of the formula to interpret the genesis of black holes, the mechanisms of nova explosions, and the way in which the energy and information of the lower scales of nature (the quantum and thermodynamic scales) emerge as mass in the larger cosmological scale. And this is the second merit of Hawking's equation, which stirred the imagination of scientists to the point of accepting his peculiar interpretation: it uses all the main constants of space and time of Nature, whose meaning is still not understood in theoretical physics.
And so it hinted at the solution of the fundamental ending question of physics – what is the relationship and meaning of those constants, and how can we unify mathematically the 3 fundamental scales of physical systems: the smaller quantum scale of electromagnetic charges, the human scale of thermodynamic molecules and the larger scale of cosmological masses? Now we have written the equation with the proper symbol and, alas, it is a very important equation, as now IT FOLLOWS THE CORRESPONDENCE PRINCIPLE.
Why is that beautiful equation an equality?
Because the event horizon is a discontinuum which acts as an osmotic membrane with 2 sides, which have the same 'surface' area, so to speak.
The event horizon has two sides, as all membranes. On one side you are inside the black hole (8πGM), and there is NO temperature there (no-hair theorem: black holes are defined with only 3 parameters – angular momentum, mass and charge). So the black hole does NOT evaporate, because it is NOT made of temperature. It is impossible for it to evaporate.
On the other side of the membrane there is, though, temperature, because we ARE in the thermodynamic and quantum world, which the black hole swallows.
This duality of a membrane is perfectly understood in terms of topological laws. The fundamental theorem of topology states that a closed circle – any n-dimensional membrane – breaks the continuum into an internal world and an external Universe, with 2 different surfaces: an internal elliptic, implosive, in-formative surface and an external hyperbolic, expansive, entropic geometry.
And those are the 2 sides of the black hole: Internally the black hole creates pure information. Externally the black hole increases the entropy and disorder of our world just before it swallows it.
Consider any other 'organic system' absorbing energy and trans-forming it into its own substance. You eat, and first you disorder the food, increasing its entropy, breaking it down to amino acids. This is what the black hole does on the side of the quantum-thermodynamic scales of the Universe. But then your stomach evolves the amino acids back into proteins of your 'own DNA code'. This is what the black hole does inside the membrane: it converts entropy into information.
But we can equate both sides because they have the same surface: one side is a gravity surface, the other an entropy surface.
This is the beauty of the equation of Hawking, when we respect the 2 ‘fundamental proofs’ of truth in science:
Experimental evidence (black holes always swallow our world in nova explosions that increase the entropy of our galaxies but absorb and grow in mass internally).
Correspondence principle (no-hair theorem: black holes do not have temperature; they can be described with only 3 parameters – mass, angular momentum and, in some cases, charge, if they have it; Einstein's equations: black holes always increase mass, do not evaporate, do not emit radiation).
Thus when we properly put the symbol ≥, black holes grow according to the beautiful formula of Hawking, which in this manner respects the 2 fundamental proofs of truth in science – experimental nova evidence and the correspondence principle. It further clarifies the process of creation of mass, in a more detailed version of E=Mc², and explains why the Universe is fractal, with discontinuous membranes between the entropic side of the Universe (the electromagnetic-thermodynamic membrane) and the in-formative gravitational side that in-forms reality (the mass-black hole side).
It also explains why there are 2 'geometrical descriptions' of space-time – the elliptic, curved space-time of Einstein's gravitation, made of accelerated, vortex-like, informative mass clocks, and the hyperbolic, entropic, expansive description of quantum physics – which ARE 2 DIFFERENT discontinuous sides of the fractal Universe that balance each other and MUST NOT be unified in simple terms, because BOTH ARE NEEDED to balance the Universe.
On one hand you have what the black hole eats, our space-time world; on the other, the gravitational space-time world.
There are in fact 3 scales of space-time in reality and that is what the Universe shows.
On the side of mass we have a beautiful equation, identical to the tensor of Einstein's Relativity, that describes a gravitational world: 8πG Mass.
It must be noticed that, unlike Einstein's tensor, here there is no energy-entropy term, but only the other 2 elements of Einstein's equation: 8πG, the curvature of the black hole, and Mass, the substance of gravitational space.
On the other side we have the same equation for quantum space-time, where h is the angular momentum of its clocks of information (so quantum systems code their informative spin and form with h-Planck quanta) and c is obviously the speed-distance of space, as we see light space. Space is made of light, as the impressionist painters realized, and as Einstein's relativity principle – which cannot distinguish motions from distances – and the spatial expansion of intergalactic space, homologous to the red-shift elongation of light space, prove.
Our human electronic mind perceives light-space through its time clocks: Tƒ = h-Planck, the minimal quanta perceived by an electron, while Sp is the speed of light; and h/c is so small that we have a mind that processes very little information and, in terms of Lobachevski's parameter of geometry, displays a flat Euclidean world.
In the graph you can see what I talk about: you see light, and light has 3 perpendicular Euclidean dimensions. That is why your mind is flat and Euclidean.
The product hc³ is thus the time-space clocks and volume of the quantum world, and the ratio Tƒ/Sp = h/c³ is the 'ratio of information/space' of the human mind, which in terms of geometry is called the Gaussian/Lobachevski ratio of curvature that defines a flat mind. Beautiful, isn't it? So many things hidden in an equation (-; Saper vedere.
On the other side we have a ratio between the simplest, minimal quantum world and our intermediate thermodynamic scale, meaning that the black hole absorbs the quantum space-time (hc³) and first converts it into pure entropy and temperature (kT) – hence the ratio; and then it moves it to the other side of the membrane, converted into 8πG Mass.
Now, a brief lesson on modern cosmology. The Universe has been shown to be fractal, structured in relative scales of size with different time clocks of different speeds. This we know since Einstein: we do NOT have mechanical clocks all around the Universe measuring time. As obvious as this is, people do not seem to understand it.
And we co-exist in several scales, with membranes and discontinuities between them, such as the Lorentz transformations that define the transitions between the electromagnetic light and mass scales, and the quantum equations of Planck (the ultraviolet catastrophe) that define the transition regions between the thermodynamic and radiation scales.
There are 3 fundamental such scales, which we show in the next graph: the larger gravitational scale, the human thermodynamic scale and the quantum scale, with different space quanta and time clocks:
In the graph we show them, with different space and time quanta. But all of them have a common co-invariant formula, which defines them as space-time planes of reality, with a ‘different time clock’ that measures the information of the system in the frequency and form of its cycles, and a given space quanta that measures its pieces of space.
And there are transfers of motion between them, called 'angular momentum' and 'lineal momentum', or 'information' and 'space'. There are many formulae in physics to describe these space-time planes, and the easiest ones are those of energy and those that define its forces. They vary slightly (we show on the left side the formula for energy; slight variations give the formulae for momentum and forces in each scale).
This is the new stuff in physics, the formalism of the 5th dimension that this writer pioneered before this activism, as it explains the 'whys' of Universal constants, the structure between scales, and the reason for the exponential discontinuous equations between scales. And Hawking's equation is a fundamental equation that relates the 3 scales together. In fact it is the only equation that relates them, as the black hole swallows it all – both the quantum and electromagnetic world, converting it into mass; it is the true devouring monster of the Universe:
∆+1 (cosmological scale): M (8πG) ≥ ∆-1 (quantum scale): hc³ / ∆ (thermodynamic scale): kT
Now the formula has the 3 clocks of time (G-curvature, h-angular momentum, T-temperature) and the 3 space quanta of the 3 scales of the fractal Universe (c³ volume of light space, M-ass and k-entropy quanta).
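For the numerically minded reader, the standard relation behind that formula, T = ħc³/(8πGMk), can be evaluated directly. A minimal sketch, using assumed SI values for the constants (not quoted in the text): whatever arrow of time one argues for, the product M × T is constant, so mass and temperature are strictly inversely proportional.

```python
# Sketch: the mass-temperature relation T = ħc³ / (8πG·M·k_B), which the text
# rearranges as 8πGM ≥ hc³/kT. SI constants below are assumed values.
import math

HBAR = 1.0546e-34   # J·s
C = 2.998e8         # m/s
G = 6.674e-11       # m³/(kg·s²)
KB = 1.381e-23      # J/K
M_SUN = 1.989e30    # kg

def hawking_temperature(mass_kg: float) -> float:
    """Temperature associated with a black hole of the given mass."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * KB)

t_sun = hawking_temperature(M_SUN)           # ≈ 6e-8 K, far colder than the CMB
t_small = hawking_temperature(M_SUN / 1000)  # a 1000× smaller hole is 1000× hotter

# The invariant product M·T:
assert math.isclose(M_SUN * t_sun, (M_SUN / 1000) * t_small, rel_tol=1e-12)
```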
So his work is essential to the fractal Universe structure and its 3 5D scales.
As such it is a 'beauty', and it is a pity that Mr. Hawking was not happy enough just finding it, and said that 'philosophers of science envy me, because they don't know mathematics, and that is why they criticise the evaporation of black holes'. Of course we do know mathematics; we invented mathematics. People like Descartes and Leibniz invented analytic geometry and calculus, the two main branches of modern mathematics, and they defined themselves as philosophers of science, NOT physicists. That is why we can have rigour with truths.
So, to the beauty of it. First we must ask: why are there two sides to that equality, which when properly written with the positive arrow of time is M ≥ k/T – that is, the black hole mass always grows?
Because this equation is taken at the event horizon, the membrane that separates the discontinuous fractal scale of gravitation – the inside of the hole, or M (8πG), which creates mass-information – from the outer region, the entropic, thermodynamic and quantum world, which the hole is swallowing.
So we have an equation that shows how ∆+1 gravitational mass information is created by accretion of the lower scales of quantum light space-time (hc³) and thermodynamic space-time (kT).
And it is precisely the simplicity and perfection of the equation, which relates the time clocks and space quanta of the 3 scales of the Universe, that makes it so important, as it truly writes in the formalism of the fractal Universe:
∆+1 Gravitational scale: Space quanta (M) × Tƒ curved clock (G) = ∆-1 quantum scale / ∆ thermodynamic scale: Sp (c³) × Tƒ (h) / Sp (k) × Tƒ (T)
That is, the mass quanta of gravitational space – whose frequency, or curvature, or attractive accelerated clock, is given by G – is swallowing the space-time of the electromagnetic, quantum Universe, whose clocks of time are here measured by the angular momentum h and whose quanta of space by the volume of 3-dimensional Euclidean space; after converting it first into thermodynamic entropy, in the process of 'raising it up' through the scales of information of the 5th dimension, into thermodynamic space quanta, k-constants, and its time clocks, temperature.
This is the analytical, algebraic understanding of the accretion of black holes, which absorb the space-time, the c-h quanta of the electromagnetic scales, convert it by this ∆-1/∆ space-time ratio first into entropic heat, and then swallow it up into gravitational mass.
And the beauty of it is that we DO have here the equations in perfect form. We have in the quantum scale, on top, the h-angular momentum, the minimal clock of time-information of the quantum scale, multiplied by c³, the volume of space (as we are in a light space-time membrane, with the 3 Euclidean perpendicular co-ordinates of light: the electric, magnetic and c-speed arrows). And we have below the thermodynamic space constant, k-entropy – as entropy is an expansive space – and the time clock of the thermodynamic world, which is temperature.
So the equation allows us to understand for the first time, properly interpreted, the meaning of the 3 Universal constants of space of the Universe in its 3 relative scales of size – Mass, Entropy and c, light space-time – and its 3 relative cyclical clocks of time: G-curvature, h-angular momentum and temperature.
This is therefore a fundamental equation when properly understood with the 2 arrows of the future, entropy and information, and the duality laws of the fractal Universe.
We can now compare it with the previous equation of radiation to further illuminate it:
A simplified analysis shows both equations to have similar terms: above, hc³ and hc², and below, k_BT. So without doing an exhaustive analysis of the two forms – one a mere ratio, the other a ratio passed over an exponential decay – they confirm our earlier model of radiation in black bodies: the thermodynamic ∆-scale 'evaporates' into the lower ∆-1 quantum scale of frequency radiation.
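The radiation equation being compared here is Planck's black-body law, which can be sketched numerically; the constants below are assumed SI values. Note the structure the comparison points at: an h·ν³/c² term in the numerator, and k_B·T sitting inside the exponential decay of the denominator.

```python
# Sketch: Planck's black-body law, B(ν,T) = (2hν³/c²) / (exp(hν/k_B·T) − 1),
# the 'radiation equation' compared with above. SI constants are assumed values.
import math

H = 6.626e-34   # J·s
C = 2.998e8     # m/s
KB = 1.381e-23  # J/K

def planck_radiance(freq_hz: float, temp_k: float) -> float:
    """Spectral radiance of a black body at the given frequency and temperature."""
    return (2 * H * freq_hz**3 / C**2) / math.expm1(H * freq_hz / (KB * temp_k))

# The exponential decay dominates at high frequency: for a Sun-like 5778 K body,
# radiance near the visible peak far exceeds radiance ten times higher in frequency.
assert planck_radiance(3.4e14, 5778) > planck_radiance(3.4e15, 5778)
```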
The differences beyond the mathematical scripture though are relevant:
- In the black hole the radiation is swallowed inward into Tiƒ-mass; in the black body it is emitted outwards as Energy/Entropy (B). And so we see once more the inverse relationship between inward masses and outward radiation.
- In the black hole c is three-dimensional, meaning the whole volume of ternary space-time is swallowed. In the black body the form is bidimensional, which we shall constantly find explained by the holographic principle: bidimensional sheets of space-time, written c², are at the core of all formulae of light constants.
- Finally, related to the first point: in radiation, what disappears are the excessive photons of empty space, dissolved into the entropic arrow of the quantum field of non-local dark energy (Spe, ∆-4), while in the black hole they disappear into the evolved ∆+4 scale of top quark masses (the true content of those black holes).
III. TOP QUARK BLACK HOLES
Yes, this fast-turning, heavier, more attractive particle that can be the atom of black holes and gives them substance does exist: it is called the 3rd family of heavy quarks, which are amazingly heavier than our quarks – thousands of times heavier – and correspond to the exact parameters of a black atom.
Now we shall elaborate on this with the new physics of duality and the fractal paradigm, to further understand why this is the most logical, symmetric, natural solution, according to the laws of the scientific method (economy, simplicity), to explain the structure of dark matter, the Halo and the galaxy.
It is all there, again. Physicists ask for a new 5th dimension to make black holes, and we find it as a relativistic motion, which is proved experimentally (the fastest clocks of time of the Universe are the bottom quarks, as top quarks have not yet been produced in enough numbers to measure their no doubt even faster, more attractive rotational clocks). And this superstring, supermassive quark is the cut-off substance of black holes.
In brief, if we define mass by the speed and frequency of its gravitational vortices, according to Einstein's equivalence principle between gravitation and acceleration, those fast-turning particles are vortices of space-time similar to attractive tornadoes (E=mc² and E=hν -> M=kƒ), so the faster the particle, black hole or mass turns as a space-time vortex, the more it attracts:
In the graph, the evolution of the concept of mass in relativity: from the initial image of an abstract substance in the center of an accelerated vortex of space-time, proper of the abstract, pre-world-war age when Einstein first published his work on gravitation, to the first pictures obtained in bubble chambers in the postwar age, to the realization that each mass is a fractal space-time made of smaller cyclic motions, proper of the twenty-first century.
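The identification made above, equating E = mc² with E = hν to get M = k·ƒ, fixes the proportionality constant as k = h/c². A minimal numeric sketch (the constants and sample frequency are assumed illustrative values):

```python
# Sketch of the identification above: E = mc² together with E = hν gives
# m = (h/c²)·ν, i.e. mass proportional to frequency, M = k·ƒ with k = h/c².
# Constants are assumed SI values; the sample frequency is illustrative.
H = 6.626e-34   # J·s
C = 2.998e8     # m/s

def mass_equivalent(freq_hz: float) -> float:
    """Mass equivalent of a vortex/photon of the given frequency."""
    return (H / C**2) * freq_hz

# Double the frequency of the 'vortex' and the equivalent mass doubles:
assert abs(mass_equivalent(2e20) - 2 * mass_equivalent(1e20)) < 1e-44
```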
In the graph, for the pedantic observer who rejects Newton as too simple, superseded by Einstein, a final note on the multiple perspectives we can have of any event of space-time. According to the ∆±1 or Sp, Tƒ, ST perspective we adopt, we classify the 4 standing models of gravitation as relative truths, belonging to the 4 perspectives of reality: SP<=>TO.
In the graph, the 4 obvious descriptions of mathematical physics regarding gravitation: It (Relativity) ≈ Tƒ (Newton) <-> Sp (Poisson) ≈ Es (Lagrange):
Tƒ: Newton is a moving, clock-like vortex description of the same mass-motion regions.
Sp: Poisson is a potential static field of energy gradients.
Es: The Hamiltonian-Lagrangian is a dynamic description of the conservative energy of the system.
It: Einstein’s simultaneous measures in relativity are a still, formal description in ‘present’ of the gravitational space-time.
And so all of them are equivalent. In fact Einstein derived his work via Poisson from Newton. Today Relativity's informative mappings of galactic space-time are transformed, in the ADM formalism, into a Lagrangian-Hamiltonian 'bidimensional model' for computer calculations, proving once more the bidimensional structure of space-time.
All of them are in fact derived one from another, as Einstein took his beginnings from Poisson, who elaborated on Newton; and the most important of them all is the Hamiltonian.
All this, of course, can be turned with algebra into very complicated equations to describe those vortices as masses. The graph shows 4 of those 'formalisms', which are equivalent with more or less 'finesse' in the degree of detail they have.
The 5D model.
Now, the 5D model of the fractal, organic Universe does not deny Einstein; it merely expands his views and adds fractal, organic properties to the galaxy, explaining better the function of those black holes, their atomic substance and workings, according to the known facts of astrophysics, in which black holes have come to dominate most of the creative processes of the galaxy. In that regard, you can compare the galaxy to a cell on a much larger scale, with mitochondria-like stars of light ud-matter, which end up being devoured and becoming the energy for the creation of black holes – the informative vortices, equivalent to the DNA, that swarm in huge numbers in the central nuclei, control its shape and provoke the reproduction of stars with their in-formative gravitational waves.
Einstein had asked for a cut-off substance, or 'atom of black holes', which he could not guess at the time, as quarks had not been found; but now we can hint that black holes are, as he thought, 'frozen stars' of the heaviest quark families (bct quarks), and that should be the 'realist modelling' of black holes. As those quarks appear in increasing numbers in accelerators, which are crossing the dark matter barrier, it is very likely they will be produced in those accelerators, in any of the possible varieties of dark matter that range from strangelets (s-quarks) to toplets (ttt-quarks), through Higgs decay (H -> top and anti-top).
Now, all this in mathematical physics implies a constant growth of the mass of the black hole along the previous equation, M=k/T; that is, as the black hole cools down, it converts, via the weak force, lighter matter into heavier quarks that increase its volume and hence its area.
The last great advances in classic black hole theory were made by Kerr, a New Zealander who defined rotary black holes with or without charge. Those would be the top quark frozen stars with positive charge and the same density, at macro-scale, as a black hole, acting as a relative 'proton' in the center of the galaxy, as a proton acts in an atom. The halo, meanwhile, should be made of strangelet quarks (negatively charged), acting, in this symmetry between the 3 families of mass and the 3 regions of the galaxy, as a relative negative 'electron' cover, with the ud-stars and planets in the middle, as seen in the next graph, greatly expanded in its detailed explanation in the post on 5D astrophysics:
In the graph, the 'sane' understanding of black holes, which are born exceedingly hot and active, as all 'seminal species' in a lower scale of size, at the Compton wavelength, as a heavy quark particle, and the similar form of the halo of strange matter. This simple scheme follows Einstein's search for a cut-off substance for black holes and Witten's hypothesis of a halo made of strangelets, now again all the rage in astrophysics.
Now, once we had the 'solutions' (Kerr black holes), the study of those holes was limited by the c-speed turning event horizons, which 'absorbed' the light of electromagnetic matter after exploding it and 'digested' it via the weak force, creating heavier particles. But the maths were worked out by Christodoulou:
'He had shown that no process whose ultimate outcome is the capture of a particle by a Kerr black hole can result in the decrease of a certain quantity which he named the irreducible mass of the black hole, M. In fact, most processes result in an increase in M, with the exception of a very special class of limiting processes, called reversible processes, which leave M unchanged. It turns out that M is proportional to the square root of the black hole's area.'
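For the non-rotating (Schwarzschild) limit this proportionality can be checked directly, since the horizon area is A = 16πG²M²/c⁴, so M ∝ √A. A minimal sketch with assumed SI constants:

```python
# Sketch of Christodoulou's statement quoted above, in the Schwarzschild limit:
# the horizon area is A = 16π G² M² / c⁴, so the mass goes as the square root
# of the area. Constants are assumed SI values.
import math

G = 6.674e-11   # m³/(kg·s²)
C = 2.998e8     # m/s

def horizon_area(mass_kg: float) -> float:
    """Event-horizon area of a non-rotating black hole of the given mass."""
    return 16 * math.pi * G**2 * mass_kg**2 / C**4

M = 1.989e30  # one solar mass, illustrative
# Doubling the mass quadruples the area, i.e. M ∝ √A:
assert math.isclose(horizon_area(2 * M), 4 * horizon_area(M), rel_tol=1e-12)
```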
So it is all clear and nice, regardless of what 'language you prefer' – that of 4D Einstein, or the added scalar meaning of 5D for a proper, scientific, real understanding (as we do indeed have infinite proofs of the fractal structure and scales of size and different speeds of time clocks of physical systems, from galaxies to particles) – as everything obeys the laws of symmetry and Relativity of the Universe (call it Einstein's relativity or Absolute Relativity, its expansion into several 5D planes).
BUT THOSE are themes expanded further in the posts on quark matter. So we stop here.
S=T: CALCULUS IN STATE PHYSICS.
DIFFERENTIAL EQUATIONS ACCORDING TO DIMOTIONS:
3 TYPES OF STATES: FORMAL STATE, MOTION STATE, ENTROPIC STATE FUNCTIONS
When considering the 5 Dimotions of existence in mathematical physics, it is convenient first to understand the geometry of the 3 fundamental dimotions in which locomotion takes place, which is provided by the understanding of the 3 ages/horizons of time, and the rule of parallelism in social evolution and perpendicularity in entropic feeding. When we combine those concepts we can then interpret physical chains of dimotions in Nature in terms of 5D vital space-time:
If we leave aside the Dimotions of complex informative reproduction and palingenesis or generation – which obviously happen in a still point with no remarkable locomotion, but are essential to understand the weak force that transforms particles into higher masses, while the reproduction of minimal information is ultimately the adjacent nature of locomotion that imprints a lower scale with the form of the system, studied in other posts – it becomes crystal clear that two of the dimotions, entropy and informative social evolution, are inverse in form, and often become 'coupled' together to create a balance between them. The proviso is that the sudden perpendicular inflection of a flow in a vortex – from a flat circular motion to an axis motion, as in black holes expelling jets, in eddies with a curl ascending or descending a column, or in an entropic explosion radially perpendicular to the 'fractal point in its moment of death' – is a clear sign that the 'informative' implosive force has changed into an entropic explosive force, and both balance each other:
This duality is thus expressed in mathematical physics with the curl and divergence functions, as in electromagnetism, black hole/mass, charges and thermodynamic eddies:
So the terminology often used in physics refers to the curl-free component of a vector field as the longitudinal component and the divergence-free component as the transverse component: when we decompose the field (for example with a Fourier transform), at each point k, into two components, one points longitudinally, i.e. parallel to k, and the other points in the transverse direction, i.e. perpendicular to k.
So this is indeed the 5D meaning of the Helmholtz decomposition.
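The Fourier-space splitting described above can be sketched concretely: project each mode of a periodic 2D vector field onto k to get the longitudinal (curl-free) part, with the remainder being the transverse (divergence-free) part. The field here is random test data, an illustrative assumption.

```python
# Sketch: Helmholtz decomposition of a periodic 2D vector field via FFT.
# Each Fourier mode is split into a longitudinal part (parallel to k, curl-free)
# and a transverse part (perpendicular to k, divergence-free).
import numpy as np

n = 64
rng = np.random.default_rng(0)
vx = rng.standard_normal((n, n))  # illustrative random field components
vy = rng.standard_normal((n, n))

kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k2 = kx**2 + ky**2
k2[0, 0] = 1.0  # avoid division by zero; the k=0 (mean) mode has no direction

fx, fy = np.fft.fft2(vx), np.fft.fft2(vy)
k_dot_f = kx * fx + ky * fy

# Longitudinal projection: (k·f) k / |k|²; the transverse part is the remainder.
lx, ly = k_dot_f * kx / k2, k_dot_f * ky / k2
tx, ty = fx - lx, fy - ly

long_x = np.real(np.fft.ifft2(lx))
trans_x = np.real(np.fft.ifft2(tx))

# The two parts rebuild the original field exactly.
assert np.allclose(long_x + trans_x, vx)
```

In Fourier space the transverse part satisfies k·t = 0 by construction, which is the statement that its real-space divergence vanishes.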
SINGULARITY-SCALAR ∆@¡ equations:
In the graph we see the vital interpretation of those vortices and its equations.
They are equations in which we study motions in the 5th dimension (social evolution, perception), and so they require a relative fixed point towards which there is a convergent symmetric flow (for perceptive systems to focus properly a mirror image of the external world), and they will often require 2 derivatives to obtain the 'lower ∆-2' plane pixels for the system to absorb into its bit-mirror. In those general cases the ¡logic isomorphism is Si(∆º)<Te(∆-2).
§ð: Static 'spherical states', where an S-component, which REPRESENTS THE WHOLE MEMBRANE=ENCLOSURE, and a ð-SINGULARITY become the ¬limits of the system of vital energy they enclose.
So the equations will be of the type §ð (whole membrane x singularity)= ST (∑∆-1 points)
In those equations the 0-point will most likely be the symmetric center of a divergent differential equation of ∑∆-1 points, which turn around the singularity of the system and, often, near the center will be subject to a 'curl' equation as the flow 'emerges' and ascends to another plane of the fifth dimension.
I.e., in thermodynamic vortices there are two planes, one of liquid and one of gas, which turn around a center that curls the water upwards, feeding the cyclone.
So the S=T equations can also be expressed in the mirror image of mathematics with frames of reference, as the 0-point of the reference frame – the reason why so many equations will have the form: F(x) = Equation of motion = 0 (Singularity Point of Si form).
And so we can 'separate' the system (fundamental theorem of vector calculus) into its divergent and curl components, which correspond to a flat plane of ∆-1 vital energy particles turning inwards until, reaching the proximity of the singularity, they 'suffer' a hyperbolic Beltrami curve of 'perpendicular' transformation of state, becoming something else – normally breaking apart into its two components, one $t element becoming entropy of the singularity and the other, 'still' part becoming pixels of information for the system.
So we can interpret such systems as dual mathematical events, whose paradigm are the curl and divergence equations of electromagnetism:
And we can always break down the dual motion – as such systems are 'composites of two motions', according to the fundamental theorem of vector calculus – into a curl component, which will GROW AS WE MOVE towards the singularity, which will 'feed on the information of the system' and deflect the entropic 'remains' through its axis; and a divergence component, which will either be open, if the potential runs to infinity, or, if the system is a full fractal point with a membrane, will start to accelerate past the enclosure, the limiting 'border' of the system, beyond which the 'form' is free, more entropic and disordered, before its 'capture' by the physical system of membrane and singularity.
All in all, the key feature of such equations is to be informative – shrinking space and increasing time – responding ultimately to the vortex law, Vo × Ro = K: as the spatial radius diminishes, the speed-acceleration moves towards the future, increasing. Then at the singularity point, information must somehow stop, forming an image in the zero point of the vortex (0 motion in thermodynamic eddies; time goes to zero in gravitational black hole equations) – though this is obviously not clearly understood for vortices in which the force of information is invisible, such as gravitation.
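The vortex law invoked above, Vo × Ro = K, is the free-vortex (conserved circulation) relation: tangential speed grows as the radius shrinks. A trivial sketch, with an arbitrary illustrative constant:

```python
# Sketch of the vortex law Vo × Ro = K: in a free vortex with conserved
# circulation, tangential speed is inversely proportional to radius.
K = 12.0  # arbitrary illustrative circulation constant, m²/s

def vortex_speed(radius_m: float) -> float:
    """Tangential speed at the given radius in a free vortex."""
    return K / radius_m

assert vortex_speed(1.0) == 12.0
assert vortex_speed(0.5) == 24.0  # halve the radius, double the speed
```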
So those are the concepts behind the fundamental dual type of motion of physical systems: a combination of the 1st Dimotion of informative perception, with a flat divergence field that culminates in an unobservable still mapping of perception in the physical center, and the 4th Dimotion of entropy, perpendicular to the plane of divergence, with a curl – represented mathematically by the curl and divergence operators.
TRANSLATIVE =CONSERVATION LOCOMOTIONS
To understand the equations of locomotion, however, a different mind-frame is needed, which can be summarised thus:
The dimotions of information and entropy are transformative analyses of the system from an internal point of view, so the internal space and time parameters of the system change. Translative locomotions only change the position of the system in space but do NOT change its internal parameters; hence they are restricted by the conservation of those 3 internal parameters: lineal singularity momentum, cyclical membrane momentum and vital energy.
So they IMPLY A MAXIMAL conservation of the 3 simultaneous spatial parts of the being.
Then we must add the second principle to calculate all those equations, the principle of least time, which implies that as in all other scales the system will try to reach its destination through the shortest path in time, the fundamental variable of existence, minimizing its expenditure of the ‘limited time-duration’ of its existence.
So locomotions try to conserve both the spatial form of the being and its time-existence.
Another set of simple mathematical equations are those related to lineal locomotion, in any of its different pentalogic expressions, from momentum equations to Lagrangians and Hamiltonians, which we also study in other posts dedicated to mechanics, and Energy conservation laws. So we shall not be redundant.
ULTIMATELY ALL THOSE ENERGY-MOMENTUM EQUATIONS RELATE TO THE METRIC CONSERVATION OF THE 3 ELEMENTS OF THE GRAPH ABOVE – ENERGY (VITAL CONTENT OF THE SYSTEM), LINEAL MOMENTUM OR SINGULARITY PARAMETER, AND ANGULAR MOMENTUM – AND THEY ARE IN BALANCE, SO THE SYSTEM CONSERVES ITS 3 ELEMENTS.
Related to both we find the Hamiltonian and Lagrangian equations, which are metric equations of the 3 conserved quantities of the system. In Lagrangians, ∂L=0, the law of 'least time' in the form of a zero minimal quanta of time with no 'variational form', represents the Si (0) = Te (motion) elements; in the Hamiltonian, Position (potential energy) and Motion (kinetic energy) represent the Si (position) and Te (motion) elements.
Integrals of derivatives.
Another elementary truth of all those equations is that, as we are calculating present states, a present equation requires a 'single minimal quanta of time' for each equation, which is the definition of a derivative in 5D. It is then that we can integrate the whole path, as in Lagrangians, where the derivative is a step of minimal least-time action that tends to zero (the infinitesimal or action, which is the ∆-1 minimal time quanta of the 3 scales of time), from which we can then integrate a sum of those actions to obtain quantities of the ∆º plane (worldcycles of time or populations of space).
So all present equations of physics have a differential form. Then the Integral of motion of the system will add all those quanta of time, to obtain a parameter which is longer in time but as it is a sum of those quanta, ultimately will still be a present equation, often REACHING a higher scale, moving from the dimotion=action of the derivative, to another ∆+1 scale, as when we integrate to obtain an energy parameter of a world cycle, departing from the quanta of action=momentum of the being.
So we integrate quanta of time actions=dimotions, based in momentum, to obtain a full world cycle of a larger scale based in energy.
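The idea that summing minimal quanta of an action yields the larger-scale energy parameter can be sketched numerically. A minimal sketch (mass and velocity values are hypothetical, chosen only for illustration): adding up small momentum "quanta" p·dv = m·v·dv over many least-time steps integrates up to the whole-scale kinetic energy E = ½mv².

```python
# Summing small momentum "quanta" over many velocity steps recovers the
# whole-scale energy parameter E = ½ m v².  Values below are illustrative.
m, v_final, n = 2.0, 3.0, 100_000
dv = v_final / n
energy = sum(m * (i * dv) * dv for i in range(n))   # Riemann sum of ∫ m v dv
exact = 0.5 * m * v_final ** 2                      # closed-form energy
```

The finer the quanta dv, the closer the sum of derivative steps comes to the integrated whole.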
So this leaves us a third type of equations: THOSE WHICH EXPRESS waves of balanced communication that reproduce information, closely related to those of locomotion (which in fact, in a full 5D analysis, reproduce the form of the being as it translates in space, in its lower scales). These waves are the ones that truly embody the equation of present balance, S=T, OF THE PREVIOUS GRAPH. So they will represent WAVE FUNCTIONS AND HARMONIC MOTIONS in which the SPACE and TIME PARAMETERS ARE IN CONSTANT balance:
The simplest example is the wave equation in one dimension, which is perfectly symmetric between its second derivatives in time and space: ∂²u/∂t² = a² ∂²u/∂x².
This is a partial differential equation for which exact wave solutions can be obtained.
(Historically, the problem of a vibrating string such as that of a musical instrument was studied by d'Alembert, who discovered the one-dimensional wave equation; within ten years Euler discovered the three-dimensional wave equation.)
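A finite-difference sketch (the wave speed and the two traveling profiles are assumed for illustration) that d'Alembert's solution u(x, t) = f(x − at) + g(x + at) satisfies the one-dimensional wave equation ∂²u/∂t² = a² ∂²u/∂x² at a sample point:

```python
import math

# Check d'Alembert's form against the 1-D wave equation by comparing the
# central-difference approximations of the two second derivatives.
a = 2.0                                           # assumed wave speed
u = lambda x, t: math.sin(x - a * t) + math.cos(x + a * t)

h = 1e-3                                          # finite-difference step
x0, t0 = 0.3, 0.7                                 # arbitrary sample point
utt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h ** 2
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
residual = utt - a ** 2 * uxx                     # should be near zero
```

Any smooth f and g would do; the symmetry between the space and time derivatives is what makes the left- and right-moving profiles both solutions.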
Many of the equations of the mathematical physics of the nineteenth century can be treated this way. And it is easy to see their difference from the equations of entropy, in which the spatial expansion is scattering, much greater, so the scalar function spreads rapidly as the system descends two scales of the fifth dimension, according to the equation of death: ∆º<<∆-2. So, if we interpret each devolution into a lower plane as a derivative finitesimal, the spatial side of the S=T function will suffer two derivatives for each 'time derivative' of change. For example, the Fourier entropic heat equation, in one dimension and dimensionless units, is: ∂u/∂t = ∂²u/∂x²
So as a rule, entropic functions require a higher derivative order in space than in time; reproductive wave functions that communicate information are equal in the time and space derivatives; and informative equations, inverse to those of entropy, have a higher derivative or dimension in time (acceleration) than in space.
FUNDAMENTAL RULE OF DIMOTION IN CALCULUS
It is then departing from those 3 simple rules, how we can interpret regardless of the enormous complexity analysis has reached, the type of Dimotions those equations express:
Information accelerates in time, $t>>§ð
Entropy accelerates in space, §ð<<$t
And reproduction of balanced information (wave equations) have equal parameters in space and time, S≈T,
Type of Differential equations.
A differential equation is thus a mathematical equation that relates some function with its derivatives, and in the process it studies one of the fundamental Dimotions of a physical system. Which kind of Dimotion it studies then becomes the field of 5D, which adds to the formally known nature of those equations.
As we neither intend nor are able to be exhaustive, we shall consider some of the fundamental equations of mathematical physics as reflections of those Dimotions, such as the heat equation, the continuity equation, the Poisson equation, the wave equation, and the equations of Simple Harmonic Motion and Newtonian mechanics… remembering the fundamental rule of 5D physics: the 3 dimotions of time represented by those equations are either dominant in cyclical time (1D), hence with more time derivatives, or in entropy, with faster space growth, or balanced – most of them – as waves that represent an equality S=T.
The following equations may serve as examples:
In the graph, some of the key partial differential equations, which measure the laws of motion for particles and wave states and potential fields; their behaviour depends on the ‘topological $, §, States’ and number of T.œ components (membrane, singularity, vital space) involved. It is then possible to establish a one to one correspondence between the states of the Γ generator and the main equations of mathematical physics for its simple systems, ternary parts and basic actions of |-motion, O-vortex form, and ≈wave Complementary state.
In the first three of these, the unknown function is denoted by the letter x and the independent variable by t; in the last three, the unknown function is denoted by the letter u and it depends on two arguments, x and t, or x and y.
Let us illustrate by a simple example. Consider a material particle of mass m moving along an axis Ox, and let x denote its coordinate at the instant of time t. The coordinate x will vary with the time, and knowledge of the entire motion of the particle is equivalent to knowledge of the functional dependence of x on the time t. Let us assume that the motion is caused by some force F, the value of which depends on the position of the particle (as defined by the coordinate x), on the velocity of motion v = dx/dt and on the time t, i.e., F = F(x, dx/dt, t).
According to the laws of mechanics, the action of the force F on the particle necessarily produces an acceleration w = d²x/dt² such that the product of w and the mass m of the particle is equal to the force, and so at every instant of the motion we have the equation: m d²x/dt² = F(x, dx/dt, t).
This is the differential equation that must be satisfied by the function x(t) describing the behavior of the moving particle. It is simply a representation of laws of mechanics. Its significance lies in the fact that it enables us to reduce the mechanical problem of determining the motion of a particle to the mathematical problem of the solution of a differential equation.
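The reduction just described can be sketched numerically. A minimal stepping sketch (the force law F = −kx and all parameter values are assumed for illustration, not taken from the text): Newton's equation m d²x/dt² = F becomes a rule for advancing the pair (x, v) by a small time quantum dt, and with this particular force the exact solution x(t) = cos(√(k/m)·t) is known for comparison.

```python
import math

# Integrate m x'' = -k x step by step from x(0)=1, v(0)=0 up to t = 1.
m, k = 1.0, 4.0                   # assumed mass and spring constant
x, v = 1.0, 0.0                   # initial position and velocity
dt, steps = 1e-4, 10_000          # time quantum; steps * dt = 1.0

for _ in range(steps):
    acc = -k * x / m              # acceleration from F = m * acc
    v += acc * dt                 # semi-implicit Euler: velocity first,
    x += v * dt                   # then position (good energy behavior)

exact = math.cos(math.sqrt(k / m) * 1.0)   # analytic solution at t = 1
```

Changing the initial x and v produces a different trajectory from the same equation, which is exactly the point made below about the infinite set of solutions.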
Later in this chapter, the reader will find other examples showing how the study of various physical processes can be reduced to the investigation of differential equations.
To describe in general terms the problems in the theory of differential equations, we first remark that every differential equation has in general not one but infinitely many solutions; that is, there exists an infinite set of functions that satisfy it. For example, the equation of motion for a particle must be satisfied by any motion induced by the given force F(x, dx/dt, t), independently of the starting point or the initial velocity. To each separate motion of the particle there will correspond a particular dependence of x on time t. Since under a given force F there may be infinitely many motions the differential equation (2) will have an infinite set of solutions.
Every differential equation defines, in general, a whole class of functions that satisfy it. The basic problem of the theory is to investigate the functions that satisfy the differential equation. The theory of these equations must enable us to form a sufficiently broad notion of the properties of all functions satisfying the equation, a requirement which is particularly important in applying these equations to the natural sciences. Moreover, our theory must guarantee the means of finding numerical values of the functions.
If the unknown function depends on a single argument, the differential equation is called an ordinary differential equation. If the unknown function depends on several arguments and the equation contains derivatives with respect to some or all of these arguments, the differential equation is called a partial differential equation. The first three of the equations in (1) are ordinary and the last three are partial.
The theory of partial differential equations has many peculiar features as it describes more complex ST which make them essentially different from ordinary differential equations.
In applications, most of the ‘dominant’ functions usually represent physical quantities as perceived in a given simultaneous structure of space, which will often be an Œ-function (description of a super organism or the ðƒ linguistic mapping of it).
Meanwhile the derivatives represent their rates of change, which is therefore a temporal function; usually NOT the whole world cycle of the system, but an ∆-1 minimal action. This makes this type of differential equations, according to the spatial bias of physics in particular and of human thought in general, more concerned with the changes in the spatial parameters and pov of the system, which the differential equation defines as a temporal relationship between two 'fixed images' of the spatial system at two different moments in time of its world cycle.
In as much as those equations try to be specific to a certain case, they will have to include more parameters to 'define' which of the three possible paths of the future the equation will take: a repetitive hyperbolic present, a parabolic entropic path, or an elliptic FORM.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.
The great importance of differential equations thus, is due chiefly to the fact that the investigation of many problems in physics and technology may be reduced to the solution of such equations; in as much as they reflect well the 2 fundamental elements of reality, its ∆-scales of finitesimals and wholes, and the S≈T symmetries of each plane.
Calculations involved in the construction of electrical machinery or of radiotechnical devices, computation of the trajectory of projectiles, investigation of the stability of an aircraft in flight, or of the course of a chemical reaction, all depend on the solution of differential equations.
It often happens that the physical laws governing a phenomenon are written in the form of differential equations, so that the differential equations themselves provide an exact quantitative (numerical) expression of these laws. The reader will see in the following chapters how the laws of conservation of mass and of heat energy are written in the form of differential equations. The laws of mechanics discovered by Newton allow one to investigate the behavior of any mechanical system by means of differential equations.
Initial-Value and Boundary-Value Problems; Uniqueness of a Solution
With partial differential equations as with ordinary ones, it is the case, with rare exceptions, that every equation has infinitely many particular solutions. Thus to solve a concrete physical problem, i.e., to find an unknown function satisfying some equation, we must know how to choose the required solution from an infinite set of solutions. For this purpose it is usually necessary to know not only the equation itself but a certain number of supplementary conditions. As we saw previously, partial differential equations are the expression of elementary laws of mechanics or physics, referring to small particles situated in a medium. But it is not enough to know only the laws of mechanics, if we wish to predict the course of some process. For example, to predict the motion of the heavenly bodies, as is done in astronomy, we must know not only the general formulation of Newton’s laws but also, assuming that the masses of these bodies are known, we must know the initial state of the system, i.e., the position of the bodies and their velocities at some initial instant of time. Supplementary conditions of this kind are always encountered in solving the problems of mathematical physics.
Thus, the problems of mathematical physics consist of finding solutions of partial differential equations that satisfy certain supplementary conditions.
Laplace and Poisson equations as expressions of T.œ parts.
Harmonic functions are uniquely determined when we have data on their boundary values. And so essentially we make them correspond to:
- The Poisson equation: ∆u = −4πρ, where ρ is usually the density; so it IMPLIES the T.œ domain is complete, having a singularity with density of form, a boundary and a vital energy within.
- Yet ρ may vanish: the singularity might disappear, and then we have a system with NO singularity, hence DOMINATED BY THE membrane, which IS the only parameter needed, as it holds the 'maximal' value. For ρ ≡ 0 we get the Laplace equation: ∆u = 0
Moreover, those are the dominant Universal functions. As the singularity is often born once the boundary is established, as in life cells, empty bubbles of membranes (Laplace equations) are, contrary to common sense, the a priori condition in most cases to create the singularity – with the most notable exception of the birth of the magnetic membrane from the motion of a singularity charge.
And here the praxis is easily explained in theory:
It is not difficult to see that the difference between any two particular solutions u1 and u2 of the Poisson equation is a function satisfying the Laplace equation, or in other words is a harmonic function – we might say the singularities cancel each other. The entire manifold of solutions of the Poisson equation is thus reduced to the manifold of harmonic functions.
If we have been able to construct even one particular solution u0 of the Poisson equation, and if we define a new unknown function w by: u = uo+w, we see that w must satisfy the Laplace equation; and in exactly the same way, we determine the corresponding boundary conditions for w. Thus it is particularly important to investigate boundary value problems for the Laplace equation.
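The statement that the difference of two Poisson solutions is harmonic can be checked numerically. A finite-difference sketch (the two particular solutions below are assumed for illustration: both have the same Laplacian, so they solve the same Poisson equation, and their difference should satisfy ∆w = 0):

```python
# Two assumed solutions of the same Poisson equation Δu = 4:
u1 = lambda x, y: x ** 2 + y ** 2                  # Δu1 = 4
u2 = lambda x, y: x ** 2 + y ** 2 + x * y - 3 * x  # Δu2 = 4 (x*y, x harmonic)

def laplacian(u, x, y, h=1e-4):
    """Five-point finite-difference approximation of Δu at (x, y)."""
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / h ** 2

w = lambda x, y: u1(x, y) - u2(x, y)   # difference of the two solutions
lap_u1 = laplacian(u1, 0.5, -1.2)      # close to 4
lap_w = laplacian(w, 0.5, -1.2)        # close to 0: w is harmonic
```

So the whole manifold of Poisson solutions is one particular solution plus the manifold of harmonic functions, as the text says.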
As is most often the case with mathematical problems, the proper statement of the problem for an equation of mathematical physics is immediately suggested by the practical situation. The supplementary conditions arising in the solution of the Laplace equation come from the physical statement of the problem.
ENTROPY EQUATIONS S<<T
The heat example.
We have identified already the 3 'Active magnitudes' as ratios of a ð/$ DENSITY of the 3 physical scales, mass (∆+1), heat (∆ø) and current (∆-1), which being less familiar/more complex we escape in our search for the simplex principles/forms. We have shown that they obey the same equations, and its main parameters are ratios, density and flux. And we have seen its main equations to be those which establish the dualities of the Galilean paradox, in terms of those parameters, the equations of continuity and motion (arguably the lineal vs. cyclical form is a mind paradox). Let us now see their differences when a singularity is (Poisson) or is not (Laplace) in its T.œ configuration with the case of 'heat'.
Let us consider, for example, the establishment of a steady temperature in a medium, i.e., the propagation of heat in a medium where the sources of heat are constant and are situated either inside or outside the medium. Under these conditions, with the passage of time the temperature attained at any point of the medium will be independent of the time. Thus to find the temperature T at each point, we must find that solution of the equation:
∂T/∂t = ∆T + q, where q is the density of the sources of heat distribution, which is independent of t. Since the steady temperature no longer changes in time, ∂T/∂t = 0, and we get: ∆T + q = 0
Thus the temperature in our medium satisfies the Poisson equation. If the density of ð-singularities – heat sources q is zero, then the Poisson equation becomes the Laplace equation.
In order to find the temperature inside the medium, it is necessary, from simple physical considerations, to know also what happens on the boundary of the medium.
Obviously the physical laws previously considered for interior points of a body call for quite another formulation at boundary points.
In the problem of establishing the steady-state temperature, we can FOCUS on any of the 3 parts of the T.œ:
- 2D: The distribution of temperature on the boundary membrane.
- 3D: or the rate of flow of heat through a unit area of the vital energy surface.
- 1D: finally, a law connecting the temperature with the flow of heat from the source.
So we do have again a ternary ‘Rashomon effect’ from 3 Γ points of view:
Considering the temperature in a volume Ω, bounded by the surface S, we can write these three conditions as:
or finally, in the most general case:
where Q denotes an arbitrary point of the surface S. Conditions of the form (10) are called boundary conditions. Investigation of the Laplace or Poisson equation under boundary conditions of one of these types will show that as a rule the solution is uniquely determined, which in ∆st terms means a T.Œ EXISTS.
YET, in our search for a solution of the Laplace equation it will usually be necessary and sufficient to be given one arbitrary function on the boundary of the domain.
Let us examine that Laplace equation a little more in detail. We will show that a harmonic function u, i.e., a function satisfying the Laplace equation, is completely determined if we know its values on the boundary of the domain.
First of all we establish the fact that a harmonic function cannot take on values inside the domain that are larger than the largest value on the boundary. More precisely, we show that the absolute maximum, as well as the absolute minimum of a harmonic function are attained on the boundary of the domain.
From this it will follow at once that if a harmonic function has a constant value on the boundary of a domain Ω, then in the interior of this domain it will also be equal to this constant. For if the maximum and minimum value of a function are both the same constant, then the function will be everywhere equal to this constant.
We now establish the fact that the absolute maximum and minimum of a harmonic function cannot occur inside the domain. First of all, we note that if the Laplacian Δu of the function u(x, y, z) is positive for the whole domain, then this function cannot have a maximum inside the domain, and if it is negative, then the function cannot have a minimum inside the domain.
For at a point where the function u attains its maximum it must have a maximum as a function of each variable separately for fixed values of the other variables. Thus it follows that every partial derivative of second order with respect to each variable must be nonpositive. This means that their sum will be nonpositive, whereas the Laplacian is positive, which is impossible. Similarly it may be shown that if the function has a minimum at some interior point, then its Laplacian cannot be negative at this point. This means that if the Laplacian is negative everywhere in the domain, then the function cannot have a minimum in this domain.
If a function is harmonic, it may always be changed by an arbitrarily small amount in such a way that it will have a positive or negative Laplacian; to this end it is sufficient to add to it the quantity ±ηr², where r² = x² + y² + z² and η is an arbitrarily small constant.
The addition of a sufficiently small quantity cannot change the property that the function has an absolute maximum or absolute minimum within the domain. If a harmonic function were to have a maximum inside the domain, then by adding +ηr² to it, we would get a function with a positive Laplacian which, as was shown above, could not have a maximum inside the domain. This means that a harmonic function cannot have an absolute maximum inside the domain. Similarly, it can be shown that a harmonic function cannot have an absolute minimum inside the domain.
This theorem has an important corollary for ∆st general laws:
Two harmonic functions that agree on the boundary of a domain must agree everywhere inside the domain.
For then the difference of these functions (which itself will be a harmonic function) vanishes on the boundary of the domain and thus is everywhere equal to zero in the interior of the domain.
So we see that the values of a harmonic function on the boundary completely determine the function. It may be shown (although we cannot give the details here) that for arbitrarily preassigned values on the boundary one can always find a harmonic function that assumes these values.
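Both facts, that boundary values determine the harmonic function and that its extremes sit on the boundary, can be seen in a small relaxation sketch (the grid size, sweep count and boundary data are assumed for illustration). Jacobi iteration, which repeatedly replaces each interior value by the average of its four neighbours, converges toward the discrete solution of ∆u = 0 for the given boundary values.

```python
import math

n = 20
u = [[0.0] * n for _ in range(n)]
for i in range(n):                          # assumed boundary data on one side,
    u[0][i] = math.sin(math.pi * i / (n - 1))   # zero on the other three sides

for _ in range(2000):                       # Jacobi sweeps toward Δu = 0
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j]
                                + u[i][j + 1] + u[i][j - 1])
    u = new

interior_max = max(u[i][j] for i in range(1, n - 1) for j in range(1, n - 1))
boundary_max = max(max(u[0]), max(u[n - 1]),
                   max(u[i][0] for i in range(n)),
                   max(u[i][n - 1] for i in range(n)))
```

Since every interior value is an average of neighbours, no interior point can exceed the boundary: the discrete form of the maximum principle proved above.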
It is somewhat more complicated to prove that the steady-state temperature established in a body is completely determined, if we know the rate of flow of heat through each element of the surface of the body or a law connecting the flow of heat with the temperature.
We will return to some aspects of this question when we discuss methods of solving the problems of mathematical physics.
∆st generalisation: the boundary (the predator protein, the shepherd dog) without a singularity merely herds the vital energy of the system in a thermodynamic equilibrium; its informative differentiation requires a singularity of information, with its 'force pull' across multiple ∆±1 scales, to break it into organic functions.
The boundary-value problem for the heat equation.
Now we can consider NOT only the boundaries in space but the boundaries IN TIME. And the symmetry S=T precludes homologous cases:
It is the problem of the heat equation in the non-stationary case. It is physically clear that the values of the temperature on the boundary, or of the rate of the flow of heat through the boundary, are not sufficient in themselves to define a unique solution of the problem.
But if in addition we know the temperature distribution at some initial instant of time, which is the equivalent in time of the knowledge we have of a boundary in space (as we have seen, in this 'entropy-related' case the boundary appears first), then the problem is uniquely determined.
Thus to determine the solution of the equation of heat conduction (8) it is usually necessary and sufficient to assign one arbitrary function T0(x, y, z) describing the initial distribution of temperature and also one arbitrary function on the boundary of the domain. As before, this may be either the temperature on the surface of the body, or the rate of heat flow through each element of the surface, or a law connecting the flow of heat with the temperature.
In this manner, the problem may be stated as follows. We seek a solution of equation (8) under the condition:
and one of three following conditions
where Q is any point of the surface S.
Condition (11) is called an initial condition, while conditions (12) are boundary conditions.
We will not prove in detail that every such problem has a unique solution but will establish this fact only for the first of these problems; moreover, we will consider only the case where there are no heat sources in the interior of the medium. We show that the equation:
under the conditions: can have only one solution.
The proof of this statement is very similar to the previous proof for the uniqueness of the solution of the Laplace equation. We show first of all that if ΔT − (1/a²)(∂T/∂t) < 0, then the function T, as a function of the four variables x, y, z, and t (0 ≤ t ≤ t0), assumes its minimum either on the boundary of the domain Ω or else inside Ω, but in the latter case necessarily at the initial instant of time, t = 0.
For if not, then the minimum would be attained at some interior point. At this point all the first derivatives, including ∂T/∂t, will then be equal to zero, and if this minimum were to occur for t = t0, then ∂T/∂t would be nonpositive.
Also, at this point all second derivatives with respect to the variables x, y, and z will be nonnegative.
Consequently ΔT − (1/a²)(∂T/∂t) will be nonnegative, which in our case is impossible.
In exactly the same way we can establish that if ΔT − (1/a²)(∂T/∂t) > 0, then inside Ω for 0 < t ≤ t0 there cannot exist a maximum for the function T.
Finally, if ΔT − (1/a²)(∂T/∂t) = 0, then inside Ω for: 0 < t ≤ t0
the function T cannot attain its absolute maximum nor its absolute minimum, since if the function T were to have, for example, such an absolute minimum, then by adding to it the term η(t − t0) and considering the function T1 = T + η(t − t0), we would not destroy the absolute minimum if η were sufficiently small, and then ΔT1 − (1/a²)(∂T1/∂t) would be negative, which is impossible.
In the same way we can also show the absence of an absolute maximum for T in the domain under consideration.
However, an absolute maximum, as well as an absolute minimum of temperature may occur either at the initial instant t = 0 or on the boundary S of the medium. If T = 0 both at the initial instant and on the boundary, then we have the identity T = 0 throughout the interior of the domain for all: t ≤ t0.
If any two temperature distributions T1 and T2 have identical values for t = 0 and on the boundary then their difference T1 − T2 = T will satisfy the heat equation and will vanish for t = 0 and on the boundary. This means that T1 − T2 will be everywhere equal to zero, so that the two temperature distributions Tl and T2 will be everywhere identical.
S=T EQUATIONS OF WAVES
The energy of oscillations and the boundary-value problem for the equation of oscillation.
We now consider the conditions under which the third of the basic differential equations has a unique solution, namely equation (9).
For simplicity we will consider the equation for the vibrating string ∂²u/∂x² = (1/a²)(∂²u/∂t²), which is very similar to equation (9), differing from it only in the number of space variables. On the right side of this equation there is the quantity ∂²u/∂t², expressing the acceleration of an arbitrary point of the string. The motion of any mechanical system for which the forces, and consequently the accelerations, are expressed by the coordinates of the moving bodies, is completely determined if we are given the initial positions and velocities of all the points of the system. Thus for the equation of the vibrating string, it is natural to assign the positions and velocities of all points at the initial instant:
But as was pointed out earlier, at the ends of the string the formulas expressing the laws of mechanics for interior points cease to apply. Thus at both ends we must assign supplementary conditions. If, for example, the string is fixed in a position of equilibrium at both ends, then we will have:
These conditions can sometimes be replaced by more general ones, but a change of this sort is not of basic importance.
The problem of finding the necessary solutions of equation (9) is analogous. In order that such a solution be well defined, it is customary to assign the conditions:
and also one of the “boundary conditions”
The difference from the preceding case is simply that instead of the one initial condition in equation (11) we have the two conditions (13).
Equations (14) obviously express the physical laws for the particles on the boundary of the volume in question.
The proof that in the general case the conditions (13) together with an arbitrary one of the conditions (14) uniquely define a solution of the problem will be omitted. We will show only that the solution can be unique for one of the conditions in (14).
Let it be known that a function u satisfies the equation:
with initial conditions:
and boundary condition:
(It would be just as easy to discuss the case in which u|S = 0.)
We will show that under these conditions the function u must be identically zero.
To prove this property it will not be sufficient to use the arguments introduced earlier to establish the uniqueness of the solution of the first two problems. But here we may make use of the physical interpretation.
We will need just one physical law, the “law of conservation of energy.”
We restrict ourselves again for simplicity to the vibrating string, the displacement of whose points u(x, t) satisfies the equation:
The kinetic energy of each particle of the string oscillating from x to x + dx is expressed in the form: ½ρ(∂u/∂t)² dx.
Along with its kinetic energy, the string in its displaced position also possesses potential energy created by its increase of length in comparison with the straight-line position. Let us compute this potential energy. We concern ourselves with an element of the string between the points x and x + dx. This element has an inclined position with respect to the axis Ox, such that its length is approximately equal to: √(1 + (∂u/∂x)²) dx ≈ (1 + ½(∂u/∂x)²) dx.
So its elongation is: ½(∂u/∂x)² dx.
Multiplying this elongation by the tension T, we find the potential energy of the elongated element of the string: ½T(∂u/∂x)² dx.
The total energy of the string of length l is obtained by summing the kinetic and potential energies over all of the points of the string. We get: E = ∫₀ˡ ½[ρ(∂u/∂t)² + T(∂u/∂x)²] dx.
If the forces acting on the end of the string do no work, in particular if the ends of the string are fixed, then the total energy of the string must be constant: E=Const.
Our expression for the law of conservation of energy is a mathematical corollary of the basic equations of mechanics and may be derived from them. Since we have already written the laws of motion in the form of the differential equation of the vibrating string with conditions on the ends, we can give the following mathematical proof of the law of conservation of energy in this case. If we differentiate E with respect to time, we have, from basic general rules: dE/dt = ∫₀ˡ [ρ(∂u/∂t)(∂²u/∂t²) + T(∂u/∂x)(∂²u/∂x∂t)] dx.
Using the wave equation (6) and replacing ρ(∂²u/∂t²) by T(∂²u/∂x²), we get dE/dt in the form: dE/dt = T ∫₀ˡ ∂/∂x [(∂u/∂x)(∂u/∂t)] dx = T [(∂u/∂x)(∂u/∂t)] evaluated between x = 0 and x = l.
If ∂u/∂x or u vanishes at each end of the string, then dE/dt = 0, which shows that E is constant.
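The constancy of E can be verified by quadrature. A minimal sketch (the standing-wave solution, density, tension and string length below are assumed for illustration): evaluating E = ∫₀ˡ ½[ρ(∂u/∂t)² + T(∂u/∂x)²] dx at different times gives the same number.

```python
import math

rho, tension, l = 1.0, 4.0, 1.0          # assumed density, tension, length
a = math.sqrt(tension / rho)             # wave speed, a² = T/ρ
w = math.pi * a / l                      # angular frequency of the mode

# standing wave u(x, t) = sin(πx/l)·cos(wt): a fixed-end solution
ut = lambda x, t: -w * math.sin(math.pi * x / l) * math.sin(w * t)
ux = lambda x, t: (math.pi / l) * math.cos(math.pi * x / l) * math.cos(w * t)

def energy(t, n=2000):
    """Midpoint-rule approximation of the total string energy at time t."""
    dx = l / n
    return sum(0.5 * (rho * ut((k + 0.5) * dx, t) ** 2
                      + tension * ux((k + 0.5) * dx, t) ** 2) * dx
               for k in range(n))

e0, e1 = energy(0.0), energy(0.37)       # two arbitrary instants
```

At t = 0 the energy is purely potential, at later instants it is shared with the kinetic term, but the total stays fixed.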
The wave equation (9) may be treated in exactly the same way to prove that the law of conservation of energy holds here also. If p satisfies the equation and the condition:
then the quantity:
will not depend on t.
If, at the initial instant of time, the total energy of the oscillations is equal to zero, then it will always remain equal to zero, and this is possible only in the case that no motion occurs. If the problem of integrating the wave equation with initial and boundary conditions had two solutions p1 and p2, then υ = p1 − p2 would be a solution of the wave equation satisfying the conditions with zero on the right-hand side, i.e., homogeneous conditions.
In this case, when we calculated the “energy” of such an oscillation, described by the function υ, we would discover that the energy E(υ) is equal to zero at the initial instant of time. This means that it is always equal to zero and thus that the function υ is identically equal to zero, so that the two solutions p1 and p2 are identical. Thus the solution of the problem is unique.
In this way we have convinced ourselves that all three problems are correctly posed.
Incidentally, we have been able to discover some very simple properties of the solutions of these equations. For example, solutions of the Laplace equation have the following maximum property: Functions satisfying this equation have their largest and smallest values on the boundaries of their domains of definition.
Functions describing the distribution of heat in a medium have a maximum property of a different form. Every maximum or minimum of temperature occurring at any point gradually disperses and decreases with time. The temperature at any point can rise or fall only if it is lower or higher than at nearby points. The temperature is smoothed out with the passage of time. All unevennesses in it are leveled out by the passage of heat from hot places to cold ones.
But no smoothing-out process of this kind occurs in the propagation of the oscillations considered here. These oscillations do not decrease or level out, since the sum of their kinetic and potential energies must remain constant for all time.
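The contrast can be checked with a minimal numerical sketch (illustrative discretizations and constants, not from the original text): an explicit scheme for the heat equation levels out the initial maximum, while a leapfrog scheme for the wave equation keeps the discrete energy essentially constant.

```python
import math

N = 100
dx = 1.0 / N
x = [i * dx for i in range(N + 1)]

# --- Heat equation u_t = u_xx, explicit scheme, fixed ends u = 0 ---
u = [math.sin(math.pi * xi) for xi in x]          # initial bump, max = 1
dt_h = 0.4 * dx * dx                              # stable: dt <= dx^2/2
for _ in range(2000):
    u = [0.0] + [u[i] + dt_h * (u[i-1] - 2*u[i] + u[i+1]) / dx**2
                 for i in range(1, N)] + [0.0]
heat_max = max(u)                                 # strictly below initial max

# --- Wave equation u_tt = u_xx, leapfrog scheme, fixed ends ---
dt_w = 0.5 * dx                                   # Courant number 0.5
prev = [math.sin(math.pi * xi) for xi in x]
curr = prev[:]                                    # zero initial velocity

def energy(a, b):
    # discrete kinetic + potential energy of the string
    kin = sum(((a[i] - b[i]) / dt_w) ** 2 for i in range(N + 1))
    pot = sum(((a[i+1] - a[i]) / dx) ** 2 for i in range(N))
    return 0.5 * (kin + pot) * dx

e0 = 0.0
for step in range(2000):
    nxt = [0.0] + [2*curr[i] - prev[i]
                   + (dt_w/dx)**2 * (curr[i-1] - 2*curr[i] + curr[i+1])
                   for i in range(1, N)] + [0.0]
    prev, curr = curr, nxt
    if step == 0:
        e0 = energy(curr, prev)
e_final = energy(curr, prev)
```

The heat maximum decays toward zero while the wave energy stays within a small bounded fluctuation of its initial value.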
The other fundamental time-measure: harmonic oscillators.
As usual in ∆st we apply dualities and ternary symmetries to everything, culminating in 5D Rashomon effects.
So the question is: if a Fourier transform is the best method to ‘differentiate’ the lineal motion of wave-patterns, hence best for 1D actions, what is the equivalent to observe the cyclical patterns of ‘pure’ cyclical motions around a point of balance, which is stricto sensu the purest form of worldcycles of time for physical systems? The ‘master’ function of them all – the harmonic oscillator.
As the key unifying direct translations of ∆st actions/motions/dimensions/forms into mathematical physics (Lagrangians, Hamiltonians, Fourier transforms, o-1 probability time emergence and its space symmetry with statistics, and so on), the oscillator is a system that can be exactly solved in classical and quantum theory as a system of ‘fundamental’ physical relevance.
Quantum physics INDEED, in what truly matters objectively, IS in NOTHING different from any other organic ensemble of parts into wholes.
3 considerations on this matter:
1. All that seems weird is either the subjective ego-trip of Copenhagen, which denies the sentient, organic properties of particles, which collapse into wholes when colliding, as herds come into tighter forms in danger, intrinsically… it is not the shark, Mr. Bohr, that collapses, but its presence that makes the photons, like schools of fish, come together. Or it is just an ill-understood philosophical interpretation of the excellent praxis of spatial maths that tries to ‘map together’ all in a single image using ‘multiple dimensions’ and functions of functions.
2. The reflection of the gathering of quasi infinite≈(in)finite populations in space, through long periods of world time cycles, with Hilbert multiple dimensional spaces, functionals of functions and operators does NOT mean there are ∞ dimensions, but that the mathematical space of the mind tries to put infinite parts into a single whole equation.
3.The absurd discussion between a probabilistic=time event vs. density=population description of it, are just another mind-reality s=t symmetry. The o-1 Dimension though is faster in time, smaller in space according to 5D ($ x ð=k) metric. So a probabilistic mirror fits easier for micro-fast particles, which however intrinsically at their scale will appear as herds of populations.
This said, the harmonic oscillator IS the key function to describe worldcycles of individual T.œs: Any system fluctuating by small amounts near a configuration of stable equilibrium may be described either by an oscillator or by a collection of decoupled harmonic oscillators, which again as in Fourier transforms represent the ‘minimal Rashomon duality’ to describe a system, its S=T fluctuations in a single plane and its ∑∆-1>∆o decomposition in wholes and parts.
This bare minimum of a Γ: present ternary description and a ∆: scalar whole-and-part analysis IS thus the basic ‘analysis’ that happens elsewhere in all stiences to fulfil the feeling of knowledge.
The oscillator then is to the Fourier what the Hamiltonian, which adds the Active ∆-scalar magnitude, is to the Lagrangian, focused on the pure s, t parameters.
Thus the oscillator actually IS the general problem of worldcycles of a T.œ, which tries to keep ALL its S≈T PARAMETERS in balance as it moves between two S-T extreme states:
- near equilibrium the active magnitude, or vortex door to its scalar dimensions is at rest (0-1 D motion) and its acceleration (the informative Dimension of time) is maximal.
- As opposed to the max. S displacement in space, further from equilibrium, where its ð-acceleration is minimal but its 2D-distance is maximal.
So the SHM ultimately expresses the inversion between those different dimensions of time and space.
In mathematical terms, the first found case of a single harmonic oscillator is of course a mass m coupled to a spring of force constant k.
For small deformations x, the spring will exert the force given by Hooke’s law, F = –kx (k being its force constant), and produce a potential V = ½kx². The Hamiltonian for this system is:
where ω = (k/m)^½ is the classical frequency of oscillation.
Any Hamiltonian of the above form, quadratic in the coordinate and momentum, will be called the harmonic oscillator Hamiltonian.
And we see incidentally how momentum is also expressed as a bidimensional form (holographic principle), and the clear Duality in the energy elements between T, the lineal 2D kinetic energy of motion, and V, the 1D potential energy of position. The kinetic part IS the 2D lineal motion part of 3D energy, and so it ‘reduces’ the importance of ‘mass’, the 1D active magnitude, which is the factor eliminated in the ratio; while the potential energy, due to position relative to the 1D active magnitude, is reinforced by the frequency of the time oscillation.
Now, the mass-spring system is just the simplest among an astoundingly huge family of systems described by the oscillator Hamiltonian. And its corollarium reinforces the tendency of all parts of the Universe to an S=T equilibrium (or symmetry in physical jargon, as the breaking of such symmetries is, inversely, the main form of ‘creation’ and diversification of physical systems). Since a particle moving in a potential V(x), placed at one of its minima x0, will remain there in a state of stable, static equilibrium; a maximum, instead, will be a point of unstable static equilibrium.
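A minimal sketch of that simplest case (the constants m and k are illustrative, not from the text): a symplectic leapfrog integration of H = p²/2m + ½kx² over one full period keeps the energy constant and returns the mass to its starting point, with ω = (k/m)^½.

```python
import math

m, k = 2.0, 8.0
omega = math.sqrt(k / m)          # classical frequency: sqrt(8/2) = 2.0 rad/s
x, p = 1.0, 0.0                   # start at max displacement, zero momentum
dt = 1e-3

def H(x, p):
    # the harmonic oscillator Hamiltonian
    return p * p / (2 * m) + 0.5 * k * x * x

E0 = H(x, p)
for _ in range(int(2 * math.pi / omega / dt)):   # integrate one full period
    p -= 0.5 * dt * k * x        # half kick (F = -kx)
    x += dt * p / m              # drift
    p -= 0.5 * dt * k * x        # half kick
```

After one period the position returns to x ≈ 1 and the energy H(x, p) matches E0 to high accuracy, illustrating the S=T balance of the oscillator.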
RECAP. The Fourier series maps then the speed of time of a system, which for repetitive motions with limiting domain increases its form constantly, while the inverse solution expands them, and so both together balance a zero sum of frequencies.
IF WE consider the unit circle, either in its probability version or as a mere mirror of all possible sinusoidal functions, as the full world cycle, then we can consider different speeds of time clocks, which increase with the growth of Ak, and its synchronous gathering into a single wave function, the proof that in time also the laws of superposition work together, and the synchronisation of actions brings a whole into existence.
Fourier inverse view of a time function as a sum of potential finitesimals of time (its decomposed frequency) opens up a whole symmetry of the concepts of timespace ∫∂teps.
Once calculus has found the finitesimal it can study according to the action, how many frequencies or populations of it will be integrated to reach the calculus of that specific action of space-time.
RECAP. Differentiation of 3 parts and subspecies of physical T.œs: |-Potentials, Ø-waves and O-singularities.
How do we differentiate the differentiation of different planes of existence?
Easy: when we change plane of existence, the parameters of change change:
And yet we repeat the same repetitive operandi that apply to all of them, under the laws of Disomorphisms.
So when we study change between planes of existence we apply double differentials and then we can obtain an approximation to the complete motion between planes. And this is why when we study 4D change, we use potential equations with double differentiation:
Many physically important partial differential equations are second-order and for the sake of simplicity we shall consider here only the case of the ternary Generator Group of linear ones. For example:
- uxx + uyy = 0 (two-dimensional Laplace equation, used to describe the $t ∆-1 entropic potential, which an ∆+1 particle uses to move)
- uxx = ut (one-dimensional heat equation; used to describe the ∆º heat wave of ∆-1 gaseous lineal kinetic motions )
- uxx − uyy = 0 (one-dimensional wave equation, used to study the ‘information/form’ imprinted by an ∆º wave over a ‘liquid’ state of ∆º particles)
Some remarks we can do first on the ‘RASHOMON effect’ of those equations (pov on them of ‘S=T: algebra’ and @nalytic geometry, the most relevant in this case):
S=T. The behaviour of such an equation depends heavily on the coefficients a, b, and c of auxx + buxy + cuyy. So by making the coefficient variable in an inversion of S=T symmetry (group permutations) and applying algebraic group techniques they can be fairly solved. We are not though in this web concerned with techniques, so few examples of them are shown.
PENTALOGIC. GEOMETRIC VIEW ON THE 3 EQUATIONS FROM…
@nalytic geometry… on the other hand, explains its ternary Generator as they respond to the 3 topologies of reality, this time perceived in terms of ∆-scales: Thus they are called:
T-elliptic, S-parabolic, or TS-hyperbolic equations according to whether the discriminant satisfies b² − 4ac < 0, b² − 4ac = 0, or b² − 4ac > 0, respectively.
Thus, the Laplace equation is T-elliptic, the heat equation is S-parabolic, and the wave equation is TS-hyperbolic.
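The classification rule can be written down directly (a trivial helper, assuming the general second-order form a·uxx + b·uxy + c·uyy discussed above):

```python
# Classify a second-order linear PDE a*u_xx + b*u_xy + c*u_yy + ... = 0
# by the sign of its discriminant b^2 - 4ac.
def classify(a, b, c):
    d = b * b - 4 * a * c
    if d < 0:
        return "elliptic"
    if d == 0:
        return "parabolic"
    return "hyperbolic"

print(classify(1, 0, 1))   # Laplace equation u_xx + u_yy = 0
print(classify(1, 0, 0))   # heat equation u_xx = u_t (no u_yy term)
print(classify(1, 0, -1))  # wave equation u_xx - u_yy = 0
```

The three examples reproduce the classification stated in the text: Laplace elliptic, heat parabolic, wave hyperbolic.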
And so once more we are able to classify them as belonging to the 3 ST elements of the generator equation:
S-parabolic, entropic limbs/fields < ST-hyperbolic body-waves > Tiƒ, elliptic functions.
Indeed, as we study the main solutions to partial differential equations we shall find that the function of the element described does correspond to its geometric solution or rather its temporal solutions, as we are talking of functions with motion, of arrows of time. So S-parabolic equations define entropic motions; hyperbolic, steady state; and elliptic, cyclical, harmonic, informative ones. The simplest 3 kinds of equations, the most used in all stiences, will suffice to understand this at the introductory level of this post. Yet before we explain the 3 simplest cases (Laplace: T-gravitation, Fourier: S-heat; D’Alembert: ST-wave), a few precisions on the previous statements:
-As information has more dimensional form, the simplest elliptic, Tiƒ solution is already 2 dimensional.
– b² − 4ac < 0, b² − 4ac = 0, or b² − 4ac > 0, are 3 solutions, similar to many other solutions in systems, which have ternary values.
Thus for example, Einstein’s EFE has 3 solutions around the cosmological constant, whether it is >0, <0 or =0, which also correspond to a flat (0), hyperbolic or elliptic Universe in space, and to the 3 solutions in time of the Universe or galaxy, which goes through an entropic big-bang, a steady state as the cosmological constant hits 0, and a big crunch.
So again this ‘mathematical equation’ properly interpreted will show multiple homologies between systems and its 3 topological solutions in space and 3 ages in time.
EXPANSION TO MULTIPLE S=T DIMENSIONS.
T-Laplace equation: the arrow of information in space-time functions.
Laplace’s equation states that the sum of the second-order partial derivatives of R, the unknown function, with respect to the Cartesian coordinates, equals zero:
The sum on the left often is represented by the expression ∇2R, in which the symbol ∇2 is called the Laplacian, or the Laplace operator.
Are they related to cyclical, accelerated time vortices, as we expect for an elliptic function? Yes, they are. In fact, the Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics, where the operator gives a constant multiple of the mass density when it is applied to a given gravitational potential.
Many physical systems are more conveniently described by the use of spherical or cylindrical coordinate systems. Laplace’s equation can be recast in these coordinates; for example, in cylindrical coordinates, Laplace’s equation is:
And so the study of the ‘formal mode’ of a differential equation for an event of physics will often give the schooled scientist an immediate ‘image’ of what kind of process is.
I.e. a process of death from form into entropy-field: ∆+1 (t) << ∆-1 (S), will be ONLY a ‘second differential equation’, which jumps and dissolves the Tiƒ parameter of the entity, mass or charge, down two scales into a field.
Hence all fields created by a singularity are expressed with variations of the Poisson equation: ∇²φ = ƒ
On the other hand a process of distribution of the same ‘Stif’ parameter from a singularity source outwards, into a wave motion, will be a single jump of scale: a process of balanced distribution into an equilibrium, present state departing from the source that becomes a ‘wave’: ∆º<∆-1 (S).
So again we find all over mathematical physics those processes of a singularity becoming a wave, the first one discovered being the heat equation, which is a single jump between states and scales, hence one which uses only a jump between parameters and a first differential (wave state), or between a first differential and a second one: ∂u/∂t = α∇²u
Which ultimately is a case of the diffusion equation, which will appear as the fundamental law of many systems (so Ohm’s law is its ‘homolog’, NOT ‘analog’, in the electromagnetic scale):
Where ϕ(r, t) is the density of the diffusing material at location r and time t and D(ϕ, r) is the collective diffusion coefficient for density ϕ at location r; and ∇ represents the vector differential operator del.
So we observe here some other ‘themes’ of GST: first, that we use as always ratios, NOT absolute parameters, to define any reality, in this case density not mass. Second, that the process involves a Tiƒ ‘dense’ state vs. a ‘wave-state’, D(Φ, r). And if the diffusion coefficient depends on the density (the cyclical Tiƒ element), then the equation is nonlinear; otherwise, if it depends on the wave state, it is linear.
And so on and so on… Indeed, the ‘explanations’ of all the equations of mathematical physics must be done departing from those ‘fundamental set of events’ which themselves depend on the space-time actions allowed to any system. And for that reason a few differential equations defining those basic events of GST (diffusion, harmonics, waves, speeds, etc.) suffice to explain mathematical physics.
In other terms, the whys and purposes of mathematical physical events in space-time are therefore encoded by the operandi of ∆nalysis.
In that sense the generator equation of analysis is simple, if we think of the term ∆≈D.
“The fifth dimension for simple systems is equivalent to the Differential equation, with two asymmetric arrows; the arrow of wholeness in time, ∂, and the arrow of disintegration in space, ∫.”
A derivative in time descends a scale in the fifth dimension. Two derivatives descend two. And vice versa. A non-derivative equation stays in the ∆º present of the observer frame of reference (Pov).
The singularity, in its incapacity to be observed fully, is often described with a single ternary number for its three simplest parameters of ∆st. Such is the case of the black hole, as Wheeler put it: a black hole has no hair (meaning its mass, momentum and charge are enough to describe it).
Then the field is the opposite also 0 value, for the Poisson equation, where after two derivations from the singularity we achieve a 0 flatness, yet with a potential towards the singularity of the type:
Field (second derivative) = Singularity (attractive vortex described by gravitational or charge or ‘temperature’ gradients).
The heat equation is parabolic, since heat is a scattering, entropic, diffusive wave, especially when it is not limited by boundaries.
TS-hyperbolic wave equation
Those will be the present-wave equations that describe waves between ‘boundaries’ (or else the steady state balance of the present non-changing world form, the boundary region, would disappear, and the wave would scatter into an S-parabolic solution). I.e.:
D’Alembert’s wave equation
D’Alembert’s wave equation takes the form
Here c is a constant related to the stiffness of the string. The physical interpretation of (9) is that the acceleration (ytt) of a small piece of the string is proportional to the tension (yxx) within it.
Yet in order to specify physically realistic solutions, d’Alembert’s wave equation must be supplemented by boundary conditions, which express the fact that the ends of a violin string are fixed. Here the boundary conditions take the form
D’Alembert showed that the general solution to (9) is: y(x, t) = f(x + ct) + g(x − ct)
where f and g are arbitrary functions (of one variable). The physical interpretation of this solution is that f represents the shape of a wave that travels with speed c along the x-axis in the negative direction, while g represents the shape of a wave that travels along the x-axis in the positive direction. The general solution is a superposition of two traveling waves, producing the complex waveform. In order to satisfy the boundary conditions given, the functions f and g must be related by the equations
These equations imply that g = −f, that f is an odd function—one satisfying f(−u) = −f(u)—and that f is periodic with period 2l, meaning that f(u + 2l) = f(u) for all u.
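That y(x, t) = f(x + ct) + g(x − ct) solves the wave equation for any smooth profiles can be checked numerically (a sketch; the profiles f, g, the speed c and the point of evaluation are arbitrary illustrative choices):

```python
import math

c = 3.0
f = lambda u: math.sin(u)          # any smooth leftward-travelling profile
g = lambda u: math.exp(-u * u)     # any smooth rightward-travelling profile

def y(x, t):
    # d'Alembert's general solution
    return f(x + c * t) + g(x - c * t)

# central second differences approximate y_tt and y_xx at a sample point
h = 1e-3
x0, t0 = 0.7, 0.4
ytt = (y(x0, t0 + h) - 2 * y(x0, t0) + y(x0, t0 - h)) / h**2
yxx = (y(x0 + h, t0) - 2 * y(x0, t0) + y(x0 - h, t0)) / h**2
residual = ytt - c * c * yxx       # should vanish for a true solution
```

The residual of y_tt − c²·y_xx is zero up to the finite-difference truncation error.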
The 3 sub-type of waves according to topological equations.
In mathematics, a hyperbolic partial differential equation of order n is a partial differential equation (PDE) that, roughly speaking, has a well-posed initial value problem for the first n−1 derivatives. More precisely, the Cauchy problem can be locally solved for arbitrary initial data along any non-characteristic hypersurface. Many of the equations of mechanics are hyperbolic, and so the study of hyperbolic equations is of substantial contemporary interest. The model hyperbolic equation is the wave equation. In one spatial dimension, this is: Utt – Uxx=0. The equation has the property that, if u and its first time derivative are arbitrarily specified initial data on the line t = 0 (with sufficient smoothness properties), then there exists a solution for all time t.
The solutions of hyperbolic equations are “wave-like.” If a disturbance is made in the initial data of a hyperbolic differential equation, then not every point of space feels the disturbance at once. Relative to a fixed time coordinate, disturbances have a finite propagation speed. They travel along the characteristics of the equation. This feature qualitatively distinguishes hyperbolic equations from elliptic partial differential equations and parabolic partial differential equations. A perturbation of the initial (or boundary) data of an elliptic or parabolic equation is felt at once by essentially all points in the domain.
What this means is that unlike motions between elliptic futures and parabolic past motions, which are non-local, infinite, motions in present space have finite length in their quanta.
Example 1. The law of decay of radium says that the rate of decay is proportional to the amount of radium present. Suppose we know that at a certain time t = t0 we had R0 grams of radium. We want to know the amount of radium present at any subsequent time t.
Let R(t) be the amount of undecayed radium at time t. The rate of decay is given by the value of – (dR/dt). Since this is proportional to R, we have:
−dR/dt = kR, where k is a constant. In order to solve our problem, it is necessary to determine the function R from the differential equation. For this purpose we note that the function inverse to R(t) satisfies the equation: −dt/dR = 1/(kR), since dt/dR = 1/(dR/dt). From the integral calculus it is known that this equation is satisfied by any function of the form: t = −(1/k) ln R + C.
where C is an arbitrary constant. From this relation we determine R as a function of t. We have:
From the whole set of solutions (5) of equation (3) we must select the one which for t = t0 has the value R0. This solution is obtained by setting C1 = R0e^(kt0).
From the mathematical point of view, equation (3) is the statement of a very simple law for the change with time of the function R; it says that the rate of decrease – (dR/dt) of the function is proportional to the value of the function R itself. Such a law for the rate of change of a function is satisfied not only by the phenomena of radioactive decay but also by many other physical phenomena.
We find exactly the same law for the rate of change of a function, for example, in the study of the cooling of a body, where the rate of decrease in the amount of heat in the body is proportional to the difference between the temperature of the body and the temperature of the surrounding medium, and the same law occurs in many other physical processes. Thus the range of application of equation (3) is vastly wider than the particular problem of the radioactive decay from which we obtained the equation.
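A short check of the decay law (the constants here are illustrative, not from the text): R(t) = R0·e^(−k(t−t0)) satisfies both the initial condition and the equation −dR/dt = kR.

```python
import math

k, t0, R0 = 0.3, 2.0, 100.0

def R(t):
    # the solution selected by the initial condition R(t0) = R0
    return R0 * math.exp(-k * (t - t0))

initial = R(t0)                    # recovers R0 exactly

# verify -dR/dt = kR at an arbitrary later time by a central difference
h = 1e-6
t = 5.0
dRdt = (R(t + h) - R(t - h)) / (2 * h)
residual = abs(-dRdt - k * R(t))   # should be ~0
```

The same two lines of checking apply verbatim to cooling and to every other process obeying this rate law, which is the point of the paragraph above.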
Example 2. Let a material point of a mass m be moving along the horizontal axis Ox in a resisting medium, for example in a liquid or a gas, under the influence of the elastic force of two springs, acting under Hooke’s law (figure 1), which states that the elastic force acts toward the position of equilibrium and is proportional to the deviation from the equilibrium position. Let the equilibrium position occur at the point x = 0. Then the elastic force is equal to –bx where b > 0.
We will assume that the resistance of the medium is proportional to the velocity of motion, i.e., equal to –a(dx/dt), where a > 0 and the minus sign indicates that the resisting medium acts against the motion. Such an assumption about the resistance of the medium is confirmed by experiment.
From Newton’s basic law that the product of the mass of a material point and its acceleration is equal to the sum of the forces acting on it, we have: m d²x/dt² = – bx – a(dx/dt) (6)
Thus the function x(t), which describes the position of the moving point at any instant of time t, satisfies the differential equation (6). We will investigate the solutions of this equation in one of the later sections.
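Equation (6) can be explored at once with a crude numerical integration (illustrative m, b, a; explicit Euler with a small step): the resisting term −a(dx/dt) drains the oscillation toward the equilibrium x = 0.

```python
# m x'' = -b x - a x'  (equation (6)); a > 0 models the resisting medium
m, b, a = 1.0, 4.0, 0.5
x, v = 1.0, 0.0                    # released from rest away from equilibrium
dt = 1e-4
for _ in range(200000):            # 20 seconds of motion
    acc = (-b * x - a * v) / m
    x += dt * v
    v += dt * acc
# a rough smallness measure of the remaining motion
final_amplitude = (x * x + v * v) ** 0.5
```

With these constants the system is underdamped: it oscillates, but the envelope decays roughly as e^(−a·t/2m), so after 20 s almost nothing of the initial displacement remains.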
If, in addition to the forces mentioned, the material point is acted upon by still another force F outside of the system, then the equation of motion takes the form: m d²x/dt² = – bx – a(dx/dt) + F (6′)
Example 3. A mathematical pendulum is a material point of mass m, suspended on a string whose length will be denoted by l. We will assume that at all stages the pendulum stays in one plane, the plane of the drawing (figure 2). The force tending to restore the pendulum to the vertical position OA is the force of gravity mg, acting on the material point. The position of the pendulum at any time t is given by the angle ϕ by which it differs from the vertical OA. We take the positive direction of ϕ to be counterclockwise. The arc AA′ = lϕ is the distance moved by the material point from the position of equilibrium A. The velocity of motion ν will be directed along the tangent to the circle and will have the following numerical value:
v= l dΦ/dt.
To establish the equation of motion, we decompose the force of gravity mg into two components Q and P, the first of which is directed along the radius OA′ and the second along the tangent to the circle. The component Q cannot affect the numerical value of the rate ν, since clearly it is balanced by the resistance of the suspension OA′. Only the component P can affect the value of the velocity ν. This component always acts toward the equilibrium position A, i.e., toward a decrease in ϕ, if the angle ϕ is positive, and toward an increase in ϕ, if ϕ is negative. The numerical value of P is equal to –mg sin ϕ, so that the equation of motion of the pendulum is:
m dv/dt = – mg sin ϕ, or: d²ϕ/dt² = – (g/l) sin ϕ
It is interesting to note that the solutions of this equation cannot be expressed by a finite combination of elementary functions. The set of elementary functions is too small to give an exact description of even such a simple physical process as the oscillation of a mathematical pendulum. Later we will see that the differential equations that are solvable by elementary functions are not very numerous, so that it very frequently happens that investigation of a differential equation encountered in physics or mechanics leads us to introduce new classes of functions, to subject them to investigation, and thus to widen our arsenal of functions that may be used for the solution of applied problems.
Let us now restrict ourselves to small oscillations of the pendulum for which, with small error, we may assume that the arc AA′ is equal to its projection x on the horizontal axis Ox and sin ϕ is equal to ϕ. Then ϕ ≈ sin ϕ = x/l and the equation of motion of the pendulum will take on the simpler form:
d²x/dt² = – (g/l) x (8)
Later we will see that this equation is solvable by trigonometric functions and that by using them we may describe with sufficient exactness the “small oscillations” of a pendulum.
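A quick comparison supports that claim (a sketch with illustrative g, l and a semi-implicit Euler step): for a small initial angle, the full pendulum integrated over one linear period T = 2π·(l/g)^½ returns to nearly where the trigonometric solution x(t) = x0·cos((g/l)^½·t) says it should.

```python
import math

g, l = 9.8, 1.0
w = math.sqrt(g / l)               # small-oscillation frequency
phi0 = 0.05                        # small initial angle (radians), at rest
phi, dphi = phi0, 0.0
dt = 1e-4
T = 2 * math.pi / w                # small-oscillation period
for _ in range(int(T / dt)):
    ddphi = -(g / l) * math.sin(phi)   # full, nonlinear pendulum
    dphi += dt * ddphi
    phi += dt * dphi
# the linearized, trigonometric prediction after one period is phi0 itself
error = abs(phi - phi0 * math.cos(w * T))
```

At larger amplitudes the discrepancy grows, which is exactly why the exact solution needs functions beyond the elementary ones (elliptic functions).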
Example 4. Helmholtz’ acoustic resonator consists of an air-filled vessel V, the volume of which is equal to ν, with a cylindrical neck F. Approximately, we may consider the air in the neck of the container as a cork of mass m = ρsl, where ρ is the density of the air, s is the area of the cross section of the neck, and l is its length. If we assume that this mass of air is displaced from a position of equilibrium by an amount x, then the pressure of the air in the container with volume ν is changed from the initial value p by some amount which we will call Δp.
We will assume that the pressure p and the volume ν satisfy the adiabatic law pvk = C. Then, neglecting magnitudes of higher order, we have
(In our case, Δν = sx.) The equation of motion of the mass of air in the neck may be written as:
m d²x/dt² = Δp · s (11)
Here Δp · s is the force exerted by the gas within the container on the column of air in the neck. From (10) and (11) we get
where ρ, p, ν, l, k, and s are constants.
Example 5. An equation of the form 6 also arises in the study of electric oscillations in a simple oscillator circuit. The circuit diagram is given in figure. Here on the left we have a condenser of capacity C, in series with a coil of inductance L, and a resistance R. At some instant let the condenser have a voltage across its terminals. In the absence of inductance from the circuit, the current would flow until such time as the terminals of the condenser were at the same potential. The presence of an inductance alters the situation, since the circuit will now generate electric oscillations.
To find a law for these oscillations, we denote by ν(t), or simply by ν, the voltage across the condenser at the instant t, by I(t) the current at the instant t, and by R the resistance. From well-known laws of physics, I(t)R remains constantly equal to the total electromotive force, which is the sum of the voltage across the condenser and the inductance –L(dI/dt). Thus: IR = – ν – L(dI/dt) (13)
We denote by Q(t) the charge on the condenser at time t. Then the current in the circuit will, at each instant, be equal to dQ/dt. The potential difference ν(t) across the condenser is equal to Q(t)/C. Thus I = dQ/dt = C(dν/dt) and equation (13) may be transformed into:
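In terms of the charge, using I = dQ/dt and ν = Q/C, equation (13) becomes L·d²Q/dt² + R·dQ/dt + Q/C = 0, formally the same oscillator as equation (6) with m → L, a → R, b → 1/C. A sketch with illustrative component values (R = 0 for the undamped case):

```python
import math

L, C, R = 0.5, 2e-3, 0.0
omega = 1.0 / math.sqrt(L * C)     # oscillation frequency for R = 0
Q, I = 1e-3, 0.0                   # initial charge on the condenser, no current
dt = 1e-5
for _ in range(int(2 * math.pi / omega / dt)):   # one full period
    I -= dt * (R * I + Q / C) / L  # L dI/dt = -(R I + Q/C)
    Q += dt * I                    # dQ/dt = I
```

After one period the charge returns to its initial value: electric oscillations in the L-R-C circuit obey the same mathematics as the mass on springs.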
Example 6. The circuit diagram of an electron-tube generator of electromagnetic oscillations is shown in figure. The oscillator circuit consisting of a capacitance C, across a resistance R and an inductance L, represents the basic oscillator system.
The coil L′ and the tube shown in the center of figure 5 form a so-called “feedback.” They connect a source of energy, namely the battery B, with the L-R-C circuit; K is the cathode of the tube, A the plate, and S the grid. In such an L-R-C circuit “self-oscillations” will arise. For any actual system in an oscillatory state the energy is transformed into heat or is dissipated in some other form to the surrounding bodies, so that to maintain a stationary state of oscillation it is necessary to have an outside source of energy. Self-oscillations differ from other oscillatory processes in that to maintain a stationary oscillatory state of the system the outside source does not have to be periodic. A self-oscillatory system is constructed in such a way that a constant source of energy, in our case the battery B, will maintain a stationary oscillatory state. Examples of self-oscillatory systems are a clock, an electric bell, a string and bow moved by the hand of the musician, the human voice, and so forth.
The current I(t) in the oscillatory L-R-C circuit satisfies the equation:
Here ν = ν(t) is the voltage across the condenser at the instant t, Ia(t) is the plate current through the coil L′; M is the coupling coefficient between the coils L and L′. In comparison with equation (13), equation (15) contains the extra term M(dIa/dt).
We will assume that the plate current Ia(t) depends only on the voltage between the grid S and the cathode of the tube (i.e., we will neglect the reactance of the anode), so that this voltage is equal to the voltage ν(t) across the condenser C. The character of the functional dependence of Ia on ν is given in figure. The curve as sketched is usually taken to be a cubical parabola and we write an approximate equation for it as:
Substituting this into the right side of equation (15), and using the fact that C(dν/dt) = I, we get for ν the equation:
In the examples considered, the search for certain physical quantities characteristic of a given physical process is reduced to the search for solutions of ordinary differential equations.
Physical systems reduce the number of space-time parameters we measure essentially to 3, corresponding to scale, space and time: Mass-density (mass-energy ratio per volume), Space-Length (space ratio to time frequency) and Time (frequency of steps).
So the number of symmetries of space-time to find is relatively limited: ∆ρ ≈ Sl ≈ Tƒ
Where ∆ρ codes for any scalar active magnitude, which can be mass (gravitational scale) or charge (quantum scale) even Temperature (thermodynamic scale) So in principle the final reduction of the equations of physics deal with only those 3 elements and yet it has a ginormous volume of information. Let us then consider the key equations that we can elaborate with the 3 elements, first noticing this parallelism with the ∆, s and t elements of any REAL tœ and its symmetry between the 3 parts of its simultaneous space and its limits of duration in time.
What mass, heat or charge measures then is the potential capacity of the internal vital energy to move and expand, as the result of being ‘enclosed’ by the membrane of the higher T.œ system. It also follows that systems without ‘membrane constraints’ or ‘singularity’ centres for the active magnitude, which define either a closed 0-1 or a 1-∞ relative ‘equal’ region of measure, will not normally have meaningful solutions.
In the graph, mathematical physics deals with 3 type of parameters to define the cyclical membrane, vital energy and singularity force that constrain the system, and establish when interpreted those forms as functions of time-variables in a Graph, the parameters to operate over them.
In general it is then possible to ‘reconstruct’ from classic mathematical physics the laws and Disomorphisms of our systems by considering also how subjective humans perceive them. In general an angular momentum, mrv, and its equivalent in other active magnitudes-scales, is the best measure of a membrane value. The internal regions of the being, though often hidden, require a simpler unit of measure, a ‘scalar’, which is often the best way to value as an active magnitude the force of the singularity, as an attractive pole that holds the whole, vital energy and membrane together. But if we do have a minimal detail of perception, better than a scalar is to use ratios between space and time that define the 3 values with similar ‘1 units’: S x ð = Speed = (S/t); ∆@/S = Density; and S x ∆@ = Momentum.
Equations of conservation of mass and of heat energy.
Let us express in mathematical form the basic physical laws governing the motions of a medium.
First of all we express the law of conservation of the matter contained in any volume Ω which we mentally mark off in a space and keep fixed. For this purpose we must calculate the mass of the matter contained in this volume. The mass MΩ(t) is expressed by the integral:
This mass will not, of course, be constant; in an oscillatory process the density at each point will be changing in view of the fact that the particles of matter in their oscillations will at one time enter this volume and at another leave it. The rate of change of the mass can be found by differentiation with respect to time and is given by the integral: dMΩ/dt = ∫∫∫ (∂ρ/∂t) dΩ.
This rate of change of the mass contained in the volume may also be calculated in another way. We may express the amount of matter which passes through the surface S, bounding our volume Ω, at each second of time, where the matter leaving Ω must be taken with a minus sign. To this end we consider an element ds of the surface S sufficiently small that it may be assumed to be plane and to have the same displacement for all its points. We will follow the displacement of points on this segment of the surface during the interval of time from t to t + dt. First of all we compute the vector: v = du/dt.
which represents the velocity of each particle. In the time dt the particles on ds move along the vector υ dt, and take up a position ds1, while the position ds will now be occupied by the particles which were formerly at the position ds2. So during this time the column of matter leaving the volume Ω will be that which was earlier contained between ds2 and ds1. The altitude of this small column is equal to υ dt cos (n, υ), where n denotes the exterior normal to the surface; the volume of the small column will thus be equal to: v cos (n, v) ds dt
and the mass equal to: ρv cos (n, v) ds dt.
Adding together all these small pieces, we get for the amount of matter leaving the volume during the time dt the expression: ∫∫ ρv cos (n, v) ds dt.
At those points where the velocity is directed toward the interior of Ω the sign of the cosine will be negative, which means that in this integral the matter entering Ω is taken with a minus sign. The product of the velocity of motion of the medium with its density is called its flux. The flux vector of the mass is q = ρυ.
In order to find the rate of flow of matter out of the volume Ω it is sufficient to divide this expression by dt, so that for the rate of flow we have: ∫∫ ρVn ds = ∫∫ Qn ds
where: Vn = V cos (n,v), Qn = q cos (n, q). The normal component of the vector υ may be replaced by its expression in terms of the components of the vectors υ and n along the coordinate axes. From analytic geometry we know that:
Vn = V cos (n,v) = Vx cos (n, x)+ Vy cos (n,y) + Vz cos (n,z)
Hence we can rewrite the expression for the rate of flow in the form: ∫∫ (qx cos (n, x) + qy cos (n, y) + qz cos (n, z)) ds
From the law of conservation of matter, these two methods of computing the change in the amount of matter must give the same result, since all change in the mass included in Ω can occur only as a result of the entering or leaving of mass through the surface S.
Hence, equating the rate of change of the amount of matter contained in the volume with the rate of flow of matter into the volume, we get: ∫∫∫ (∂ρ/∂t) dΩ = – ∫∫ Qn ds
This integral relation, as we have said, is true for any volume Ω. It is called “the equation of continuity.”
The integral occurring on the right side of the last equation may be transformed into a volume integral by using Ostrogradsky’s formula (the divergence theorem): ∫∫ Qn ds = ∫∫∫ (∂qx/∂x + ∂qy/∂y + ∂qz/∂z) dΩ
Hence it follows that: ∫∫∫ (∂ρ/∂t + ∂qx/∂x + ∂qy/∂y + ∂qz/∂z) dΩ = 0
So we get the following result: the integral of the function ∂ρ/∂t + ∂qx/∂x + ∂qy/∂y + ∂qz/∂z
over any volume Ω is equal to zero. But this is possible only if the function is identically zero. We thus obtain the equation of continuity in differential form:
∂ρ/∂t + ∂(ρυx)/∂x + ∂(ρυy)/∂y + ∂(ρυz)/∂z = 0     (1)
Equation (1) is a typical example of the formulation of a physical law in the language of partial differential equations.
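This differential form lends itself to a direct symbolic check. The following minimal sketch (the density and velocity fields are hypothetical choices made purely for illustration) verifies with sympy that a radially expanding flow with decaying density satisfies ∂ρ/∂t + div(ρυ) = 0:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# A hypothetical pair of fields chosen so that mass is conserved:
# the density decays in time while the fluid expands radially.
rho = sp.exp(-t)
v = (x / 3, y / 3, z / 3)

# Left side of the continuity equation:
#   drho/dt + d(rho*vx)/dx + d(rho*vy)/dy + d(rho*vz)/dz
lhs = sp.diff(rho, t) + sum(sp.diff(rho * vi, s) for vi, s in zip(v, (x, y, z)))

print(sp.simplify(lhs))  # 0: this flow satisfies the equation exactly
```

Any other pair (ρ, υ) that fails this test describes a flow that creates or destroys matter somewhere.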
Let us consider another such problem, namely the problem of heat conduction.
In any medium whose particles are in motion on account of heat, the heat flows from some points to others. This flow of heat will occur through every element of surface ds lying in the given medium. It can be shown that the process may be described numerically by a single vector quantity, the heat-conduction vector, which we denote by τ. Then the amount of heat flowing per second through an element of area ds will be expressed by τn ds, in the same way as qn ds earlier expressed the amount of material passing per second through an area ds. In place of the flux of liquid q = ρυ we have the heat flow vector τ.
In the same way as we obtained the equation of continuity, which for the motion of a liquid expresses the law of conservation of mass, we may obtain a new partial differential equation expressing the law of conservation of energy, as follows.
The volume density of heat energy Q at a given point may be expressed by the formula: Q = CT, where C is the heat capacity and T is the temperature. Here it is easy to establish the equation: ∂(CT)/∂t + ∂τx/∂x + ∂τy/∂y + ∂τz/∂z = 0     (2)
The derivation of this equation is identical with the derivation of the equation of continuity, if we replace “density” by “density of heat energy” and flow of mass by flow of heat. Here we have assumed that the heat energy in the medium never increases. But if there is a source of heat present in the medium, equation (2) for the balance of heat energy must be modified. If q is the productivity density of the source, that is, the amount of heat energy produced per unit of volume in one second, then the equation of conservation of heat energy has the following more complicated form: ∂(CT)/∂t + ∂τx/∂x + ∂τy/∂y + ∂τz/∂z = q
Still another equation of the same type as the equation of continuity may be derived by differentiating equation (1) with respect to time. Let us do this for the equation of small oscillations of a gas near a position of equilibrium. We will assume that for such oscillations changes of the density are not great and the quantities ∂ρ/∂x, ∂ρ/∂y, ∂ρ/∂z, and ∂ρ/∂t are sufficiently small that their products with υx, υy, and υz may be ignored. Then: ∂ρ/∂t + ρ (∂υx/∂x + ∂υy/∂y + ∂υz/∂z) = 0
Differentiating this equation with respect to time and ignoring the products of ∂ρ/∂t with ∂υx/∂x, ∂υy/∂y, and ∂υz/∂z, we obtain: ∂²ρ/∂t² + ρ [∂/∂x(∂υx/∂t) + ∂/∂y(∂υy/∂t) + ∂/∂z(∂υz/∂t)] = 0
Equation of motion.
An important example of the expression of a physical law by a differential equation occurs in the equations of equilibrium or of motion of a medium. Let the medium consist of material particles, moving with various velocities. As in the first example, we mentally mark off in space a volume Ω, bounded by the surface S and filled with particles of matter of the medium, and write Newton’s second law for the particles in this volume. This law states that for every motion of the medium the rate of change of momentum, summed up for all particles in the volume, is equal to the sum of all the forces acting on the volume. The momentum, as is known from mechanics, is represented by the vector quantity: K = ∫∫∫ ρυ dΩ
The particles occupying a small volume dΩ with density ρ will, after time Δt, fill a new volume dΩ′ with density ρ′, although the mass will be unchanged: ρ′ dΩ′ = ρ dΩ.
If the velocity υ changes during this time to a new value υ′, i.e., by the amount Δυ = υ′ − υ, the corresponding change of momentum will be: ρ′ dΩ′ υ′ − ρ dΩ υ = ρ dΩ Δυ
or in the unit of time: ρ dΩ (Δυ/Δt), which in the limit becomes ρ (dυ/dt) dΩ.
Adding over all particles in the volume Ω, we find that the rate of change of momentum is equal to: ∫∫∫ ρ (dυ/dt) dΩ
or, in other words, the vector with components: ∫∫∫ ρ (dυx/dt) dΩ, ∫∫∫ ρ (dυy/dt) dΩ, ∫∫∫ ρ (dυz/dt) dΩ
Here the derivatives dυx/dt, dυy/dt, and dυz/dt denote the rate of change of the components of υ not at a given point of the space but for a given particle. This is what is meant by the notation d/dt instead of ∂/∂t. As is well known, d/dt = ∂/∂t + υx(∂/∂x) + υy(∂/∂y) + υz(∂/∂z).
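The material derivative d/dt = ∂/∂t + υ·∇ can likewise be checked on a concrete example. In this sketch (the flow and the scalar field are hypothetical choices) a wave profile carried along by a uniform flow has zero total derivative, i.e., it is constant for each moving particle even though it varies at every fixed point:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# A uniform flow along x and a scalar field riding on it.
v = (sp.Integer(1), sp.Integer(0), sp.Integer(0))
f = sp.sin(x - t)  # the profile moves with unit speed, matching the flow

# Material (total) derivative: df/dt = ∂f/∂t + vx ∂f/∂x + vy ∂f/∂y + vz ∂f/∂z
df_dt = sp.diff(f, t) + sum(vi * sp.diff(f, s) for vi, s in zip(v, (x, y, z)))

print(sp.simplify(df_dt))  # 0: the field is constant following each particle
```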
The forces acting on the volume may be of two kinds: volume forces acting on every particle of the body, and surface forces or stresses on the surface S bounding the volume. The former are long-range forces, while the latter are short-range.
To illustrate these remarks, let us assume that the medium under consideration is a fluid. The surface forces acting on an element of the surface ds will in this case have the value p ds, where p is the pressure on the fluid, and will be exerted in a direction opposite to that of the exterior normal.
If we denote the unit vector in the direction of the normal to the surface S by n, then the forces acting on the section ds will be equal to: – pn ds. If we let F denote the vector of the external forces acting on a unit of volume, our equation takes the form: ∫∫∫ ρ (dυ/dt) dΩ = ∫∫∫ F dΩ – ∫∫ pn ds
This is the equation of motion in integral form. Like the equation of continuity, this equation also may be transformed into differential form. We obtain the system: ρ (dυx/dt) = Fx – ∂p/∂x; ρ (dυy/dt) = Fy – ∂p/∂y; ρ (dυz/dt) = Fz – ∂p/∂z
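As a simple illustration of this system, consider a fluid at rest under gravity (a sketch with hypothetical symbols for density, gravitational acceleration and surface pressure): the hydrostatic field p = p0 – ρgz balances a uniform gravitational volume force, so each component of Newton’s law in differential form vanishes identically:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
rho, g, p0 = sp.symbols('rho g p0', positive=True)

# Volume force per unit volume (gravity along -z) and the
# hydrostatic pressure field that should balance it.
F = (0, 0, -rho * g)
p = p0 - rho * g * z

# With v = 0 the left side rho*dv/dt vanishes, so each component of
# the equation of motion reduces to F_i - ∂p/∂x_i = 0.
residuals = [Fi - sp.diff(p, s) for Fi, s in zip(F, (x, y, z))]

print(residuals)  # [0, 0, 0]: pressure gradient balances gravity
```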
This system is the differential form of Newton’s second law.
2. Another characteristic example of the application of the laws of mechanics in differential form is the equation of a vibrating string. A string is a long, very slender body of elastic material that is flexible because of its extreme thinness, and is usually tightly stretched. If we imagine the string divided at any point x into two parts, then on each of the parts there is exerted a force equal to the tension in the direction of the tangent to the curve of the string.
Let us examine a short segment of the string. We will denote by u(x, t) the displacement of a point of the string from its position of equilibrium. We assume that the oscillation of the string occurs in one plane and consists of displacements perpendicular to the axis Ox, and we represent the displacement u(x, t) graphically at some instant of time (figure 2). We will investigate the behavior of the segment of the string between the points x1 and x2. At these points there are two forces acting, which are equal to the tension T in the direction of the corresponding tangent to u(x, t), whose components transverse to the axis Ox are –T sin ϕ1 at x1 and T sin ϕ2 at x2, where ϕ1 and ϕ2 are the angles between the corresponding tangents and the axis Ox.
If the segment is curved, the resultant of these two forces will not be equal to zero. This resultant, from the laws of mechanics, must be equal to the rate of change of momentum of the segment.
Let the mass contained in each centimeter of length of the string be equal to ρ. Then the rate of change of momentum will be: ∫ ρ (∂²u/∂t²) dx, the integral being taken from x1 to x2.
If the angle between the tangent to the string and the axis Ox is denoted by ϕ, we will have: ∫ ρ (∂²u/∂t²) dx = T sin ϕ2 – T sin ϕ1, the integral again being taken from x1 to x2.
This is the usual equation expressing the second law of mechanics in integral form. It is easy to transform it into differential form. We have obviously: T sin ϕ2 – T sin ϕ1 = ∫ ∂(T sin ϕ)/∂x dx, and since the interval (x1, x2) is arbitrary: ρ ∂²u/∂t² = ∂(T sin ϕ)/∂x
From well-known theorems of differential calculus, it is easy to relate T sin ϕ to the unknown function u. We get: sin ϕ = tan ϕ/√(1 + tan² ϕ) = (∂u/∂x)/√(1 + (∂u/∂x)²)
and under the assumption that ∂u/∂x is small, we have: sin ϕ ≈ ∂u/∂x. Then: ρ ∂²u/∂t² = T ∂²u/∂x²
This last equation is the equation of the vibrating string in differential form. It is an essential equation of the Universe, despite or precisely because of its simplicity, as it ultimately represents an S=T symmetry in the realm of ∆-1 derivatives, of space and time, parametrised by the spatial force (tension) and the density of the active time magnitude, on each side.
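The string equation ρ ∂²u/∂t² = T ∂²u/∂x² can be verified symbolically: any profile travelling at speed √(T/ρ) solves it. A minimal sketch, where the sine profile is one hypothetical choice of wave shape:

```python
import sympy as sp

x, t = sp.symbols('x t')
T, rho = sp.symbols('T rho', positive=True)  # tension and linear density

c = sp.sqrt(T / rho)      # propagation speed implied by the equation
u = sp.sin(x - c * t)     # a rightward-travelling wave of this speed

# Residual of the string equation: rho*u_tt - T*u_xx should vanish.
residual = rho * sp.diff(u, t, 2) - T * sp.diff(u, x, 2)

print(sp.simplify(residual))  # 0: the travelling wave solves the equation
```

The same cancellation happens for any profile f(x – ct), which is why the string supports waves of arbitrary shape moving at speed √(T/ρ).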
Basic forms of the equations of motion in mathematical physics.
In that regard, whenever we find a fundamental equation of physics it will respond to a basic S≈T symmetry of the 5D² Universe… or a breaking of symmetry that splits and seems to create from an ∑∏ system, two ‘parting S vs. T’ elements.
Specifically, as most physicists are only interested in $t, lineal time motion, the fundamental use of analysis has been in the study of equations of motion, which we shall review with the usual insights and Disomorphisms with GST.
The goal then of most physical analysis is to ‘reduce’ all the parameters to those which allow us to determine the motion of the physical system, which by dogma is then reduced to such single time-dimension, and this is the nuts and bolts of most of mathematical physics.
Indeed, as mentioned previously, the various partial differential equations describing physical phenomena usually form a system of equations in several unknown variables. But in the great majority of cases it is possible to replace this system by one equation, as may easily be shown by very simple examples.
For instance, let us turn to the equations of motion considered in the preceding paragraph. It is required to solve these equations along with the equation of continuity. The actual methods of solution we will consider somewhat later.
We begin with the equation for steady flow of an idealized fluid.
All possible motions of a fluid can be divided into rotational and irrotational, the latter also being called potential. Although irrotational motions are only special cases of motion and, generally speaking, the motion of a liquid or a gas is always more or less rotational, nevertheless experience shows that in many cases the motion is irrotational to a high degree of exactness. Moreover, it may be shown from theoretical considerations that in a fluid with viscosity equal to zero a motion which is initially irrotational will remain so.
For a potential motion of a fluid, there exists a scalar function U(x, y, z, t), called the velocity potential, such that the velocity vector υ is expressed in terms of this function by the formulas:
Vx = ∂U/∂x, Vy= ∂U/∂y, Vz= ∂U/∂z
In all the cases we have studied up to now, we have had to deal with systems of four equations in four unknown functions or, in other words, with one scalar and one vector equation, containing one unknown scalar function and one unknown vector field. Usually these equations may be combined into one equation with one unknown function, but this equation will be of the second order. Let us do this, beginning with the simplest case.
For potential motion of an incompressible fluid, for which ∂ρ/∂t = 0, we have two systems of equations: the equation of continuity: ∂υx/∂x + ∂υy/∂y + ∂υz/∂z = 0
and the equations of potential motion: Vx = ∂U/∂x, Vy= ∂U/∂y, Vz= ∂U/∂z
Substituting in the first equation the values of the velocity as given in the second, we have: ∂²U/∂x² + ∂²U/∂y² + ∂²U/∂z² = 0, i.e., the Laplace equation for the velocity potential U.
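The resulting Laplace equation for the velocity potential can be solved numerically by relaxation: each interior grid point is repeatedly replaced by the average of its four neighbours (Jacobi iteration). In this sketch the boundary data come from the harmonic function U = x² – y², so the interior should converge to that same function; the grid size and iteration count are arbitrary illustrative choices:

```python
import numpy as np

# Jacobi relaxation for the 2-D Laplace equation on the unit square.
n = 21
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing='ij')
exact = X**2 - Y**2          # a harmonic function: its Laplacian is 2 - 2 = 0

U = np.zeros((n, n))
# Impose the harmonic function on all four boundaries.
U[0, :], U[-1, :], U[:, 0], U[:, -1] = exact[0, :], exact[-1, :], exact[:, 0], exact[:, -1]

for _ in range(5000):
    # Replace every interior point by the mean of its four neighbours.
    U[1:-1, 1:-1] = 0.25 * (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2])

err = np.abs(U - exact).max()
print(err)  # tiny: the relaxed grid reproduces the harmonic function
```

Convergence is slow for Jacobi iteration (thousands of sweeps even on this small grid), which is why practical potential-flow solvers use faster methods, but the averaging step makes the mean-value property of harmonic functions vivid.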
The vector field of “heat flow” can also be expressed, by means of differential equations, in terms of one scalar quantity, the temperature. It is well known that heat “flows” in the direction from a hot body to a cold one. Thus the vector of the flow of heat lies in the direction opposite to that of the so-called temperature-gradient vector. It is also natural to assume, as is justified by experience, that to a first approximation the length of this vector is directly proportional to the temperature gradient.
The components of the temperature gradient are: ∂T/∂x, ∂T/∂y, ∂T/∂z.
Taking the coefficient of proportionality to be k, we get three equations:
τx = –k ∂T/∂x, τy = –k ∂T/∂y, τz = –k ∂T/∂z.
These are to be solved, together with the equation for the conservation of heat energy: ∂(CT)/∂t + ∂τx/∂x + ∂τy/∂y + ∂τz/∂z = q
Replacing τx, τy, and τz by their values in terms of T, we get: C (∂T/∂t) = k (∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) + q
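The heat-conduction equation can be integrated by explicit finite differences. In this one-dimensional sketch (material constants set to 1 purely for illustration, no sources) a single sine mode should decay as exp(–(k/C)π²t), which the scheme reproduces up to a small discretisation error:

```python
import numpy as np

# Explicit scheme for the 1-D heat equation C*dT/dt = k*d²T/dx², no sources.
C, k = 1.0, 1.0
n, L = 101, 1.0
dx = L / (n - 1)
dt = 0.25 * C * dx**2 / k   # well inside the stability limit dt <= C*dx²/(2k)

x = np.linspace(0.0, L, n)
T = np.sin(np.pi * x)       # initial temperature, held at zero on both ends

steps = 2000
for _ in range(steps):
    T[1:-1] += (k * dt / (C * dx**2)) * (T[2:] - 2.0 * T[1:-1] + T[:-2])

# The lowest sine mode decays exponentially at rate (k/C)*pi².
expected = np.exp(-(k / C) * np.pi**2 * dt * steps) * np.sin(np.pi * x)
err = np.abs(T - expected).max()
print(err)  # small: the scheme tracks the analytic decay
```

The time step respects the explicit-scheme stability bound dt ≤ C dx²/(2k); doubling it past that limit makes the iteration blow up rather than diffuse.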
Finally, for small vibrations in a gaseous medium, for example the vibrations of sound, the equation: ∂²ρ/∂t² + ρ [∂/∂x(∂υx/∂t) + ∂/∂y(∂υy/∂t) + ∂/∂z(∂υz/∂t)] = 0
and the equations of dynamics (5), give: ∂²ρ/∂t² = ∂/∂x(∂p/∂x – Fx) + ∂/∂y(∂p/∂y – Fy) + ∂/∂z(∂p/∂z – Fz)
and, assuming the absence of external forces (Fx = Fy = Fz = 0), we get: ∂²ρ/∂t² = a² (∂²ρ/∂x² + ∂²ρ/∂y² + ∂²ρ/∂z²)
(to obtain this equation it is sufficient to substitute the expression for the accelerations in the equation of continuity and to eliminate the density ρ by using the Boyle-Mariotte law: p = a²ρ).
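That the result is a wave equation with propagation speed a can be confirmed symbolically: any d’Alembert superposition f(x – at) + g(x + at) of travelling perturbations satisfies it. A minimal one-dimensional sketch, with f and g left as arbitrary undefined functions:

```python
import sympy as sp

x, t = sp.symbols('x t')
a = sp.symbols('a', positive=True)     # speed of sound, from p = a²ρ
f, g = sp.Function('f'), sp.Function('g')

# d'Alembert form: one perturbation travelling right, one travelling left.
rho = f(x - a * t) + g(x + a * t)

# Residual of the 1-D wave equation: d²ρ/dt² - a²·d²ρ/dx²
residual = sp.diff(rho, t, 2) - a**2 * sp.diff(rho, x, 2)

print(sp.simplify(residual))  # 0 for arbitrary profiles f and g
```

Since f and g are unconstrained, every small density disturbance splits into two shapes propagating in opposite directions at the speed of sound a.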