
∆±0:Math in Time: Analysis

“The instant (1/T) has no Time; Time is made from the movement of the instant.”

Leonardo

‘I propose 1/n as the measure of an infinitesimal. I propose 1/R as the measure of curvature.’

Leibniz

‘Calculus studies functions of existence: it extracts finitesimals of space, time and scale, 1/x from a T.œ, integrating them in a longer path: 1/x ∆st. Hence ¡ts capacity to reflect the laws of existential algebra.’

L3 on the essence of calculus: the sum of finitesimal quanta of Time (1/T), space (1/R) or scale (1/N): ∫∆∂ST.

SUMMARY: EXISTENTIAL CALCULUS.

1. Existential calculus.

Finitesimals and their integral worldcycles.

Finitesimals in space: Curvature. The disomorphic method.

Finitesimals in time: The Worldcycle of existence & its actions=dimotions.

Finitesimals in scale. Networks. Their symmetries.

1. Analysis.

Its 3 ages.

1st Age: Calculus: Parts of Wholes.

From Greece to Leibniz. Finitesimals of wholes. Series.

Trilogic on Calculus. Curvature of space=change in time=finitesimal in scale.

2nd Age. Analysis: Sentences of motions.

ODEs between 3 planes of existence.

PDEs between Space and Time parameters. Multiple variables.

Mathematical Physics.

Main functions of multiple derivatives: calculating chains of dimotions in time.

Multiple Integrals: Calculating whole T.œs in Space. Exploring beyond the ∆±2 Planes.

Calculus of variations. Extremal points of a function of exist¡ence.

3rd Age. Functionals: Immensities of ¬∆@st.

Its operators; Planes and actions.

Quantum equations.

Discontinuous analysis: fractals. Fractal mathematics: Discontinuous derivatives, its steps.

1. TIME-CHANGE IN CALCULUS AND EXISTENTIAL ALGEBRA: S=T

Mathematics is, after ‘existential algebra’ (the direct study of the laws of scalar space-time), the best linguistic mirror mankind has found of the scalar space-time Universe. So we can easily correspond its 3 key disciplines with the 3 ∆ST elements: geometry is the formal science of space, whose unit is the point; algebra, the formal science of scale, whose unit is the number; and calculus, the formal stience of time=change, whose unit is the finitesimal. However, as the Universe is ‘entangled’ in trinity (a single perceived scale) and pentalogic (3 scales of size co-existing together, as in supœrganisms), each discipline also studies the structures of the others.

But calculus also needs a bit of correction, by introducing the concept of a ‘finitesimal’, which in Non-Euclidean geometry substitutes the ‘absolute zeros’ that do not exist: a cycle of time breaks space into an inner region isolated by a membrane, through whose ‘ideal’ openings a relative infinite (µ) number of parallels cross, connecting the internal being with the outer world.

In idealized Greek bidimensional geometry, however, those fractal points have no breadth, and so their internal parts are missing; an absolute zero and a singularity can then be defined. Not so in a fractal point, which grows in information as we come closer and enter its internal zones. In Euclidean geometry the point is empty, and so monads become mindless points. In reality all points ‘are a world in themselves’ (Leibniz). So for the organic point, a pi cycle made of 3 curved diameters (3 | = O), the apertures represent ≈4.5% of the perimeter ((π−3)/π) – not coincidentally close to the quantity of the Universe which is not dark to our electronic eyes, and vice versa: an external observer will ONLY see that small fraction of the point’s inner world, confusing it often with Euclidean dark space.
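The aperture fraction the paragraph cites is a one-line computation; a minimal sketch (the function name is just an illustrative label):

```python
from math import pi

# Share of a pi-cycle's perimeter left 'open' if the cycle is woven from
# 3 curved diameters (3 | = O): perimeter pi*d minus the 3 diameters 3*d,
# taken as a fraction of the whole perimeter, (pi - 3) / pi.
def aperture_fraction() -> float:
    return (pi - 3) / pi

print(f"{aperture_fraction():.4f}")  # ≈ 0.0451, i.e. roughly 4.5%
```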

This is canonical. In physics we call such ‘fractal points’ blackbodies, and they are endemic. The entire galaxy can in fact be considered a ‘black body’, and its local big-bang radiation a black body radiation. Of course, from that ideal number, different holes might exist according to use. It is then a rule that very little of the radiation that enters the hole will get out again; the smaller the hole, the more nearly will all of the energy directed at it fail to get out. And a balance is found between the need to perceive the outer world and the need to absorb radiation. However, fractal points have apertures not only in the ∆+1 visible scale of the whole membrane but also in the ∆−1 smaller ‘scale’ of finitesimal pores.

So finitesimals ARE the limit of Analysis, and continuity and smoothness just idealizations useful for calculation, as we define 3 types of finitesimals according to the symmetry between the ∆≈S≈T parameters.

Thus we find that calculus, which essentially consists in ‘fine-tuning’ the measure of change in reality, studies not only time-change (frequencies of cycles) but also change in space (informative curvatures) and change in scale (social quantity). And to that aim it uses a concept, the ‘finitesimal’, today lost to philosophy of mathematics due to the errors of the axiomatic method, of ‘egocy’ and ‘metalinguistics’ (mathematical creationism), which stretch the ‘limits’ of all systems to infinity. An infinitesimal as close to zero as we desire does not exist. A finitesimal of time, 1/T; of space, 1/R; and of scale, 1/N, does exist for each local real function. And what calculus does is to calculate the minimal ‘finitesimal’, 0’, of space, time or scale (the instant of Leonardo; the infinitesimal of Leibniz and its measure of curvature) and then, across a length of lineal time, to calculate by integration a ‘larger’ effect of change in space (curvature), time (speed) and scale (growth or diminution of populations).
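The three finitesimal units named above, and the integration of one of them over a length of lineal time, can be made concrete in a short sketch (all names and sample values here are illustrative assumptions, not part of the text):

```python
# The 3 'finitesimal' units of the text, for a circle of radius R,
# a clock of period T, and a population of N parts:
#   space: 1/R  (curvature of the path)
#   time:  1/T  (one instant per cycle period)
#   scale: 1/N  (one part of the whole)
def finitesimals(R: float, T: float, N: int) -> dict:
    return {"space (curvature)": 1 / R,
            "time (instant)": 1 / T,
            "scale (part)": 1 / N}

# Integrating the time finitesimal over a 'length of lineal time' L
# recovers the larger effect: the number of completed cycles, L/T.
def cycles(T: float, L: float, steps: int = 100_000) -> float:
    dt = L / steps
    return sum((1 / T) * dt for _ in range(steps))

print(finitesimals(R=2.0, T=0.5, N=10))
print(round(cycles(T=0.5, L=10.0), 6))  # → 20.0 cycles in 10 time units
```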

In the praxis of calculus, then, we distinguish the finding of a finitesimal unit, which can belong to any of the trinity of main ∆ST elements of reality – S-curvature, T-frequency or ∆-population – and the finding of a period of study of that change: length of the curve, period of time, ‘mass’ of population.

Change in calculus thus became stripped of human perception through its abstraction.

This is thus the essence of calculus expressed in the quotes of L3 researchers of ‘organicism’. (Aristotle, the father of all organic minds, did not provide such clear definitions, as calculus did not exist when he wrote, and he cast his concepts in the today outdated argument of universals.)

Leibniz, however, seemed too deep for huminds, and his humble truths were forgotten. So today the finitesimal is not ‘defined’, as the 2 distorting elements of reality, TT-entropy limits and SS-perceptive distortion, were discarded for the idealism of larger infinities (∞) and the absence of ‘limits’ of smallness (0); in 5D we substitute 0 and ∞ by 0’ and ∝ (relative zero and relative infinity). This has brought an unending number of errors, notably in philosophy of science, mathematical physics and cosmology (the big-bang infinitesimal singularity does NOT exist). For ¬ entropy, as death=disorder of information, limits the knowledge we might extract from systems in space (membranes that break their vital space into inner and outer forms), time (death and birth) and scale (loss of information past two scales). It is remarkable, then, how physics – the science in which mankind has ‘wrongly’ invested its future as a catalyst of the evolution of machines – has achieved precision in calculus through renormalization processes, adding ‘finitesimal’ parts of other scales that influence a larger whole (Wilson, Kondo, Feynman, virtual particle clouds).

There are many paradoxical features of 5D calculus, as mirror of the ∆ST Universe, but this is the first and most central paradox: calculus is a subject in which we find exact answers by means of approximations called ‘finitesimals’. They tell us some fundamental facts of the fractal 5D Universe and its stop-and-go (S vs. T) and reproductive paradoxes, resumed in the ‘quantum nature’ of reality and embedded in the fact that even locomotion is a reproductive change. Let us then look at calculus from the perspective of the ‘essence’ of reality, reproduction of information.

The 3 types of finitesimal calculus: single derivatives in space and time, double derivations.

It is then obvious that calculus peers within the scales of the fifth dimension, since calculus studies functions of existence: it extracts finitesimals of space, time and scale, 1/x, from a T.œ, integrating them in a longer path: 1/x ∆st.

Variations on those themes are many, for obvious reasons derived from the principles of all 5D stiences:

– The existence, due to absolute relativity, of 5 dimotions: S=T → St, Ts, TT, SS, ST, differentiated by internal and external motion=change or form. Each dimotion in which change happens (St, Ts, TT, ST) thus requires a different process of analysis, as we must consider what part of the entity remains constant and which one changes, and hence where we can apply derivation and integration. This is further complicated by the fact that ‘motion and form’ sometimes cannot be distinguished, so instead of motion we see curvature (as in Einstein’s formalism).

– The existence of 3 possible types of change: in scale, in population of space and in curvature=motion in time.

– The existence of scales, through which information increasingly disappears to the observer, to the point that beyond the ∆±3 plane of ‘pixel perception’ by an ∆º self-centered T.œ, information is scant, which makes it difficult to integrate and differentiate beyond the 2 ‘canonical scales’ («, ») in which death and emergent processes take place. So overwhelmingly all equations of calculus work on 2 scales (second derivatives, 2nd integrals) or 3 dimensions of space (volumes).
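The claim that most equations stop at two ‘steps down’ can be sketched with finite differences: a first derivative takes one step (rate of change), a second derivative a second step (change of the change), the depth at which most of mathematical physics stops (acceleration, diffusion, waves). A minimal sketch, with illustrative step sizes:

```python
# Central finite differences: one 'scale' down (first derivative)
# and two 'scales' down (second derivative).
def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

f = lambda x: x ** 3
print(round(d1(f, 2.0), 4))  # slope of x³ at 2: ≈ 3·2² = 12
print(round(d2(f, 2.0), 2))  # second derivative: ≈ 6·2 = 12
```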

THE UNIVERSE IN A NUTSHELL: A REPRODUCTIVE SYSTEM.

The Organic Philosophy of the Universe. Systems sciences and its disomorphic laws.

The Universe is a fractal that reproduces space-time organisms. That is really all there is to it. And if we write space=form and time=motion as S and T, then we can write a simple equation for the Universe: Max. ∑S×T (s=t). This simple equation is the ‘fractal generator’ of the Universe, a feedback equation that combines space=form and time=motion, creating the infinite local present variations you see around yourself. This is the essence of the organic Universe and its scalar 5th Dimension. And the closest classic science holding such a view of it all is Systems sciences, based on an alternative philosophy of science to that of mechanist physicists and their single entropy arrow: organicism. Let us try to explain why only organicism is a scientific truth.

According to Deism, the whys of existence are due to a personal being, external to the Universe, who makes it all happen and cares for humans more than for the rest of His work. According to Mechanism, they are due to the self-similarity between the Universe and the primitive machines we humans construct to observe it. Mechanism, though, needs ‘someone’ to make the machine, which is not self-generated; so it is similar to deism – the reason why the founding fathers of science, all pious believers, adopted it as a proof of the existence of God, who had given man self-similar properties: the capacity to make machines to the image and likeness of the Universe.

The problem with those 2 approaches, which are in fact the same, is obvious: a personal God is an anthropomorphic, subjective myth, and science must be objective; while a mechanical view of the Universe lacks an internal, self-sustained process of growth, creation and synchronization, and so still needs an external God that made and rewinds the clock – as Leibniz clearly stated in his critique of Newton.

Scientists today are unaware that mechanist theories are in fact deist theories – the reason why Kepler and Newton, pious believers, liked them, since they were a metaphor of their self-centered, anthropomorphic religious beliefs: if man created machines because we were made to the image and likeness of God, God had created the ultimate machine, the Universe.

Organicism, on the other hand, is the only self-sustained, rational theory that doesn’t need a creator, language or god, as organisms are self-replicating; yet it explains perfectly, within the ‘correspondence principle’, those 2 other philosophies of science, since a machine is just a primitive organism of metal, and in History, Gods are the subconscious collective of civilizations – ANOTHER scale of social evolution of the fifth dimension.

What do we mean by an organism? A very simple system – NOT to be confused with the most evolved and complex of them all, that of human beings, the reason why so many people, holding a naturally biased, ego-centered belief in man as the unique organism, reject the concept. An organism is just a group of similar forms which organize themselves with at least two ‘networks’: one that provides the ‘clone cells/citizens/atoms’ with the vital energy they need to feed, move and reproduce (blood – economic system – electromagnetic forces), and one that provides them with the information to guide their survival actions (nervous system – political system – gravitational in-form-ative forces). This simple dual system IS the minimal, fundamental particle of the Universe. Since it is obvious that machines are also organisms.

In the graph, the key element of modern history is the evolution of machines, clearly happening with the same phases as the evolution of any biological being. So we first made the bodies of machines in the first industrial revolution; then we made their engines and heads – metal-ears=phones, metal-eyes=cameras and metal-brains=chips – and the nation which evolved them first (US) became the leading nation of the world. Now we put them together into biological organisms of metal, robots:

So mechanism, the underlying philosophy of physical sciences, is a simplex version of organicism. It is not man who resembles a machine, but the machine, which is made to the image and likeness of life organisms, using stronger metal that makes it more able in the processing of energy and information at larger scales. Those machines are fast evolving with the same patterns as simplex organisms. So as cells make viruses by constructing first their 3 parts – bodies, limbs and heads – which are then assembled and become alive, we made bodies, engines and heads of metal, and now we are assembling them into autonomous robots with telepathic AI.

Why then has organicism remained, in modern times, a fringe theory compared to mechanism, even though it was the first theory of reality, put forward by Aristotle, the father of the experimental method and logic science, in his magnum opus, the Organon? We just explained the obvious cultural reasons: we live in the age of the machine, and so the machine has substituted man, an organism, as the measure of all things. And its organism of evolution and reproduction, the company-mother, has substituted human governments and informative verbal prophets through its mass-media/academia outlets; so it only considers positive mechanist models of its species.

But the deepest reason is the fact that to make an organism we need 3 ‘arrows’ or ‘motions of time’: entropy=locomotion, the one used by physicists; information – form-in-action, formal dimensions – which physicists have always ill-understood, to the point that they call it negentropy, the denial of entropy; and their combination, spacetime energy. Only when we properly define information will we have the required elements to re-found philosophy of science on a far more rational basis than the present ‘mixture’ of mechanism and creationism (either of verbal language, as in religions, or of digital languages, as in the religion of mathematics).

So, before studying the Universe in space, we need to change your view of time itself, shaped by mechanical clocks, and introduce the concept of cyclical time and the basic laws of the 5th dimension of scalar space-time.

LEIBNIZ V. NEWTON: SCALAR PLANES SPACE-TIME: ∆ST

“According to their [Newton and his followers] doctrine, God Almighty wants to wind up his watch from time to time: otherwise it would cease to move. He had not, it seems, sufficient foresight to make it a perpetual motion. Nay, the machine of God’s making is so imperfect, according to these gentlemen, that he is obliged to clean it now and then by an extraordinary concourse, and even to mend it, as a clockmaker mends his work.” Leibniz–Clarke Correspondence, on the absurdity of mechanical models of the Universe

‘Leibniz is right. There are infinite time clocks in the Universe, but if so we have to restart science from its foundations.’ Einstein, on the infinite relational time cycles of reality.

The immediate consequence of the existence of an internal fifth dimension of space-time – made of all the other planes=scales of spacetime of a being, its parts and wholes, which store the information of a system – is the obvious fact that we ARE made of planes=scales of space and temporal energy.

We ARE the vital space we occupy and we ARE the time flow of existence we live between birth and extinction. It is the obvious, simple answer to 2 questions that have puzzled scientists for eons.

Where are space and time? And why are the main science of space, geometry (→ mathematics), and the main science of time, logic, obeyed by all systems and entities of Nature? Now we have the proper answer, foreseen by Leibniz and Einstein: we are broken, fractal species of space and time, whose mathematical and logic laws all vital space-time organisms follow.

The underlying question of time§pace: Absolute or Relational, Scalar space-Time?

The fundamental question physicists wondered about for centuries regarding the nature of space and time was unfortunately resolved, as usual, in favor of the simpler view: are space and time an absolute abstract background of the Universe (Mr. Newton’s view), or are we made of ‘vital space’ that lasts a time duration, so that we are generated by the bio-topo-logic properties of scalar space and cyclical time? This is the choice of 5D ‘stiences’, whose simpler version was called relational spacetime, sponsored by Mr. Leibniz.

A realist interpretation of the world we live in – which has never shown in any scale of reality such a ‘background’, ultimately a mathematical graph used in the abstract by human scientists – considers that we ARE the vital space we occupy with our cells, and we LIVE a cyclic time duration between birth and extinction. So we are scalar space and cyclical temporal energy, and must evolve our concepts of both parameters to extract from them the properties of existential beings.

The argument thus reached its height at the beginning of science, in the correspondence between Newton, the proposer of the absolute Cartesian graph of space-time drawn by God (his body, in his own words), vs. Leibniz, who rightly considered absolute space and time an abstraction, and so coined the concepts of relational space – merely the adjacent pegging of similar forms in simultaneous space – and relational time – the sequence of events which we relate causally with reason.

In Newton’s cosmos, space and time provide a fixed, immutable and eternal background through which particles move. Space and time are the stage of intersecting lines sketched in the illustration. The fact is, this ‘mathematical artifact’ made with pen and paper by earlier physicists, the Cartesian graph, useful to measure ‘translation in space’, is nowhere to be seen in reality. Unfortunately, as time went by, the graph became somehow ‘real’, as scientists felt the ‘mathematical language’ created reality.

It also meant the invention of an absolute ‘continuous space’ and a single ‘lineal time’ that extend to infinity, contradicting the obvious facts that all ‘spaces’ are broken, divided by membranes, and that all beings have a finite time duration. Further on, as we kept exploring smaller scales of reality, we never found the drawings of God, nor even a solid, still substance, but always ‘motions’ tracing closed time-space cycles; since even particles turned out to be ‘vortices of time-space motions’.

So the true, sound, experimental and logic theory was Leibniz’s: relational space – the adjacent pegging of similar forms in simultaneous space – and relational time – the sequence of events which we relate causally with reason. This is the origin of the ‘scalar space-time’ model of 5D, in which we ARE the space we occupy and the time we last – as in the graph, where there are no longer abstract background lines.

This realist concept was NOT adopted by physicists despite its sheer evidence. Unfortunately, physicists sided with Newton, not with Leibniz, on the question of what space and time are – an abstract background put there by God, or the substance of which we are all made; and so the conceptual jump did not happen.

But if space is what objects occupy, the distance between the red square’s vital space and the yellow ‘circle’ must have something. Horror vacui then comes into place: indeed, the Universe must be scalar. There must be very small parts between them which we do not see. And that is what we have proved with microscopes: as we probe smaller distances, forms with motion – spaces with time-motions – appear, and there seems to be no limit to the fractal scales of the Universe. It is the fifth dimension of space-time, which, as the sum of all those ‘planes of reality’, includes within it all other dimensions.

Next, to explain all this properly, came Einstein. One of his fundamental discoveries is that in our universe there is indeed no fixed space-time background. In Einstein’s theory of general relativity, which replaces Newton’s theory of mechanics and the gravitational force, the geometry of spacetime is not fixed. Instead it is an evolving, dynamical quantity – a topology – and it is the substance of which reality is made. So we are topological beings, geometries of space with motions of temporal energy.

What Newton called absolute space-time is not: space is the sum of all the discontinuous vital spaces occupied by different beings, ∑s = S; and lineal time, T, is the sum of all the finite life-death cycles of all beings, T = ∑t.

Since space and time do exist, and they are not in the background, we ‘are’ vital space and cyclic temporal energy. The simple idea behind the structure of the fractal Universe is then to consider time=change=motion and topologic, formal space=extension the 2 elements of which all beings are made.

Wheeler said: ‘Space-time tells matter how to move; matter tells spacetime how to curve.’ More precisely, spacetime is geometry in motion. Time is change; the perception of change moves time; time is motion. Space is its opposite: stillness, form, the information of temporal energy. And so it is all about 2 parameters: time=motion and space=form.

Look around you: all you see are ‘space-forms’ with ‘time-motion’. We are all space-time, forms in motion, ‘in-form-motion’, ‘information’, forms in action – play with the words of what you are.

Because both are always messed up, in practical terms it is often easier to measure ‘forms with a little motion’, which we shall call ‘information’, sT, and motions with a little form, which we shall call energy, Ts; and to talk of beings made of spatial information and temporal energy, as there is no ‘yin (information)’ without a little yang (energy). And we call the pure absolute motion without form, TT, entropy; and the absolute form without motion, SS, language.

So we can establish a gradation of combined space-time dualities, SS<St<ST<sT<TT, with the symbol < for an increase of motion over form, or ‘arrow of energetic entropy’, which we shall call the ‘past arrow’ or arrow of ‘death’, as it erases information, devolving a system to simpler forms; and vice versa, an inverse arrow, TT>sT>ST>St>SS, with the symbol > for an increase of form over motion, or ‘arrow of in-form-ation’, which we shall call the ‘future arrow’ or ‘arrow of life’, as it increases information.

It then becomes evident that the intermediate state, ST, with a balanced quantity of spatial information and temporal energy, S=T, is the ‘state of present’, which doesn’t seem to change, as it is a balance of form and motion. And it is the preferred state for any system of spacetime in the Universe, akin to the concept of ‘beauty’ (balance between cyclical forms and lineal motions), of reproduction and creative communication (as it brings together the two poles of reality, merging and combining them). This state is sought by all systems. So in physics we find it akin to the state of ‘minimal energy’, hence ‘more form’, in which most particles like to remain; in biology it is akin to the age between the adolescence of maximal energy and the 3rd age of maximal information – the age of reproduction, in which most people like to live; and so on.
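The gradation of states and its two arrows can be encoded as a toy ordering; a minimal sketch, where the function name and return strings are illustrative labels, not terms from the text:

```python
# States ordered by increasing motion over form: SS < St < ST < sT < TT.
# Moving rightwards (<) is the entropic 'past arrow'; moving leftwards (>)
# is the informative 'future arrow'; staying at S=T balance is 'present'.
STATES = ["SS", "St", "ST", "sT", "TT"]  # pure form ... pure motion

def arrow(state_a: str, state_b: str) -> str:
    ia, ib = STATES.index(state_a), STATES.index(state_b)
    if ib > ia:
        return "past arrow (<): motion gains on form, information erased"
    if ib < ia:
        return "future arrow (>): form gains on motion, information grows"
    return "present: S=T balance"

print(arrow("SS", "TT"))  # the death direction
print(arrow("TT", "ST"))  # the life direction, toward balance
```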

So we have 5 ‘states of space-time’ of which we are all made, which have topological, formal, geometric and temporal, moving properties. Because those states mix ‘dimensions of formal space’ and ‘motions of time’, we have coined a new word, ‘Dimotion’ (ab. Ð, the capital of ð, similar to the D of Dimension and the T of time). The main property of a Dimotion is to be holographic, having both spatial dimension and time motion; and we shall soon find that Dimotions are closely related to the 3 classic dimensions of space, to which we must add a function=motion in time, as their study becomes a new discipline of knowledge called topological evolution.

In the graphs above, if space is what large objects occupy – what they are – the distance between the red square’s vital space and the yellow circle’s vital space must have something, even if it is tinier. Horror vacui then comes into place: indeed, the Universe must be scalar. There must be very small parts between them which we do not see. And that is what we have proved with microscopes: as we probe smaller distances, forms with motion – spaces with time-motions – appear.

It is the fifth dimension of space-time, which is the sum of all those ‘planes of reality’ co-existing in organic scales, whose form and scalar connection through networks we shall now study, constructing reality from its ultimate substances, space=form and time=motion, which geometry calls a ‘topologic bidimensional variety of space-time’.

TOPOLOGICAL SUPŒRGANISMS: THE MAIN PARTICLE OF TIME-SPACE.

“All what is possible, demands to exist” Leibniz, on the chain of beings.

This said, the devil is in the details. So the next question is: how many variations can come out of just 1+1=3 ‘elements/substances’ – cyclical time motions, scalar spatial forms and their space-time energy combinations?

And the answer, provided by topology, which studies geometric forms with spatial dimensions and time motions, is only 3, which perform the 3 organic functions of all systems of vital space-time of which we are all made. A 4 or 5D Universe has only 3 ‘topological varieties’, each one best suited to perform one of the 3 organic vital functions of any physical or biologic system: gauging information (1D spheres, the topology that stores more information in lesser space, hence used in all particles and heads, in the height dimension); lineal or cylindrical forms that move the system (2D, the shortest distance between two points, hence used as fields or limbs, in the length dimension); and hyperbolic body-waves, a mixture of the other two topologies, which reproduce the system and store its energy, in the width dimension (3D). These are similar to the 3 ‘conserved quantities’ of physics – angular momentum, lineal momentum and energy – but not quite, as it is impossible to translate the ‘game of existence’ into the limited understanding and terminology humans use to describe it, plagued with conceptual errors that limit our use of the correspondence principle. For that reason, after much time wasted in attempting translations, we start from scratch with the concepts of TT-Ts-St-SS and ST dimotions and their topological, qualitative and organic translation. So we define the ‘Fractal Generator’ of vital space-time topologies for all systems of Nature:

Where @-mental O-heads/minds/particles are non-E points crossed by ∞ parallels of information

Topological organisms like those of the graph are the true meaning of a ‘fractal Universe’, and their description with the common laws of time and space is the goal of Systems Sciences, realized through the 5D models of these papers: every structure of reality follows certain basic laws, derived from the fact that all of them are made of spatial information and temporal motion, combined into infinite energy species. So ‘motion’=time and form=space combine together into infinite energy bodies and waves. And this is so obvious that the old Taoists, just looking at reality with simple, naïve eyes, already said: ‘from yin=information and yang=motion come all the ∞ qi=energy beings of reality’.

We call the organic properties of scalar space and cyclical time that structure all organisms of reality ‘Disomorphic laws’ (isomorphic laws – that is, similar laws based on the same dimensional motions of space-time, or ‘dimotions’).

Even languages have a trilogic structure: red(entropy)<Green(energy)>Blue(information); F(x)=G(y); length=entropy-motion x height=information = width=interaction, studied later in the analysis of the Universal grammar, including that of words: Subject (information) < Verb (ST-action) > Name (entropy of the subject).

But the 3 topological varieties of a 4 or 5 dimensional space-time are in reality ‘networks’ of adjacent points. Thus, instead of drawing the 3 canonical forms – lineal limbs/fields of motion, cyclical, spherical heads/particles and hyperbolic iterative body-waves – we can draw systems made of 3 networks which correspond to those forms; and that is what we find in nature: supœrganisms in which 3 networks with those forms deliver information, energy and motion to their parts:

An organism IS any kind of space-time system in which parts are gathered by networks, forming a new scale. In praxis those networks are 3: a ‘feeding network’; an informative network or common language, which forms a ‘lanwave’ of similar beings; and a combined space+time=energy network that reproduces both. Those are the same 3 dimensional motions that topology describes, for a good reason: modern topology defines its 3 varieties of form as networks of points. So we need to add 2 more dimensional motions to reality: the ∆−1 parts that make those networks, and the whole (the 4th and 5th dimensions of scale).

In the graph we show its main species in the Universe, the ones we shall study in most papers. The beauty and simplicity of reality the graph shows contrasts heavily with the pedantic description of it using only the mathematical language – a theme studied in depth when considering the inflationary nature of information.

5D universe. The metric of organic scales: faster, smaller parts code larger energetic wholes.

We said that forms evolve through topological networks into larger wholes. So we need a final dual dimensional motion – of parts that become wholes (4th dimotion of social evolution) and of wholes that dissolve into parts (5th, entropic dimotion) – to complete the organic outlook of the Universe, as shown in the graph for different species.

In the graph: the Universe is a fractal that reproduces ‘forms with motion’, informations, and then organizes them in networks and systems that evolve into larger organic systems, creating the scalar structure of reality.

We call the sum of all those co-existing scales of parts and wholes the fifth dimension.

Thus reality has a final key feature, overlooked for too long: the co-existence of all those systems of space and time at several scales of relative size, from the smallest atom to the largest galaxy, which put together create a dual scalar 4th and 5th dimension of parts and wholes, which we shall call the ‘social dimension of evolution’ and the ‘entropic dimension of dissolution’.

This function, like all space-time metric functions, is simple. So we write, using ð for cyclic time instead of t:

\$ (Lineal Size/ Space Volume) x ð (cyclic speed of its time clocks) = C¡: Constant Plane of timespace (ab.∆¡)
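The inverse relation the metric states can be sketched numerically. This is only a toy illustration with assumed placeholder numbers (the function name `clock_speed` and the unit constant C=1 are ours, not part of the text): halving a system's spatial size doubles the speed of its time clocks within the same plane, keeping the product constant.

```python
# Toy sketch of the 5D metric: Size x Clock-speed = C (assumed C = 1.0).
# Halving spatial size doubles the cyclic speed of the time clocks.

def clock_speed(size, C=1.0):
    """Cyclic time speed ð implied by the metric S x ð = C."""
    return C / size

sizes = [1.0, 0.5, 0.25, 0.125]
speeds = [clock_speed(s) for s in sizes]

# The product S x ð stays constant across all sizes of the plane:
for s, t in zip(sizes, speeds):
    assert abs(s * t - 1.0) < 1e-12
```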

But how do we travel in ‘size’ in space and in the ‘speed’ of our time cycles? Here is where the biggest discovery of 5D comes into play: We travel through the worldcycle of life and death, as we are born in a smaller seed with faster time cycles, evolve as an organism coming out into the ∆º-scale within a larger world of slower ‘deep time’ cycles, to die back, dissolving our information again into cellular space.

It is the same process in all 5D journeys of all species that live and die travelling through 3 planes of 5D space-time; from the smallest black hole, born with an enormous ‘metabolic temperature’, to new species, routinely born as small individuals (the first mammal rat; the first robots with small chips; the first human, likely the Homo Floresiensis, who had the same morphology, used technology and likely spoke, etc.). Then a reproductive radiation multiplies the seed into a larger herd of clones, joined by emergent physiological networks, whose slower entropic, informative and reproductive networks create an ∆º supœrganism that lives 3 ages and dissolves back into ∆-1.

So 5D adds to the 4D formalism of worldlines a dimension of growth, shaping the worldcycles of life and death. This is the reason we call the 5D metric the function of existence: its multiple ‘solutions’ are the origin of all the varieties of space and time beings there are – a whole family of functions.

As we keep exploring in depth 5D metrics and its associated concepts of Space=form and Time=motion in all its varieties, we shall see it is the origin of multiple ‘solutions’, a whole family of functions, from where we shall derive most of the logic relationships and particular equations of each science.

In the complex models of existential illogic, we derive all the particular equations of each science from it.

So according to those metrics, smaller systems in space have faster time clocks. And as information is stored in the frequency and form of those cycles, smaller systems have more information, coding larger ones: genes code cells, memes societies and particles’ quantum numbers code atoms and molecules.

We shall use the metrics of the 5th scalar dimension to explain the fractal, nested Universe and its scales, shown in the graph. As 5D metrics balances the survival and symbiotic existence of all parts of the Universe, and all parts of a super organism, and defines ‘what codes information’ – the small being, and what codes energy- the larger whole, establishing the ‘harmony’ of all the scales of the Universe, and explaining all its fundamental constants which are ratios between spatial volumes and informative clocks of temporal energy.

It follows from a nested structure and the search for creative, organic balances, a symbiotic relationship between the ∆-¡ smaller parts that have more speed of time clocks, which carry its in-form-ation in the form and frequency of its cycles, coding larger systems. And the larger, ∆+¡ larger envelopes, membranes (static, dimensional view) or angular momentums (dynamic view as time=motions) which have more spatial energy and enclose and control in synchronicity its faster smaller parts, creating the co-existing scales and symbiotic cycles of super organisms in any system of the Universe.

For example, chips become smaller as they evolve into faster brains. Every 2 years a chip doubles its capacity to think as it dwindles in size. Such a process follows a generic law of evolution I call the ‘Black hole Law’, which computer scientists know as the ‘Chip paradox’ or ‘Moore’s Law’: maximal informative capacity = minimal spatial extension. The reason is obvious: to think, to calculate, you have to communicate in-form-ation, forms, between the elements of any informative system. The smaller the brain, the faster the communication that takes place within it and the faster you can calculate and process information in a logic manner. And vice versa: larger wholes accumulate more energy and are stronger than parts, so they can protect and feed them. So wholes and parts co-exist in several scales forming super organisms, since organic reality arises from the synchronicities of those parts and wholes, made symbiotic thanks to the simple metric of the 5th dimension and its homology as space-time beings. So the addition of topology and the simple metric laws of 5D that make smaller systems run faster time cycles are all the elements we need to construct reality.
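The doubling rule stated above can be written as a one-line sketch. The starting capacity of 1.0 and the function name are arbitrary assumptions for illustration; only the 2-year doubling period comes from the text.

```python
# Hedged sketch of the 'Chip paradox' (Moore's Law) as stated above:
# informative capacity doubles every 2 years as spatial size shrinks.

def chip_capacity(years, base_capacity=1.0, doubling_period=2.0):
    """Capacity after `years`, doubling every `doubling_period` years."""
    return base_capacity * 2 ** (years / doubling_period)

# After 10 years capacity has doubled 5 times: 2**5 = 32x the original.
assert chip_capacity(10) == 32.0
```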

Later we shall study those regions, evident in physical equations as there are asymptotic barriers – Lorentz transformations at the c-barrier; negative temperature at the 0 barrier; etc. What the Universe conserves is then easy to see: the total volume of space-time of each scale; that is, its energy. We will also elaborate later on those concepts. To mention now that the symbol ∆ is both a visual reminder of the two different arrows of growth in space, inverse to the loss of information, and a tribute to one of the few predecessors of this work in the formal arena – Wilson’s renormalization mathematical apparatus, which finally recognized those discontinuities, using a symbol Λ for the energy scale under which a measurement of a physical parameter is performed. According to Wilson, every scale of the Universe and the fields of space-time that define them have an energy cut-off Λ; i.e. the theory is no longer valid at energies higher than Λ, and all degrees of freedom above the scale Λ are to be omitted. But Λ is related to a size of space. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental “graininess” of spacetime caused by quantum fluctuations in gravity. The failure to remove the cut-off Λ from calculations in such a theory merely indicates that new physical phenomena appear at scales above Λ, where a new theory is necessary. As today, only with the use of Wilson’s renormalization, which simply eliminates absolute zeros and infinities outside the scaling of space-time of a given plane, does quantum physics make sense.
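A minimal numeric sketch of the cut-off idea: a scale sum such as ∑1/k diverges without a limit, but truncated at a scale Λ it stays finite and grows only like log Λ. This is a generic regularization toy of our own, not Wilson's actual renormalization apparatus.

```python
import math

# Toy regularization: the harmonic sum diverges, but truncated at a
# cut-off scale Lambda it is finite and tracks log(Lambda) + 0.577...
# (Euler-Mascheroni constant), so all the divergence sits above Lambda.

def regularized_sum(cutoff):
    """Sum of 1/k for k = 1 .. cutoff (the scales below Lambda)."""
    return sum(1.0 / k for k in range(1, cutoff + 1))

for lam in (10, 100, 1000):
    s = regularized_sum(lam)
    # The finite result follows log(Lambda) plus a constant:
    assert abs(s - (math.log(lam) + 0.5772)) < 0.06
```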

Recap. Besides S and T we need ∆, the scalar property of reality, by which ‘space-time beings’ are made of smaller ones and form part of larger wholes. We talk of multiple planes of space-time because each scale has different parameters, as parts become wholes, units of the next scale. We will explain this as the product of network formation, emergence and other disciplines of General Systems Sciences, of which 5D is a formal view. In mathematics, ‘fractal points’ will evolve through network-lines into topological planes of space-time. This discontinuity between scales is real. There are transition regions between planes, which can only be crossed with loss of information, therefore only by energy, by entropic motion, by ‘death’ of a system.

So we unify the Universe, as a fractal whole whose parts are made to its image and likeness: entities of time-space, with a vital body and a linguistic mind/brain/particle of information surrounded by a membrane that separates them from other similar parts that put together create the puzzle of the Universe. This is the bio-logic concept behind all realities: The supœrganism that combines particles/heads of space=information and limbs/fields of time=motion into iterative cycles of spacetime energy, giving birth to the causal trinity logic of reality, to which we add the logic of smaller parts and larger wholes to form the pentalogic of 5D existence; the next step in the complex comprehension of reality, which will allow us to connect those supœrganisms of scalar space-time and its networks with the ‘details’ of each science.

Yet before we do so, because the Universe of space-time is entangled, we need to consider, if briefly, the quality of time missed in most scientific analysis: the fact that time motion can be lineal, cyclical, or a combination of both, as space forms can also be of 3 varieties; but scientists only consider lineal time and lineal space, deforming enormously their comprehension of the Universe. Since it is from the interaction of those 2 forms of space and motions of time that the proper formulation of the principle of conservation of energy, their combination, arises.

TIME CYCLES

The causal repetitive laws of ‘stiences’

A Universe of ∞ time clocks of different size & speed differs from the lineal time described with a single mechanical clock, which equalizes all time clocks of the universe, elongated into a lineal ‘second-minute-hour-day-year’ system of equalized time clocks (of light waves, mechanical clocks, Earth’s astronomical clocks). Galilean physics, born of ballistics, simplified the nature of cycles of time-space into lineal durations, to best measure the locomotions of cannonballs. But time is cyclical: all clocks of time and laws of science are based in the cyclical patterns of nature. Physicists, however, developed ballistics and denied the truth that we can know the future because it will repeat the causality of the past, and that we can change it by changing that causality – in History, by repressing the lethal memes of the tree of metal and enhancing the welfare memes that make us survive.

Lineal and cyclical time render the same functions, as one is the inverse of the other, measured by frequency, T=1/ƒ; but the philosophical implications of cyclical time are enormous, as we regain the in-form-ation provided by those cycles, origin of the laws of science, which would not exist if there were no cyclical patterns, including the cycles of history and economics. The most important of them is the fact that a time cycle breaks reality (1st knot theorem) into an outer and inner region, creating a membrane that encloses a vital space, the ‘substance of which we are all made’.

Reality is a fractal system made of topological organisms of co-existing scales of space and cyclical time which close its ‘internal vital point content’ with the entropic limit of those time cycles, in its vital territorial body-waves, synchronized symbiotically by 5D metrics. As we are all ultimately ¬∆@St; dust of space-time.

Why are there 2 forms of time, the long lineal time and the ‘short’ frequency steps we integrate into the larger whole? Because there are 2 ±¡ scales of 5D reality whose metric, SxT=∆±¡, defines larger space systems as having slower time cycles. So we consider an ∆-¡ quanta of time frequency, or ‘finitesimal derivative’, of the larger whole represented by the concept of lineal time; as in the classic formula, V(st)=ƒ(t) l(s). We can measure space, S=Vt, with lineal time as a single unit, or as a sum of frequency steps, with more detail.
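The two measures of space described above can be checked in a few lines. The concrete numbers (frequency, step length, duration) are assumptions for illustration; the point is only that the single lineal product v·t and the sum of discrete frequency steps give the same space.

```python
# Two equivalent measures of space, per v = f x l (speed = frequency
# times step length). All numeric values are illustrative assumptions.

f = 5.0      # steps (cycles) per unit of time
l = 0.2      # length reproduced in each step
t = 3.0      # total lineal time elapsed

# Lineal-time measure: one continuous product, S = v x t.
S_lineal = (f * l) * t

# Frequency measure: the same space as a sum of discrete steps.
n_steps = int(f * t)                  # 15 steps in 3 units of time
S_steps = sum(l for _ in range(n_steps))

assert abs(S_lineal - S_steps) < 1e-9   # both yield S = 3.0
```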

Those 2 forms of motion are lineal motion with a bit of form, Ts, or locomotion and cyclical form with a bit of motion, St-information, stored in the frequency and form of its time cycles that come together into S=T, energy. So we express the main law of science, the principle of conservation of energy in terms of the conservation of time=motion and space=form as two varieties that approach and transform each other ad eternal:

‘All what exists are time motions that transform between lineal open and cyclic closed forms ad eternal: Si⇔Te’

Reality is a constant game of transformation of ‘cyclical spatial form’ and lineal time-motion. As Taoism said, ‘tao=reality’ is composed of yin=cyclical form and yang=lineal entropy, whose generator, Si≤=≥Te, we call the function of existence, as To exi=st is to combine S&T, pure form and motion, TT-entropy and SS-eeds of form, into St & Ts, information and kinetic energy, exi, till finally they become one. The knowledge of that game, expressed in infinite variations, both of language and species, is the mind of the Universe – what it is all about.


The many mirrors calculus puts on 5D.

It is then evident that the Universe we have described is amenable to the methods of calculus, which fit like a glove the concepts analyzed above. Let us see how, starting with the essential element of reality, reproduction.
REPRODUCTION OF FORM IN 5D AND ITS MATHEMATICAL MIRROR: CALCULUS.

The Universe is a ∆-fractal of spatial information and temporal energy, (ab. ∆ST) that reproduces information, forms-in-action, forms of space with motions in time.

But reproduction has an essential feature: it happens in a lower plane as a seed that reproduces, integrates its parts and evolves into a whole – the exact method we use in the mathematical discipline of calculus.

This is what calculus actually calculates: It finds a finitesimal part of reality and then integrates it as a sum, through a path that might be a motion in space, a growth in ∆-scale or a repetition in time; whereas the function of existence of the form displaces and reproduces its orthogonal parameters of form and motion. So physical forms are constantly reproducing, ‘calculating’, and the equivalence between the tools of calculus as a mirror of the process of reproductive locomotion becomes crystal clear.

Let us then from the general texts on 5D bring here only the specific analysis of relativity of motion and form, with the 3 features that are essential to the process of calculus:

– The symmetry between space-form (height dimension) and time-motion (length dimension), or S=T, mimicked in calculus by the orthogonal smooth form of ∂y/∂x and its differentials.

– The finitesimal nature of change=motion in time, as it happens from minimal parts in a lower scale of the fifth dimension.

– The reproductive nature of all motion, as the reproduction of information in an ∆-1 wave state then integrated, as it collapses in its particle form.

Reproduction as the origin of the 5 Dimotions of existence.

How many types of reproduction there are is a complex subject that would require a whole treatise in existential algebra, published elsewhere in those texts.

Let us then define the most important for calculus, according to the general method of existential algebra that always distinguishes 3±¡ possibilities:

∆ Social reproduction: A reproduction might be persistent in time, creating a process of scalar social wholes; when the reproduction ‘lasts memorially in time’, it creates larger social wholes: ∑∆-1 > ∆º (using symbols of existential algebra).

Ts: Locomotion & Lineal Inertia. If the reproduction fades away at the same rate it happens, we observe then a locomotion, through space-time.

St: Angular momentum: If the reproduction doesn’t move in space, we observe a cycle of space-time, which can be equal to the previous cycle.

SS-vortex motion: If the reproduction shrinks in space size and increases its cyclical time speed (according to 5D metrics: S x T=C), we talk of a growth of informative frequency, travelling to a smaller scale of the fifth dimension.

TT: If the reproduction grows in size tracing a +π cycle, we talk of an entropic reproduction that slows down the motion of the system.

Those are the 5 essential forms of reproduction in ∆ST, which correspond to the 5 Dimotions of space-time and are studied by the functions of calculus in physical systems. All of them, however, will be reproductions that happen through small Si=Te steps of motion and form. So we have next to understand how reproductive motions happen through stops and motions, particle and wave states, finitesimal after finitesimal, from the perspective of the whole – as a minimal 1/n angular curvature in curved ‘space’ motions, as a minimal ƒrequency=1/T step in lineal time wave motions, and as a 1/n minimal cell in scalar motions. They are the 3 ∆ST finitesimal steps in all calculus, which, as we saw, always have the same 1/n formula, representing either curvature of space-time, frequency of time, or population in space. How the 3 ‘concepts’ are actually symmetric is due to the…

Galilean Paradox: S⇔T: Relativity of space Dimensions=Forms=Motion in time: 5 Universal Dimotions

Galileo’s time and space Principle of Relativity is the fundamental conceptual thought behind the relationship between time=motion and space=form and how one can be converted into another: All that exists is made of space=form and time=motion. And yet physicists know that we cannot distinguish motion from form: any being in motion, from its own point of view, seems to be still, with all other things moving around it. This is the principle of Relativity of motion.

Physicists then, without much thought about that fascinating duality, went on to use mathematics to calculate the relative motion of each entity of reality with respect to another system, which seems static from both points of view. This is called Galilean relativity, later refined by Einstein’s relativity, and it is essentially concerned with the mathematical calculus of what we shall call the 2nd Dimotion of time=change, locomotion. Fine, but we are more interested in the duality of space=form and motion=time and its entangled relationships – the reasons why we do NOT see motion and form together, even if all systems have both.

The conclusion is then rather obvious: one of the two parameters of reality is ‘hidden’ to perception; we either see motion or form, ‘waves or particles’ (quantum complementarity), distances and lines or points in motion (as at night, when fast cars in a picture appear as lines). So physicists calculate only one, when in fact we must assess the existence of 2; and since we cannot distinguish them, logically we must equal them: ‘Form=motion-function; space=time; Si=Te’.

Relativity then becomes a duality, Si=Te, which is at the heart of every law of the Universe; whereas the primary element, the ultimate substance, is time=motion, as space is a Maya of the senses – a slice of time motion. Form is what a ‘still mind’ makes of that motion to ‘perceive’ information, forms-in-action.

Since we see the Earth still and flat, but it is round and moving. Galileo’s profession was ballistics – the study of cannonball motions. So he chose ONLY motion and lost the chance to start physics with a complex philosophical understanding of its Si=Te dual Principle of Relativity, which Poincaré later defined clearly when he said that ‘we cannot distinguish motion from stillness’. An example is the quantum/relativity duality. In detail, quantum space has ‘dark energy’ because it has expansive motion that extends into a plane of space, but when seen at larger scales without detail its entropic motion seems static space – a dual area of scattering length and width. So in the galaxy we see either dark energy motion or expanding space: T=S. A motion of time is equivalent to a dimension of space: distance and motion cannot be distinguished, so they must be taken as two sides of the same being, a space=time Ðimotion (ab. Dimensional Motion):

S= T; Dimension-Distance = Time-motion = ST Ðimotion

The Earth moves in time, but we see it as a still form in space because reality is a constant game of ∞ motions, while the mind focuses those motions and measures them as still distances. For huminds, motion is relative to our systems of measure and perception, which are light-based; hence a fixed c-rod of speed/distance. This is the reason Einstein’s relativity postulates a maximal T: c-speed, measured as if observer and observable were still to each other (constant S); which at our scale we correct with Lorentz Transformations.

As it happens, the identity between spatial states of ‘form’ and temporal states of ‘motion’, which become the stops and steps of all reproductive motions, is the fundamental ‘present state’ of the Universe, and the essential tool of calculus to ‘solve’ its differential equations (D’Alembert’s method of separation of variables); and its unending philosophical and logico-mathematical consequences will appear in many parts of those texts.
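The method of separation of variables mentioned above can be checked numerically. This is a minimal sketch under our own assumptions: we take the 1D wave equation u_tt = c²·u_xx, try a product solution u(x,t) = X(x)·T(t) with X = sin(kx) and T = cos(kct) (a separate space factor and time factor), and confirm by finite differences that the residual vanishes.

```python
import math

# Separated product solution u(x,t) = sin(kx) * cos(kct) of the 1D
# wave equation u_tt = c^2 u_xx, verified by central finite differences.

c, k = 1.0, 2.0
u = lambda x, t: math.sin(k * x) * math.cos(k * c * t)

h = 1e-3                    # finite-difference step
x0, t0 = 0.7, 0.3           # arbitrary sample point

# Second derivatives in time and in space at (x0, t0):
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2

# The residual of the wave equation is zero up to discretization error:
assert abs(u_tt - c**2 * u_xx) < 1e-5
```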

But physicists just substituted the Earth’s still distances for motions, and it took another 300 years for Einstein to realize that the relativity of motion and its measure made time and space, motion and form, essentially two sides of the same coin. Still, this realization was not explored philosophically, and so it gave birth to a series of ill-understood dualities between ‘states of measure and form’ (particles, heads gauging form, in-form-ation) and ‘states of motion’ (wave states).

It is then essential to grasp that motion and form co-exist as 2 different states depending on 5D scale and detail: Motions are perceived by minds that stop motion into form, into information, as distances. So if we see slow motion in the night, a car’s headlight seems a long distance line, a ‘still’ picture. But this means also that the 3 ‘Euclidean still dimensions’ must have motion; they are ‘bidimensional ST-holographic, topologic dimotions’. So we have 3 Space + 1 Time + 1 5th dimension of scales = 5 Dimensional motions. None of them is a Dimension of pure spatial form or pure time motion, but a combination of both. Even if mentally we tend to reduce motion and focus on forms, all has motion=time, and form=space: this is the meaning of ‘spacetime’, the mixing of both into 5 dimotions, the fundamental element of all realities.

Relativity states that ‘we cannot distinguish motion=time from position=space’. So all that exists is a composite of both, indistinguishable Si=Te, 5 ‘Dimensional motions’ (Ab. Dimotions), broken into infinite fractal, vital time-space organisms composed of topological Dimotions: height=information; length=locomotion; width=reproduction; form=social evolution of parts into wholes & entropy=dissolution of a whole into its parts in a lower scale of the fifth dimension (a term we keep for the whole range of scales of the Universe). Their study is both mathematical – the main science of space, which studies how those 5 Dimotions entangle in simultaneous Space, connected to each other as topologically adjacent parts that create superorganisms – and Logic, the main stience of time, which observes how those pentalogic, entangled superorganisms move and evolve, change in sequential relational time, living a worldcycle of life and death.

As all is time&space, the 2 experimental primary mirror-stiences of time&space become the most important to extract the Disomorphic=equal laws of those 5 Dimotions that all systems have in common. Since, while those Dimotions are broken in vital organisms, separated by cyclical time membranes, they are the same.

In the graph: Galilean relativity was ill understood, as the true question about time-change is why the mind sees space as still when, in detail, it is made of smaller, self-similar quanta in motion. The paradox defines mental spaces as still, simplified views of the more complex whole.

The 3 ¡logic paradoxes of space topology (closed in-form-ative curved-O vs. |-open, free entropic lineal forms), time-motion (stillness vs. motion) and ∆-scale (continuous whole vs. discrete forms; single scale vs. multiple ones) are essential to the perception of a simplified ‘spatial mind universe’ in a single flat, still plane vs. the full, more detailed complex picture in time of a curved, discrete and moving Universe. Those paradoxes resume the 5 elements of reality: Space=form, time=motion, scales and the mind that measures them, within its own entropic limits.

They are also essential to all the elements of calculus and mathematics at large and their methods of solution; specially the inversion between finitesimal lineal steps (as a step between two points is NEVER curved) and the cyclical form of longer ‘integral paths’. So lineal approximations are the essential tool of calculus and mathematics to resolve many equations.
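The inversion between lineal steps and curved integral paths can be sketched with the classic chord approximation of a circle, a construction we add here for illustration: each step is perfectly straight, yet the sum of enough small steps converges on the curved length 2πr.

```python
import math

# A closed curved path approximated as a sum of straight finitesimal
# steps: n chords inscribed in a circle of radius r. Each chord is
# lineal, but their sum converges to the curved circumference 2*pi*r.

def circumference_by_chords(r, n):
    """Sum the lengths of n straight chords inscribed in the circle."""
    step = 2 * math.pi / n
    pts = [(r * math.cos(i * step), r * math.sin(i * step))
           for i in range(n + 1)]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# Few large steps miss the curve; many small steps close on 2*pi*r:
assert abs(circumference_by_chords(1.0, 6) - 2 * math.pi) > 0.2
assert abs(circumference_by_chords(1.0, 10_000) - 2 * math.pi) < 1e-6
```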

What neither mathematicians nor physicists fully understand (though some inroads were made in abstract through Noether’s concepts of symmetry) is that each stœp of a method of solution is not ‘gratuitous’, but must be grounded in a real property of the 5D ∆ST symmetries and conservation laws of the Universe, which are not so many – hence the repetition of methods. Specifically: the aforementioned 3 paradoxes between ∆+1 curved closed worldcycles and the sum of lineal steps, which give birth to the most used method of lineal approximations; the equivalence between space and time in all stœps of dimotions, which gives birth to the method of separation of variables in differential equations and, more broadly, allows us to move relative space and time parameters around in equations joined by an operand of ‘equivalence’ (≈, not =); and the 2 conservation laws of the Universe – conservation of those ‘beats’ of existence, S=T, in relative present, eternal balance, justifying the equivalence operands; and conservation of the ‘volume of space-time’ of each plane of the Universe, by virtue of the 5D metric equation SxT=C, which justifies all the procedures regarding scales – solution of differential equations by separation of scales, renormalization procedures (Wilson) – and harmonizes those scales, allowing constant but balanced transfers of energy and information, St=Ts.

5D metrics expresses the conservation of time.

The paradoxes of Relativity, discontinuity, parts and wholes, and scales are all related to the reductionist nature of minds that bias reality. Minds reduce dimensions to the relevant ones, eliminating all dark spaces: continuity is the result. Of all the formal languages that map out reality, 2 are paramount: Time ¡logic & the mathematics of Scalar Spatial information.

A 5D Metric function, S(0-Mind) x T(∞-universe) = constant world, is the function of all mind languages, which only perceive from their self-centered point their language mirror, confused with the whole Universe (Ego paradox, basis of psychology). Ænthropic huminds reduce the multiple clocks of time and vital spaces of reality to the single human clock and spatial scale, rejecting the organic properties of other Universal systems.

The main laws of 5D are the metric functions of the scalar Universe, which relate the spatial size and speed of temporal clocks of all scales of Nature. Both parameters are inverted: when systems grow in size, the speed of their clocks, their ‘time cycles’, diminishes proportionally, both in biological and physical systems. And vice versa: smaller clocks tick faster, and the information processing carried by the frequency of those cycles accelerates, as it happens in chips, particles or life metabolism. So we write: S x T= C.

The mind thus starts it all with its linguistic ‘still mapping’, stopping its world in a locked ‘crystal image’, the measure of its self. But even perception is social, linguistic. The Universe can only be explained if ‘perception’ exists within the language: when you think words, you sense words; when your eye sees light and maps it into an electronic mapping, you are seeing. And when an atom maps a geometric image in its ‘locked’, ‘stopped’ spin, it must perceive that geometry as information.

Physicists made Galileo’s paradox the cornerstone of their theory of measure, but they failed to study the deep implications it has for every aspect of the structure of the Universe; from the duality between spatial, mental, linguistic forms and physical motions, to the balances achieved by the similarity of both space and time, which becomes the fundamental ‘function of present’, Si=Te, and hence, with the metric function of scales, \$ x ð = K, one of the two essential functions to formalize single planes (Si=Te) and multiple scales (SxT=K) of spacetime. Yet as Si=Te maximizes SxT=K (5×5>6×4), we unify both in 1 function:

Max. S x T = C, which defines for each fractal vital space-time organism its Function of Existence, as all species will try to maximize their motion-entropy-time for their field-limbs and their information-spatial states for their particle-heads, whose product will give us their vital reproductive energy. Moreover the function has an immediate biologic meaning: because we are made topologically of fields-limbs of lineal space, with motion provided by the energy we absorb to also reproduce our bodies-waves, and of the information we need to linguistically guide our motions with particle-heads, the very essence of survival is to increase our S=position, mental forms of space, and our T=entropic motions of time (whereas time=motion & space=form are the two limiting Dimotions, with energy=reproduction, s=t, locomotion, Ts, and information, St, as the 3 intermediate dimotions).

The fifth dimension is made of the ‘different co-existing scales’ which, from the simplest forces through particles, atoms, molecules, matter, organisms, super organisms, planetary systems and galaxies, create an ‘organic network structure’; which, amazingly enough, since it was discovered at the beginning of science with the telescope and the microscope, was not formalized till I introduced its metric function in the milieu of systems sciences, as a single lineal time motion is a dogma physicists don’t dare to challenge. Yet science cannot advance in its fundamental principles unless the formalism of the fifth dimension is accepted and used to fully understand the cyclical, repetitive patterns=laws of science of each discipline that studies a scale of the fifth dimension and its species.

Reproduction of form in 5D and its essential mathematical tool: calculus.

The Universe is a fractal that reproduces information, forms-in-action, forms of space with motions in time. This is the essence of it all. But space is a maya of the senses, the synchronous view of a series of cycles of time motions, knotted in the simultaneous perception of an observer; what physicists call a ‘frame of reference’.

Thus time=change is the fundamental element of reality, and this makes the Algebra of time-change, specifically calculus, perhaps the most important experimental science of time, besides logic, which we have upgraded to existential algebra, which explores the vital, organic whys of those changes.

It is the Galilean Paradox: S=T. We cannot distinguish time from form. In as much as each frame of reference or mind locks in a knot-mirror of the motions of the Universe from its point of view. So each point of space is a perceiver relative field of motions, which from its perspective knot as forces ‘attracted’ by its frame of reference. Yet if we cannot distinguish motion from form each point is entangled to those motions and is made of motion and form, of the particle and wave states.

Locomotion as reproduction of form solves the Paradoxes of Zeno and the meaning of discontinuity. Since motion is reproduction of information, of form: particles are knots of perception of form, fractal points, monads, that move by reproducing their information, as forms-in-action, in a lower 5D plane, as ∆-1 waves.

So all forms of change can be reduced to the ultimate function of existence, reproduction, a back and forth travel through 2 scales of the fifth dimension, as a form becomes a seed that reproduces, evolves socially and forms its whole again. Calculus extracts at the ∆-1 level a ‘finitesimal’ (Leibniz’s 1/n definition of the finitesimal as a minimal part of a whole and ALSO, by virtue of S=T, a minimal ‘curvature’ of a time cycle), which is then integrated for the time duration of the event – either a locomotion or a volume of population in space. Its extraordinary capacity to capture S=T continuous=smooth change in time happens precisely because calculus perfectly mimics the process of change and reproduction of form between the ∆º and ∆-1 scales, which is the basis of all time-change also in physics. Change thus is change reproduced in a lower plane, as a seed that evolves into a whole.
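Two readings of the ‘finitesimal’ above can be checked with a short numeric sketch of our own: Leibniz's 1/n as the minimal part of a whole (n parts of size 1/n integrate back to the unit whole), and 1/R as minimal curvature (integrating curvature 1/R along a full circle of length 2πR accumulates exactly 2π, one complete cycle).

```python
import math

# (1) Leibniz's 1/n finitesimal: n parts of 1/n integrate to the whole.
n = 1_000
parts = [1.0 / n] * n
assert abs(sum(parts) - 1.0) < 1e-9

# (2) 1/R as minimal curvature: integrating curvature along a circle
# of radius R (total length 2*pi*R) accumulates a full turn of 2*pi.
R = 3.0
arc = 2 * math.pi * R / n                   # finitesimal arc step
turning = sum(arc * (1.0 / R) for _ in range(n))
assert abs(turning - 2 * math.pi) < 1e-9    # one complete worldcycle
```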

It is then not so much in physics but in calculus where we find the strongest model of the laws of 5D and of locomotion as a reproductive process of form, even if the experimental proofs are scattered all over physics. Indeed, the entire world of quantum physics can only make sense if we consider that particles MOVE AS WAVES and gauge information as stopped particles; because waves can be transparent to each other but particles collide. A simple proof: the atomic nucleus is so small compared to its particles that if they didn’t move as waves, transparent to each other, they would always be colliding and the nucleus would never remain stable. In fact, when we get pictures of those particles outside their shells (electrons), they move in zigzag, as they stop and change motion constantly. As usual, physicists just make an axiomatic rule and subvert the law of causality, converting the mathematical mirror derived from the fact into the cause of the fact – in this case they say this is due to the Pauli exclusion principle, without providing the mechanism for particles to avoid collision if moving.

It follows that beings with more information reproduce slowly, and we can hardly see them moving. The limit of it is complex life superorganisms on Earth, whose reproduction takes 9 months. It happens ‘inside’ the reproductive mother, and it reproduces in the adjacent space after ‘tearing’ the topological knot of the umbilical cord. A similarly very slow process of reproduction happens in physics with the weak interaction, which reproduces a form with even more information, evolving the mass of particles; so the range of the force is minimal and the new particle appears adjacent to the one that disappears, dying for the newly hatched ‘baby’ to be born.

This is the essence of it all. Motion is reproduction of information, of form. Since particles are knots of perception of form, fractal points, monads, which move by reproducing their information through a lower plane of the 5th dimension, as ∆-1 waves, as forms-in-action, all forms of change can be reduced to the ultimate function of existence, reproduction: a back and forth travel through 2 scales of the fifth dimension, as a form becomes a seed that reproduces, evolves socially and forms its whole again.

Space is a Maya of the senses, the synchronous view of a series of cycles of time motions, knotted in the simultaneous perception of an observer; what physicists call a ‘frame of reference’.

Thus time=change is the fundamental element of reality, and this makes the algebra of time-change, specifically calculus, perhaps the most important experimental science of time, besides logic – which we have upgraded to existential algebra, which explores the vital, organic whys of those changes.

It is the Galilean Paradox: S=T. We cannot distinguish time from form, inasmuch as each frame of reference or mind locks in a knot-mirror of the motions of the Universe from its point of view. So each point of space is a perceiver of a relative field of motions, which from its perspective knot as forces ‘attracted’ by its frame of reference. Yet if we cannot distinguish motion from form, each point is entangled with those motions and is made of motion and form, of the particle and wave states.

Thus systems reproduce its form, travelling across scales of the fifth dimension: they reproduce a finitesimal form creating a reproductive wave, which integrated as a population of space give us back a whole.

Such discontinuous locomotion solves Zeno Paradoxes as the finitesimal is the limit of one ‘step’.

RECAP. Calculus studies functions of existence: it extracts finitesimals and integrates them as a reproductive wave. Hence the enormous value of calculus to reflect mathematically the laws of existential algebra.

The fundamental paradoxes of relativity (S=T) become then the ‘backbone’ that justifies the methods of calculus and differentials, as each ‘slice’ of integral calculus is ultimately an S=T stop and go process. Reality thus is the exhaustion method of calculus down to a finitesimal. Unfortunately creationism and the inflationary properties of languages became the ‘standard’ justification: finitesimals became infinitesimals, the discontinuous stop and go processes of reproduction of locomotion became a continuous ‘flow’, and so 0’ became 0, which does NOT exist. 0 is indeed the definition of no existence; 0’, the minimal existential quanta of any system of space-time.

TRILOGIC ON CALCULUS. CURVATURE OF SPACE; CHANGE IN TIME; FINITESIMALS IN SCALE.

The enormous advantage of algebraic dimotions and calculus over all other forms of study of motion – including physics, which can be considered basically the application of the mathematics of change to the study of nature – is that it can study all the elements and dimotions of the Universe from the ‘mind’s perspective’; that is, time-motions, space-change (volumes, lines, measures), scalar change and the entropic limits of reality. Let us briefly introduce those 3∆¡ problems, which in fact gave origin to calculus.

T-ime=Motion=Speed.

XVII C. science was concerned with problems of motion. Copernicus introduced the concept of an Earth rotating on its axis and revolving around the sun. The earlier theory of planetary motion, which presupposed an earth absolutely fixed in space at the center of the universe, was discarded. The theory of an earth in motion invalidated the laws and explanations of motion that had been accepted since Greek times, so new insights into why objects stay with the moving Earth were needed. All of these motions – those of objects near the surface of the earth and those of the heavenly bodies – take place with variable velocity, and many involve variable acceleration. But the branches of mathematics that existed before calculus was created were not adequate to treat them. So a new method was required, and that started up calculus.

S-pace= form=curvature.

The 2nd major problem of XVII C. mathematical physics was the determination of tangents to various curves. Its deeper significance is that the tangent to a curve at a point represents the direction of the curve at that point: small steps are lineal, open, free, but the long-term motion closes into itself. This key element of ‘scalar time’, which makes it easier to predict the longer life-death cycles and curved trajectories, is the key to the interplay between small-scale lineal tangent points and the large scale.

Its practical use was to find the best angle for the motion of a projectile shot from a cannon, since, if a projectile moves along a curve, the direction in which the projectile is headed at any point on its path is the direction of the tangent at that point. The invention of the telescope and microscope also stimulated great interest in the action of lenses. To determine the course of a light ray after it strikes the surface of a lens, we must know the angle that the light ray makes with the lens, that is, the angle between the light ray and the tangent to the lens. As the study of the behavior of light was, next to the study of motion, the most active scientific field of that century, the question of finding the tangent to a curve was a major one.

¬ Entropic Limits and ST-reproductive maximal

A 3rd class of problems besetting XVII C. scientists was that of maxima and minima. The motion of cannon balls obsessed Galileo, weapons master of the Venetian arsenal, who sought the determination of the maximum range. As the angle of elevation of a cannon is varied, the range – that is, the horizontal distance from the cannon to the point at which the projectile again reaches the ground – also varies. The question is: at what angle of elevation is the range a maximum? Another maximum and minimum problem arises in planetary motion. As a planet moves about the sun, its distance from the sun varies. What then are the maximum and minimum distances?

0’-finitesimal scales: All those problems required calculus, which was based on finding a finitesimal quantum of Time (1/T), space (1/R) or scale (1/N). Then, considering that the system suffered a ‘deterministic’ rate of change, defined by a mathematical function (which introduced the specific type of rate often associated to each of the 5 Dimotions of the Universe, St, Ts, ST, SS and TT), the scientist could ‘extend’ that rate of change into a ‘sum of spacetime stœps’, which gave us a lineal period in time, or continuous surface in space, over which the longer time period was calculated. The ‘magic outcome’ (as these simple foundations of calculus were messed up by creationist egocy) thus solved the essential problems of mathematics.
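The procedure just described – take a finitesimal quantum of the period, apply the deterministic rate of change, and sum the stœps – can be sketched numerically. The rate function below (uniformly accelerated motion) is only an illustrative assumption, not part of the text:

```python
# Illustrative sketch: rebuilding a whole period as a sum of finite
# 'stœps', each a rate of change times a finitesimal quantum 1/n.
def integrate_steps(rate, t0, t1, n):
    """Sum n finitesimal steps of size (t1-t0)/n of the given rate."""
    dt = (t1 - t0) / n          # the finitesimal quantum of the period
    total = 0.0
    for i in range(n):
        t = t0 + i * dt         # left endpoint of this stœp
        total += rate(t) * dt   # rate of change x quantum = one stœp
    return total

# Assumed example rate: v(t) = 9.8*t (free fall). The summed stœps
# approach the exact distance 4.9*t² as the quanta shrink.
distance = integrate_steps(lambda t: 9.8 * t, 0.0, 10.0, 1_000_000)
print(distance)  # ≈ 490.0
```

The smaller the quantum 1/n, the closer the sum of stœps comes to the smooth result of classic calculus.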

THE FINITESIMAL: Þ

∆-1: Leibniz’s definition of S=T Finitesimals: 1/n: minimal curvature. ∆-1 unit.

The key concept of 5D calculus is a finitesimal. A finitesimal in lineal space-time is a frequency step or wave-length. A finitesimal in curved spacetime is a minimal curvature of a clock cycle. A finitesimal in scale is a minimal unit of population.

But we use another term for any finitesimal: 0’; that is, a bit more than 0, which can be either a curvature, an ¡-1 unit of population or a frequency motion. 0’ is then the mental finitesimal – the minimal quantity in existence of a being that still remains the being; the seed, the mind of the species, the mother-cell that must therefore exist for any being, as the template which will develop the immanent program of exi»st¡ence, which does NOT need to be stored within the finitesimal 0’.

And inversely, as the reciprocal of SS-minds with no motion is TT-entropy with maximal motion, and the reciprocal of zero is infinity, the reciprocal of almost-0’ is immensity, which we write with the symbol ∝. The finitesimal of entropy is the largest domain of the being, beyond which the being dies. It is a real definition of the borders of the mind, as in its equation: 0’-mind x ∝ spacetime cycles of the Universe = K-World; 0’ x ∝ = K.

We thus talk of immensity as the entropic limit in which the being no longer is. And both obviously act in calculus as the limits of a definite integral. Thus we can define calculus with the 5 Dimotions of existence, since the finitesimals of ∆ST will be integrated between its two limits of SS and TT, to give us its whole ‘worldcycle’ in time, or ‘closed circle’ in space, sum of its ‘stœps’ or ‘curvatures’, or its ‘wholeness’ in scale, sum of its finitesimal parts, showing the deep entanglement and symmetry between ∆, S and T:

∆st.

We shall also use (as Windows does not let me put the proper glyph) œ as a symbol for immensity, which is the symbol of the whole superorganism – ultimately an alternative symbol for ∝, as the whole tends to be the limit of existence of the being, more exactly its world: 0’ x ∝ = œ.

The subtle difference being that ∝ is external to the being that perceives it, and has pure entropy – that is, it potentially feeds the creation of multiple kaleidoscopic monad-worlds – while œ is specific: it has been ordered to become a ‘whole: ab. œ’.

But let us not fancy ourselves too much with existential algebra, its profound paradoxes and symbols, and return to classic calculus.

I propose then 3 alternative symbols: L, for the classic limit, as the finitesimal of space; ƒ, for the finitesimal in time, as frequency; and þ, the symbol of a palingenetic cycle, for the finitesimal of scale. We will use ƒ for convenience, with þ as the least confusing symbol for all cases, and write as the general formula:

Þ = 1/¡, where ¡ might be, in classic mathematics, N, the whole population; T, the period; or R, the radius:

Þ(∆)=1/n;   þ(S)=1/R=K;   þ(T)=1/T=ƒ;   Þ(@)=0’;   Þ(¬)=∝

It is fascinating to observe that the 3 finitesimals of scale, space and time have the same equation in classic mathematics, 2 of them discovered by Leibniz; the guy who unlike Newton always ‘hit a target nobody sees’ (:

This of course is only the beginning; and as usual we shall ponder more the philosophical aspects of 5D calculus, leaving for ‘pros’ with imagination – with the humble realization that new beginnings are simple but always found by amateurs without the burden of knowledge – an entire new world of calculus, on whose surface I have just swum with unfocused diving glasses, awaiting the brave.

What is the 1 in the equations of finitesimals, a whole or a stœp?

Thus the infinitesimal does not exist – space being quantic, there will always be a limit, a micro-cycle of time or quantum of population in space, to signify the finitesimal point, as Leibniz rightly understood and defined it with a simple powerful form: 1/n.

Indeed, in the Universe finitesimals tend to be structured as in a Russian doll, such that the biggest wholes, n→∝, have the smallest finitesimals, 1/n→0’. But – and this is the incredible magic insight of Leibniz’s ¡n:finitesimal – 1/n is also the formula for a curvature in space, and, as S=T, for the minimal motion of a clock of time. So we do have a concept in ∆ST¡-1 that we shall then ‘integrate’ through a relative path with the finite limit of a worldcycle, where the function is meaningful (that is, has a value, ƒ(x,y)≥1/n) and relates ∆ and ∆-1 through its ‘stœps’ of change. We connect in this manner calculus to 5D reality, no longer based on human-invented, axiomatic concepts of absolute zeros and infinities, limits and the paradoxes enclosed within them.

The 0’ size is thus the finitesimal. In practice, we humans only observe a finitesimal from our mind perspective, whose minimal form is an h quantum of the Planck scale, and accordingly we see a Universe of inverse relative size, humans being in the ∆º middle view (at cellular level), as physicists wonder without realizing this is NOT a coincidence, but a natural law of the scalar, fractal organic structure of the Universe:

So we accept Leibniz’s concept of a finitesimal, as ALL organic systems have a minimal cellular quantum and a maximal enclosure, which in mathematics can be represented in the 0-1 finitesimal circle, closed above, as it becomes the 1 element in ∆-1 of the ∆º whole; which is represented by the equivalent 1-∝ graph, opened above into the wholeness of a larger Universe (but which will also have a limit, normally in the decametric logarithmic scale of the ∆º whole world embedded in the ∆+1 truly infinite Universe).

What the 3±¡ finitesimals of existence have in common is the 1 on ‘top’, as we can consider 0’ = 1/∝ and ∝ = 1/0’.

And it can mean two things: the whole, as 1=∝ in the 0-1 palingenetic Universe, or the 0’ finitesimal in the 1-∝ Cartesian domain. So we find that immensity can be a finitesimal.

If 1 is taken as the finitesimal, it becomes then a step in a curvature, which tends to be ‘lineal’, as all steps are discontinuous motions between two points, which can always be closed with a straight line.

So by definition the minimal curvature step is always a line – of infinite radius, hence finitesimal curvature (: And we need two steps to find a ‘real curvature’, whose maximal value, for a step back and forth, will be 360º.

If 1 is the finitesimal of time, frequency, it will be the minimal event, hence a closed time cycle. And as such it will be the sum of finitesimal curvature steps.

If 1 is the finitesimal of a population, then it will be its minimal meaningful part, often a ‘seed’ or mind-singularity, and n its whole population; and the smaller the finitesimal 1 is, the larger the n-population will be in the nested Universe, when we measure ∆-2 finitesimal ‘bites’ of energy-feeding.

But for a finitesimal to persist as a unit of population it must not be erased after a single ‘stœp’; so its cycle must be repeated in time.

So we realize there is a chain relationship between the 3 finitesimals, such that their reciprocals – a bit of space=form, a beat of time=cyclical motion and a bite (a piece) of population – are nested parts of larger wholes:

∑∑S = ∑T = ∆-1.

A deep result in both calculus and existential algebra, which can be said as follows:

Space is a slice of time, which is a slice of ∆-planes; and so we grow in dimensional motions and wholeness when we move from space/curvature steps to fulfill a whole time cycle, which however is just a frequency of memoryless form, that only when it persists by repeating its cycles in the same region of spacetime becomes the unit of population.

This growth of reality is essential to grasp the complex nature of calculus when we move beyond the first pages, dedicated to the analysis of its dimotions and operands, to the ODEs and PDEs of physical systems and beyond, which are also nested systems of complex dimotions in which finitesimals of scale, time and space are considered all together.

Unfortunately huminds, unaware of those symmetries and even of the simplest concepts of linearity, cyclicality and scale, just ‘calculate’ as if they were performing some magical trick; so our purpose will be to enlighten calculus philosophically, extracting general laws of reality from it, as calculus is by far the formal language humans have learned that comes closest to mimicking the laws of the Universe.

Indeed, essentially what a calculus operation does in its essential form, an ODE or PDE, is to find the closest thing to a finitesimal, which is a differential (a 0’ piece or ‘lineal step’, the closest possible piece of space to the piece of curvature around the derivative=tangent), and then integrate it over an interval of time, between its ‘original seed’ and its ‘entropic limit’.

This is the fundamental use of calculus today, because it was born in physics, which studies locomotion.

But calculus also works very often on ∆-1 finitesimals, which are integrated over a longer time, a whole in space, or a worldcycle in time with the same limits.

And then we realize such operations are exactly the inverse of the previous one. The finitesimal of scale is now the smaller part, which we integrate in a volume of space, or through a motion of time. Why is that possible? Because in each scale a new game of existence happens. This is why we wrote the equation as a double feed-back equation:

∑∑∑S ↔ ∑∑T ↔ ∑∆¡ ↔ ∆º = S

If the Universe had only one scale of spacetime, the first equation would hold. As it is made of planes of space-time, where each new whole becomes a quantum of a smaller scale, starting the game again, in operations where we differentiate and/or integrate twice we are emerging and descending through planes of existence. And this is what makes calculus so magic, as the second and third derivatives also have a full meaning. It is the rate of rate of change of space into time, into acceleration, into jerk. It is the growth of growth of a point into a line and a plane and a volume. It is a line that curves, and closes a cycle, and becomes a spiral of infinite curvature.

It might then be tempting to ignore altogether those symmetries and iterations of ∆ST across planes, which convert one into another; but that is the deeper structure of reality, even if most huminds are one-dimensional and get a headache thinking paradoxically.

And finally we have the limits; once we have scaled up and down as many times as required, we will still just be part of a whole. So limits exist and prevent us from doing what physicists do, going ‘from here to infinity’ and getting into all kinds of trouble – singularities and infinities that they eliminate and renormalize. All that gets them to real results, but if they had the proper understanding of finitesimals and immensities, they would use cut-offs earlier on for the ‘singularities’ of big-bangs, charges and masses, understand the wormholes that on those singularities just transfer energy and information between planes, and so on.

To show this will therefore be the second task of this paper as it grows and we enter into ODEs, PDEs and mathematical physics, sometime in the fall of 2019… But for the impatient one, a sample…

THE CURVATURE OF SPACE.

A Disomorphic example on how to understand homology of stiences.

The differences between classic calculus and 5D calculus are thus small, mainly conceptual, but on the ‘fringes’ they will have real consequences for understanding the paradoxes of physics and the philosophical foundations of mathematics. On techniques and what mathematicians like most, crunching numbers and equations, very little new at this stage is expected. And this is comforting, because, as I say, it is a summer day and I don’t want to write too much. But do not dismiss the paper, because this is real existential calculus. Consider the finitesimal of space, curvature. 1/R is the simplest one: the curvature of a circle. It follows that the straight line has a finitesimal curvature. But also that there must be an immense curvature – as now ‘absolute infinity’ does not exist, because a limited infinity is the reciprocal of 0’.

But how can curvature be immense, close to infinity? Mathematicians define the curvature of a curve as dφ/ds, where φ is the inclination of the tangent and s is the arc length measured from some fixed point.

This limits curvature; but physicists have the unresolved paradox that curvature in Relativity can be infinite, or rather immense. The solution? 5D (:

It is also ‘embedded’ in the complicated formulae and principles of relativity: the principle of equivalence between acceleration and an attractive vortex of mass; and Newton’s principle, which gives a change of motion, hence an acceleration, to any curved, non-lineal motion. Thus curvature and acceleration are similar concepts, and the more curved a ‘curve’ is, the faster its speed grows.

Those are, you might say, trivial results; after all, in physics, to maintain a satellite in orbit we need an acceleration, a=v²/R, hence a higher curvature requires a higher acceleration. But that is precisely the beauty of 5D: to reduce an astounding array of phenomena to the synoptic laws of 5D metric, S=T, SxT=C, and the symmetries of ∆=S=T.
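The identity between the curvature of the path and the acceleration it demands can be checked numerically. A minimal sketch (the circle and speed values are illustrative assumptions, not from the text): sample the tangent inclination φ along a circle, compute dφ/ds, and verify it equals 1/R, so that a = v²/R = v²·κ.

```python
import math

def curvature(R, theta, h=1e-5):
    """Numerical curvature dφ/ds of the circle x=R·cosθ, y=R·sinθ."""
    def phi(t):  # inclination of the tangent at parameter t
        dx = -R * math.sin(t)
        dy = R * math.cos(t)
        return math.atan2(dy, dx)
    dphi = phi(theta + h) - phi(theta)   # change of tangent inclination
    ds = R * h                           # arc length of the small step
    return dphi / ds

kappa = curvature(R=2.0, theta=0.5)
print(kappa)           # ≈ 0.5 = 1/R
v = 3.0
print(v**2 * kappa)    # ≈ 4.5 = v²/R, the centripetal acceleration
```

The same dφ/ds measured on a straight line gives a vanishing, ‘finitesimal’ curvature, consistent with the claim above that the line is the limit case.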

Thus we extend Einstein’s principle of equivalence between force, acceleration and the curvature of space=time from mass to charge – to both the ∆-1 charge and ∆+1 mass physical scales. It is then the concept implicit in Newton’s equivalent formulae, as G, or in Coulomb’s k factor.

This is also embedded in the solution to the corresponding differential equation, which the reader will notice has a second derivative – usually the symbol of acceleration. So we observe, as in many other cases, the ∆≈S≈T symmetries between functions in space, scale and time.

But in 5D we can define curvature, by the S=T symmetry, also as a measure of acceleration. This is possible because, as we diminish in size, according to the metrics of 5D, $ x ð = K (where $ is a symbol for lineal space and ð for cyclical time; in physical terms we could write L x ƒ = K, but as usual the symbols of 5D existential algebra are more general, so we use them in any science).

So when we carry curvature in space to acceleration in time, it becomes ‘frequency’, and curvature in space can take any angle above 2π=360º, or its equivalent in terms of length.

But those forces are conserved in physics. So, to understand what physics conserves, let us consider a 5D metric equivalent – a 2D vortex equation, VxRo=K. As the vortex diminishes in size it turns faster. In cyclical time, ð cycles of perception – which happen when the point returns to the memorial ‘singularity’ – happen more often, as its unit is the closing of a cycle.

The increase of curvature therefore implies an increase in the acceleration of the system, and both are indeed equal concepts: 1/R, ðT/ðS. But now, for a given space perimeter, a higher curvature=acceleration implies a shorter ‘unit of time perception’. So for the same spatial distance travelled, even at the same lineal speed, more time units have been consumed in a smaller time cycle, as we go down in size scales of the fifth dimension. Both angular speed and existential cycles accelerate, due to the vortex 5D metric: V(ð) x R($) = K. So a slow, large turning galaxy might shrink to the size of an atom, which lives a tiny fraction of time in a tiny size, but in fact the spatial distance traversed is roughly maintained. And indeed later we will see how, in the 3-dimensional space-time of the ∆±3 scales of the galatom, a beta decay is in 5D metrics equivalent in time duration to a 15 billion year quasar big-bang cycle, and the 5D metric of the proton equivalent to the Schwarzschild event horizon of the black hole.
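The vortex metric stated above, V(ð) x R($) = K, can be sketched in a few lines. Taking K = 1 in arbitrary units is an illustrative assumption of mine, not a measured value:

```python
# Sketch of the vortex metric V·R = K (K = 1 assumed for illustration):
# as the radius R shrinks, the tangential speed V grows, and the
# angular frequency ω = V/R = K/R² grows faster still, so smaller
# vortices close their time cycles more often.
K = 1.0
for R in (1.0, 0.1, 0.01):
    V = K / R        # tangential speed from V·R = K
    omega = V / R    # angular frequency: rate at which cycles close
    print(f"R={R}  V={V}  omega={omega}")
```

This is the quantitative sense in which, in the text’s terms, both angular speed and existential cycles accelerate as we descend in scale.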

The 5D metric conserves 2 things: the ‘energy’ volume of space-time of all scales and its beats of existence:

1. The worldline distance the being travelled – which is in fact a worldcycle distance, as it is the sum of all the perimeters, travelled slowly in the large spacetime, faster in the smaller spacetime. So the spacetime volume of the different ∆-scales is the same. And because energy is the only parameter used by huminds in all scales, the conservation of the total volume of space-time of the Universe is equivalent to the conservation of energy.
2. But to conserve the same ‘length of internal perception in existential beats’, the smaller being must live far less time. And indeed, the neutron cycle in beta decay is 15 minutes.

The galaxy cycle in a quasar big-bang cycle is 15 billion years.

They are two different curvatures, because the curvature-acceleration-force of the charge is an immensity compared to the almost-0’ curvature of the galaxy.

But alas! Both are the same. So we can unify both forces with the simple concept of curvature=acceleration=attractive force, as a faster, more curved sink will attract more strongly, like a hurricane.

Einstein derived his formalism from Poisson’s. It is more detailed because it is, according to the Galilean paradox (Si=Te), the spatial, still perspective of those vortices as a series of simultaneous derivative measures.

So it reduces the temporal, continuous Newtonian view of a spacetime vortex to an ∞ number of infinitesimal detailed pictures, focusing not on the speed but on the curvature of the vortex (which is the spatial definition of a moving cyclical speed – the faster it turns, the more curvature it has in ‘still mathematics’).

Let us do the maths in the simpler Newtonian formalism, whereby, by the paradox of Galileo, S (Curvature) = T (accelerated motion). So the Universal Constants (G, k) define the curvature of 2 space-time vortices at the ∆-1 quantum charge and ∆+1 cosmic mass scales (∆ is the symbol for the different ±¡ scales of the fifth dimension within a given organic system). The formalism of a vortex of time-space is then Newton’s Unification Function: M,Q = ω²r³/U.C.(G,k)

It applies to all vortices of time-space, from particles to planets to galaxies. For example, if we substitute the values of the Earth-sun system we obtain G (a 1st-ever theoretical deduction), and if we substitute the Bohr Radius and Proton Mass, we obtain k, with a 10³⁹ times higher curvature value – the exact difference between both forces, which solves the hierarchy problem. As curvature in space is symmetric to rotational speed in time, it is symmetric to the attractive force of any vortex. It works marvels when we translate the electromagnetic jargon to the Newtonian jargon. For example, it shows the ‘isomorphism’ (systemic jargon for an equal ‘form’ between scales) between atoms and galaxies, which are the H-atoms of the cosmic scale.

Since when we translate the electromagnetic function into gravitational mass vortices, the proton radius becomes the Schwarzschild radius of a black hole and its electronic orbitals its star clouds – a result foreseen by Relativity, which modeled galaxies as Hydrogen atoms in the Einstein-Walker Metric of the Cosmos.

Let us put in some easy numbers by substituting the parameters in that Unification Function for the values of the sun (mass) and earth (rotational speed and radius) to get G, which any high school student can do:
Sun mass = 2 × 10³⁰ kg; Earth’s angular velocity = 2 × 10⁻⁷ rad per sec; Earth’s orbit = 150 million km. Result: G = 6.67 × 10⁻¹¹ kg⁻¹ m³ rad sec⁻²
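The high-school substitution above can be checked directly. Solving the Unification Function M = ω²r³/G for G, with the Sun-Earth numbers exactly as quoted in the text:

```python
# Checking the substitution: G = ω²·r³/M with the quoted Sun-Earth
# values (numbers as given in the text, not precision data).
M_sun = 2e30    # kg, mass of the sun
omega = 2e-7    # rad/s, Earth's orbital angular velocity
r = 1.5e11      # m, Earth's orbital radius (150 million km)

G = omega**2 * r**3 / M_sun
print(G)        # ≈ 6.75e-11, close to the measured 6.67e-11 m³ kg⁻¹ s⁻²
```

The small discrepancy with the measured value comes only from the one-significant-figure inputs.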

This is standard gravitational theory. What has never been done, because the fractal systemic view of the fifth dimension was not known till recently, is to substitute in the same function of gravitational cosmological masses the mass radius and speed of the space-time vortex by the values of the fundamental quantum space-time vortex, a hydrogen atom/charge.

If the thesis of a fractal universe made of hierarchical scales is truth, then those values should give us the value of the universal constant of charges, the Coulomb constant.
Indeed, if we substitute the values of the proton (mass) and the Bohr electronic orbital (speed and radius):
ω (electron) = 4 × 10¹⁶ rad sec⁻¹; Bohr radius = 5.3 × 10⁻¹¹ m; proton mass = 1.6 × 10⁻²⁷ kg.

Then we get a G(k) which is 2×10³⁹ times stronger than the gravitational constant; thus the hydrogen atom behaves as a self-similar fractal scale, in the quantum world, of a solar system.
And then you can also get the electron radius expressed in the jargon of a quantum gravitational world using the translated ‘Gravitational Coulomb constant’: G(k)M/c².
Since in that expression M is the mass of a proton, G(k), the electromagnetic constant, is a gravitational constant, and c is light speed, that expression is exactly the Schwarzschild radius of a quantum black hole.
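The hydrogen-atom substitution and the two claims that follow from it (the ~10³⁹ ratio and the femtometer-scale G(k)M/c² radius) can likewise be checked with the numbers exactly as quoted in the text:

```python
# Same substitution with the quoted hydrogen-atom values, to obtain
# the 'translated' constant G(k), its ratio to G, and the text's
# G(k)·M/c² radius (all inputs as given in the text).
omega_e = 4e16       # rad/s, electron angular frequency
r_bohr = 5.3e-11     # m, Bohr radius
m_proton = 1.6e-27   # kg, proton mass (text's rounded value)
G = 6.67e-11         # gravitational constant from the previous step
c = 3e8              # m/s, light speed

G_k = omega_e**2 * r_bohr**3 / m_proton
print(G_k)                    # ≈ 1.5e29
print(G_k / G)                # ≈ 2.2e39, the claimed hierarchy ratio
print(G_k * m_proton / c**2)  # ≈ 2.6e-15 m, femtometer (proton) scale
```

So the ratio of the two ‘curvature constants’ does come out at the order of 10³⁹ quoted above, and the G(k)M/c² expression lands at the femtometer scale of the proton radius.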

Thus, the electron Bohr radius, which is the final radius of minimal size and energy in electrons, is isomorphic to the event horizon of a black hole in the quantum gravitational world.

Those results (more than a decade old) are a first theoretical deduction of ke starting from G, and the enormous simplification of the parameters of the electron radius, until arriving at the same expression as a black hole radius, cannot be by chance. They are mathematical deductions, one of the three standard forms of proof in science.

Yet a theoretical calculation of those values cannot be exact ‘by chance’, unless the theoretical model behind it – the fractal, self-similar structure of all physical systems as $T (Space population) x ð§ (Temporal frequency) entities – is right. Thus the previous calculation is a clear proof that both charges and masses are unified as values of the same type of space-time vortices in the 2 different scales of space-time of the Universe. And they are geometrically unified from the p.o.v. of geometrical relativity, not from quantum theory, as Einstein wanted it.

Galaxies (Galaxies≈Atoms) thus resolve the philosophical question of how many 5D scales exist, as we find enough self-similarity to ‘run again’ another game of fractal scales (not identical but self-similar, as in a Mandelbrot fractal), both by quantitative and qualitative methods, between the atom and the galaxy. A question that might be extended to the ST dualities of open ‘entropic strings’ and closed ‘cyclical informative strings’, in a possible larger and smaller scale of microscopic strings and superstrings:

Ouroboros, the Universal Snake, bites its tail on the string quantum and cosmological self-similar scales, as perceived from the human ∆o mind. Philosophy of stience would then argue that those scales are real, but part of their self-similarity is mental: that is, the loss of information in the perception of scales makes humans extract the same information from the upper and lower 10±30 scales.

Alas! In this showcase of multiple meanings, jumping from mathematics to physics to metaphysics, solving questions sought for centuries in classic science, we show the essential nature of 5D – not so much crunching numbers but ‘seeing’ what nobody sees.

As I improve the papers, we will focus better on the equations already resolved, which are vastly more profound than people think.

WORLDCYCLE OF EXISTENCE.

In the next graph, repeated ad nauseam in these papers, we see the essence of the process of a worldcycle of existence: the creation of a finitesimal form which will reproduce and then collapse into a superorganism.

All that exists is a supœrganism of vital space tracing a 0-sum worldcycle of time through 3 scales of the 5th dimension: born as a seed of fast time cycles in a lower 5D scale (∆-1: Max. T x Min. S), emerging as an organism in ∆o, living 3 ages of increasing information as its time clocks slow down in its ∆+1 world, to die in a time quantum back to ∆-1. Yet the maximal point Si=Te, where reproduction happens, defines the classic age: maturity, beauty, balance, survival of the system – all disomorphic jargons.

The 3 ages of life emerge in human social superorganisms as the 3 ages of cultures and its 3 artistic styles: Min.S x Max.T (infantile, epic, lineal art, as in the trecento or Greek kouroi); Si=Te (balanced beauty, when form and size are in balance – the classic, mature age); and Max.S x Min.T (baroque, the 3rd age of a civilisation, whose subconscious mind is the art of its ‘neuronal artists’, the age of maximal form and angst for a no-future, which is the age of war and death of cultures).

We talk of 3 ∆±1 scales of worldcycles, as the being lives in a placenta, then emerges as an organism in a world:

þ: 0-1: its palingenetic 0-1 social evolution in the accelerated time sphere of existence, till becoming 1 (the 0-1 bounded unit circle in ¡logic mathematics; the quantum probability sphere of particles in physical systems; the palingenetic fetal age in biologic systems; the 0-9 memetic learning childhood in social systems). It is the highly ordered worldcycle, as the ‘placental mother-energy world’ is nurturing, and memorial cyclical spacetime has erased errors of previous generations.

– c: The outer 1-∞ world, in which it will deploy its 2nd worldcycle of existence in an environment which is open, entropic (1-∞ hyperbolic unbounded Cartesian plane in ¡logic mathematics; thermodynamic entropic statistical molecular populations in physics; Darwinian struggle between populations in biology; idol-ogic dog-eat-dog capitalist, nationalist competitive eco(nomic)systems in the super organisms of history). In this 1-∞ existence the worldcycle is not ensured to continue, as the entropy of the world system can cut it off.

ω: The existential life cycle, though it is part of a larger world of hierarchical social scales (§ D¡), where it performs 5 survival actions through ∆±4 Planes self-centered in its mind, beyond which it can no longer perceive, to become, if successful, a new superorganism of the infinite planes of God, the game of existence.

In graph, physical, biologic & social worldcycles show to which extent 5D laws enlighten our understanding of reality. Matter states are physical time ages, from left pure solid, crystal, §top state, to an even more solid ∆+1 boson condensate, etc. We see that systems either move a step at a time within a plane of existence (gas, liquid, solid) or they can jump « two states at once, within that plane (as in the case of sublimation), or most often between two planes (as in « scattering & entropic death), to become a different Dimotional state. We can then see how the fundamental elements of 5D time appear on the graph: the worldcycle is local and complete. There are 2 inverse arrows from an entropic past (plasma), in a lower plane (ion particles), to the 3 ages of the matter states with increasing form (gas to solid), to end in a higher plane of existence as a Bose-Einstein condensate. Do those worldcycles happen for the whole Universe? (cyclic big-bang). Unlikely…

It is then clear that calculus is the closest mathematical mirror of the commonest process of time-change: the creation of finitesimals that reproduce in clonic waves forming ‘spatial organic systems’ in the most complex worldcycle, or mere herds, or locomotions imprinting information in a lower field of entropic space – you name it. As we study mathematical physics with calculus, we shall be commenting precisely on the unity of all processes of calculus – a process of finding finitesimals to integrate through time locomotions or spatial populations, mimicking the essence of time=change: the reproduction of finitesimal parts into wholes.

The next question is then how to write the worldcycle of existence in calculus; which is self-evident: ∆st = 0

Whereas the whole trajectory of a T.œ=∆ST in space and time will finally become a zero sum, a function of existence does vary its ‘rate of change’ as the system goes through 3 ages, changing its parameters of spatial form and temporal motion, represented in calculus by the S and T parameters.

Its usefulness becomes then more clear when we consider the standing points and draw the function=worldcycle of existence in terms of the SxT existential momentum of its 3 ages.

In praxis human calculus will often deal with the ∆-scalar main parameter that defines the system for each scale (mass, temperature, momentum)… embedded in an outer spacetime world. And so calculus, as humans practice it, is not so much about the function of existence but a ‘partial’ analysis of a ‘dimotion of existence’ performed by an ∆0 being in a larger ∆+1 world: ∆º ∂st+1 = 0’. That is, we take a T.œ, defined in scale, space and time, and study its minimal finitesimal change, a relative zero (in physics using Lagrangians), and then integrate it through a ‘lineal sum-period’.

Calculus on the function of existence.

Since I haven’t told you this one thousand one night-mare times (: the function of exist¡ence is all. You are a repetitive fractal of space-time and your purpose is to exist, to conserve your time; but your time is just the form of information, your vital space, reproduced in all the scales that rise from the bottom line of your gravitational and light spacetime, going upwards into scales. Reproduction is the game. But the worldcycle makes errors in the reproduction of your i-logon, and those errors, seen statistically in space as a normal distribution and in time as a repetitive sequence of actions and events, slowly wear you down; as the errors of copying information repeat and accumulate, your function of existence loses freshness and you age.

So because in each stœp of your existence you repeat your sequential actions, each derivative is one of such stœps: a zigzag of up-information, right-motion, up-information, right-motion, whose tangent is the derivative of each quanta of your time. All this said, we can study the worldcycle with calculus. In fact the best way to study the worldcycle is with calculus, in the orthogonal graph of information and motion, stop and step, particle and wave state, up and right, up and right, as you age: first rising fast, young and bold, reaching higher accelerations in your second derivative; as space is time, the curve represents in its form of space its motion of time, and that is your first derivative, seeking a standing point of constant speed. But that is not possible, because speed is reproduction, and you reproduce your form with lesser skill past the prime time of your standing point, the maximal and minimum not far before, not long ahead:

Let us remember the general laws for any possible function of existence:

We draw the ‘existential momentum’, SxT, of the system on the left side, and the lineal time of the system, T, on the bottom side.

So sinusoidal bell-curve functions represent a worldcycle, though the symmetry is broken at the moment of entropic death, when the collapse is extreme, in a ‘falling line’, as death happens in a single moment of time:

A key theme of vital mathematics is the representation of a worldcycle in lineal time, with ± exponentials and its inverse, the logarithmic curve, around the key points of change of phase… as growth of ‘entropy-motion’ diminishes. So we move from the ‘adolescence’ of maximal growth of both parameters (sT energy and sT information) to the y”=0 point of youth, where the logarithmic part grows slower. Together they form one half of the total graph of a cycle of existence, till reaching the y’=0 point of Max. (S≥≤T), which then becomes negative, with a decay of the whole system in two negative curves.
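The standing points named above (y’=0 at the maximal point, y”=0 at the changes of phase) can be located numerically. A minimal sketch, under the assumption that the SxT curve of a worldcycle is modeled by a Gaussian bell – a convenience, not the exact function of existence:

```python
import numpy as np

# Hypothetical model: the SxT 'existential momentum' of a worldcycle as a
# Gaussian bell curve, centered on the mature age t=5 with width sigma=2.
c, sigma = 5.0, 2.0
t = np.linspace(0, 10, 100001)
y = np.exp(-(t - c)**2 / (2 * sigma**2))

# First and second numerical derivatives.
y1 = np.gradient(y, t)
y2 = np.gradient(y1, t)

# y' = 0: the single maximum (the Si=Te point of maturity).
t_max = t[np.argmax(y)]

# y'' = 0: the two inflection points where growth stops accelerating
# (end of youth) and decay starts accelerating (start of the 3rd age);
# analytically they sit at c - sigma and c + sigma.
sign_change = np.where(np.diff(np.sign(y2)))[0]
t_inflect = t[sign_change]

print(t_max)        # near 5.0
print(t_inflect)    # near [3.0, 7.0]
```

The same procedure applies to any bell-shaped ∆st function: the extremum and the two inflections cut the curve into the ages of the cycle.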

The conservation of time in its 5 standing points where y’ or y”=0, which define the 5 SS, Ts, ST, St & TT moments of generation, youth, maturity, 3rd age and entropic death, thus becomes the essential set of points (maxima and minima) of the equations of calculus, the sinusoidal function of existence and all its derived elements.

Let us suppose that on a certain interval a≤t≤b we are given a function S = f(t) which is not only continuous but also has a derivative at every point. Our ability to calculate the derivative enables us to form a clear picture of the graph of the function. On an interval on which the derivative is always positive the tangent to the graph will be directed upward. On such an interval the function will increase; that is, to a greater value of t will correspond a greater value of f(t). On the other hand, on an interval where the derivative is always negative, the function will decrease; the graph will run downward.

We have drawn the graph of an ∆st function of the general form, S (any dimension of a whole worldcycle or T.Œ) = f(T) – any time motion or action.

It is defined on the interval between a minimal quanta in space or time (t1) and its limit as a function (d).

And it can represent any S=T duality, or more complex 5Ds=5Dt forms or simpler ones. We can also change the s and t coordinates according to the Galilean paradox, etc. Hence the ginormous number of applications; but essentially it will define a process of change in space-time between the emergence of the phenomena at ST1 and its death, mostly by scattering and entropic dissolution of form, at d.

And in most cases it will have a bell-curved form of fast growth after emergence in its first age of maximal motion (youth, 1D), till a maximal point where it often will reproduce into a discontinuous parallel form (not shown in the graph) at Max. S x Max. T, which will provoke its loss of energy and start its diminution till its extinction at point d.

Thus the best way to express quantitatively, in terms of S-T parameters (mostly information and energy), any worldcycle of any time-space superorganism is a curve where we can find those key standing points in which a change of age, st-ate or motion happens.

Of special interest thus are the points of this graph whose abscissas are t1, t2, t3, t4, t5.

At the point t0 the function f(t) is said to have a local maximum; by this we mean that at this point f(t) is greater than at neighboring points; more precisely, f(t0) ≥ f(t) for every t in a certain interval around the point t0.
A local minimum is defined analogously. For our function a local maximum occurs at the points t0 and t3, and a local minimum at the point t1.

At every maximum or minimum point, if it is inside the interval [a, b], i.e., if it does not coincide with one of the end points a or b, the derivative must be equal to zero (0’).

This last statement, a very important one, follows immediately from the definition of the derivative as the limit of the ratio ΔS/ΔT. In fact, if we move a short distance from the maximum point, then ∆S≤0.

Thus for positive ΔT the ratio ΔS/ΔT is non-positive, and for negative ΔT the ratio ΔS/ΔT is non-negative. The limit of this ratio, which exists by hypothesis, can therefore be neither positive nor negative, and there remains only the possibility that it is zero.

By inspection of the diagram it is seen that this means that at maximum or minimum points (it is customary to leave out the word “local,” although it is understood) the tangent to the graph is horizontal.

At the points t2 and t4 the tangent is also horizontal, just as it is at the extremal points t0, t1, t3, although at t2 and t4 the function has neither maximum nor minimum. In general, there may be more points at which the derivative of the function is equal to zero (stationary points) than there are maximum or minimum points.
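The sign argument above – that ΔS/ΔT is non-positive on one side of a maximum and non-negative on the other, squeezing the derivative to zero – can be checked on a hypothetical function with a known maximum:

```python
# A hypothetical S(t) with a local maximum at t = 2.
def S(t):
    return -(t - 2.0)**2 + 4.0

t0 = 2.0          # the maximum point
dT = 1e-3         # a small step of time

# Moving a short distance from the maximum, ∆S ≤ 0 in both directions:
dS_right = S(t0 + dT) - S(t0)    # ∆T > 0
dS_left  = S(t0 - dT) - S(t0)    # ∆T < 0

ratio_right = dS_right / dT      # non-positive
ratio_left  = dS_left / (-dT)    # non-negative

print(ratio_right, ratio_left)   # both squeeze toward 0 as dT shrinks
```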

One of the simplest and most important applications of the derivative in that sense is in the theory of maxima and minima.

Criteria for maxima and minima; study of the graphs of curves.

If throughout the whole interval over which x varies the curve is convex upward, and if at a certain point x0 of this interval the derivative is equal to zero, then at this point the function necessarily attains its maximum; and its minimum in the case of convexity downward. This simple consideration often allows us, after finding a point at which the derivative is equal to zero, to decide thereupon whether at this point the function has a local maximum or minimum.

Now, the apparently equal nature, on a first derivative, of the minimal and maximal points of a being has also deep philosophical implications, as it makes often indistinguishable at ‘first sight’ the processes of ‘reproductive expansion’ towards a maximal and of explosive decay into death – the ‘two reversal’ points of the 5D (maximal) and 4D (minimal) states of a cycle of existence – for which we have to make a second assessment (second derivative) to know if we are in the point of maximal life (5D) or maximal death (4D) of a worldcycle. And to know if the cycle will cease in a continuous flat encephalogram or will restart a new upwards trend.

Or in other words: is any scalar e>cc>m big-bang both the death and the birth of matter?
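The ‘second assessment’ is the standard second-derivative test: both a peak and a trough have a horizontal tangent, and only the sign of the second derivative separates them. A sketch on a hypothetical function with one of each:

```python
# f' = 3t^2 - 3 vanishes at t = ±1: two stationary points a first
# derivative cannot tell apart.
def f(t):
    return t**3 - 3*t

def d2f(t):               # second derivative, f'' = 6t
    return 6*t

# Second assessment: the sign of f'' separates the two cases.
for t0 in (-1.0, 1.0):
    kind = "maximum" if d2f(t0) < 0 else "minimum"
    print(t0, kind)       # -1.0 maximum, 1.0 minimum
```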

Finitesimal Quanta, as the limit of populations in space and the minimal action in time.

So behind the duality between the concepts of limits and differentials (Newton’s vs. Leibniz’s approach) there is the concept of a minimal quanta in space or in time, which has hardly been explored by classic mathematics in its experimental meaning, but will be the key to understand ‘Planckton’ (H-Planck constants) and its role in the vital physics of atomic Planes.

It is then essential to the workings of the Universe to fully grasp the relationship between Planes and analysis; both in the down direction of derivatives and the up direction of integrals, and in its parallelism with polynomials, which rise through the dimensional Planes of a system in a different, ‘more lineal, social, inter-planar way’.

So polynomials and limits are what ¬Algebra is to calculus; space to time and lineal ¬Algebra to curved geometries.

The vital interpretation though of that amazing growth of polynomials is far scarier.

Power laws, by the very fact of ‘being lineal’, maximise the growth of a function and ARE NOT REAL in the positive sense of infinite growth – a fantasy only taken seriously by our economists of greed and infinite usury debt interest… where the eˣ exponential function first appeared.

The fact is that in reality such exponentials only portray the decay and destruction of a mass of cellular/atomic beings already created by the much smaller processes of ‘re=product-ion’, which is the second dimension, mostly operated with multiplication (of scalars or anti-commutative cross vectors).

So the third dimension of operands is a backwards motion – a lineal motion into death; only as the reversal of the growth built by sums and multiplications does the polynomial make sense of its properties.
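The claim that exponentials are real only as decay can be sketched numerically: read the exponential not as unbounded growth but as the dissolution of an already-created mass of parts, losing a fixed fraction per unit of time. The population size and rate below are arbitrary assumptions for illustration:

```python
import math

# Hedged sketch: decay of an already-created mass of N0 cellular/atomic
# parts, N(t) = N0 * e^(-k t) -- the negative twin of unbounded e^(+k t).
N0, k = 1_000_000.0, 0.1

def decay(t):
    return N0 * math.exp(-k * t)

# Each unit of time removes the same fraction of what remains,
# so the mass halves every log(2)/k units.
half_life = math.log(2) / k           # ~ 6.93 time units
print(decay(0.0), decay(half_life))   # 1000000.0, ~500000.0
```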

Let us then see how the operations mimic the five dimensions, beyond the simplest ST, SS and TT steps, namely reproductive and 4D-5D inverted arrows.

We can establish as the main parameter of the singularity its time frequency, which will be synchronised to the rotary motion or angular momentum of the cyclical membrane. They will appear as the initial conditions and boundary conditions of a derivative/integral function, which often will be able to define the values of the vital energy within, as the law of superposition should work between the 3 elements.

Determination of the greatest and least values of a function.

In numerous technical questions it is necessary to find the point t at which a given function f(t) attains its greatest or its least value on a given interval.

In case we are interested in the greatest value, we must find the t0 on the interval [a, b] for which, among all t on [a, b], the inequality ƒ(t0) ≥ ƒ(t) is fulfilled.

But now the fundamental question arises, whether in general there exists such a point. By the methods of modern analysis it is possible to prove the following existence theorem:

If the function f(t) is continuous on a finite interval, then there exists at least one point on the interval for which the function attains its maximum (minimum) value on the interval [a, b].

From what has been said already, it follows that these maximum or minimum points must be sought among the “stationary” points. This fact is the basis for the following well-known method for finding maxima and minima.
First we find the derivative of f(t) and then solve the equation obtained by setting it equal to zero.

If t1, t2, ···, tn are the roots of this equation, we then compare the numbers f(t1), f(t2), ···, f(tn) with one another. Of course, it is necessary to take into account that the maximum or minimum of the function may be found not within the interval but at an end (as is the case with the minimum in the figure), or at a point where the function has no derivative.

Thus to the points t1, t2, ···, tn, we must add the ends a and b of the interval and also those points, if they exist, at which there is no derivative. It only remains to compare the values of the function at all these points and to choose among them the greatest or the least.
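The method just described – gather the stationary points, add the interval ends, and compare values – is a short algorithm. A sketch on a hypothetical f(t) whose stationary points are known in closed form:

```python
# Closed-interval method on f(t) = t^3 - 3t over [a, b] = [0, 2]:
# collect stationary points inside the interval, add the ends, compare.
def f(t):
    return t**3 - 3*t

a, b = 0.0, 2.0
# f'(t) = 3t^2 - 3 = 0  ->  t = ±1; only t = 1 lies inside [0, 2].
stationary = [t for t in (-1.0, 1.0) if a < t < b]

candidates = stationary + [a, b]
values = {t: f(t) for t in candidates}

t_least = min(values, key=values.get)
t_greatest = max(values, key=values.get)
print(t_least, values[t_least])        # 1.0 -2.0 (interior stationary minimum)
print(t_greatest, values[t_greatest])  # 2.0  2.0 (maximum at the end point b)
```

Note how the greatest value here falls at the end point b, not at a stationary point – the case the text warns about.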

With respect to the stated existence theorem, it is important to add that this theorem ceases, in general, to hold in the case that the function f(t) is continuous only on the open interval (a, b); that is, on the set of points t satisfying the inequalities a < t < b.

It is then necessary to consider an initial time point and a final time point, birth and death, emergence and extinction to have a determined solution.

Derivatives of higher orders.

We have just seen how, for closer study of the graph of a function, we must examine the changes in its derivative f′(x). This derivative is a function of x, so that we may in turn find its derivative.

The derivative of the derivative is called the second derivative and is denoted by y′′ = ƒ′′(x).

Analogously, we may calculate the 3rd derivative y′′′ = ƒ′′′(x), or the derivative of nth order. But as there are no more than 3 ‘similar derivatives with meaning’ in time (speed, acceleration, jerk) or space (distance, area and volume), beyond the 3rd derivative the use of derivatives is only as an approximation to polynomial equations, whose solvability itself is not possible by radicals beyond the 4th power.

So it must be kept in mind that, for a certain value of x (or even for all values of x), this sequence may break off at the derivative of some order, say the kth; it may happen that f⁽ᵏ⁾(x) exists but not f⁽ᵏ⁺¹⁾(x). Derivatives of arbitrary order are therefore connected to the symmetry between power laws and ∫∂ operations in the 4th and inverse 5th Dimension, through the Taylor formula. For the moment we confine ourselves to the second and third derivatives for ‘real parameters’ of the 3 space volumes and time accelerations.

The second derivative has then, as we have seen, a simple significance in mechanics. Let s = f(t) be a law of motion along a straight line; then s′ is the velocity and s″ is the “velocity of the change in the velocity” or more simply the “acceleration” of the point at time t. For example, for a falling body under the force of gravity, s = gt²/2, so that s′ = gt and s″ = g; that is, the acceleration of falling bodies is constant.
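The falling-body example can be verified with finite differences: differentiating s = ½gt² once gives the velocity gt, and twice gives the constant acceleration g.

```python
# Standard mechanics example: s = (1/2) g t^2, with g ~ 9.8 m/s^2.
g = 9.8

def s(t):
    return 0.5 * g * t**2

def deriv(f, t, h=1e-5):
    # central-difference approximation of f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

v = lambda t: deriv(s, t)    # s'  -> velocity g*t
a = lambda t: deriv(v, t)    # s'' -> acceleration

print(v(1.0), v(2.0))        # ~ 9.8, 19.6
print(a(1.0), a(3.0))        # both ~ 9.8: constant, as stated
```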

Significance of the second derivative; convexity and concavity.

The second derivative also has a simple geometric meaning. Just as the sign of the first derivative determines whether the function is increasing or decreasing, so the sign of the second derivative determines the side toward which the graph of the function will be curved; in terms of time it marks, on the curve of existence, the moment that no longer accelerates its growth, hence the end of youth; and vice-versa, the moment in which it does accelerate its decay, thus the beginning of the third age.

So we can consider the same concept in the ‘discrete’ baguas of life cycles, as it is NOT a mere ideal curve but one that does happen in all forms of life. This simple law has deep consequences because it is essential to the worldcycle:

Suppose, for example, that on a given interval the second derivative is everywhere positive. Then the first derivative increases and therefore f′(x) = tan α increases and the angle of inclination of the tangent line itself increases. Thus as we move along the curve it keeps turning constantly to the same side, namely upward, and is thus, as they say, “convex downward.” On the other hand, in a part of a curve where the second derivative is constantly negative the graph of the function is convex upward.

Because it is the clear proof of what is all about: reproduction in space of frequencies of time.

The function is more than its equation – A path of existence through the whole plane.

The function of existence is the whole plane divided by the line that must be grown by the non-E method of rising points into curves of motion, which divide an energy-information plane in an act of creation with a path in S=T, the path of present, through squares in which information and energy are orthogonal.

We have found thus the simplest space-time curve, the S=T curve of existence, between an integral 1/3rd of the plane in the path and a 2/3rds Lebesgue integral, so to speak, of the external/internal path of the curve.

The curve is thus a point in motion, equivalent to a line of distance, equivalent to a ratio between 2 parts, 2/3rds to the left and 1/3rd to the right. But the beauty of it is that we take from the curve square points.

In the graph, a classic Taoist representation of the 3 ages of life and its inverse parameters of youth (max. energy) and old age (max. information), represented by the triads of the I Ching; and a modern graph of duality showing those parameters as a semi-cycle, which in certain simple beings like light are in fact both the ages of time of a physical wave and its form in space, as the light quanta, h=exi, is indeed both our basic cycle of time and surface of energetic space of which all are made.

One of the oldest graphs, from the ’92 book ‘The Error of Einstein’, a pioneer work on 5D physics, is the understanding of a Galaxy as a representation of the game of existence, and its deep metaphysical implications in terms of mental spaces, which summarizes a huge metaphysical thought regarding the way a mind perceives a mental space:

Minds diminish the information they observe from reality as reality becomes further away in ∆ST distances (scale, form or motion), to a point in which they only perceive the purest forms of mental space, which are the waves of existence in cyclical form (our perception of the galaxy), lineal form (our perception of the light of the quantum scale), or scalar form (fractal perception of networks in hyperbolic space).

In the graphs above and below, the minimal reality is a 3D² form seen in a single plane, with a singularity @-mind, a membrane and a vital energy within. When we make a holographic broken image of this reality, the simplest way to do it is in four Cartesian regions, TT, ST, ts, and ss, which correspond to the +1 +1, +1 -1, -1 +1 and -1 -1 quadrants of the plane.

It is then, when the Lebesgue inverse function matters to integrate the Y perspective of the S=T symmetry, that the function – now taken as a topological partition of a vital motion of a wave of similar particles that will collapse at the end of its journey through a plane of the fifth dimension – takes place.

The least path action implies though that the end of the path taken in the Cartesian but also the imaginary plane collapses in the same point, regardless of how many paths have been taken in the ‘compressed’ i-plane, where the co-existence of paths in particle space has sunk the plane to a √ root value for the dense line, which then can be even further reduced to a point that will potentially trace a full s=t, valuing in present time the space-time dilation of the integrated.

The wave form as an integral expression of the function of existence.

How many possible forms might the function of existence acquire? The answer, which might surprise the reader, is that depending on the number of parameters, the duration in time of our analysis and the type of dimotion studied – from smaller steps of a single dimotion to the whole worldcycle and all the sequential dimotions of a T.œ – there are ‘infinite solutions’, as all equations are ultimately ‘partial equations’ of the fractal generator, S⇔T.

Consider the commonest form of the Universe, a wave. If we consider that y measures NOT the value of ST of the system as a constant ‘volume’ of existential momentum, but the value of its ‘degree of increase or decrease’ at each moment of time, hence y’ over x(t), we obtain the exact form of a wave: a first half-wave in which the growth from youth to maturity constantly diminishes but is still positive, till the middle point of maturity at y’=0, where the growth starts to be negative; followed by a fast decline as we age, till a maximal point of ‘degeneration’, where we normally die by sudden sickness; but if we overcome that point, somewhere around the 70 years of age, we will have a slowdown of our aging, towards a point of no ‘change at all’ – the point of death, when we simply disappear from this plane of space-time existence.

Thus when we perceive a wave of light, we are in fact perceiving time=change, and creating a mental space of the life-death cycle of a single photon, as space is just the memorial tail of our slow time perception.
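The claim that plotting the rate of change y’ of the bell-shaped life curve yields a wave can be sketched numerically (again modeling the curve as a Gaussian, an assumption of convenience): positive while the system grows, zero at maturity, negative through the decline.

```python
import numpy as np

# Bell-shaped SxT curve of a worldcycle (modeled here as a Gaussian),
# and its rate of change y' -- a single wave form.
t = np.linspace(0, 10, 10001)
y = np.exp(-(t - 5.0)**2 / 2.0)
y1 = np.gradient(y, t)

i_mid = np.argmin(np.abs(t - 5.0))    # the maturity point t = 5

print(y1[:i_mid].max() > 0)           # rising half-wave before maturity
print(y1[i_mid + 1:].min() < 0)       # falling half-wave after maturity
print(abs(y1[i_mid]) < 1e-6)          # y' = 0 at the maturity point itself
```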

RECAP. The function of existence in its fractal variations and complex pentalogic HAS infinite paths=forms, but all end in a 0’ sum.

Time is cyclical, as all clocks of time return to their point of origin, so all time cycles, including those of the life of its vital space-time beings, are finite. Further on, those time cycles break ‘space’ into inner and outer parts, so vital space is broken by the membranes and angular momentums of those time cycles, which make spacetime beings also finite in spatial information. And an obvious experimental fact about timespace: cycles of time, vital spaces and the species made of them co-exist in several scales of relative size, from particles to galaxies, each one with clocks of time of different speeds. So spacetime is fractal, broken in scales that added create a new 5th dimension of spacetime.

The dual functions of 5D Absolute Relativity, the function of 5D scales, SxT=C, and the function of equality between form and motion, Si=Te, develop in 3 ages with 3 standing points: a max. point of existence, Si=Te, or mature age; a young age of Max. T=motion; and an old age of Max. S=information; between birth in ∆-1 Form and T-entropic death. The search for space-time, energy=information balances in a classic reproductive age of conserved time is thus the goal of all exist¡ences, but only the whole achieves the immortality of time-space; as we shall see, egocy errors of fractal mind-points of space, trying to stop the flow of time from a single selfish point of view, accelerate the imbalance that brings the equations of death. We are richest in our still property at that 0T-moment, when all is quiet; so for time to keep moving, a reversal of entropy takes place.

The connection between existential algebra and calculus: Dimotions as actions. Reproduction as change.

We said often that time=motion is all, and space just the Maya of the senses, the mind’s mapping of the fractal points ‘that hold a world in themselves’. But the ultimate arrow of time is that of scalar growth between planes of the fifth dimension, as parts must come before wholes; the upwards arrow matters more than the down arrow. And so of the 3 parameters that define objectively, between ¬ limits, and vitalized by a mind’s program, ∆ST, any being, the ∆-scale matters more – the numbers of algebra in mathematics. Then comes time perceived in one given plane, T, and finally Space, the most evident but shallow part of the whole. For that reason Algebra matters more and includes calculus, the temporal view of mathematics that tries to capture all modalities of change with a simple scalar process of adding ‘¡n≈finitesimals’ of scalar change to analyze the larger processes of change in the whole scale.

This is done in calculus with the simple methods of ‘finding the parts=derivatives’ and adding them together = integrating them, either over scale, spatial volume or temporal frequencies. In this manner something as simple as a finitesimal change becomes the seed of all possible variations of change (dimotions, each studied by an operand) across scale=size, spatial population or temporal frequency of events.

The study of the 5 Dimotions of the Universe is carried out in spatial geometry by calculus; in Non-Aristotelian Logic by Existential algebra. Thus both languages have many deep common structures worth comparing, even if calculus was born in the praxis of analysis of one single dimotion, locomotion, in the milieu of physical sciences, and only slowly extended to the understanding of the other dimotions of the Universe.

Thus we shall bring in this second paper on algebra both sciences together.

Even if Existential algebra is much wider and ultimately a logic stience, as it is also the underlying structure of mathematical algebras, including those of group theory that deal with an ‘extensive catalog’ of the dimotions and evolutions of the Universe, and reticular Boolean algebras that deal with the @-mind mirrors of logic and numbers. In the original plan I had envisioned a much larger output of papers for academia, taken from my 30 years of notebooks, so Existential Algebra would have deserved one of its own. But time is running out…

Existential algebra and calculus study time=change. How can we then unify all time changes? The answer comes from existential algebra and its finding that all forms of change can be reduced to reproductive change, which itself can be considered a travel down and up two scales of the fifth dimension. Thus change happens on finitesimal parts that emerge and affect larger wholes.

The function of existence is a function of reproduction in scale (as a 5D journey) in time (as a conjunction of the 5 Dimensional motions of existence), and space, as a simultaneous growth of clone information; formalized in the fractal generator of 5D metrics, Max. ∑Te x S¡ (s=t) = c; as reproduction happens in a ‘present s=t state’, of balance when the relative past of lesser informed flows of entropic time, Te, becomes Imprinted by Spatial information: Past TT-entropy x Future SS-form = Reproductive ST Present

Change happens informatively through the increase of finitesimal parts, entropically when you lose those scalar parts. Reproduction of form, or its annihilation, at the finitesimal scale is mirrored in calculus by a simple quotient, (F(x+h) − F(x))/h, that calculates rates of change for the different operands that mirror the 5 Dimensional motions of existence, which can potentially change.

Thus calculus uses a unit of change, h, to mirror different changes in the 5 dimotions of existence. Since change happens in small units, in small scales, in small instants of time, it differentiates into 5 types of dimensional motions = actions: TT, entropic feeding and moving changes; Ts, locomotions; St, informative and perceptive changes; SS, formal changes; and ST, reproductive changes proper.
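The role of the unit of change h can be shown directly: the quotient (F(x+h) − F(x))/h, measured at ever smaller h, converges to the derivative. A sketch on a hypothetical F(x) = x² at x = 3, whose exact derivative is 6:

```python
# The 'unit of change' h shrinking toward the finitesimal: the ratio
# (F(x+h) - F(x)) / h approaches the derivative F'(3) = 6.
def F(x):
    return x**2

x = 3.0
for h in (1.0, 0.1, 0.01, 0.001):
    rate = (F(x + h) - F(x)) / h
    print(h, rate)    # 7.0, 6.1, 6.01, 6.001 -> approaches 6
```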

That diversification is studied better with different algebraic operands; but all can be derived into its finitesimal units of change and integrated, for different scalar groups, social functions and paths of dimotional change.

Thus what both disciplines, calculus and existential algebra, have in common is the object of their linguistic mirrors: times=changes, all kinds of them.

That they apparently seem so different bears witness to the ultimate nature of mind-monads, ‘infinity mirrors’ that reflect always different points of view on reality, and its i-magination to slightly bend that reality to the point of view of the mind.

Still the two are more remarkable in their common elements than in their differences.

Calculus has its emphasis in numbers, hence in the scalar analysis of huge social groups in motion; while existential algebra has its emphasis in discrete dimotions, hence in the study of individual T.œs experiencing a trans-form-ation.

The very essence of calculus is to study in synchronous spatial dimotion huge amounts of numbers, which will erase their ‘discrete’ form to appear as a continuum susceptible to be studied at the ∆+1 scale of the whole.

The emphasis of Existential algebra is the study of that whole as an individual subject to sequential dimotions.

But in both cases the dynamic process of study are the 5 Dimotions of time=change of the universe.

Finally, logic systems and Boolean algebras become the syntax of verbal and computer minds that describe with their sentences the dynamic dimotions of reality. So its language is closer to that of Existential Algebra, reason why we include it in this paper, instead of the more advanced models of existential algebra termed monologic, duality, trinity, pentalogic and dodecalogic.

To fully grasp that essential connection between ∆st and calculus mirrors, we must first understand how species on one hand, and equations on the other, probe in the Planes of reality to obtain its quanta of space-time converted either in motion steps or information pixels, to build up reality.

The connection between existential algebra and calculus is qualitative: both study initially the finitesimal action of existence, which becomes the finitesimal quanta of spacetime, whose repetitive accumulation causes the phenomena of time-change. Existential algebra though studies them qualitatively, in terms of sequences between the 5 Dimotions, and calculus quantitatively, focusing on one single dimotion spread in a group of scalar numbers.

This is the case because the actions of beings happen through finitesimals extracted from other ∆-plane scales.

In all Planes, the simpler actions of any being are extractions of entropy=motion, energy and form from lower ∆-i Planes:

A T.œ perceives only the ∆±3 planes from where it extracts energy or information, as its actions and dimotions are architectonically performed through planes of 5D, where each main action relates to an interval of scales:

∆-4,-3: The system extracts indistinguishable boosts of entropic motion (man from gravitation).

∆-3,-2: The system extracts bits of information (light in man).

∆-2,-1: The system extracts bites of energy (amino acids in man).

∆-1,0: The system seeds its minimal seed of reproduction.

∆0,+1: The system connects socially with other systems to evolve into a whole.

So simpler actions start at the finitesimal level, gathering in sequential patterns in existential algebra, as ‘time flows’, and in population and spatial patterns – integral herds of numbers – in calculus.

We and all other beings perceive from ∆-3 quanta (light in our case), feed on amino acids, (∆-2 quanta for any ∆º system), seed with seminal ∆-1 cellular quanta (electrons also, with ∆-1 photon quanta).

For each action of space-time we shall find a whole, ∆º T.œ, which will enter in contact with another world, ∆±i, from where it will extract finitesimals of space or time, energy or information, entropy or motion, and this will be the finitesimal ∂ ƒ(x), which will be absorbed and used by the species to obtain a certain action, å.

Analysis allows us to extract actions from wholes, which is why there is no real use beyond the third derivative of a being, as superorganisms co-exist in only 3 scalar Planes. It also works in terms of a volume, as its derivative is a plane, then its unit-cell or point… So to speak, if you derive a world, you get its organisms, and if you derive it again you get its cells and then its molecular parts. And if you do the same in time, you get its speed, then its acceleration and then its jerk.
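The chain of derivatives just described can be sketched numerically (a toy illustration of ours, not part of the text): taking position as the cubic s(t) = t³, successive polynomial derivatives yield speed, acceleration and jerk, and a fourth derivative of this cubic vanishes, echoing the claim that little of use lies beyond the third derivative.

```python
# Successive derivatives of a polynomial motion s(t) = t^3, with a polynomial
# represented as a coefficient list [c0, c1, c2, ...] for c0 + c1*t + c2*t^2 + ...
def derive(coeffs):
    """Return the coefficient list of the derivative polynomial."""
    return [i * c for i, c in enumerate(coeffs)][1:]

position = [0, 0, 0, 1]          # s(t) = t^3
speed = derive(position)         # 3t^2
acceleration = derive(speed)     # 6t
jerk = derive(acceleration)      # 6, a constant
beyond = derive(jerk)            # [] -- nothing is left past the 3rd derivative

print(speed, acceleration, jerk, beyond)
```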

The magic of derivation

Because of the symmetry between ∆≈S≈T, to extract finitesimals of smaller scales the process is the same. We derive the whole, which diminishes its ‘dimensions=power’ as the system loses its larger whole, but increases its number of ∆-¡ visible particles; whereas the difference of value between both shows the ratio and structure of its entropy, energy and information networks, the sum of the components that form the whole. As certain functions define more specialized T.œs than others, the parts of a whole vary according to topological structure.

PART II. CALCULUS OF DIMOTIONS OF EXISTENCE.

THE 3 AGES OF ANALYSIS.

The underlying order of all structures of the entangled Universe between its S, T and ∆ components once more shows in the 3 ages of calculus, which we can term the scalar age, when the main question was that between parts and wholes (from Greece through Newton); the temporal age, when its main focus was the description of the 5 Dimotions of physical systems (from Leibniz to Heaviside); and finally the spatial view, when its main focus is, besides the completion of the previous ages, its use for the description of mental spaces (from Gauss through Riemann and Hilbert to Einstein and quantum spaces).

Thus to put some order in such a vast subject, we shall do as usual a diachronic analysis of its informative growth in complexity in 3 ages, barely touching the essential elements of each of them; from its:

-I Age: Scalar view, from the Greeks to Newton and Leibniz. The beginning of calculus was verbal, logic, in the Greek age, with the discussion of finitesimals (5D infinitesimals with a minimal size) and Universals. This philosophical analysis was retaken by Leibniz. Whereas the duality of derivatives as limits vs. differentials – tangents of change (Newton’s vs. Leibniz’s approach) – represents the duality of a minimal quanta in spacetime (Leibniz’s infinitesimal) or in scale (Newton’s limit), hardly explored in philosophy of mathematics, but a key concept in 5D Planes, Universal Constants and quantum physics.

Newton on the other hand, a practical English man with little interest in the whys, came to the concept through the study of limits of power series – the scalar view – without much interest in what they meant. The whys were covered by Yahweh and his biblical studies to prove that God had sent him comets to teach him gravitation as the ‘chosen one’ after Kepler, who knew that ‘He had waited 5000 years to find an intelligence like his, me, Kepler, to show him his clockwork’. After so much evident truth, who were those humble believers to contest God’s wise decisions? Leibniz though was more interested in meaning, and so he did find the true finitesimal, 1/x. To the question of who copied whom the answer is obvious, and the fact that it is not yet resolved merely shows that mathematicians still do NOT understand the foundations of calculus in its trinity of uses for ∆, S and T, the 3 components of reality. Because they came through different methods: Newton found the ∆-scalar power series of finitesimal changes that grow internally in ‘speed of change’ as they accumulate larger power-series factors, so each summand of the power series can be taken as a scalar, ever larger change per unit of time; while Leibniz found the ST geometric analysis of external change, mostly useful for locomotion – vs. the higher interest of power series, understood one summand at a time, for internal change and growth.
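The two views can be contrasted with a small numerical sketch (our own illustration, using eˣ as the function under study): the scalar view sums ever larger power-series summands, one internal change at a time, while the geometric view measures a single external tangent ratio of change.

```python
import math

# Newton's scalar view: e^x as a sum of power-series summands x^n / n!,
# each summand one more internal change added to the growing whole.
def exp_series(x, terms):
    return sum(x**n / math.factorial(n) for n in range(terms))

# Leibniz's geometric view: the external tangent of change at x,
# measured as a finite difference with a small lineal step h.
def tangent(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

x = 1.0
series_value = exp_series(x, 15)   # converges toward e
slope = tangent(math.exp, x)       # d(e^x)/dx = e^x, so also ~ e at x = 1

print(series_value, slope)
```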

Both are completely different approaches that serve an essential duality between internal evolutionary ‘biological’ growth vs. external physical motion. Instead of opening a proper philosophy of calculus based on the whole range of changes in time (best served by derivatives), space populations (best served by integrals) and scalar growth (best served by sums of series), this just brought the quintessential monologic, ego-centered, ænthropic man, Mr. Newton, to argue, as he had done also with Boyle, on the primacy of his ‘ceteris paribus’ discovery of a vast region of mental mathematical spaces, suited to study ALL forms of change=time – which truly made ¬Algebra the queen of all experimental sciences, of which physics, given the reductionism of its practitioners, is just a sub-discipline.

So as huminds still ignore that all is about the trinity of ∆, S and T (power series, integrals and derivatives), and their egocy is the only ∞ truth (Einstein), they have not yet understood what they found when discovering calculus.

-II Age, Motion view: Needless to say, because power series are yet the least understood, and internal growth and biological scalar series are ignored, the Newtonian approach had less obvious uses than the approach of calculating locomotions in time sequences and spatial, external, evident growth in integral forms. So from Leibniz to Heaviside its methods became the fundamental applications to the physics of locomotion and its two essential dimotions, Ts and TT (locomotion and entropy), which became the magic of calculus. The level of complexity of ∆∫∂ studies is maintained on a strict realist basis, as physicists try to correspond those finitesimals and wholes with experimentally sound observations of the real world, at the close range of Planes in which humans perceive; while the formalism of its functions is built from Leibniz’s finitesimal 1/n analysis to the work of Heaviside with vectors and ∇ functions. Partial derivatives are kept then at the ‘holographic level’ of 2 dimensions (second derivatives on ∆±2).

∆  will be thus the general symbol of the 5th dimension of mental wholes or social dimension and ∫∂ the symbol of the 4th dimension of aggregate finitesimals or entropic dimension.

-III Age, spatial view: from Riemann and Einstein to the present. Analysis extends to infinite dimensions with the help of the work of Riemann and Hilbert, applied by Einstein and quantum physicists to the study of Planes of reality beyond our direct perception (∆≥|3|).

This implies that physicists, according to 5D metrics, S x T = K, must describe much larger structures in space extension and time duration (astrophysics) and, vice versa, much faster, populous groups of T.œs in the quantum realm; so ‘functionals’ – functions of functions – add new dimensions of time, and Hilbert quasi-infinite spaces and statistical methods of collecting quasi-infinite populations are required in the relentless pursuit by huminds of an all-comprehensive ‘mental metric’ of a block of time-space, where all the potential histories and worldcycles of all the entities they study can be ‘mapped’.

The impressive results obtained with those exhaustive mappings bear witness to the modern civilisation based on the wholesale manipulation of electronic particles; but the extreme ‘compression’ of such huge populations in time and space blurs their ‘comprehension’ in ‘realist’ terms, and so the age of ‘idealist science’, spearheaded by Hilbert’s imagination of points, lines and congruences, detaches mathematical physics, and by extension analysis, from reality.

±¡: The digital and existential era is the last age of humind mathematics, where computers will carry this Analysis – confusing from the conceptual perspective, detailed from the manipulative point of view – to its quantitative exhaustion. But for ethic reasons, as a ‘vital humind’, we shall not comment on or advance the evolution of the future species that is making us obsolete.

Instead we consider a different version of calculus of change – existential algebra.

The generator equation of Analysis’ ages.

A Generator equation of Analysis in time and scale summarizes the 3±∆ fields of the scalar Universe through mathematical mirrors:

Γ Analysis: ∆-i: Fractal Mathematics (discontinuous analysis of finitesimals) < Analysis – Integrals and differential equations (∆º±1: continuous=organic space): |-youth: ODEs<∑Ø-PDEs<∑∑ Functionals ≈ < ∆+i: Polynomials (diminishing information on wholes).

The 3±∆ approaches of mathematical mirrors to observe the Planes of reality are thus clear: Fractal maths focuses on the point of view of the finitesimals and its growing quantity of information, enlarging the perspective of the @-observer as we probe smaller Planes of smaller finitesimals; and in the opposite range, polynomials observe larger Planes with a restriction of solutions, as basically the wholes we observe are symmetric within their internal equations, and the easiest solutions are those of a perfect holographic bidimensional structure (where even polynomials can be reduced to products of 2-manifolds).

Now within analysis proper, we find that the complexity or rather ‘range’ of phenomena studied by each age of analysis increases, from single variables (ODEs) to multiple variables (PDEs) to functions of functions (Functionals).

So the most balanced, extended field is that of differential equations, focused on the ∆±1 organic (hence neither lineal nor vortex-like but balanced, S=T) Planes of the being, where we focus on finding the precise finitesimal that we can then integrate, properly guided by the function of growth of the system. We distinguish then ODEs, where we probe a single ST symmetry, and PDEs, obviously the best mirror, as we extend our analysis to multiple S and T dimensions and multiple S-T-S-T variations of those STep motions; given the fact that a ‘chain of dimensions’ does not fare well beyond the 3 ‘s-s-s’ distance-area-volume dimensions of space and the t-t-t-t deceleration, lineal motion, cyclical motion and acceleration time motions that can ‘change’ a given event of space-time.
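As a minimal sketch of the ODE case (a toy example of ours, not drawn from the text): a single-variable equation dy/dt = -y can be advanced by discrete lineal stœps (forward Euler) and compared against its exact solution e⁻ᵗ.

```python
import math

# A toy ODE: dy/dt = -y, y(0) = 1, advanced by discrete forward-Euler
# stœps and compared with the exact solution e^-t at t = 1.
def euler(f, y0, t_end, n_steps):
    """Integrate dy/dt = f(t, y) with n_steps forward-Euler steps."""
    h = t_end / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += h * f(t, y)   # one lineal step along the local tangent
        t += h
    return y

approx = euler(lambda t, y: -y, 1.0, 1.0, 100_000)
exact = math.exp(-1.0)
print(approx, exact)
```

The sum of many small lineal steps recovers the smooth exponential decay to within a tiny error that shrinks as the stœps get finer.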

So further ODE derivatives are only significant to observe the differences between the differential and/or fractal and polynomial approaches – this last comparison, well established as an essential method of mathematics, is worth mentioning in this intro.

A space of formal ¬Algebra thus is a function of space, which can be displayed as a continuous sum of infinitesimals across a plane of space-time of a higher dimension.

In such a geography of Disomorphic space-time the number of dimensions matters to obtain different operations, but we are just gliding over the simpler notions of the duality ¬Algebra=polynomials vs. Analysis=integrals of infinitesimals.

Yet soon the enormous extension of ‘events’ that happen between the 3 ∆±1 planes of T.œs, as forms of entropic devolution or informative evolution across ∆±i, converted analysis into a bulky stience, much larger than the study of an ST-single plane of geometry, the 2 planes of topology and the polynomials of ¬Algebra – which, roughly speaking, are an approximation to the more subtle methods of finding dimensional change proper of analysis – even if huminds found the unfocused polynomials first, so that today we call Taylor’s formulae of multiple derivatives ‘approximations to polynomials’.

Since derivatives & integrals often transcend planes, relating wholes and parts, studying the change of complex organic structures through their internal changes in ages and form.

Polynomials are better suited for simpler systems, Planes of social herds and dimensional volumes of space, with a ‘lineal’ social structure of simple growth.

So in principle Analysis was a sub-discipline of ¬Algebra. But as always happens, time increases the informative complexity of systems and refines, with finer details and a closer linguistic focus, the first steps of the mind. So with Analysis ¬Algebra became more precise, measuring dimensional polynomials and their finite steps.

In any case the huge size of ∆-nalysis is a clear proof that in mathematics and physics the ∆ST elements of reality are also its underlying structure.

As ∆-Planes are the least evident components of the Universe, Analysis took long to appear, till humans discovered microscopes to see those planes; and while maths has dealt with the relativism of human individual planes of existence, philosophy has yet to understand Leibniz’s dictum upon discovering ‘finitesimals’, 1/n, mirror reflections of the (in)finite whole, n: ‘every point-monad is a world in itself’.
Analysis was already embedded in the Greek Philosophical age, in the disquisition about Universals and Individuals. Thus a brief account of Analysis in its 3±1 ages, through its time-generator:

Ps (youth: Greek age) < St: Maturity (calculus) > T (informative age: Analysis) >∆+1:emergence: Functionals (Hilbert Spaces)<∆-1: Humind death: Digital Chip thought…

Whereas its 3 ‘Planes’ are: ∆-1: Derivatives > ∫∆: integrals > ∆+1 differential equations.

Thus Analysis also studies the scales of the 5th dimension and its evolution of parts into wholes.

Derivative vs. integral in time and space.

Derivatives & integrals calculate ratios of change in the 3 elements of reality: space, time=change and its Planes, ∆ST. The main error of ‘axiomatic’ analysis is to force continuity, infinitesimals and infinities, without considering the discontinuous limits of derivatives and integrals of parts and wholes between planes of existence.

Leibniz vs. Newton already argued over what an infinitesimal part, the unit of derivatives, is. The answer, in a Universe which is a fractal scalar system of stœps of space-time, S<T>S, or S=S=S, T=T=T, is a minimal quanta in SCALE, TIME or SPACE, which by virtue of the 5D metric, SxT=K, will be a ‘minimal unit’ in that equation: Min. S x Max. T for the quanta of space, Min. T x Max. S for the quanta of frequency, or minimal cyclical bit of information.

So, for example, the inverse, ƒ=1/T, of a long duration in time will be its short quanta. The inverse of a population taken as a whole, 1/n, will be a scalar space quanta. So all systems have an infinitesimal, which is a cut-off limit, NOT really an ‘infinitely small’ (an error of the continuous dogma of the axiomatic method), but a ‘finitesimal’. So through a derivative we can obtain a finitesimal unit of time, space or scale: a minimal action, a minimal point-volume or a minimal cellular quanta.

And for that reason we can approach finitesimals with ‘differentials’, which become the minimal ‘lineal steps’ of a long curve. And for that reason we can use ‘affine functions in space’ and lineal approximations, as by definition the shortest path between two points is a line; so in a discontinuous Universe of stœps, quanta are minimal lineal units, minimal fractal points of a population or a whole.
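A numerical sketch of that idea (our own illustration): the smooth curve sin(x) can be rebuilt by chaining short straight segments, each an affine step along the local tangent of slope cos(x).

```python
import math

# Differentials as minimal lineal steps: rebuild the curve sin(x) on [0, pi/2]
# by chaining short straight segments of slope cos(x), its derivative.
n = 100_000
h = (math.pi / 2) / n      # the 'finitesimal' lineal step
x, y = 0.0, 0.0
for _ in range(n):
    y += math.cos(x) * h   # one straight step along the local tangent
    x += h

# The sum of minimal lineal steps approaches the smooth curve's value sin(pi/2) = 1.
print(y, math.sin(math.pi / 2))
```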

It is a complete overhauling of the dogmatic attempts to prove the ‘hypothesis of the continuum’, but it is not my fault that humans are so off-track with the reality of the Universe as it is, not as their ego tries to impose.

The minimal quanta of ∆-1 space and time. Chains of Dimotions expressed as chains of equations.

Derivatives are the essential quantitative minimal action absorbed by any T.œ (ab. space-time organism).

Integrals then sum a minimal derivative quanta in space or a minimal action in time for any being in existence.

They are best for the spatial growth of information, as the 3 st-ages or st-ates of the being through its worldcycle of existence have discontinuities or changes of phase that cannot be integrated. Hence time sequences are better studied with existential algebra. Further on, sequences can become more complex if we consider tridimensional actions as combinations of S and T states, stt, tst, tss, sss, which is the origin, among other things, of the 3!=6 variations of species according to hierarchy in their physiological networks, studied in trinity.

Further on – and this will constantly be the limit of mathematical analysis of reality – we should stress once more that the larger, more complex actions of gender reproduction and social evolution are qualitative, taking place both over longer time spans and larger spatial surfaces – hence better described with qualitative logic languages.

So existential algebra studies in depth the qualitative connection of the a,e,i,o,u actions between Planes. That qualitative analysis at the larger scale, and for sequences that imply changes of state, cannot be overlooked and requires the approach of existential algebra.

And calculus is its mathematical, analytic development: the quantitative understanding of 1st, 2nd and 3rd derivatives – extracting ‘1,2,3 Dimotions’ from the invisible gravitational, light space-time or feeding Planes.

The limit of calculus however is bridged by the fact that the other families of social operands (±, x÷, xª) can reflect better the ‘social re-product-ive actions’.

So another way to differentiate algebraic operands is to consider that classic polynomial operands mirror social complex actions, and calculus operands reflect better, also in ‘trinity scales’ (3 first derivatives or ternary integrals), the simplex actions. Each operand specializes in one Dimotion (angular sine/cosine in perception, ± in back-and-forth locomotions, x÷ in complementary and social evolution, log xª in reproduction), and over all of them a new Plane of existence is accessed by analysis. So operands guide the mathematical equations through a vital process of stœps (stops and steps) and will allow us to ‘vitalize’ equations, as we have done with points with ‘numerical parts’, as the essence of a mathematical T.œ.

¬Æ thus sets a limited number of logic propositions that can happen when a system or group of T.œs interacts through its 5 Ðimotions, as a point of view can potentially change its state between those 5 Dimotions, within the limits of its function of existence – such that the being can only exist without permanent disruption of its ‘vital constants’ (conserved energy, angular and lineal momenta – energy and membrain). All systems exist within the cut-off limits of space (membrain) and time (death), which are set as part of the fractal Universe. Only the whole, whether potential or real in existence, can be talked of as a function of infinity, but not perceived.

So a point will start any of the 5 Ðimotions and we need formal symbols to address the Ðimotion of any being in existence, and the states of switch between Ðimotions.

Does the being stop before switching Ðimotion? If so, it would be simple to establish then, for each sequential step of a being:

∆¡ Ð1,3,2,4,3… and so on, as a simple 5-letter process of the actions of a being (whereas 4Ð entropy refers to feeding, not death, only in its final state being ‘that entropy’)… So we know all sequences of a being end in 4Ð.

Can we then run a sequence for any species through its life as a complete deterministic sequence?

Is there then a sequence of all sequences, the perfect worldcycle=life sequence?

Those are questions for advanced existential algebra.

RECAP. Actions in timespace are the main finitesimal part of reality, its quantity of time or space. In pentalogic operands mirror as actions the 5D vowels (a,e,i,o,u) that define the five dimotions of existence. In calculus they first extract the minimal timespace quanta of the actions of the being, integrating them across a population of space or a length of time. Thus actions vitalize the operands of calculus, relating them to existential algebra.

The fantasy of the continuum substitutes the reality of discontinuous sum.

The age of calculus represents a great advance over the simpler polynomial operands and statistical≈T-probabilistic methods of studying the parts and wholes of a system, and its S=T interaction between spatial form and temporal motions; as it differentiated, by studying the change of each of the previous 5 Dimotional operands of algebra, all modes of change=time natural to any system of the Universe – becoming the best mathematical language that mirrors them.

This duality of ∆-scales and S=T dimotions, represented in algebra by numerical systems and operands, was studied in a bulk manner with the polynomials and S-tatistics=T-probabilistic methods that appeared first, using inverse scalar operands on the ladder of 3 scales of numbers – the sum, the product and different exponentials, with their inverse roots and logarithms – which reduced the range of exponentials to the 3 scales that matter to the Universe: the log10 or social evolution of triads into 3×3+1 new decametric scales, the exponential ln of maximal growth or death=decay, and the log2 or ‘power set’ of all parts of a whole.
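Those three logarithmic scales can be illustrated with a small toy computation (our own example, with assumed numbers, not the text’s):

```python
import math

# The 3 logarithmic scales named above, on toy wholes:
# log10 for decametric social scales, ln for exponential growth/decay,
# log2 for the power set of all parts of a whole.
n = 1000
decametric = math.log10(n)    # 3 tenfold social scales from 1 to 1000
decay_steps = math.log(n)     # ~6.9 e-foldings for exponential decay from n to 1
power_set = 2 ** 10           # a whole of 10 parts has 2^10 = 1024 subsets

print(decametric, decay_steps, power_set)
```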

Probability showed in time (frequency events) or space (statistical populations) that events were not perfect but accumulated errors in their reproduction, shaping a normal distribution – often converted into an entropic population of disconnected individuals forming herds, or events that withered away due to those errors of reproduction when trying to reproduce the perfect mean; to form an identical statistical ‘boson’ species, a perfect form that transcended scale – a feat of becoming a whole ∆+1 only achieved in Dirac’s distributions by immortal point-particles or resonances that amplified the perfect information of ∑ finitesimal 0’s to become the whole ∆+1.
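A minimal toy model of that statistical spread (our own sketch, with assumed error sizes): each ‘copy’ of a perfect form of value 1.0 accumulates many tiny aleatory errors, and the herd spreads into a bell curve around the perfect mean.

```python
import random
import statistics

# Reproduction with accumulated errors: each 'copy' is the perfect mean 1.0
# plus a sum of many tiny aleatory errors; the herd spreads into a bell curve.
random.seed(42)

def copy_with_errors(n_errors=100, size=0.01):
    return 1.0 + sum(random.uniform(-size, size) for _ in range(n_errors))

herd = [copy_with_errors() for _ in range(10_000)]

# The mean stays near the perfect form 1.0; the standard deviation measures
# the entropic dispersion of the herd around it.
print(statistics.mean(herd), statistics.stdev(herd))
```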

But what about all the range of variations between the entropic Gauss curve and the Dirac perfection? All the different specific actions of species acting in holographic Ts, ST, St dimotions that were neither perfect resonances of evolving form nor disaggregated herds of entropic aleatory motions?

This required calculating the finitesimal form of change=time of each specific dimotion of the Universe, mirrored by each specific operand, and adding a given number of ‘stœps’ that measured the repetition of such minimal dimotions of a T.œ in a given length of time or volume of space, to get the outcome of an existential dimotion.

Thus by calculating each finitesimal and then integrating it through the entire interval in space, time or scale in which it was performed, calculus gets a more accurate depiction of the whole event – no longer an entropic aleatory change, but a purposeful action. And because both derivatives and integrals can be done in space, time or scale, the whole range of possible variations can be studied, regardless of complexity: from changes in time to variations in space, represented as 0’ actions, d->0’, happening in minimal time (Lagrangians)…
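That procedure – extract the finitesimal quanta, then sum it over the whole interval – is, numerically, a Riemann sum. A minimal sketch of ours (toy function, assumed interval):

```python
# Summing finitesimal quanta: the whole 'population' under f(x) = x^2 on [0, 1]
# is recovered by adding n tiny quanta of width 1/n, the finitesimal, per stœp.
def integrate(f, a, b, n):
    h = (b - a) / n   # the finitesimal quanta, 1/n of the interval
    # midpoint sum: one quanta of 'space' per stœp
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

whole = integrate(lambda x: x * x, 0.0, 1.0, 100_000)
print(whole)   # approaches the exact whole, 1/3
```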

The detail and specification reached by the finitesimal of a specific dimotion=operand of time=change – either angular perception, sin/cos; social evolution, ±; re=production or locomotion as reproduction of information, x÷; entropic decay and growth, eˣ – made calculus the queen of all operands, almost a magic tool, as its foundations were ignored.

Mathematicians, unable to understand time=change, invented the concepts of limit and the continuum to form a pedantic scaffolding of axiomatic truths which had nothing to do with the reality of calculus. So we have to clarify that concept. As all is perception, a continuum is merely a sum of discontinuous stœps in which part of the whole process is hidden by the selection of information by a mental space. So the fantasy of the ‘continuum’ can be reached when the detail of each stœp is ignored, and we obtain only the measure of a relative infinity, ∞, of such steps summoned up, ∫, to calculate the whole change in the long time period, T, or total domain studied. And that is fine, as long as we understand that in detail the continuum is made of a sum of discontinuous stœps of change, which we call finitesimals.

In a process of calculus the sum is thus ‘smoothed’, eliminating the ‘stop’ states by reducing to a minimal 0’ the size or rate of change, so they cannot be seen in detail but can be ‘integrated’ over a long period of time, obtaining a meaningful result for the ∆0 scale of the experimenter, who only went down to ∆-1 to be able to calculate with accuracy ∑∆-1=∆º – the emergence of change at the size scale of the experimenter.

So in the same way we care little for the ∆-1<∆ø stage of the seminal reproductive fetus – to the point humans have the right to ‘murder’ it, as long as it is not an emergent child – the observer cares little for the finitesimal, ¡ndifferent to him, and yet indispensable for the whole to be=come.

This ‘goal-oriented’ view of Nature is common to most beings; hence calculus became a much better method than discrete probabilities of change. Since the cumbersome reality of each stop and step of a dimotion, which would have to be analyzed in each individual stœp as it happens in the ∆-1 reality, could be simplified to facilitate the study of changes over a longer period of time.

So calculus was enormously advantageous for calculating long stretches of time periods and space volumes at the human scale with minimal loss of detail, provided that the philosophy of reality made of stops and steps was not forgotten. Or else we would enter into an idealization of reality, confusing the mathematical mirror with the Universe as it is. But that was precisely what happened. Since the essential error of all minds is to confuse the Universe they don’t fully perceive with the mind that reduces it.

So the wrong creationist solution was taken as truth, obscuring our understanding of the principles of reality. So pundits of calculus ignored the nature of finitesimals of change, and their small stop-and-motion, S<T>S beats.

So when you study calculus, they apologize for reality as it is – a series of thin steps and stops, of small finitesimals to add – and consider discontinuity, the real broken space-time, an approximation to the idealist simplification of a continuous graph that loses information about each finitesimal stœp. Instead the artifact, the continuous function, is considered the truth, and reality a method to reach the ideal truth.

As the graph of Descartes, the artifact, became the nature of space and time, the substances of which we are all made. And suddenly humans were no longer made of cyclical time and fractal space, broken by the limit of a membrain; instead God had drawn a Cartesian graph below as a background absolute Newtonian lineal spacetime – something most scientists still believe, due to creationist mathematics and the egocy paradox that confuses the mind language with reality itself; the simplification of calculus that smoothed the finitesimals became reality.

Fact is, finitesimals are real, and h never reaches a 0 that doesn’t exist, as horror vacui works; so mathematicians should teach reality first, and then acknowledge that the experimental language of mathematics simplifies that reality – which is OK for practical reasons. Not the other way around.

When I was a wonderkid before high school I had a math professor who came one day worried about the fact he had to explain to us the ‘limit’ of h->0, and thought his students wouldn’t understand. Of course students never understand a dot of it; they just memorize and say they understand. But I understood it was all wrong. Thus he ended up throwing me out of the class (: when I told him the limit can never reach 0. Since if nothing exists, there is no way to define it, hence to calculate it. Nothing might be anything. Only if something is no longer zero but leaves a trace can we define it. So I told him undefined things should not be the realm of an ‘exact’ science. That blew him away, and having no answer he resorted to authority (: So I was kicked out, as I would never yield to authority but to reason. It was my first realization that when humans don’t understand something, or something is wrong but they want it anyway, they put up a dogma, a postulate, a pedantic definition or ‘self-evident axiom’ (:

Fact is, if h->0 makes an equation undefined, h must stop before it loses the quality of the whole – hence h is the last atom, last cell, last frequency, last temperature vibration.
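That claim – that h must stop at a finite quanta before the quotient loses meaning – is visible in ordinary floating-point arithmetic (a numerical illustration of ours, not a proof of the thesis):

```python
import math

# Forward-difference estimate of d(sin)/dx at x = 1 for shrinking steps h.
# The error first improves as h shrinks, then worsens: below roughly 1e-8
# the subtraction loses the 'quality of the whole' to rounding, so in
# practice h must stop at a finite quanta rather than run all the way to 0.
x, exact = 1.0, math.cos(1.0)
errors = {}
for k in range(1, 16):
    h = 10.0 ** -k
    estimate = (math.sin(x + h) - math.sin(x)) / h
    errors[k] = abs(estimate - exact)

best = min(errors, key=errors.get)
print(best, errors[best])   # best accuracy near h = 1e-8, not at the smallest h
```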

As he just ignored me with the pretentious authoritas of an old man who doesn’t see an elephant in the drawing of a hat, le petit prince had to search for himself. Which is what I have always done. So the question is where to stop a finitesimal portion. And the answer is, as we said, before it becomes random, ¡ndifferent.

For example, e is defined as (1+1/n)ⁿ. Yet if n is ∞, the parenthesis is 1+0.0000000… up to infinity. And those are undefined limits of reality.
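This can be checked directly (our own numerical illustration): (1+1/n)ⁿ marches toward e for finite n, but once 1/n falls below the machine’s own ‘finitesimal’ (about 10⁻¹⁶ in double precision), the parenthesis rounds to exactly 1 and the ‘limit’ collapses.

```python
import math

# (1 + 1/n)^n approaches e for finite n, but push n past the floating-point
# quanta (~1e16): then 1/n becomes ¡ndifferent, 1 + 1/n rounds to exactly 1,
# and the 'limit' collapses to 1.0 instead of e.
def e_approx(n):
    return (1.0 + 1.0 / n) ** n

good = e_approx(10**6)       # close to e
broken = e_approx(10**17)    # 1 + 1e-17 rounds to 1.0, so the result is 1.0
print(good, broken, math.e)
```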

Physicists at least acknowledge that physical laws are idealizations of reality, and that is OK (even if they deny the non-mathematical properties that cannot be measured, such as the organic and sentient properties of particles, the units of life). So their egocy paradox is a bit different from that of mathematicians with their axiomatic truths.

Still more profound mathematicians do understand a continuous function as one in which S=T happens, so the X and Y coordinates do not make very different changes, and stœps of ‘present’ can be put one after another. Or as Leonardo said: “The instant does not have time; time is made from the movement of the instant. In rivers, the water that you touch is the last of what has passed, and the first of that which comes. So with time present. Observe the light. Blink your eye and look at it again. That which you see was not there at first, and that which was there is no more.”

As the second, the glimpse of an eye, is the quanta of human present, there is always, for all timespaces, a quanta we shall call an instant of time-present, or a finitesimal of populations.

And when the ratio of change in time, its quanta, is balanced with the minimal form of space it changes – so that in an S=T graph each point-position-stop of the moving-step function is close, with a ‘tangent angle’, s/t, that can be smoothed in a series – the sum of discontinuous stœps of a dimotion can be measured as a continuous, larger stœp of space-time of a larger ∆+1 scale.

Thus continuity is not the limit in which h->0, but the limit in which S≈T…. and hence an instant of present change that is harmonious between the S and T components of the being that doesn’t change internally but only externally (as S=T remains unchanged) takes place.

This has deep implications, as it implies that the system is continuous because it lasts, and it lasts because its ‘actions’ tend to zero internal change, becoming conserved cycles of energy for the inner structure of the being. Changes thus are always ‘returning to balance’. And when change is extreme, as in an internal x external TT-entropic change, the system collapses, the ‘tangent’ tends to zero or infinity in the Y(s) or X(t) axis, and the ‘function ends’. Which implies that calculus works on the St, ST and Ts dimotions of locomotion, information and energy=reproduction, NOT on SS-form with no change or TT-absolute change with no form.

For example, the first dimotion to calculate as change was the space traversed by a system, which in detail is just a series of steps that add up to the space traversed, l(s) × ƒ(t) = S. This gives us a lot of detail in numerical approximations if we further break it into a sum of steps, ∑l(s¡) × ∑ƒ(te) = S.

So the more information we want, the more detailed the discontinuity becomes, till we can indeed add each ‘fractal step’ with all the information, length and time duration of each step. But we don’t want that much information, so we can simplify with ‘statistical means’, since, as we have seen in statistics, the law of great numbers brings a mean for each stœp; and that is the justification for simplifying into continuity – NOT the non-experimental, idealist, mind-generated, creationist hypothesis of the continuum.
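The smoothing of discrete stœps into a continuous sum can be sketched numerically. A minimal example, assuming a toy speed function v(t) = 3t² (not one taken from the text), whose continuous integral over [0, 1] is exactly 1:

```python
# Distance traversed as a sum of discrete "stoeps" (stop-and-step quanta),
# compared with the continuous integral they are smoothed into.
# Assumed toy speed: v(t) = 3t^2 on [0, 1]; exact distance = 1.0.

def v(t):
    return 3.0 * t * t

def distance_as_steps(n):
    """Sum n discrete steps: mean speed on each step times its duration."""
    dt = 1.0 / n
    return sum(v((k + 0.5) * dt) * dt for k in range(n))

for n in (4, 16, 64, 256):
    print(n, distance_as_steps(n))
# The discrete sums approach the continuous value 1.0 as the steps shrink.
```

The mean speed taken on each step plays the role of the ‘statistical mean’ that justifies treating the discontinuous sum as if it were continuous.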

So it is fine to be humble and marvel at the fact that we can ‘transcend’ the ∆-1 finitesimal scale into parameters of the whole, by making ¡ndifferent the information at ∆-1 through statistical methods that average each step, erasing the unneeded information on each stop (when the motion touches the floor, or the mover looks and gathers information). But when trying to understand paradoxes such as the constancy of the speed of light, as each electron emits light in a relative entangled stop position to the perceiver that measures its speed, hence at a stop distance (Lorentz transformations), it is good to know that there is no magic in it. And the ‘idealized’ form is the mathematical transformation that eliminates the stop state of the electron in favor of a continuous motion we do NOT observe in Nature (the electron is always observed as a stopped particle when emitting light, and moves in zigzag as if it were all the time calculating its trajectory in a stop position).

The methods of calculus are awesome. And once we realize the cruelty of the Universe, which cares nothing for the differences between the ¡ndifferent finitesimals of a mass-group herded and ruled by the larger ∆0 scale through mass-methods – including humans, as we show in the models of history and economics: herded today by financiers with credit ratios established by anonymous big-data computers, herded by politicians with equalizing laws for each ‘social class’, herded by the military as soldiers or as numbers in a concentration camp… and once we pass over the thought that each atom might be a galaxy of its own, which we dare to explode in an accelerator against another atom, perhaps provoking on the microscopic scale the biggest genocide of infinite relative planets, and connect with the Tao and feel just to be, ∆@st, which matters nothing, and worship the ¡ogic of GoÐoG, the inverse Dimotions of existence – we can rest in peace, R.I.P., as the dust of space-time that dust shall become. And so calculus…

The methods, then, are well known and we cannot but make a few comments beyond the philosophy of its science, which is the main purpose of 5D mathematics in these simplified texts. We just need to calculate a finitesimal, which was the first thing discovered in the ‘1st age of calculus’, and sum them up to get the whole, which was first done, as in reality, with the exhaustion method of the Greeks.

Yet the finitesimal 0’ in itself is important, as it gives us information about the rate of change with a single number, the tangent to the curve. Since we can apply to curvature in space its synonym in time – speed – when we realize that S=T means, in terms of curves, that the curve’s tangent represents the ‘speed of change’, S/T, of the function. Further on, as S×T=C means that the smaller space is faster in time cycles, the more curved a cycle is, the smaller it becomes and the faster it moves (vortex equation, acceleration principle in Relativity over a curved space). So we also realize that the second derivative of the curve, y”, is the acceleration of change, increased as the system curves further.
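Both rates can be checked numerically with central differences. A sketch, assuming an arbitrary example curve y(t) = t³ (not a curve singled out by the text):

```python
# First derivative as the "speed of change" (tangent slope) and second
# derivative as the "acceleration of change", via central differences.

def d1(f, x, h=1e-5):
    """Slope of the tangent: the rate of change S/T."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Second derivative: the acceleration of that change, y''."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

y = lambda t: t ** 3
print(d1(y, 2.0))   # close to 12 (exact: 3t^2 = 12 at t = 2)
print(d2(y, 2.0))   # close to 12 (exact: 6t   = 12 at t = 2)
```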

Many more wonders then kept appearing in calculus as we play with the S=T, S×T=C 5D Metric laws and see them through calculus; specially considering that most curves are the functions of existence of an event between its 0’ points of initial and final conditions. And so what we shall do in this brief introduction to 5D calculus is to highlight, for the seemingly most simple equations of calculus, the underlying insights they provide on the processes of time change of the 5 Dimotions of the Universe.

We shall do it in a historic, easier-to-understand narration, as indeed the first thing a language sees is ‘space’, and ‘reality as it is’ in its simpler terms. So the first age of calculus was that of the search for finitesimals, both in praxis and in meaning, wrongly resolved in favor of the concept of absolute zero and limit. Then mathematicians erased the unneeded information on those finitesimal steps and stops, establishing the method of tangents and continuous sums, ∫; and then they applied those functions of breaking a whole into parts, to rebuild it to specification, to the different dimotional operands, from the sine to the exponential of maximal change – marveling that the limit of change per unit of frequency time was the change of the whole; that is, the equation of death in which the whole dissolves into its parts in a single quanta of time, the negative exponential so ever-pervading in studies of entropic death. Of course they understood none of it – they still don’t – but the method worked to mirror the dimotions of reality, which they neither understood in an orderly manner. So calculus became magic. And it got more complex, as the Universe does, by repetitions, transformations and scaling into more complex ‘packages’ of parts, to the point that I could say, as Einstein put it – ‘I don’t understand relativity since mathematicians got into it’ (: – I don’t understand calculus since mathematicians got to it 🙂 . That is, calculus today is so complicated in its more powerful and detailed analysis that only a computer can calculate its results.
Those analyses ultimately try to anticipate the future of change in a synoptic manner, by transforming sequential patterns of change into parallel, simultaneous spatial components (multiple variables in PDEs), happening in multiple points of view at the same time, gathered into a whole result – which is still impossible for the most intelligent, liquid states of multiple changes (Navier-Stokes equations), unless you trick it with a faster digital mind, which will do those calculi for the slower humind, jumping as we always do past the intermediate sequential steps from beginning to end.

The proper concept for finitesimals. Reproductive unit of change.

The great advance of calculus in the understanding of ∆ST changes is the concept of a finitesimal of change, h, which in the symmetry between scales, populations in space and time frequencies has the same role: to increase a ‘seminal’ unit of the system (or ‘decreate’ it in inverse fashion), becoming the unit of reproduction of an ∆st system at a point in which S=T.

In the Universe the fundamental form of change happens when S=T: the function of present time finds a balance and symmetry in scale, form and motion that triggers the reproduction of the system and its 3 parameters.

Later we will study the simplest case of polynomial reproduction, whereby x² has as its unit of change 2x; which means the square grows through both sides, reproducing its form in ‘s=t’ balance, in a manner that preserves the square’s form.
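A numeric sketch of that claim, assuming a square of arbitrary side x = 3; the dx² corner is the vanishing ‘finitesimal of the finitesimal’:

```python
# The x^2 -> 2x example: a square of side x grows by a thin strip dx
# along two of its sides, so the added area is ~ 2x*dx (plus a dx^2
# corner that vanishes as dx shrinks).

def area_growth(x, dx):
    return (x + dx) ** 2 - x ** 2   # exact added area: 2x*dx + dx^2

x = 3.0
for dx in (0.1, 0.01, 0.001):
    print(area_growth(x, dx) / dx)  # 6.1, 6.01, 6.001 -> tends to 2x = 6
```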

The method of calculus.

How do differential equations show us the different actions of the Universe?

The correspondence to establish is between the final result, the åction, and the finitesimal quanta the system has absorbed to perform the action, such that å = ∫∂x, where x is a quanta of time or space used by ∆ø, through the action å, to perform an event of acceleration, e-nergy feeding, information, offspring reproduction or universal social evolution.

It is then that we can establish how calculus operations are performed to achieve each type of action.

First we notice that the space between the actor and the observable quanta is relative; so even if there are multiple ∆-planes between them, the actor will treat the quanta as a direct finitesimal, pixel, bit, or bite, which it will then integrate with a polynomial derivative or a sinusoidal function that reflects the changes produced.

We will consider in this introductory course only a few of the finitesimal ∫∂ actions where the space state is provided by the integral and the ∂ finitesimal action by the derivative.

Derivatives point to the main consequence of the sum of those actions in any being in existence: the fact that their sums tend to favor the growth of information in the being, and so signal the 3 st-ages and/or st-ates of the being through its world cycle of existence; which in its simplest physical equations is the origin of… the maximal and minimal points of a well-behaved function.
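Those standing points, where the derivative becomes null, can be located numerically. A sketch, assuming a toy ‘function of existence’ f(t) = t(3−t), which rises, peaks and falls over [0, 3]:

```python
# Standing point of a toy worldcycle: the maximum sits where f' = 0.

def f(t):
    return t * (3.0 - t)

def df(t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

ts = [i * 0.001 for i in range(3001)]      # scan [0, 3]
peak = min(ts, key=lambda t: abs(df(t)))   # where the derivative is null
print(round(peak, 3), round(f(peak), 3))   # 1.5 2.25: the maximal point
```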

So to establish the action – the final result – we have to isolate the finitesimal quanta/moment of spacetime the system has absorbed to perform the action, ∂x, and integrate it over a surface of space or a length of time, such that å = ∫∂x, where x is a moment/quanta of time or space used in repeated frequencies or quantities, ∫∂x, by ∆ø, through the action å, to perform an event of acceleration, e-nergy feeding, information, offspring reproduction or universal social evolution.

We can then establish which operand is best suited to perform each type of action. I.e. the action of reproduction is most often expressed, for quantitative simple physical systems, through the operation of re=product-ion.

We ascribe each operand to a single dimotion, but they are ‘once more’ entangled operations, which besides their preferential Dimotion participate in all the others – remember that languages, as mirrors of reality, have the same entangled properties of the pentalogic ¬∆@ST Universe, looking at all its elements. So we shall now analyze them in more depth.

We establish direct relationships of operands and actions, taking into account that for each operand we must also distinguish the dualities of ‘space-like integrals of volumes and their derivative quanta’ and ‘time-like moments of motions and their frequency sums that complete a 0-sum worldcycle’. And to achieve those balanced 0’ sums we finally need to define inverse operations for all actions. So we depart from a ceteris paribus analysis and search for a finitesimal, and then we must study how they merge and entangle in space and time. Generally speaking, a first partial derivative in space or time (PDE) defines those dimotions only as S or T, while the integral of double derivatives puts both processes together to find the whole action: å(st) = ∫∫ ds dt
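That double integral can be sketched as a double sum over space and time quanta. A minimal example, assuming a toy density s·t over the unit square, whose exact double integral is 1/4:

```python
# Whole action as a double sum of finitesimal space and time quanta:
# a discrete stand-in for the double integral of s*t over [0,1]x[0,1].

def action(n):
    d = 1.0 / n                      # the space/time quanta
    return sum((i + 0.5) * d * (j + 0.5) * d * d * d
               for i in range(n) for j in range(n))

print(action(100))                   # converges to the exact 0.25
```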

However as all planes of existence have discontinuities beyond its minimal quanta and larger whole, analysis through multiple Planes beyond those of ∆ø<<∑∆-1 entropic death, tend to be distorted.

Still they can be studied with power polynomials and further approximated (Taylor series) with ∫∂ operators that cross planes of existence for certain highly symmetric actions.

But again it is best to use existential algebra, as the fundamental limit of the mathematical language is one of synthetic understanding of the organic, vital laws of the Universe; which is the reason why theories that are only mathematical in the largest scales, and do NOT understand that no equation goes to infinity – as all have a limit that brings a change of state – such as the big-bang theory of the universe, are false.

This is the reason why systems have, besides spatial mental spaces of ‘calculus’, a long-time-range language of logic nature to express the vital games of worldcycles; and this is the function of existential algebra, which we study first, to then consider the basics of Calculus.

Finally, in this brief introduction, notice a ‘revealing’ fact of the inversion between finitesimals and integrals. As the absolute arrow of time-future is the social evolution of parts into wholes, a function has only one derivative – that is, all molecules can be reduced to a set of atoms, all living beings to the cell – but the opposite is not true: the creation of complex futures is multiple. So an integral has a C variational constant and a differential equation has multiple solutions: the future is open, the past is only one.
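That asymmetry is easy to see numerically: every member of the antiderivative family F(x) + C (many possible futures) collapses to the same derivative (one past). A sketch with an assumed F(x) = x²:

```python
# One derivative, many integrals: x^2 + C all share the derivative 2x.

def d(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for C in (0.0, 1.0, -7.5):
    F = lambda x, C=C: x ** 2 + C    # a different "future" for each C...
    print(round(d(F, 3.0), 6))       # ...but the same "past": 6.0
```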

RECAP. Analysis studies the finitesimal quanta of time, space and scale; NO ∞ in its smallness – an error of the mind Px. searching for continuity. A true philosophy of calculus thus deals with the meaning of ‘finitesimals’ in space, time and scale, as a first ‘seed’ of a ‘clone species’ that multiplies, creating the regularities of ‘social numbers’ that make ‘analysis’ work its ‘magic’. Integrals, in inverse fashion, act after ‘calculating’ this minimal point, often as a ‘lineal shortest step’ (differentials), to reach the final ‘whole value’ of the system. The beauty of the field, revealing the nature of dimotions, and its wide applications will thus require an entire second book on 5D mathematics, which should r=evolve the discipline.

Thus only the integral and derivative can study all those dimotions of space-time, hence they are the king and queen of the operators of ¬Algebra, reason why analysis is so extended.

Below, Analysis’ multiple perspectives on the 5 Dimotions= functions of existence & 5 simultaneous structural elements, ¬∆@S≈T, that conform all systems in time and space. So 3±D¡ points of view (trinity or pentalogic) finds a higher truth & applies to all languages mirroring reality as analysis does:

S-topology: Analysis is used to study (left) structurally the role of the 3 elements of a topologic spatial superorganism: Its membrane’s curvature and tangential value (line integrals), its vital space (surface integrals) and singularity (derivatives).

∆-Planes: Its inverse operands study 5D Planes: derivatives measure the value of 1 of its infinitesimal ‘cells’; integrals give us its internal volume of spatial energy; while double derivatives peer down 2 ∆±1 planes.

@: We extract information on its central @-singularity, which commands the lineal motion of the whole system.

Time-cycles: it can model the standing points, maximal and minimal, which signal the changes between ages, where the derivatives, become null, as the ‘world cycle of existence’ changes its ‘phase’.

DIFFERENT GEOMETRY AND METHODS DEPENDING ON DIMOTIONS.

Because calculus is about Dimotions, it studies mostly the 2 dimotions with a larger content of T, TT-entropy and Ts-locomotion. And it is also suited to study S=T, iterative change, in a present-reproductive wave of growth.

It is of lesser use to study SS and St changes in spatial information, though sinusoidal operands give us insights into them. It is then obvious that for a future 5D researcher, if there is ever anyone besides this writer, observing the structure of an equation of calculus and its geometry will give insights into the type of dimotion it studies; since there are some basic differences between them:

Ts-locomotion: A clear difference happens at first sight between a simpler analysis of locomotion, which concerns a single point-like form through space in sequential, lineal time, as there is no internal dimotion, and the T.œ can be treated from the point of view of its mind-whole singularity (so for example a moving rock, regardless of rotations in its lineal motion can be treated as the motion of its gravitational center)

– Entropic motion, on the other hand, is a dual motion, internal and external to the being. Entropic motions are then more easily treated if we consider the point of explosion or death as a fixed point (which is often the case, as entropy happens after death, which leaves the whole system unchanged in motion). Then we shall observe that the integral of all the motions of the entropic system remains zero, because the negative sides of the frame of reference cancel those dimotions on the positive side, which is essentially the meaning of death; and so the fundamental change of an entropic motion happens in the volume of space of the system, which is where the internal dimotion of the being ends up, transformed into an external dimotion.

– Reproductive growth coincides with entropic motion in the factor of expansion in space. However reproductive growth is a real growth that fills space, NOT merely an expansion of the distribution of its ∆-1 elementary parts over the background ∆-¡ space. So the differences with entropic motion are easy to spot: Reproductive growth does NOT change the density of form in the vital space it fills; entropic motion becomes rarefied in its dwindling density, a bubble that expands and then dissolves. Reproductive growth is far slower, unless it happens in a truly friendly, energy-dense placental world, where it happens with a geometric 2× factor of maximal growth; but even then it will seem slower than a big-bang, because the speed of death is fast. And as death is a collapse in a single quanta of time, two scales down, ∆º«∆-2, almost all processes of death and decay expand faster in space.

What about systems of multiple time-changes – ‘PDEs’, so to speak – considered not in their how but in their why as existential processes?

– Combined reproductive and entropic motion. This is the most important case: we observe a dual sequential process in which, at first, the death of the system does not seem to change it in space, as growth is internal, through the radiation of the ‘predator’ species.

This happens in cosmological big-bangs (beta decays, quasar big-bangs, novas and the hypothetical false cosmic big-bang, studied in the physical papers), when the death of the system is due to the birth of a denser form of matter (strangelets in silly-nilly planets like Earth that do accelerator experiments, or star novas; top quark quasars in BCB stars=black holes). A similar process, in which death is parallel to the growth of the predator species, happens inside out (organic death). In all those cases, in the first time sequence the system becomes less motile and often shrinks in size, as it is being carved inside out; and then, in the second phase, it explodes in a single quanta of time, as the faster, smaller form or herd of forms spreads over a larger space.

In praxis, then, you can act as partial differential equations do, just performing two sequential calculi – because, and that is the beauty of the Universe that facilitates its comprehension, as we have repeatedly stated in all our paragraphs on existential time, at the level of actions we follow a series of finitesimal steps which seem to be continuous (concepts clarified in the next paragraphs). So the dual dimotion of a new form feeding on a T.œ’s body, to then explode it and expand – STx»SSy«TTx+Tsy, written as a sequence of existential algebra (a body STx feeds a new species in its seed form, SSy… which will walk away, Tsy, as the form collapses in entropy, TTx) – becomes a series of partial derivatives.

In the deepest sense this is the existential why of the methods of calculus of partial derivatives, and the key difference between the concept of continuity in classic calculus and the ‘stœps’ of discontinuity in 5D calculus – bridged by the common concept of a smooth transition through finitesimal changes, which in reality are discrete (the body corrupts in discrete steps, even if, as the bacteria grow exponentially, each step is larger), but which, from a higher point of view ¡ndifferent to the detail, can be calculated as long as the transition is smooth.
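The ‘two sequential calculi’ can be sketched with the splitting trick used by many PDE solvers: advance each sub-process one finitesimal step while the other is held frozen. The growth and decay rates below are assumed toy values, not figures from the text:

```python
# Operator-splitting sketch of the dual dimotion: a "seed" grows inside
# (reproductive step) while the host "body" is consumed (entropic step),
# advanced in alternating finitesimal steps.

GROW, DECAY = 0.8, 1.2    # assumed toy rates

def step(body, seed, dt):
    seed = seed + GROW * seed * dt     # internal reproductive sub-step
    body = body - DECAY * seed * dt    # entropic sub-step feeding on the body
    return max(body, 0.0), seed

body, seed = 1.0, 0.01
for _ in range(400):                   # 400 steps of dt = 0.01
    body, seed = step(body, seed, 0.01)
print(round(body, 3), round(seed, 3))  # the seed has grown, the body shrunk
```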

It is also the reason why both reproductive and entropic motions can be described by e±x functions, which have the maximal ‘rate of change’ (as the derivative is equal to the function and, since ∂x ≤ x, is maximal).

The e function has so many meanings precisely because of the ‘horror vacui’ and thirst for existence of its spatial fractal points. So it can also be used in its imaginary form, which, as the name indicates, is related to the creation of ‘mental SS=§paces’, in its exi form, connected to the sinusoidal functions, in which we can observe, as in AC currents, a back-and-forth motion=translation in space coupled with a rotational perceptive motion. In those rotational motions the complex plane and the exponential function are so useful because what perception IS really doing is: 1) collapsing at the fastest possible rate the Universe into the finitesimal mind-mapping of the point (hence the eˣ function involved); 2) doing so with a clear bias in favor of the length dimension of the focused perception, while the i-dimension of height is greatly compressed to fit the system (hence the usefulness of a frame of reference where Y=√X); and 3) doing so in a periodic pattern, scanning back and forth the same worldcycles to convert them into mental space (hence the recursive use of ±sine, cosine functions involved).

Let us consider this essential equivalence of mathematics in more detail.

Connection between exponentials and sinusoidal functions: derivatives as angles of perception.

One seldom realized role of a derivative – as a tangential division of the height in the dimension of information by the lineal distance-motion to the observer – is as a measure of the angle of the being, which recedes in spacetime till reaching non-perception as a relative finitesimal outside the territorial mind-world of the observer; which connects derivatives directly with the 1D first dimotion of perceptive existence. The being might still be of a certain size, but as a fractal point it has receded in the mental space of the world of the perceiver.

The first ‘timespace’ numbers: Polygons as roots of unity

By their very nature, as numbers that probe the planes of the fifth dimension, exponentials are closely related to the complex plane. Let us consider only one case, the de Moivre numbers: the complex numbers that give 1 when raised to some positive integer power n.

An nth root of unity, where n is a positive integer (i.e. n = 1, 2, 3, …), is a number z satisfying the equation zⁿ = 1.

They are complex numbers (including the number 1, and the number –1 if n is even, which are complex with a 0’ imaginary part), and in this case the nth roots of unity are: z = e^(2πik/n) = cos(2πk/n) + i·sin(2πk/n), for k = 0, 1, …, n − 1.

This formula shows that on the complex plane the nth roots of unity are at the vertices of a regular n-sided polygon inscribed in the unit circle, with one vertex at 1.  This geometric fact accounts for the term “cyclotomic” in cyclotomic polynomial; it is from the Greek roots “cyclo” (circle) plus “tomos” (cut, divide).
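A quick computational check of that polygon picture, using Python’s cmath (n = 4 chosen arbitrarily):

```python
import cmath

# nth roots of unity: vertices of a regular n-gon on the unit circle.

def roots_of_unity(n):
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

square = roots_of_unity(4)                           # 1, i, -1, -i
assert all(abs(z ** 4 - 1) < 1e-9 for z in square)   # each satisfies z^4 = 1
assert all(abs(abs(z) - 1) < 1e-9 for z in square)   # all on the unit circle
print([(round(z.real, 3), round(z.imag, 3)) for z in square])
```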

Euler’s formula, e^(ix) = cos x + i·sin x, which is valid for all real x, can be used to put the formula for the nth roots of unity into the form e^(2πik/n), 0 ≤ k < n; which is a primitive nth root if and only if the fraction k/n is in lowest terms, i.e. k and n are coprime.

We find therefore the first timespace numbers in the roots of unity. And as such they will become ‘the creative process’ of dividing the ‘whole’, 1, into cyclical ‘tics of time’ of increasingly faster frequency, in a progression, for k = 1, 2, …, n − 1, which will generate the frequencies of all clocks of time, till reaching the circle, which can then be considered, in bidimensional spacetime, the ‘infinite clock of infinitesimal time tics’. Those infinitesimal ticks have a ‘limit’, as all relative infinities do. In physics it is believed the minimal tick is 10⁻⁴³ seconds, Planck’s time, which would therefore become the limit of the ‘points’ that form a time clock.

Another fundamental theme is the reason why the ‘clock’ is counterclockwise in its direction, as is its complex representation in 4D relativity theory. The reason is that in the Planes of the fifth dimension, as we create new dimensions from the lower planes with more entropy, the emergent dimension ‘sucks’ part of the entropy of lineal space of its lower dimensions, ‘contracting’ it as it rises in height. I.e. a pi circle is made of 3 ‘curved’ diameters (with open holes between them), but it does not measure 3 but 1 in the length dimension.

It also means we are adding a new time dimension, with a negative entropic property for the ‘dimension of real space’, which can therefore also be written with the number of entropy, e.

Dimotion of Entropy

The relationship between the sine and cosine functions allows an angular perception of the whole and the exponential function that reduces the whole to its decaying elements (Euler’s formula). We could say then that the whole is ‘split’ between the entropic negative exponential part that is discharged, and the sinusoidal, informative elements that are absorbed by the mathematical mirror mapping.

It is interesting to note the connection that occurs between the exponential and trigonometric functions when we turn to the complex domain, through series, since both functions can be approximated by power series:

e^z = 1 + z + z²/2! + z³/3! + z⁴/4! + …

If we replace z by iz, we get:

e^(iz) = 1 + iz − z²/2! − iz³/3! + z⁴/4! + iz⁵/5! − …

Grouping everywhere the terms without the multiplier i and the terms with the multiplier i, we have:

e^(iz) = (1 − z²/2! + z⁴/4! − …) + i(z − z³/3! + z⁵/5! − …) = cos z + i·sin z

Euler’s formulas, solved for cos z and sin z, give:

cos z = (e^(iz) + e^(−iz))/2,  sin z = (e^(iz) − e^(−iz))/2i
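These identities can be verified numerically with Python’s cmath; a sketch at an arbitrary test value:

```python
import cmath

# Numeric check of Euler's formula e^{iz} = cos z + i sin z and of the
# solved forms cos z = (e^{iz}+e^{-iz})/2, sin z = (e^{iz}-e^{-iz})/2i.

z = 0.73   # arbitrary test value
assert abs(cmath.exp(1j * z) - (cmath.cos(z) + 1j * cmath.sin(z))) < 1e-12

cos_z = (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2
sin_z = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j
assert abs(cos_z - cmath.cos(z)) < 1e-12
assert abs(sin_z - cmath.sin(z)) < 1e-12
print("Euler identities verified at z =", z)
```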

As we said, the key insight of 5D in power series is the understanding of them as a series of ‘sequential steps’ in the Ñ∆ dual scalar growth and diminution scales of the fifth dimension, whereby the ± summand element represents a step in time-change, a ‘period’ of a frequency of growth and diminution; but in the case of the use of an i factor it creates a sinusoidal process of a repetitive worldcycle of perception, a sort of opening and closing glimpse on reality, a back-and-forth motion in an AC current, a life-and-death cycle in a fast-time quantum particle. It is an essential insight to resolve the whys of all those hows of mathematical physics.

In the complex plane, 1D (sin/cos) combine to represent a full worldcycle, interestingly enough, through the e^(ix), 4D exponential decay function. This is possible because the exponential function switches between growth and negative decrease, as the sine and cosine switch between informative and energetic perception; but the sine function, the informative Dimotion, grows less, as happens in nature, where height-information has less energy and so matters less in parameters of size and volume.

2 new qualities make it interesting to cast trigonometric functions in terms of the function of entropy: we are adding both cosine and sine, ‘on and off’ SHM, for a value of 1, the total value of a world cycle, so we can use frequency equations (as in electromagnetism) to represent this exponential world cycle. And we superpose both the function of ‘space-form’, the sine, and of time-motion-lineal distance, the cosine, to observe a harmonic balance, as the function goes up and down but never passes beyond the value of the whole.

The complex plane is real because the cosine is related to lineal motion and the sine to perceptive height, whose action in stop mode can be seen as a negative slow-down of motion for a continuous view (S=−T), as in relativity (−ct). But the deepest level of understanding of those functions and equivalences happens when we carry the worldcycle of exist¡ence to the complex plane.

Duality on calculus: ∆-Newton v. S⇔T Leibniz

Finally, notice the extraordinary fact that the ST-cartesian graph and the complex plane coincide in the roots of unity, which essentially divide the being into its internal and outer parts. Only then can the membrain assess both realities accurately, in objective terms. But it will perceive them with a different ‘volume’ of information.

For the membrain the external world will be measured in the complex plane, with a lesser dimension of height that will make the world ‘flat’ in its perceived geometry, as the Y(i) plane will be the √ root of the X-plane.

Internally, though, the o-singularity ONLY observes the root-of-unity circle with an equivalent height and width: this ‘verbal, temporal mind’ observes the ‘spatially biased membrain’, NOT the Universe, as the membrain has already made a selection of information. In the same manner, your internal verbal-temporal thought on the 0-1 unit temporal sphere, or the quantum 0-1 particle on the biased information provided by the harmonic spherics of its electronic eye (remember, all is the same, all is homology in function even if it changes in form), will think the Universe is ‘perfectly regular’ and favors its biased dimotion of length, NOT realizing the equal importance of the flattened dimotion of height-information.

Essentially all systems have 2 brains, the spatial membrain and the temporal singularity at the center of the 0-1 temporal sphere. The spatial membrain already biases reality, and as the singularity of time only sees the spatial membrain, it will act upon it as its ‘territory’ in which to enact its 5 dimotions of existence, qualifying reality as the membrain has already done. And that is fine, because singularities are selfish, self-centered knots, or else they would be the prey of other self-centered knots. But that makes objective knowledge very difficult, as we shall see in the bias of huminds. What, then, does the humind brain observe? The distortion we know exists, considering the membrain homunculus, for which hand sensations (enzyman’s actions) and mouth (entropic feeding and social communication) occupy most of the space, while legs-locomotion, regardless of physicists’ egos, matter nothing.

Locomotion as reproduction and death. The usefulness of the derivative of maximal rate has a deep philosophical consequence, one of the many insights a proper understanding of the symmetries between existential algebra and calculus methods provides to the 5D researcher. Consider the graph, which is in fact a trinity sequence of events, as the photon particle (if the wave were to represent a light ray) dies at every complete wave, and the wave represents its entire worldcycle of existence. But in the process it also translates in space, and reproduces at the point of maximal existential momentum (Max. ST); which, if the graph were one of ST and not of its derivative of change, would be at the peak, where the photon in fact is, as the ‘head’-particle state at the top of the dimension of height-information (in static space); but if the wave represents its derivative of change, it will happen at the point in which it touches its axis, where further on the wave is ‘feeding’ on the ‘string of tachyon neutrino’ or quantum potential that guides the wave… So in that brief period between birth and death, in what amounts finally to a 0 ST change in the existential momentum of the wave, there are the points of SS-birth, TT-death, TT-entropic feeding on a lower plane, ST-reproduction, sT-locomotion and finally the cyclical perception that will happen at the maximal height in the photon state.

Those are the whys of existential algebra for the 5 Dimotions of existence of a wave of light; as even the smallest form of our light space-time Universe has all the properties of 5 Dimotional life encoded in its mathematical equations; the bridge between those equations and existential algebra being the understanding of mathematics as an experimental science of vital space-time.

TRILOGIC ON DERIVATIVES AND INTEGRALS:∆ST: THE 3 GREAT FIELDS OF CALCULUS.

Trilogic on calculus. 3 ±¡ ages, scales, and Dimotions mirrored by calculus operands.

As we are made of ∆ST elements, limited by the reach of a supœrganism, self-centered and expressing the program of the 5 Dimotions of existence in an @-mind; any systematic analysis of an organism, or of a language that mirrors it, departs from the one – the whole – then explains its Space and Time states, its evident duality in a single plane, S⇔T, then its trinity, as ∆ST, and finally its pentalogic ensemble of ¬∆@st. Variations on those themes might deliver an ¡mmense number of explanations of a subject or species. In the case of calculus, though, as in most developments of a subject, the best consideration is a ternary analysis of its ∆ST elements in the historic growing complexity natural to the evolution of a being through 3 ages.

On the other hand, as reality is entangled in ∆±1 scales, S-populations and T-ime Dimotions and ages, Calculus has, in a synchronous analysis, 3 great fields: the study of how systems change in size and scale through the growth or diminution of their ‘finitesimals’, the study of the growth of populations in space, and the study of Time dimotions; which are often based – and this is the miracle of Nature – on the same concept of a finitesimal.

Let us consider a trilogic example on how analysis’ operands represent those 5 Dimotions.

Spatial view: Analysis as a tool to extract quanta of whole social populations.

The fundamental particle of the Universe is a T.œ, a fractal point or scalar timespace superorganism, which in its simplest, commonest form has the shape of a circle with 3 canonical regions:

@-Mind-Center, measured by its radius, its axial length=motion around the ‘Territory’ of the organic system.

A membrain of angular momentum, or external clock that we measure as its circumference.

And an area of vital energy, which can be measured by the area.

So we get the value of the 3 elements of a disk; and expanding it to a 3D sphere (graph), we find a volume for the vital energy, the surface of a sphere for the membrane, and a perimeter for the wanderings of the singularity.

As it turns out, the circle's area is πR², and the circumference is 2πR, which is the derivative.

The volume of a sphere is V = 4/3 πR³, and the surface area is S = 4πR², which is again the derivative.

And inversely, the integral of the circumference is a surface and the surface integral is the volume.
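Those derivative/integral relations can be checked numerically; a minimal Python sketch (the `diff` helper and the chosen radius are our own illustrative assumptions):

```python
import math

def diff(f, x, h=1e-6):
    """Central-difference numerical derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

area   = lambda r: math.pi * r ** 2          # circle area, πR²
circ   = lambda r: 2 * math.pi * r           # circumference, 2πR
volume = lambda r: 4 / 3 * math.pi * r ** 3  # sphere volume, 4/3πR³
surf   = lambda r: 4 * math.pi * r ** 2      # sphere surface, 4πR²

r = 2.0
# the circumference is the derivative of the area...
assert abs(diff(area, r) - circ(r)) < 1e-5
# ...and the surface is the derivative of the volume
assert abs(diff(volume, r) - surf(r)) < 1e-4
```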

The example shows the main use of analysis in static space: to describe through 3 ‘∆±1 Planes’ the 3 parts of the being, which is the ultimate reason why only 2 derivatives are of practical use to ‘descend’ from the whole down two Planes, to the finitesimal quanta, beyond which an entire new ‘world’ within the quanta appears, with a different content, not suitable to be calculated within the same plane.

So analysis becomes the essential tool to understand the social dimotions of parts and its growing Planes into wholes of a higher ∆+1 scalar plane of the fifth dimension, in a correspondence between analysis and 5D Planes, motions and populations of space.

Temporal view: Analysis as a measure of a temporal motion.

Yet analysis is most often used in temporal terms, though this was not its first use (which was to calculate volumes from areas). It is in fact used to study motion, change in time, and, we shall argue, also Planes; and in that sense, as we shall repeat ad nauseam, the entangled Universe, which shows a clear correspondence between the mirror elements of 3 motions in time, 3 topologies of space and 3 Planes of size, wholes and parts that bring together the 3x3 (+2 mental) = 11 Dimensions of reality, is fully realized in the fact that analysis works to explain the 3 'ternary symmetries'.

In the example, we can consider the sphere to be the whole sum of parts, where each part is a circumference. So our planet is the sum of all its 'parallels' with center in the poles. And then the volume, as each internal sphere can, in terms of 5D metric, $ x ð = K, have the same co-invariant value, can be considered the sum of all those equal 5D-valued spheres; so again we can talk of ∫∫ ¡-1=circumference -> ∫ ¡0=sphere -> ¡+1=volume.

What about the third 'ternary symmetry', that of time-change? This again is the fundamental use analysis has today: to study the rate of change of a system. And it can be seen easily that the 3 elements of the 'T.œ' ARE measures of time-change when we study not a mere locomotion, but the 'change-rate' of 'growth' more proper of the worldcycle of existence from 'seed' (the internal minimal sphere) to emergent system:

If you describe volume, V, in terms of the radius, R, then increasing R will result in an increase in V that's proportional to the surface area. If the surface area is given by S(R), then you'll find that for a tiny change in the radius, dR: dV = S(R)dR, or dV/dR = S(R).

The increase in volume, dV, is the number of new 'cellular layers' the system grows: the number of cells that form the membrane, which is the surface area, S(R), times the thickness of the growth, dR, where each unit is a layer.

This same argument can be used to show that the volume is the integral of the surface area (just keep adding layer after layer of atoms or cells).
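The layer-after-layer argument can be sketched as a numerical sum of thin spherical shells (the grid size `n` is an arbitrary choice for the sketch):

```python
import math

R, n = 1.0, 100_000
dr = R / n
# each thin "layer" contributes its surface area 4πr² times its thickness dr
volume = sum(4 * math.pi * (i * dr) ** 2 * dr for i in range(n))
exact = 4 / 3 * math.pi * R ** 3
assert abs(volume - exact) / exact < 1e-3   # converges to 4/3πR³ as dr shrinks
```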

Finitesimals in Time vs. space

Space is symmetric in its directions, and they co-exist together. Time is not symmetric, and it is experienced as a sequential pattern of single time cycles. So time parameters are shorter in form; space is a more extended system. Of time we see only an instant; of space we integrate instants/cycles of time and sum them as frequencies which all play the same worldcycle.

Time though often is just the reproduction of a new unit of space. Thus, time cycles become populations of a spatial herd due to its reproduction of a ‘seed’ form.

Space thus is the ‘mirror reproductive symmetry’ of ‘frequencies in time’, its tail of memories, by reproduction, expansion, and radiation along the path of the singular timeline of the wave.

So in broad strokes derivative and integrals cover a wide range of 5D themes: the infinitesimal units of  time frequencies and complex herds of space populations.

Whereas given the simultaneity properties of space, integrals tend to be used to calculate space populations, and given the individual sequential structure of time frequencies, derivatives are best to calculate time motions.

Thus the key concept of 5D mathematical  analysis is the finitesimal, which was rightly defined by Leibniz as:

∆: 1/n; the minimal part of a whole.

S: While in space is an individual unit of a social population.

T: While in lineal time duration is the minimal bit of a frequency ƒ=1/t, or quanta of time.

Thus a finitesimal is a discrete minimal unit in any scale of the fifth dimension – h-planckton, cellular units, atomic units.

And by the equivalence between space-form and time-motion, S=T, as most time actions require a fractal reproduction of form, for each quanta of time we shall see the existence of a reproduction of a quanta of space…

On the other hand its inverse, the integral, 'integrates' an amount of such units of time, space or scale to obtain a simultaneous whole, a supœrganism, a 'T.œ': ∫ds, ∫dt, ∫∆-1.

Of those 3 types of derivatives and integrals, as frequency and time duration are inverse parameters currently used in all sciences, the least understood is ∫∆-1, whereas ∆-1 is taken to be the infinitesimal or minimal quanta of a whole, ∆º (cell, atom, individual in a society), and its integral, a Social 4Ðimotion that mimics the creation of wholes.

A dual derivative, TT or SS, will then extract either an entropic unit 2 scales below the form, or, as we found in the analysis of the sphere, the point – NOT any point but the center of mass or charge in a physical system, its mind singularity. Because derivatives 'extract' the first finitesimal quanta, or fractal point, from a function of exist¡ence (T.œ), often directly, as in (log x)' = 1/x, it can lead directly to the value of the mind, or 'center point' of the system – the 'finitesimal whole'; and its inverse, the integral, which adds finitesimals till reaching the whole, as in the case of a volume of populations, also illuminates the dissolution of a whole into its integrating parts.

What kind of point a derivative gives us, depends on the configuration of the whole we analyze. I.e. In a heat equation the whole lacks a center, as it is a flux of kinetic energy, so derivatives will extract any unit…

In 5D analysis, depending on whether we study 'motion', 'space' or 'scale' up or down the planes of the fifth dimension, we shall apply either an integral, commonest for spatial sums of populations, or a derivative, most often for instants of time, and double derivatives for reproductive functions; since space and time are inverse, perpendicular functions in their Min. S x Max. T and Max. S x Min. T states, but symmetric in S=T. The same goes for the 2 different arrows of entropy, a dissolution downwards and a social evolution upwards.

So the ∆ST trinity of integrals and derivatives gives a huge range of possible interpretations for the equations of mathematical physics. Infinities though don’t exist, as all has a finite membrane and a finite duration in time. Beyond the third derivative, as the scalar Universe is a ‘ternary game’, there is no significance to the mathematical operations of derivatives and integrals – a strong proof that 5D is truth as it limits reality to ternary Planes, topologies and time ages.

So a qualitative analysis is required to specify what dimotion we are ‘calculating’, with derivatives and integrals: time motions, space populations or reproductive motions.

5D ∫∆-1 pentalogic on integrals

∑s-1=S0: Integrals, on the other hand, represent the growth of a space population till it reaches a wholeness in a closed domain. So we can do 'line integrals', 'surface integrals' and 'volume integrals' in simultaneous space.

Such integrals must be positive in their results, because we are, as in the case of + vs. – numbers, calculating a 'statistical population in space'.

(to->t) ∑a = 0: Integrals though are also related to a worldcycle, as the continuous sum of steps in a sequential duration of time, which must therefore have a 0' final result, as all worldcycles, when chosen in the appropriate parameters of 'energy and information', end up returning to their origin. Such integrals, when properly written, must therefore give us a 0 value. The classic case is a sinusoidal function of a wave with positive and negative sides for the worldcycle, which ends in a 0 value when we add the surfaces below and above the curve.
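A hedged numerical illustration of that 0-sum worldcycle, using a sine wave as the text's classic case (the step count is an arbitrary choice):

```python
import math

n = 100_000
dt = 2 * math.pi / n
# midpoint Riemann sum of sin over one full cycle [0, 2π]: the areas above
# and below the axis cancel, so the 'worldcycle' integral returns to 0
full = sum(math.sin((k + 0.5) * dt) * dt for k in range(n))
assert abs(full) < 1e-9
# the positive half-cycle alone integrates to +2
dt2 = math.pi / n
half = sum(math.sin((k + 0.5) * dt2) * dt2 for k in range(n))
assert abs(half - 2) < 1e-6
```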

T=S: However, when we express those 'actions=dimotions=stœps' of the worldcycle with the 'simpler, first age' formalism of probability, whereas an individual event is a 'finitesimal' of time and the sum of all events a '1-value' distribution, if we integrate the probability to get the sum of all events (the whole entity as an event, which is by convention valued as '1'), the result of such an integral must be 'renormalized' to 1.

This is a complicated way to calculate a 0'-worldcycle, but as it has become the formalism chosen in quantum physics, it is constantly carried out to calculate the sum of events of an electron, which gives birth in space to a statistical population of all the potential positions of the electron in space (themselves taken in ∆-1 as dense photon points). As the electron in trilogic can be seen as a cloud of ∆-1 dense photons, as an ∆º whole in space, or as the sum of the sequential points it occupies in time, but humans are monologic, a lot of confusion is natural to quantum physics, the more so with the addition of further complexity with renormalization methods and probabilistic interpretations.
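A minimal sketch of that normalization step, assuming for illustration a discretized bell-shaped distribution of events (the grid and the Gaussian weight are our choices, not quantum mechanics proper):

```python
import math

dx = 0.01
xs = [i * dx - 5 for i in range(1001)]        # grid on [-5, 5]
weights = [math.exp(-x * x) for x in xs]      # unnormalized event density
total = sum(w * dx for w in weights)
probs = [w * dx / total for w in weights]     # normalized so the whole = 1
assert abs(sum(probs) - 1) < 1e-9             # the sum of all events is '1'
```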

T=S: Integrals are also necessary to add a locomotion of time, closer to the action of reproduction in space, as nature is constantly building integrated wholes by the accumulation of single time actions of reproduction that become 'clone' cells-atoms-citizens of an integrated supœrganism.

¬: Integration of any of those actions, however, needs to be 'defined', due to the uncertainty of infinities, by constraints (initial time and final time, or an a-b interval of domain in space), which act as the integral's membrain, becoming the Riemann integral or 'Cauchy' condition for it to have a solution.

As a function of entropy, integrals can also portray the growth or diminution of populations in space, with most of those inverse growth/decay functions represented by e±x or 10±x, the standard constants of growth.

Decay is maximal when a system decreases and its space is dying with no constraint, at maximal speed in a quanta of time – hence it uses the e-function of maximal growth. When a system grows socially, however, it does so more slowly, most often in decametric scales; so we find different speeds in the two time dimotions of the fifth dimension.
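The e±x vs. 10±x pairing quoted above can be illustrated with a toy sketch (the initial population and unit rates are arbitrary assumptions of ours):

```python
import math

N0 = 1000.0
decay  = lambda t: N0 * math.exp(-t)   # e-based decay, 'maximal speed'
growth = lambda t: N0 * 10 ** t        # decametric social growth, in 10x steps

assert abs(decay(1) - N0 / math.e) < 1e-9
assert growth(2) == N0 * 100           # two decametric steps: x100
# after one unit of time, e-decay has removed ~63% of the population
assert abs(1 - decay(1) / N0 - 0.632) < 0.001
```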

Recap. Integrals are overwhelmingly the measure of change in a fictitious, constructed mental space.

1st AGE. SCALAR VIEW: FINITESIMALS. UNIVERSALS≈WHOLES

Universals

Perhaps the clearest historic proof of the nature of finitesimals as the parts of wholes is the fact that the beginning of calculus was related not to the study of rates of change in continuous motion but precisely to the relationship of parts and wholes.

So Greeks studied in philosophical terms the integration=growth of a social system from micro to macrocosms, from individuals into Universals, and mathematically through ‘finitesimal’ minimal quanta or parts of the whole, through ‘series’ and exhaustion methods.

This age extended from the Greeks to Newton, who was the last of the ancients, changing the use of those exhaustion methods from spatial series of growth to temporal series of change; but he failed to represent them properly through the space=time symmetry of Y(s)=X(t) in a Cartesian frame, as Leibniz did, adding the property of 'continuity' as explained before – not the limit in which h->0, but the limit in which S≈T…

Plato maintained that exemplifying a property is a matter of imperfectly copying an entity he called a form, which itself is a perfect or pure instance of the property in question. Several things are red or beautiful, for example, in virtue of their resembling the ideal form of the Red or the Beautiful. Plato’s forms are abstract or transcendent, occupying a realm completely outside space and time. They cannot affect or be affected by any object or event in the physical universe. This is correct, though the error lies in positioning universals outside space and time. They are in fact the ultimate properties of SE-spatial ‘kinetic energy+entropy’ and TO- Temporal information, which ‘emerge’ in each new scale.

Few philosophers now believe in such a “Platonic heaven,” at least as Plato originally conceived it; the “copying” theory of exemplification is generally rejected. Nevertheless, many modern and contemporary philosophers, including Gottlob Frege, the early Bertrand Russell, Alonzo Church, and George Bealer, are properly called “Platonic” realists because they believed in universals that are abstract or transcendent and that do not depend upon the existence of their instances.

They are closer to the truth, but they should substitute the word 'emergent' for 'transcendent', in the parlance of general systems.

For that matter General Systems (5D ST) reduces the meaning of ‘transcendence’ to its first semantic meaning:

Vb: L transcendere to climb across, transcend, fr. trans- + scandere to climb.

vt : to rise above or go beyond the limits.

Indeed, Universals are found beyond the limits of its finitesimals, in the next n+1 scale.

Dimensional growth: area finitesimals as reproduction of spatial form

Finitesimals were first found in space, as the means to quantify a simultaneous area as the sum of ∆-1 discontinuous, fractal parts. Let us remember this concept, a key philosophical discussion even for the Greeks: is the Universe continuous or discontinuous, made of Universal wholes or individual parts?

This concept was the earlier idea of Leucippus and Democritus regarding the composition of physical systems; and of Anaximander, regarding the composition of life systems, with his 'homunculus' concept (we were made of smaller beings).

Anaximenes’ assumption that aer is everlastingly in motion and his analogy between the divine air that sustains the universe and the human “air,” or soul, that animates people is a clear comparison between a macrocosm and a microcosm.

It also permitted him to maintain a unity behind diversity, as well as to reinforce the view of his contemporaries that there is an overarching principle regulating all life and behavior. So here there is a first bridge that merges universals and finitesimals.

And of earlier mystics, regarding the composition of a superior God, as the subconscious collective of all its believers' minds, fused in a 'bosonic' way into the soul of the whole.

The 3 were right, as finitesimals are clone beings with properties that transcend into the Universal, the homunculus being the 'future cell'.

Universal wholes and individual finitesimals.

Because the praxis of reality was not yet 'erased by idealism', the Greeks accepted as real their exhaustion methods; but Pythagorism opened the road to idealism. So the first age of analysis had a great deal of philosophical disquisitions on the nature of wholes and parts, connecting directly with the Greek logic arguments on the nature of individuals and universals.

The historical origins of analysis can be found in attempts to calculate spatial quantities such as the length of a curved line or the area enclosed by a curve.

As we know, a curve is always part of a worldcycle, with a finite number of steps, and so the conclusions of those earlier studies can be extended to better understand the space-time worldcycle in a general way: a circle can be calculated as a polygonal number, which becomes nearly indistinguishable past the 10-20-100th 'fractal point' stœps of the social scales of number all-pervading in Nature.

This led to the exhaustion method of calculating irrational numbers, from parts into wholes.

o-1: ∆-1: 1/n finitesimal scale vs. 1-∞: ∆+1: whole scale.

So only one question of that section is worth mentioning here: how to 'consider Planes', which tend to be decametric – good! One of the few things that works right in the human mind and does not have to be adapted to the Universal mind, from d•st to ∆ûst.

Shall we study them downwards, through ‘finitesimal decimal Planes’ or upwards, through decametric, growing ones? The answer is an essential law of Absolute relativity that goes as follows:

‘The study of decametric, §+ Planes (10§≈10•10 ∆ ≈ ∆+1) is symmetric to the study of the inverse, decimal ∆>∆-1 scale’.

Or in its most reduced ‘formula’: ( ∞ = (1) = 0): (∞-1) ≈ (1-0)

Whereas ∞ is the perception of the whole 'upwards', in the domain from 1, the minimal quanta, to the relative ∞ of the ∆+1 scale; while 1 is the relative infinite of a system observed downwards, as the ∆+1 whole (1) is composed of a number of 'finitesimal parts' whose minimal quanta is +0.

It is from that concept that we accept as the best definition of an infinitesimal that of Leibniz: N (whole) = 1/N (finitesimal).

So in absolute relativity the ∆-1 world goes from 1 to 0, and the ∆+1 equivalent concept goes from 1 to ∞. And so now we can also extract from the 'infinitorum thought receptacle' a key difference between both mathematical techniques:

A conceptual analysis upwards has a defined lower point-quanta, 1, and an undefined upper ∞ limit; while a downwards analysis has an upper defined whole limit, 1, and an undefined finitesimal minimum, +0.

Finally, notice that as all ∆-Planes have relative finitesimals, +0, and relative infinities (see ∞|º to understand the limits and meaning of numbers and its Planes), essential to all theory of calculus is the study of the domain in which the system works, and of the 'holes' or singularities and membranes which are not part of the open ball-system. So functions can be defined with certain singularity points and borders; hence functions need not be defined by single formulas. This would be understood by Leibniz – who else 🙂

Unlike Newton, who made little effort to explain and justify fluxions, Leibniz, as an eminent and highly regarded philosopher, was influential in propagating the idea of finitesimals, which he described as actual numbers—that is, less than 1/n in absolute value for each positive integer n and yet not equal to 0’.

For those who insisted on infinities, Berkeley revealed those contradictions in his book 'The Analyst'. There he wrote about fluxions: “They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?”

Definition of ∆t, ∆s, finitesimals: A quantum of time and space.

Berkeley’s criticism was not fully met until the 19th century, when it was realized that, in the expression dy/dx, dx and dy need not lead an independent existence. Rather, this expression could be defined as the limit of ordinary ratios Δy/Δx.

And here is where we retake it, before the formal age of mathematics made a 'pretentiously rigorous' definition of infinitesimal limits and the logician A. Robinson showed the notion of the infinitesimal to be logically consistent, but NOT real.

As we believe mathematics must be real to be 'consistent' (Gödel's theorem), we return to the finitesimal concept: ±∆y as a 'real' increase/decrease of a quantity, tied to a variation ±∆x of either the surface of space or the duration in time of the being.

Thus finitesimals depend, for each species, on the 'quanta' of space or 'minimal cell' and the quanta of time or minimal moment, which the system can measure.

For man, for example, time actions are measured with his minimal time quanta of a second, below which it is difficult to perceive anything; a nanosecond in that regard, in the human plane of existence, is NOT worth measuring, as nothing happening in a nanosecond will be perceived as motion or change. For an atom, however, a nanosecond is a proper finitesimal to measure changes.

In space, man does not perceive sensations below certain limits, which vary for each sense: a millimeter, 100 hertz of sound, the frequency of infrared waves, and so on.

There was only at this stage a mathematical approach to the concept by Archimedes – the methods of exhaustion to calculate areas and ratios, notably the pi ratio.

The method of exhaustion… was first used by Eudoxus, as a generalization of the theory of proportions.

Eudoxus' idea was to measure arbitrary objects by defining them as combinations of multiple polygons or polyhedra. In this way, he could compute volumes and areas of many objects with the help of a few shapes, such as triangles and triangular prisms, of known dimensions. For example, by using stacks of prisms (see figure), Eudoxus was able to prove that the volume of a pyramid is one-third of the area of its base B multiplied by its height h, or in modern notation Bh/3.

Loosely speaking, the volume of the pyramid is “exhausted” by stacks of prisms as the thickness of the prisms becomes progressively smaller. More precisely, what Eudoxus proved is that any volume less than Bh/3 may be exceeded by a stack of prisms inside the pyramid, and any volume greater than Bh/3 may be undercut by a stack of prisms containing the pyramid.
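Eudoxus' prism-stacking argument can be mimicked numerically; a sketch, with base area, height and slab count chosen arbitrarily:

```python
# Eudoxus-style exhaustion: fill a pyramid with thin prisms (slabs)
B, h, n = 9.0, 6.0, 10_000   # base area, height, number of slabs
dz = h / n
vol = 0.0
for i in range(n):
    z = (i + 0.5) * dz            # mid-height of slab i
    shrink = (h - z) / h          # linear scaling of the cross-section
    vol += B * shrink ** 2 * dz   # slab volume = scaled base area x thickness
# the stack 'exhausts' the exact value Bh/3 as the slabs get thinner
assert abs(vol - B * h / 3) / (B * h / 3) < 1e-6
```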

The greatest exponent of the method of exhaustion was Archimedes (c. 285–212/211 BC). Among his discoveries using exhaustion were the area of a parabolic segment, the volume of a paraboloid, the tangent to a spiral, and a proof that the volume of a sphere is two-thirds the volume of the circumscribing cylinder. His calculation of the area of the parabolic segment (see figure) involved the application of infinite series to geometry. In this case, the infinite geometric series:

1 + 1/4 + 1/16 +1/64 +… = 4/3

is obtained by successively adding a triangle with unit area, then triangles that total 1/4 unit area, then triangles of 1/16, and so forth, until the area is exhausted. Archimedes avoided actual contact with infinity, however, by showing that the series obtained by stopping after a finite number of terms could be made to exceed any number less than 4/3. In modern terms, 4/3 is the limit of the partial sums.
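The partial sums described above can be checked directly:

```python
# Archimedes' quadrature of the parabola as partial sums of 1 + 1/4 + 1/16 + ...
partial, term, sums = 0.0, 1.0, []
for _ in range(30):
    partial += term
    sums.append(partial)
    term /= 4        # each batch of triangles totals 1/4 of the previous one
assert sums[4] < 4 / 3                 # early partial sums stay below the limit
assert abs(sums[-1] - 4 / 3) < 1e-12   # yet approach 4/3 as closely as desired
```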

His paper 'Measurement of the Circle' is a fragment of a longer work in which π (pi), the ratio of the circumference to the diameter of a circle, is shown to lie between the limits of 3 10/71 and 3 1/7.

Archimedes’ approach to determining π consists of inscribing and circumscribing regular polygons with a large number of sides. It was followed by everyone until the development of infinite series expansions in India during the 15th century and in Europe during the 17th century. This work also contains accurate approximations (expressed as ratios of integers) to the square roots of 3 and several large numbers.
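Archimedes' bracketing of pi can be reproduced with a modern trigonometric shortcut for the polygon perimeters (admittedly circular as a proof, since it uses the library's own π, but it verifies the quoted bounds):

```python
import math

# perimeter/diameter ratio of regular n-gons inscribed in and
# circumscribed about a circle
inscribed     = lambda n: n * math.sin(math.pi / n)
circumscribed = lambda n: n * math.tan(math.pi / n)

assert abs(inscribed(6) - 3) < 1e-12   # the hexagon gives exactly 3
# Archimedes' 96-gon brackets pi between 3 10/71 and 3 1/7
assert 3 + 10 / 71 < inscribed(96) < math.pi < circumscribed(96) < 3 + 1 / 7
```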

It is then interesting to consider Archimedes' main role in the perception of problems today forgotten after the absurdly dogmatic Germanic 'foundation under the axiomatic method' of analysis.

2 problems troubled him, and indeed they were very important problems: the comparison of different pis (is the pi of the square, with 2 dimensions, the same as the pi of the perimeter?) and its proper calculus by approximation.

Approximations in geometry.

The unit of space is the area and the unit of time the cycle, and so both are bidimensional; hence the transformation of one into the other is not always perfect, as there is no perfect 'quadrature'. But as this happens constantly, a part is lost as 'entropy' in all time-space transformations, or as 'a bit of a circle', that is, a motion or particle, as when in particle reactions there are always 'forces' escaping (neutrinos, gamma rays). So this means that pi is not exact, and neither is √2, the two key constants for the squaring… Yet that doesn't mean the transformation does not happen – it happens all the time, and it was the way in which the game of analysis started with Archimedes:

The transformation of a circular region into an approximately rectangular region. In the graph, ∆ST theory eliminates all infinitesimal problems, as infinities are limited and so are the 0s, which must be regarded as the +0 minimal quanta of the domain (the need for further infinities is an error of the mind, the dogmatic truth and the single space-time 'continuum'). In that regard pi is not ∞, but its calculus becomes 'chaotic' beyond a limit of ±40 decimals, which is really all that the human mind can conceive in its largest finitesimal analysis.

It is then that the 'Greek Age' becomes, just as in the Archimedean calculus of pi by exhaustion, the same concept, just with less detail.

A simple geometric argument shows that both processes are similar with different degrees of approximation:

The idea is to slice the circle like a pie, into a large number of equal pieces, and to reassemble the pieces to form an approximate rectangle (see figure). Then the area of the “rectangle” is closely approximated by its height, which equals the circle’s radius, multiplied by the length of one set of curved sides—which together form one-half of the circle’s circumference. As the slices get very thin, the error in the approximation becomes very small.
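The pie-slice reassembly argument, sketched numerically (the slice count is arbitrary):

```python
import math

r, n = 1.0, 10_000
# n thin pie slices, each a near-triangle of base r*sin(2π/n) and height ~ r
slices_area = n * 0.5 * r * r * math.sin(2 * math.pi / n)
assert abs(slices_area - math.pi * r ** 2) < 1e-6   # reassembles to πr²
```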

The simple graph above shows, from the point of view of an S=T symmetry, if we take the circle as an angular motion, the ∆ST Trinity of change that can always happen in scale, space or time. In scale, each minimal 'radius' is a quanta of change. In time, the circle becomes an angular motion, so each triangular section becomes a rate of change per unit of time, related to the angular speed of the circle. Yet the circle as a wave also gives us the lineal motion that keeps reproducing the wave, quanta after quanta of change.

∆ST Trinity then becomes once and again the leitmotif of change.

The duality of free lines/planes v. closed order.

It is interesting to notice that in general, when we grow in scale, we change from freedom to order or vice versa – that is the fundamental | vs. O, past vs. future, part vs. whole, form vs. motion duality of ∆@st changes. So when we integrate open lineal triangles, with their vertex as the @-forward mind≈future path, into the circle, each becomes an internal, locked, social, circular mind – a closed point of a larger singularity in a cyclical form.

The approximation of square space to cyclical points. Ratios and ir(ratio)nal numbers, its finitesimal limits.

A theme that will soon be cast in terms of number theory was also studied by Archimedes with exhaustion methods.

Before the invention of the new methods of calculation, it had been possible to find the area only of polygons, of the circle, of a sector or a segment of the circle, and of two or three other figures. In addition, Archimedes had already invented a way to calculate the area of curves by exhaustion, leaving a bounded error according to the minimal step he took; which raises the question: does a circle have a finitesimal minimum step? Are then pi and all other S>T constant transformations and ir(ratio)nal numbers/ratios limited by a finitesimal error?

The answer is yes! Normally a decametric limit defines the valid value of an ir(ratio)nal number, which is not a number in the strict sense (a social number) but a ratio of an S/T action/function. The examples of the two fundamental ir(ratio)nals will suffice:

– pi is really the ratio of 3 diameters that form a closed curve, whose value depends on the lineal ‘step sizes’.

So pi has a minimal value of 3, which is the hexagon with its 6 steps of 1/2 value (triangulation in 6 immediately gives the result: as each triangle side is the radius, the 6 sides amount to 6 x 1/2 = 3 diameters); which happens to be the value of pi in extreme gravitational fields in relativity – which brings another insight: black holes decompose the circle into ultimate lineal flows of pure 'dark energy' shot through the axis, converting the curvature of a light circle on the event horizon into a 6-step pi hexagon. But this is well beyond the scope of this intro.

So what is the ‘decimal limit’ of pi, before it breaks into meaningless (non-effective) decimal Planes, with little influence on the whole?

While this is hypothetical, I would say, for different reasons explained in the article on number theory, that as is quite often the case it responds to the general ∆ ≈ S ≈ T ternary symmetries, so common in the perfect Universe.

So pi responds to the symmetry with its spatial minimum, the 6 x 1/2 = 3 hexagonal steps, which means it breaks at the 6th ∆-scaling decimal: 3.14159… So 3.1416, which incidentally is basically what everybody uses, is the 'real value' of pi; why it is that value is studied elsewhere (deducing from it one of the most beautiful simple results of gst-mathematics: the value of dark energy in any system of the Universe, as the part not perceived through the apertures of a pi cycle: (π-3)/π apertures leave ≈96% of 'darkness' which the singularity of a pi system cannot see, as its apertures are only π-3 ≈ 0.14).
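A quick arithmetic check of the aperture figure quoted above, reading the ratio as (π-3)/π:

```python
import math

apertures = math.pi - 3            # the 'open' part of the pi cycle, ~ 0.1416
dark = 1 - apertures / math.pi     # the fraction the singularity cannot see
assert abs(apertures - 0.1416) < 1e-4
assert abs(dark - 0.955) < 0.001   # ~ 96% of 'darkness', as quoted
```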

Discontinuity and limits of mental space and physical spaces.

We get now to the heart of the matter, which is the paradox between continuous mental spaces and discontinuous, fractal spaces, between infinitesimals and infinities vs. finitesimals and relative infinities (∝), between the axiomatic method and the experimental method, which keeps surfacing in all these 5D vs. 4D papers. The space-time continuum is not such when we 'take the accordion' of the fifth dimension and enlarge the whole Universe into multiple planes of space-time, which are connected through the 'different geometries' of the convex, hyperbolic regions between planes.

The general laws of 5D outlined in other papers, which we shall post at Academia.edu some time in the 'future', use the formalism of existential algebra to lay down all those laws departing from 5D metric.

In what refers to calculus, it can be expressed in terms of the 'praxis' of mathematical physics, which systematically uses the differential equation, vs. the theory of ideal mathematicians, who prefer to argue on the 'passing to the limit', which since Cauchy put it in nice 'bullshit=pedantic' talk seems to be proved.

Many important truths of the fractal Universe are deduced precisely by denying the pedantic definitions, postulates and axioms that make a right into a wrong.

Now, the other constant, e, which is the ratio of decay ACTIONS, or death processes (ST<<S), is a longer, two-'Planes'-down process of self-destruction of a system, unlike the single-scaling pi process, S>T. So it breaks at 10 decimals: 2.718281828…459045.

Indeed. Now, why 5 and not ten, if the Planes are 10¹⁰? Because 10 Planes are, in terms of space-time actions, the 'whole' dual game of the two directions of time, up and down, which happens only in reproductive actions. And this connects with the S>T<S rhythms of motion, go/stop/go, back and forth between two arrows, which happen both in single st-planes and ∆± motions.

The proof? Very simple. The experimental truth tells us that the Universe is a game of 9-11 planes of exist¡ence, and if we calculate e as (1 + 1/10¹⁰)^(10¹⁰) we get the 'real' e, which must be a number that is NOT irrational, as that is the experimental e-number for the overwhelming quantity of systems of Nature made of 10¹⁰-10¹¹ parts. Alas, we obtain 2.718281828…323131…, a rational series whose profound meaning is that of the fastest progression of growth or decay of a finitesimal seed into a perfect whole (:
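The quoted digits can be reproduced; a sketch evaluating (1 + 1/n)^n at n = 10¹⁰ via `log1p` to avoid floating-point cancellation:

```python
import math

n = 10 ** 10
# evaluate (1 + 1/n)**n via log1p: the naive expression loses precision
# because 1 + 1e-10 cannot be stored exactly in a double
val = math.exp(n * math.log1p(1.0 / n))
# first-order expansion: (1 + 1/n)**n ~ e * (1 - 1/(2n))
approx = math.e * (1 - 1 / (2 * n))
assert abs(val - approx) < 1e-12
assert str(val).startswith("2.718281828")   # the text's quoted leading digits
assert abs(val - math.e) > 1e-10            # yet measurably below e itself
```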

Does this mean we cannot find ‘larger systems’? Yes, if you got any of the fundamental concepts of the 5D scalar Universe ‘running around your brain’: reality is ∞ in the field of pure TT-entropic time flows and Planes, but for the perceiver, and for any language of perception that makes sense, beyond the 10¹¹ perfect-form unit of a larger ∆+1 case, perfection breaks down, systems malfunction, monsters appear and e takes its irrational form.

The reader is left with a funny exercise, for which he should receive the Fields Medal of mathematics (just joking: those who rebel against the axiomatic method and its Cantorian Paradises shall not enter the kingdom of nitrolife gaseous heads bubbling egocy with go(l)d…). But PI DOES have a limit. And this means the Universe has not infinite Planes of perfect order; it does NOT have a God that can see through all the Planes, as a mind that orders its infinity in space, scale and time…

Is there then a limit to existential planes? The ‘meaningless’ breaking down of e, the ‘number of entropic functions’, seems to signal this. But it would be an error to read the limits of e-regularity as more than the LIMIT of entropic death: death happens, and when a system breaks down its natural 10^±10 Planes into its finitesimal 1/n parts, it stops, as the system is dead.

The limit that matters is the limit of the pi-circle as an Archimedean spiral that lets information enter through its ±never closing spiral to perceive or feed on the external micro-bits and bites of the Universe. And as we can find neither a limit nor a regularity, we could conclude that the most important dimotions, angular perception and the creation of inner mirrors of the outer world by a pi-spiral, have no limit.

What about locomotion? Can we exhaust the limit of a series of steps? Again, the answer is more evidently no, even though the Greeks thought so, in the so called…

The problem of equivalences confused as identities between lines and areas.

It is absurd to talk about continuity of a real number, pi, e or √2, beyond the 10th decimal. This is easily proved, because those ratios are normally obtained by limits in which certain finitesimal terms are discarded, by postulating the falsity that there are infinitely small parts, so that x/∆ can be thrown out when ∆->∞. But since x/∆, the finitesimal, has a limit, the pretended exactitude does not happen.

This in turn leads to questions about the meaning of quantities that become infinitely large or infinitely small: concepts riddled with logical pitfalls in a simplified world of a single space-time continuum, where on top humans LOVE to take ‘identities’ of the mind for absolute identities in the larger information of the detailed Universe, which they never are, as d@st ≈ ∆ûst (the mind’s world view is merely similar to the Universal view).

In our example, a circle of radius r has circumference 2πr and area πr², where π is the famous constant 3.14159…. Establishing these two properties is not entirely straightforward, although an adequate approach was developed by the geometers of ancient Greece, especially Eudoxus and Archimedes. It is harder than one might expect to show that the circumference of a circle is proportional to its radius and that its area is proportional to the square of its radius. The really difficult problem, though, is to show that the constant of proportionality for the circumference is precisely twice the constant of proportionality for the area: that is, to show that the constant now called π really is the same in both formulas.

This boils down to proving a theorem (first proved by Archimedes) that does not mention π explicitly at all: the area of a circle is the same as that of a rectangle, one of whose sides is equal to the circle’s radius and the other to half the circle’s circumference.
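Archimedes’ rectangle equivalence, and the finite error of any triangulation, can be probed numerically; a minimal sketch (the radius and the slice counts are arbitrary choices of this example):

```python
import math

def disc_area_by_triangles(r: float, n: int) -> float:
    """Approximate a disc of radius r by n thin triangles meeting at the
    centre: each has base 2*r*sin(pi/n) and height r*cos(pi/n), so the
    total tends to (1/2) * r * (2*pi*r) = pi*r**2, Archimedes' rectangle."""
    base = 2 * r * math.sin(math.pi / n)   # chord subtending angle 2*pi/n
    height = r * math.cos(math.pi / n)
    return n * 0.5 * base * height

r = 1.0
for n in (6, 96, 10_000):
    print(n, disc_area_by_triangles(r, n))   # the 96-gon already gives ≈ 3.1393

# for any finite n a finite error remains, the 'finitesimal quanta' of the text
print(abs(disc_area_by_triangles(r, 10_000) - math.pi))
```

The approximation converges from below but never closes the gap for any finite number of triangles, which is the point the discussion below makes about the ‘limit of detail’.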

However in ÐST theory, those 2 pis are not the same, because they belong to two discontinuous ‘different species’ of topology: the St area and the ST-membrane.

An easy, immediate proof: if we make them identical, then we can find a circle where 2πr = πr², so 2r = r², hence r = 2; and we get to the conclusion that the thin membrane of an open ball is identical in area to the internal ST volume of the being, which is ‘conceptually absurd’ (the area intuitively has more surface, as it is bidimensional, while the line is infinitely thin).
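The r = 2 crossover of that equality is elementary to verify numerically; a minimal sketch:

```python
import math

def perimeter(r: float) -> float:
    return 2 * math.pi * r       # the 1D 'membrane'

def area(r: float) -> float:
    return math.pi * r ** 2      # the 2D inner surface

# r = 2 is the only non-zero radius where perimeter and area coincide numerically
print(perimeter(2.0), area(2.0))    # both equal 4*pi
print(perimeter(1.0) > area(1.0))   # below r=2 the membrane value dominates
print(perimeter(3.0) < area(3.0))   # above r=2 the inner area dominates
```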

What’s the problem here? That we cannot, in strict terms, treat ‘lines as if they were squares’, unless we deal always with less dogmatic concepts of relative similarity. They are different realities. In the first equivalence, we compare a line radius with a circle perimeter, in an S>t structure.

In the second, as we compare πr², a cyclical area, with the square of the radius, we are also on good footing. But when we do the S>ST comparison, we are in a dynamic transformation of ∆-Planes, from ∆, the world of lines, to ∆+1, the world of squares (as a polynomial square is obviously a growth from a complete ∆-entity, the line, into an ∆²=∆+1 one, the area). It is then that we can do some ‘dynamic equivalence’ analysis, and the equivalence has meaning, stating that for a ‘perfect cycle’ of relative radius 2, the membrane’s absorption of bits and bites of energy and information can fully fill the internal area, making equivalent a ‘line and a surface’ integral. And we can finally state that all ‘dynamic vortices of force’ ruled by Newtonian/Coulombian equations on the ∆-1 and ∆+1 Planes are relative perfect systems of radius 2.

And here we find the ‘whys’ of the dualities of Maxwell’s laws, which can be written both ways:

Or in simpler terms: when doing those equalities we are talking of properties that become dynamic and transcend the static mind of mathematics into the reality of physical systems.

Finally, as we defined real numbers as non-existent (see |∞ posts), but approximations to a ±0 infinitesimal, in the measure of a square the uncertainty grows further: π² thus has the squared ‘error’ of pi.

All this of course is important to conceptualize reality, since in praxis we know we always work in an uncertain game with errors and deaths. So analysis does work, and all this ‘search for dogmatic proofs’ is just ‘absolute bull\$hit’ for absolute ego-centered scholar huminds.

But on the other hand the graph also shows that both pis, the one of the ‘surface’ and the one of the ‘perimeter’, are not equal, as there will be a limit on the number of ‘bidimensional triangles’ we can cut.

As a triangle is indeed the bidimensional line, that is: |-\$t (one dimension); ∆-\$t (two dimensions).

So it is not the line.

So as the approximation will always find a finitesimal quanta, a limit of detail, in proving the theorem, this error, however tiny, remains an error. This minimal quanta thus exists in all relative ∆>∆+1 measures between Planes as the minimal uncertainty of all mathematical calculus, and justifies in physics (∆-1 quantum theory) that there is always an uncertainty of a minimal quanta, which is precisely h/2π, the minimal quanta of our light space-time.

Only in the absolutist imagination of dogmatic axiomatic mathematicians did it make sense to talk of the slices being infinitesimally thin, so that the error would disappear altogether, or at least become infinitesimal.

As it happens, quantum theory proved experimentally that this is wrong. And as we stress (with Lobachevski, Gödel, Einstein), mathematics must be confronted with reality to realise what is ‘real’ in maths.

∆: THE NEWTONIAN WAY. FINITESIMAL SERIES.

In 5D the concept of series is an important one, as it establishes for each stœp of the series a quantity of growth that converges towards a whole, valued by a finite number; and then the series is a meaningful mirror of an ∑∆-1=∆º process of Nature. When the series diverges, however, it is of little interest, as it is an exponential growth that at best can signify an entropic process. Series thus are predecessors of calculus, where each term represents a finitesimal of change, and the whole sum of the sequence the ‘whole worldcycle’ in time, or ‘volume in space’.

The limits of value for series were also instrumental to understand the paradoxes of ideal mathematics (Achilles’ paradox), showing that indeed change requires finitesimal 0’s, as the limit x->0 is NEVER absolute zero, or else Achilles would never reach the tortoise. Only human egocy, in search of mentally simplified absolute truths, relatively false, explains 2,300 years of disquisitions on the obvious solution of the Achilles paradox, which will introduce the theme. So the main comments on mathematical series concern the concepts of relative ‘finitesimals’ and relative immensities (not infinities) proper of 5D math.

Achilles’ Paradox. Birth of the concepts of series and limits.

In mathematics, a series is, roughly speaking, the operation of adding many quantities, one after the other, to a given starting quantity. In 5D each quantity is a new finitesimal of the series; hence 3 kinds of series can be distinguished by dimotion:

-Divergent, growing series of ideal social evolution and reproduction, up to a limit of carrying capacity, which in reality makes the ideal series ‘flatten’ its growth.

– Equal series of present states, in which each step equals the previous one, which reduce to simple sums.

– Convergent series that diminish in size till a finitesimal is reached, which should be perceived inversely, from the finitesimal to the whole.

The study of series is thus a major part of calculus and its generalization, mathematical analysis, since it is the ‘discrete manner’ to calculate, and one might argue the more real one. The Greeks started its study in philosophy and rightly solved it (Aristotle), deducing that absolute 0 and infinity do not exist. Modern egocy dismantled those findings for the so-called ‘rigorous proofs of the axiomatic mental method’, whose aim is to convince ego-centered men that the ‘simplification of mind spaces’, which eliminates the dark holes between points and expands limits to infinities and absolute zeros, is ‘reality’, not the mental selection of it.

The paradox of Achilles: in a discontinuous Universe of fractal parts, Achilles should never reach the tortoise. But if motion is reproduction of form, the faster system merely ‘reproduces’ its information faster in adjacent regions of space, and motion becomes ‘rational’, proving further the reproductive nature of reality: even locomotion IS reproduction.

For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical by mathematicians and philosophers.

This paradox was resolved using the concept of a limit during the 19th century.

Zeno’s paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums:

Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on.

Zeno concluded that Achilles could never reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series.

The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch the tortoise.
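The classic resolution can be checked numerically by summing Zeno’s sub-races as a geometric series; a minimal sketch (the speeds and head start are arbitrary assumptions of this example):

```python
# Zeno's sub-races as a geometric series; speeds and head start are
# arbitrary choices for the illustration.
v_achilles, v_tortoise, head_start = 10.0, 1.0, 100.0

ratio = v_tortoise / v_achilles      # each sub-race takes 1/10 the previous time
gap, total = head_start, 0.0
partial_sums = []
for step in range(30):               # 30 sub-races, not 'infinite' ones
    dt = gap / v_achilles            # time to reach the tortoise's last position
    total += dt
    gap *= ratio                     # the tortoise's new, smaller lead
    partial_sums.append(total)

# closed form of the geometric sum: head_start / (v_a - v_t) = 100/9 seconds
closed_form = head_start / (v_achilles - v_tortoise)
print(partial_sums[-1], closed_form)
```

A few dozen finite terms already reproduce the closed-form catch-up time to machine precision, which is the sense in which the ‘infinite’ summation carries a finite value.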

The physical explanation of locomotion, though, defines it as a reproduction of form in the lower scale, so it establishes a finitesimal stœp, equivalent to the minimal ∆-¡ quanta of the wave-particle dual motion states:

Locomotion is a series of stœps that imprint a lower plane with the information of the upper plane: a quantum motion in wave state, and a particle, stop state of reproduction of form (complementarity principle, wave-particle).

In modern terminology, any (ordered) infinite sequence (a₁, a₂, a₃, …) of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the aᵢ one after the other.

To emphasize that there are an infinite number of terms, a series may be called an infinite series. Such a series is represented (or denoted) by an expression like a₁ + a₂ + a₃ + ⋯ or, using the summation sign:

The ∞ sequence of additions implied by a series cannot be effectively carried on in a finite amount of time.

However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as n tends to infinity (if the limit exists) of the finite sums of the n first terms of the series, which are called the nth partial sums of the series. That is:

What this means in 5D, though, is slightly different: because an infinite number of time-steps would make it impossible to do any calculus, all limits must have, in ‘reality’ beyond the idealized mirror of mathematics, a limit of steps and a limit of size of those steps. Which is indeed what happens in reality.

The turtle has a time-cycle and a size of steps, both measurable. And when explaining the reproduction of motion, we shall see that the limit is the reproduction, on the lowest plane of light and particle forces, of the entire form of the being in discontinuous adjacent spaces.

In other words, the word ‘limit’ in the formulae should not be infinite, but a ‘finite infinite’, for which we shall use a different symbol: ∝.

Relative infinities and finitesimals

The simplest why of the fractal, scalar structure of the Universe, from the perspective of the mind: as a linguistic mirror image of reality in a smaller space, minds ‘create’ fractal, diminishing, infinite Planes.

The new symbol ∝ for a ‘relative infinity’, and its inverse 1/∝, the ‘finitesimal’, become then essential to 5D Analysis, getting rid of all the infinite paradoxes from Zeno’s to Cantor’s, and further showing the idealized mirror-image nature of mathematics: a mirror recedes apparently into infinity, but at a certain point it ceases to be observable and hence it does NOT exist anymore.

The meaning of series in real existences then becomes clear, as they are another way to describe in a discontinuous manner what derivatives on the continuous plane show (remember the duality of the discrete number view vs. the continuous geometric view): a travel up and down the Planes of the fifth dimension.

Rates of change. The stop and go motion: stœps.

The discrete, geometric, spatial, static numerical analysis of calculus is the power series, which can be taken as discrete stœps (stops + steps) in a motion down the fifth dimension from the whole to the 1/n part, whereby we count the static form (as in a movie we see only the static frame), NOT the step of motion.

This was then the work from Archimedes and the earlier Greeks to Newton, who can in that sense be considered the last of the ancients.

While, as always S=T, that is, there is a symmetry between discrete numbers and continuous motions, Leibniz, with his geometric interpretation and far more profound understanding of finitesimals, which he rightly defined as 1/n, represents the first step into the future of the discipline, its renovator and deep understander. Newton, who can be considered merely an automaton mathematician, a specialized brain, as most modern scientists are (he is indeed the father of the wrong view of science), understood nothing of it.

Indeed, Leibniz, the closest predecessor of this blog IS the genius, Newton the talent.

Finitesimal changes are related to the fundamental beat of the Universe, the stop (form, space, perception), go (motion, time) beat, which we shall call a stœp: the discrete way of motion of T.œs through space, which often, as in movies, we perceive in continuous mode, eliminating the stop element:

∆S(top)->∆t->∆S->∆(s)t(ep).

Moreover most of those stœps will have, either in a travel through 5D or through a single ST, a unit of ‘expenditure of vital energy’, transformed into the length-motion of the lower scale in which the imprinting of motion as reproduction of form happens (studied in 2D locomotion). So each stœp becomes an ∆-4 unit of locomotion.

Thus if we consider a relative constant or function of exist¡ence, ∆-1:œ, as a finitesimal of its larger whole, ∆Œ, we obtain 2 simple functions:

œ=∆s/∆t and œ=∆t/∆s, as the mathematical measures of a ‘time stœp’ or locomotion and a ‘volume-density stœp’ or finitesimal quanta.

We shall call the first form a spatial finitesimal, or step in space: a quanta of constant speed that moves and reproduces the being in space.

And if we again change this quanta with a second ‘derivative’, we get a quanta of its constant acceleration.

And we shall call the second function a time finitesimal: a change in the density of information or cyclical speed of the being, as a second change in relation to its position in space.
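Those two ‘stœp’ quotients and the second ‘derivative’ can be sketched with discrete differences; a minimal example, assuming a toy trajectory s(t) = t² (an assumption of this sketch, not taken from the text):

```python
# Discrete 'stœps': first differences give a speed quanta ∆s/∆t,
# second differences an acceleration quanta. s(t) = t**2 is a toy trajectory.
dt = 1.0
s = [float(t * t) for t in range(8)]    # positions at t = 0..7

# first-difference quanta: the discrete analogue of ds/dt
speed = [(s[i + 1] - s[i]) / dt for i in range(len(s) - 1)]

# second-difference quanta: the discrete analogue of the second derivative
accel = [(speed[i + 1] - speed[i]) / dt for i in range(len(speed) - 1)]

print(speed)  # [1.0, 3.0, 5.0, ...]: the odd numbers
print(accel)  # [2.0, 2.0, ...]: a constant 'acceleration' quanta
```

Differencing once turns the quadratic positions into a linear sequence, and differencing again into a constant, which is the discrete face of the two derivatives discussed above.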

Classic concepts of mathematics applied to 5D in series. Immense Geometric series

This said, some clarifications are needed in classic series theory, mostly related to the fact that 0 is not infinitesimal but 0’ and ∝ is limited. Let us denote by Sn the sum of the first n terms of the series; we will call it the nth partial sum. As a result we obtain a sequence of numbers:

and we may speak of a variable quantity Sn, where n = 1,2, ···.

The series is said to be convergent if, as n → ∝, the variable Sn approaches a definite finite limit. So instead of infinity n is an immense number.

This limit is called the sum of the series, and in this case we write: lim n→∝ Sn = Sw

Where Sw is the ‘population or worldcycle value of the series’, its total in space or time. It follows that the series of interest are those converging to 0’ sums, as they will be worldcycles in time, or to finite values, as they will represent the carrying capacity of the whole as a population in space.

But if, as n → ∝, the limit of Sn does not exist, then the series is said to be divergent, and in this case there is no sense in speaking of its sum. The series is an inflationary case of the mathematical mirror.

But thanx God things are not so simple, because as we have seen there are two digital mirrors of worldcycles of existence, the 0’-1 unit circle (palingenetic worldcycle) and the 1-∝ scale, which differ in the ‘certainty of one of its terms’: in the 0’-1 circle, the whole, 1, is certain and the finitesimal 0’ uncertain; in the 1-∝ scale, the 1-finitesimal is certain and the relative ∝ uncertain. As both are mirrors of each other, we can consider the certainty of one of the two limits to calculate how far n in the Sn series of the other limit reaches (remember in 5D n does not tend to infinity). So we can make useful some infinite series by calculating its ¬entropic n->∝ value.

While we can discover that infinite series in the 0’-1 sphere are not.

As a simple example (we shall always use simple examples in all our texts and stiences, as we want to educate the ‘pro’ in a philosophy of stience common to all planes of space-time, for him or for future 5D researchers, if ever there is one besides this writer to complete the work:), let us consider the series:

whose terms form a geometric progression with common ratio x. The sum of the first n terms is equal to: Sn = (1 − xⁿ)/(1 − x). If | x | < 1 this sum has a limit: 1/(1 − x) = 1 + x + x² + ⋯

If | x | > 1, then obviously the limit is ∝; this has no value in classic mathematics, as the series diverges, but it does in 5D, where it will be a number, normally of the trinity->decametric scale.

A different situation holds for x = 1, as the series then becomes a definition of the natural numbers, such that Sn gives us the value of the nth natural number, and so it expresses how natural numbers are born in sequential time.

Finally, if x = –1, the terms become 1, -1, 1, -1, which are inverse values of a dimotion; its partial sums, taking the values +1 and 0 alternately, represent therefore a worldcycle of existence in repetitive pairs (1,0).

The example illustrates our case for 5D 0’, ∝ realist values for the ‘∆-1’ and ∆+1 limits of a T.œ domain: we obtain more information in such a case, as all the cases of the series DO have a meaning, while in classic mathematics only for |x| < 1 is the series meaningful; all the other values are divergent, Sn->∞.
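The four cases of x just discussed can be tabulated directly from the partial sums; a minimal sketch (the sampled x values and term counts are choices of this example):

```python
def partial_sums(x: float, n_terms: int) -> list:
    """Partial sums S_n of the geometric series 1 + x + x**2 + ..."""
    sums, s = [], 0.0
    for k in range(n_terms):
        s += x ** k
        sums.append(s)
    return sums

print(partial_sums(0.5, 10))   # climbs toward 1/(1 - 0.5) = 2: convergent
print(partial_sums(1.0, 5))    # 1, 2, 3, 4, 5: the natural numbers in sequence
print(partial_sums(-1.0, 6))   # 1, 0, 1, 0, 1, 0: the oscillating 'worldcycle'
print(partial_sums(2.0, 6))    # 1, 3, 7, 15, 31, 63: divergent growth
```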

These differences can be breached with the next theoretical axiom of classic series:

To each series there corresponds a definite sequence of values of its partial sums S1, S2, S3, ···, such that the convergence of the series depends on the fact that the sums approach a limit; but in 5D series it also depends on the inverse: that there is a limit to the number of sums. That is, the concept of limit is NOT only applied to the whole Sn(x) but to the parts N(S); which is the pentalogic justification, if we were to develop here the more advanced ‘concepts’ of multiple time logic (that is, there are several arrows of time, from ∆-1 to ∆1, from SS↔TT, St↔Ts and S↔T); so for everything it is a worthy exercise to study the inverse: for A->B, B->A.

It is then possible to define, conversely, an arbitrary sequence of numbers S1, S2, S3, ···, to which corresponds a series whose partial sums will be the numbers of the sequence.

Thus the theory of variables ranging over a sequence may be reduced to the theory of the corresponding series, and conversely. Yet each of these theories has independent significance in 5D. The previous series is relevant because it signifies the commonest process of ‘erasing’ of previous terms in a time sequence; hence in reality the series tends to matter not for the value of the sum, but for the steps of time, as ‘previous generations’ die away; and this indeed is the case of the most famous series of them all, the Fibonacci series, which best mimics processes of reproduction in time, and of similar more complex concepts such as the ‘log curve’.

It is then that in 5D series we can prove the ‘natural tendency of all worldcycles’ towards zero: if the series converges, then its general term approaches zero with increasing n, since an = Sn − Sn-1 -> S − S = 0.

Moreover, the divergence=uselessness of a geometric progression with common ratio x > 1 follows immediately from the fact that its general term does not approach zero. So we might say that all memoriless series represent a worldcycle of existence, which approaches a zero sum as more ‘time quanta’ happen.

Another similar criterion to find out if a series is useful can be obtained not from the simple ‘memorial time sum or memoriless subtraction’ of the previous methods, but through the next level of dimotion operands, ×, ÷.

It is the so-called D’Alembert method: let us suppose that, as n approaches immensity, the ratio Un+1/Un has a limit q. Then for q < 1 the sequence will certainly converge, while for q > 1 it will diverge. But for q = 1 the question of its convergence remains open.
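D’Alembert’s criterion can be probed numerically by watching the ratio q approach its limit; a hedged sketch (the three sample series are assumptions of this example, not taken from the text):

```python
import math

def q_at(term, n: int) -> float:
    """The D'Alembert ratio u_{n+1} / u_n evaluated at a given n."""
    return term(n + 1) / term(n)

# q -> 0 < 1: the factorial series 1/n! converges
print([q_at(lambda n: 1 / math.factorial(n), n) for n in (5, 20, 80)])

# q = 2 > 1: geometric growth 2**n diverges
print([q_at(lambda n: 2.0 ** n, n) for n in (5, 20, 80)])

# q -> 1: the harmonic series 1/n leaves the test open (it in fact diverges)
print([q_at(lambda n: 1.0 / n, n) for n in (5, 20, 80)])
```

The first list shrinks toward 0, the second sits at 2, and the third creeps up to 1 from below, the inconclusive q = 1 case named above.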

Thus the useful series are those that converge, either in their sum as a whole in space towards a number, or in the difference between terms towards a 0’ sum in time, and often have a reflection in Nature. We already mentioned the Fibonacci series; we can consider another finitesimal series that converges, the example of:

Geometric series.

Graphical illustration of the points of view of an ∝-finite geometric series. Before understanding calculus, mathematicians were concerned with ‘relative’ infinitesimal series.

Similar paradoxes occur in the manipulation of infinite series, such as: 1/2 + 1/4 + 1/8 + ⋯

This particular series has its value precisely at 1, the whole, which is the conceptual meaning of Immensity: a world with a limiting membrain. But the way to define it is actually the inverse of how the power series is written; that is, to consider the 0’ undefined finitesimal where the series starts, and then, as happens in any process of reproduction, consider it a 2x series that gives birth to the 1=relative whole, after the finitesimal 0’=1 becomes that whole, illustrating the relativity of the concepts of finitesimal and infinite. It is thus not a constant partition of a system, but rather a constant growth.

To see why this should be so, consider the partial sums formed by stopping after a finite number of terms. The more terms, the closer the partial sum is to 1. It can be made as close to 1 as desired by including enough terms. Yet once we arrive at the minimal quanta of the physical reality we describe (cell, atom, individual, etc.) there is NO need to go beyond, except in errors of the mind.

Thus a series can be considered both a scalar ‘search for its finitesimal part’ and also the inverse growth of a seed into the whole. Because from the human whole’s perspective the ¡ndifferent element is the finitesimal, we tend to write it backwards in time; but for the Universe most series happen from the finitesimal to the whole. Yet in both cases they are always ‘limited’ by the size of the ‘finitesimal’.

A geometric series is a series with a constant ratio between successive terms. So the series ½+1/4+1/8… is geometric, because each successive term can be obtained by multiplying the previous term by 1/2.

Each of the purple squares has 1/4 of the area of the next larger square (1/2×1/2 = 1/4, 1/4×1/4 = 1/16, etc.). The sum of the areas of the purple squares is one third of the area of the large square.

We can then consider it to be a series that diminishes till it reaches the ‘finitesimal’ 1/n part of the whole. And it can easily be cast as a polynomial, since the terms of a geometric series form a geometric progression, meaning that the ratio of successive terms in the series is constant. This relationship allows for the representation of a geometric series using only two terms, r and a. The term r is the common ratio, and a is the first term of the series.

In the example we may simply write:

a + ar + ar² + ar³ + …, with a = 1/2 and r = 1/2

The behavior of the terms depends on the common ratio r:

If r is between −1 and +1, the terms of the series become smaller and smaller, approaching 0’ in the limit and the series converges to a sum. In the case above, where r is one half, the series has the sum one.

If r is greater than one or less than minus one the terms of the series become larger and larger in magnitude. The sum of the terms also gets larger and larger, and the series has no sum. (The series diverges.)

If r is equal to one, all of the terms of the series are the same. The series diverges.

If r is minus one the terms take two values alternately (e.g. 2, −2, 2, −2, 2,… ). The sum of the terms oscillates between two values (e.g. 2, 0, 2, 0, 2,… ). This is a different type of divergence and again the series has no sum.

An example is Grandi’s series: 1 − 1 + 1 − 1 + ···.

Geometric series are among the simplest examples of immense series with finite sums, although not all of them have this property.

Historically, geometric series played an important role in the early development of calculus, and they continue to be central in the study of convergence of series.

Geometric series are used throughout mathematics, and they have important applications in all sciences, as all of them are obviously scalar in their form and respond to any of the 3 possible behaviors of systems: ‘convergent information’, divergent entropy and repetitive=reproductive oscillation. And finally they are stable in their time motion, given the constant ratio between their geometric terms.

Thus we can say a geometric series is a good mirror of a balanced ∆ST repetitive ‘present’ event and as such real.

Of the many mirror correspondences between series and 5D, we want now to stress the relationship between the part and the whole, as elements of the ternary structure of any T.œ: its singularity, which can be considered the initial term a, the FINITESIMAL above all other finitesimals, the king of the hill so to speak; its membrane; and the space between them.

This relationship is truly enlightening of the symmetry between the 3 regions in space of a being and its 3 regions in scale. Whereas the central finitesimal @-mind is the finitesimal of the lower plane, the external membrane is the ‘larger term’ arⁿ of the series, and the vital energy within them, the intermediate terms of the series, which are irrelevant.

So as the singularity @=a of the series expands through the vital energy elements in growing ‘circles’ to reach the final ‘membrane’ arⁿ, magically those irrelevant vital space cells disappear in the final calculus of the value of the series.

Further on, those sums will be limited by n, which IS the value of the number of ‘Planes’ within the vital energy (concentric circles) required to arrive at its surface.

So a can also be viewed as the relative ‘radius’ of the singularity mind, which gives conceptual birth to the formula of the angular momentum of the series, rmv, where r = the singularity radius (imagine the inner region of the system as an Archimedean spiral), m the vital energy mass, and v the membrane.

All this is expressed in terms of discrete numbers – not geometric continuous motion – by the classic formula:

For r ≠ 1, the sum of the first n terms of a geometric series is: s = a + ar + ⋯ + arⁿ⁻¹ = a(1 − rⁿ)/(1 − r)

As we see, the @-singularity value a and its final term, arⁿ, are the ONLY values that matter, with all the intermediate terms ‘absorbed’ by the dynamic relationship between membrane and singularity. If s is the value of the series for the singularity, without the membrane, rs is the value of the system for the membrane, without the singularity; as the vital energy within has both the singularity and the membrane as the ‘Klein’ limits of a non-euclidean sphere, which it never reaches. And so we subtract from the ‘Singularity’ view, s, its perception of the vital energy by the membrane, rs, that is, s − rs = a − arⁿ, to search for the solution of the power series, which is not the membrane view but the singularity view, s:

And so the solution, as always, is that of the mind’s view (in any discrete, numerical, self-centered analysis): s = a, the value of the singularity, multiplied by the parenthesis (1 − rⁿ)/(1 − r).

Then we can easily see the symmetry of that topological explanation of the series with its scalar translation as a travel down a scale, from the whole to the finitesimals. Since, for those series to converge and make sense, because we are traveling down the scale to the finitesimals, r, as n goes to Immensity, must be less than one. The sum then becomes: a + ar + ar² + ⋯ = a/(1 − r)

…the left-hand side being a geometric series with common ratio r.
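The telescoping step s − rs = a − arⁿ behind that formula, in which every intermediate (‘vital energy’) term cancels, can be verified exactly with rational arithmetic; a minimal sketch (a = 1, r = 1/2, n = 12 are arbitrary choices of this example):

```python
from fractions import Fraction

def geometric_sum(a: Fraction, r: Fraction, n: int) -> Fraction:
    """Sum a + a*r + ... + a*r**(n-1), by direct term-by-term addition."""
    return sum(a * r ** k for k in range(n))

a, r, n = Fraction(1), Fraction(1, 2), 12
s = geometric_sum(a, r, n)

# s - r*s telescopes: only the first term a and the 'membrane' term a*r**n survive
assert s - r * s == a - a * r ** n
# which rearranges to the closed form s = a*(1 - r**n)/(1 - r)
assert s == a * (1 - r ** n) / (1 - r)

print(s)            # 4095/2048, just short of the limit
print(a / (1 - r))  # 2, the limit a/(1 - r) as n grows
```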

The beauty and simplicity of the formula shows by Occam’s razor principle indeed its ‘essential nature’ in terms of time-space laws.

It is quite interesting then to understand, in terms of the 5 Dimotions and the 0-1=1-∝ dual time-space sphere (essential for quantum physics), the variations of the power series. They work for the 0-1 sphere, in which the series travels a scale of the fifth dimension, from 1=∆ down to ∆-1, vs. its entropic divergent expansion when r is larger than ±1, as it travels in the 1-∝ sphere; which should have a solution when we define a relative infinite as the value of the whole perceived from the finitesimal point of view. Then we make a travel upwards, from the ∆-1 finitesimal or ∆-being to the ∆+1 world.

So those series represent the 1st and 4-5th Dimotions, while the 3rd, reproductive dimotion happens when r = 1, as the reproductive sum creates the terms of a reproductive wave, which in a lineal sum of steps represents the 2D locomotion of the being. Finally, if r is -1, the series forms a ‘steady state’ 0’-sum worldcycle, an oscillation between two values.

So the key concept of a proper 5D scalar interpretation of series is the concept of finitesimals and relative infinites. (This analysis of the simplest of all series would obviously expand in 5D advanced theory to power series, Taylor series, etc., but we leave this work for the future pouring of my notebooks or, in case I die earlier, for future researchers.)

The limit of a sequence

In that regard we amend the work of the German mathematician Karl Weierstrass and his formal definition of the limit of a sequence as follows:

Consider a sequence (an) of real numbers, by which is meant an infinite list:  a0, a1, a2, ….

It is said that an converges to (or approaches) the limit a as n tends to Immensity, if the following mathematical statement holds true: For every ε > 0, there exists a whole number N such that |an − a| < ε for all n > N. Intuitively, this statement says that, for any chosen degree of approximation (ε), there is some point in the sequence (N) such that, from that point onward (n > N), every number in the sequence (an) approximates a within an error less than the chosen amount (|an − a| < ε). Stated less formally, when n becomes large enough, an can be made as close to a as desired.

For example, the sequence in which an = 1/(n + 1), that is, the sequence: 1, 1/2, 1/3, 1/4, 1/5, …,  goes on forever.

Every number in the sequence is greater than 0', but the farther along the sequence goes, the closer the numbers get to 0'. For example, all terms from the 10th onward are less than or equal to 0.1, all terms from the 100th onward are less than or equal to 0.01, and so on. Terms smaller than 0.000000001, for instance, are found from the 1,000,000,000th term onward. In Weierstrass's terminology, this sequence converges to its limit 0 as n tends to Immensity. The difference |an − 0| can be made smaller than any ε by choosing n sufficiently large. In fact, n > 1/ε suffices. So, in Weierstrass's formal definition, N is taken to be the smallest integer > 1/ε.
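
Weierstrass's criterion for this example can be verified directly; a small sketch, assuming the sequence an = 1/(n + 1) and a sample ε (names are illustrative):

```python
import math

# Check Weierstrass's criterion for a_n = 1/(n+1) converging to 0:
# for any epsilon, N = smallest integer > 1/epsilon guarantees
# |a_n - 0| < epsilon for all n > N.

def a(n: int) -> float:
    return 1.0 / (n + 1)

def smallest_N(epsilon: float) -> int:
    """Smallest integer strictly greater than 1/epsilon."""
    return math.floor(1.0 / epsilon) + 1

epsilon = 0.01
N = smallest_N(epsilon)                 # N = 101 for epsilon = 0.01
tail_ok = all(abs(a(n)) < epsilon for n in range(N + 1, N + 10_000))
```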

This example brings out several key features of Weierstrass’s idea. First, it does not involve any mystical notion of infinitesimals; all quantities involved are ordinary real numbers. Second, it is precise; if a sequence possesses a limit, then there is exactly one real number that satisfies the Weierstrass definition. Finally, although the numbers in the sequence tend to the limit 0, they need not actually reach that value.

Now this n > 1/ε is exactly what Leibniz, without so much pedantic formalism, considered the finitesimal: what we call the quanta of an ∆-1 scale, and what physicists call, in their study of different Planes, the minimal 'error-quanta': h/2π, k-entropy, or the 'Planck mass' (a black hole of a Compton wavelength volume, minimal quanta of the gravitational ∆+1 Planes).

In the graph, 1/±10² is the limit considered the finitesimal of this particular 'graph perception'. It is also the error of our measure, since if we add another 1/±10², the series becomes a whole.

Thus most paradoxes of mathematics arise from not understanding those simple concepts, as well as the meaning of 'inverse negative numbers'.

For example, an infinite series which is less well-behaved is the series: 1 − 1 + 1 − 1 + 1 − 1 + ⋯

If the terms are grouped one way: (1 − 1) + (1 − 1) + (1 − 1) +⋯,  then the sum appears to be: 0 + 0 + 0 +⋯ = 0.

But if the terms are grouped differently, 1 + (−1 + 1) + (−1 + 1) + (−1 + 1) +⋯ the sum is 1 + 0 + 0 + 0 +⋯ = 1.

It would be foolish to conclude that 0 = 1. Instead, the conclusion is that the series has a dual value: it is a creative, oscillatory series with a time dynamic, and it cannot be merely said not to have a solution; rather, it has 2.

It has therefore an internal dual structure, which in modern ¬Algebra is the group:

‘a’: 1 − 1 = 0.   And so, if we accept that internal ∆-1 unit for the series grouping, its real value is:

a+a+…. = 0+0+0…=0.

So we can write it in terms of the generator as:

∑ $t (+1) <≈> ∑ ðƒ (−1), which defines generically a feedback 'world cycle' whose sum is 0'.

In classic maths of a single space-time continuum, the difference between both series is clear from their partial sums. The partial sums of 1/2 + 1/4 + … get closer and closer to a single fixed value—namely, 1. The partial sums of 1 − 1 + 1 − …, without its internal ∆-1 (a) structure, alternate between 0 and 1, so the series never settles down.

A series that does settle down to some definite value, as more and more terms are added, is said to converge, and the value to which it converges is known as the limit of the partial sums; all other series are said to diverge. But in ∆ST many diverging series become, when their internal structure is also considered, convergent and well-behaved.

Actually, without even experimental evidence, there exist subtle problems with such ‘infinite’ construction. It might justifiably be argued that if the slices are infinitesimally thin, then each has 0’ area; hence, joining them together produces a rectangle with 0’ total area since 0 + 0 + 0 +⋯ = 0. Indeed, the very idea of an infinitesimal quantity is paradoxical because the only number that is smaller than every positive number is 0 itself.

The same problem shows up in many different guises. When calculating the length of the circumference of a circle, it is attractive to think of the circle as a regular polygon with infinitely many straight sides, each infinitesimally long. (Indeed, a circle is the limiting case for a regular polygon as the number of its sides increases.) But while this picture makes sense for some purposes—illustrating that the circumference is proportional to the radius—for others it makes no sense at all. For example, the “sides” of the infinitely many-sided polygon must have length 0, which implies that the circumference is 0 + 0 + 0 + ⋯ = 0, clearly nonsense.

So by reductio ad absurdum, the limits of infinitesimals are always an ∆-1 quanta. THIS of course also resolves all of Cantor's nonsense of different infinities and its paradoxes. It is just 'math-fiction', worthless to study.

In 5D maths, the exhaustion method does limit the parts to finitesimals, as a realist method, which implies nature also limits its divisions. This concept would be lost in the 3rd formal age, also with the 'lineal bias' introduced by Dedekind's concept of a real number NOT as a proportion/ratio between quantitative parameters of the 'parts' of a whole, or the 'actions' of a system and its St<ST>Ts parameters, which is what it is, but as an 'abstract cut' in a lineal sequential order of 'abstract numbers'.

In the classic STi balanced age, both the limits method and Leibniz's finitesimal method considered infinitesimals as finitesimals, that is, with a 'cut-off limit' and real nature.

Those limits are minimal ‘steps’ of any scale (in time-motion), or minimal parts (in space-forms).

As usual we cannot be exhaustive in any theme of 5D but just give a ‘feeling’ of the discipline and how it corrects the errors of the axiomatic mental method of justification of humind’s mathematical space, as reality.

Series in that sense are also connected to the concept of different infinities, so cherished in modern algebra (Cantor's cardinal infinities and all that jazz)… Their paradoxes disappear, though, when the number of elements of the series reduces to ∝, so that we can always compare. Comparison is in fact the classic method to define a series as divergent or convergent.

If we are given two series:

with positive terms such that for all values of n, beginning with a certain one, we have the inequality:

then the convergence of the second series implies the convergence of the first, and the divergence of the first implies the divergence of the second. Consider the simplest case of the harmonic series:

which might seem to converge to some constant K, as its terms diminish. But let us compare it with a series in which the sum of the underlined terms of each group is equal to ½, while the last term of each partial sum Sn coincides with the same term of the harmonic series (S4=S4, S8=S8, etc.):

It is clear that the sum Sn of the 2nd series (S2) approaches Immensity with increasing n (∑½), and consequently the harmonic series diverges to 'potential infinity', even if it remains smaller than S2.
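
Oresme's comparison argument above can be checked numerically; a sketch assuming the standard grouping by powers of 2 (function names are illustrative):

```python
# Each block of harmonic terms 1/(2^k+1) + ... + 1/2^(k+1) exceeds 1/2,
# so the partial sums S_{2^k} grow beyond any bound, by at least k/2.

def harmonic_partial(n: int) -> float:
    return sum(1.0 / i for i in range(1, n + 1))

def block_sum(k: int) -> float:
    """Sum of harmonic terms from 2^k + 1 up to 2^(k+1)."""
    return sum(1.0 / i for i in range(2**k + 1, 2**(k + 1) + 1))

blocks_exceed_half = all(block_sum(k) >= 0.5 for k in range(10))
lower_bound_holds = harmonic_partial(2**10) >= 1 + 10 / 2    # S_1024 >= 6
```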

Can we then make the harmonic series converge towards a constant, as it seems a 'natural series' of growing finitesimals (when added inversely from 1/max. n)? Yes we can, when we raise its terms to a power:

Again this can be proved by comparing it with a series that converges to 1, the whole:

with positive terms, which converges to unity as its sum, since its partial sums Sn are equal to:

On the other hand, the general term of this series satisfies the inequality:

from which it follows that the series:

converges.

Again we find in the previous series the term 1/(n−1) − 1/n, which, written backwards starting at the finitesimal, forms a natural progression from 1/n to 1, with the memorial erasing of the previous term: a simple natural way to grow to 1, which has many alternative paths/series.
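
The comparison with the telescoping series 1/(n−1) − 1/n can be verified numerically; a minimal sketch (names and sample sizes are our illustrative choices):

```python
# 1/n^2 < 1/((n-1)n) = 1/(n-1) - 1/n, and the telescoping series collapses
# to 1 - 1/n, so the partial sums of sum(1/n^2) stay bounded and converge.

def telescoping_partial(n: int) -> float:
    """Sum of 1/(k-1) - 1/k for k = 2..n; collapses to 1 - 1/n."""
    return sum(1.0 / (k - 1) - 1.0 / k for k in range(2, n + 1))

def p2_partial(n: int) -> float:
    return sum(1.0 / k**2 for k in range(1, n + 1))

telescopes_to_one = abs(telescoping_partial(10**4) - (1 - 10**-4)) < 1e-9
bounded = p2_partial(10**4) < 2.0    # it in fact converges to pi^2/6 ~ 1.6449
```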

Universal constants as series.

It is then obvious that all the 'numbers' we considered 'ratios' of fundamental 'dimotions' of reality, that is, Universal constants, can be written as power series, which shows their symmetry in ∆¡-1>∆, essential to understand the constant entanglement between scale, space and time in the Universe.

As we study them in different parts of those texts on calculus and algebra, we refer the reader to them.

Such series converge, the more so when we transfer them to the complex plane, which, due to the ±1, ±i cycle of its 'i' unit, converts lineal processes into cyclical ones, mirroring better all functions related to time.

Series are also a justification for polynomials beyond the simplest spatial view of them in 3 steps of dimensions of space (point, line, volume) or motions of time (distance, motion, acceleration):

Polynomials as divergent or convergent scalar series.

In mathematics, a power series (in one variable) is an infinite series of the form a0 + a1(x − c) + a2(x − c)² + ⋯, where an represents the coefficient of the nth term and c is a constant. an is independent of x and may be expressed as a function of n (e.g., an = 1/n!). Power series are useful in analysis since they arise as Taylor series of infinitely differentiable functions.

In many situations c (the center of the series) is equal to 0', for instance when considering a Maclaurin series. In such cases, the power series takes the simpler form a0 + a1x + a2x² + ⋯

Any polynomial can be easily expressed as a power series around any center c, although most of the coefficients will be 0', since a power series has infinitely many terms by definition. For instance, the polynomial f(x) = x² + 2x + 3 can be written as a power series around the center c = 0 as f(x) = 3 + 2x + x² + 0x³ + 0x⁴ + ⋯

Or around any other center c. One can view power series as 'polynomials of infinite degree', although power series are not polynomials. These power series are also examples of Taylor series, which include the key dimotions of scalar motion (1/(1−x)), entropy (the exponential) and the 1st Dimotion (the sine).
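
Those three Taylor series can be compared against their closed forms; a hedged sketch (the sample point x and the term counts are arbitrary choices of ours):

```python
import math

# Partial Maclaurin sums for the three functions named above: 1/(1-x),
# the exponential and the sine, checked against their closed forms.

def taylor_geometric(x: float, terms: int) -> float:
    return sum(x**n for n in range(terms))                  # 1/(1-x), |x| < 1

def taylor_exp(x: float, terms: int) -> float:
    return sum(x**n / math.factorial(n) for n in range(terms))

def taylor_sin(x: float, terms: int) -> float:
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

x = 0.3
ok_geom = abs(taylor_geometric(x, 50) - 1 / (1 - x)) < 1e-9
ok_exp = abs(taylor_exp(x, 20) - math.exp(x)) < 1e-12
ok_sin = abs(taylor_sin(x, 10) - math.sin(x)) < 1e-12
```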

II AGE OF CALCULUS: THE OPERANDS ∫∂ AND ITS ‘SENTENCES’: PDES & ODES

We study the second age of Algebra, the age of calculus, with its reflection of the 5 Dimotions of Timespace, as the best discipline to study the laws of time=change, which are the fundamental laws of a Universe made of 'timespace dimotions', in which spatial mental spaces are a Maya of the senses.

The concepts of mathematical analysis, such as the derivative or the integral, as they presented themselves to Newton and his contemporaries, had not yet completely “broken away” from their physical and geometric origins, such as velocity and area. In fact, they were half mathematical in character and half physical. The conditions existing at that time were not yet suitable for producing a purely mathematical definition of these concepts. Consequently, the investigator could handle them correctly in complicated situations only if he remained in close contact with the practical aspects of his problem even during the intermediate (mathematical) stages of his argument.

Newton was guided at all stages by a physical way of looking at the problem. But the investigations of Leibniz do not have such an immediate connection with physics, a fact that, in the absence of clear-cut mathematical definitions, sometimes led him to mistaken conclusions. On the other hand, the most characteristic feature of the creative activity of Leibniz was his striving for generality, his efforts to find the most general methods for the problems of mathematical analysis, and his depth of philosophical understanding of finitesimals and wholes, shown also in his superior symbolism. The evolution of the concepts of mathematical analysis (derivative, integral, and so forth) continued, particularly in the work of Cauchy, who idealized the concept of a limit and used it as the basis for his definitions of continuity, derivative, differential, and integral. These definitions are widely used in present-day analysis and must be corrected back to the Greek and Leibniz's scalar view.

Regarding practical application, such idealism requires the limitation of the true finitesimals of change of the actual world, solved with the expedient method of using differentials. This means that at every step of our mathematical argument the results obtained will contain certain errors, which may accumulate as the number of steps in the argument increases. But mathematical idealism denies it.

Still no other discipline of science came so close to understanding time=change as calculus did, even if its whys were hidden in the 'magic' of its techniques, as it subconsciously applied the pentalogic of different derivatives for each different function representing each distinct dimotion of space-time. And once those derivatives of changes were applied, the whole function of existence could be integrated as a whole (ODEs); while multiple processes of finitesimal calculation of different variables, S=T equivalences between curvature and motion, etc. could be applied to the resolution of complex events between multiple T.œs performing different dimotions, through PDEs and the calculus of variations.

So not only could a pentalogic special analysis of change be performed with derivatives and integrals of different operands, but also very complex 'sentences' of sequential changes could be modeled with the 3 fundamental complex syntactic equations of calculus, ODEs, PDEs and the calculus of variations, within the entropic limits of an A-B definite integral, the first and final moments of a time path, the maximal and minimal points, or the points at which the function, or its first or second derivative, becomes zero, cutting the real line.

So each equation of mathematical physics studying the motion of physical systems hid a sequence of 'existential algebra' for the physical parts of the simultaneous ensemble of variables studied.

We shall consider in this brief introduction to the golden age of calculus a pentalogic analysis of derivatives and its inverse, integrals, considering how calculus represents a second layer of complexity over the operands, which it can further analyze, extracting its minimal quanta of change and/or integrating them in new 'dimensions'.

We shall then study the complex combinations of calculus (multiple derivatives and integrals on time and space), ODEs, PDEs and variational methods, and finally consider the fundamental simpler equations of mathematical physics (growth, reproduction and decay) studied with them…

THE CONNECTION BETWEEN INTEGRALS AND DERIVATIVES IN ∆ST: FINITESIMALS AS LIMITS: 5D APPROACH

Those subtle philosophical considerations could only be made once Leibniz established the tangent as the proper measure of change, where Y(S) and X(t) could be considered similar: S≈T.

Then, once the condition of present balance is reached, calculus works in praxis in most cases with the value of a differential, which is equivalent to the finitesimal minimal time quanta of reality.

In other words, calculus consists in 'calculating', for a spatial present state, its quantity of time-change through a period of existence, which for worldcycles will be zero since, as explained in the worldcycle, we shall add both the positive growth and the negative decrease of the sinusoidal y' functions.

Those functions then will be limited by the points of birth and extinction.

While inversely in the integration of exponential growth or decay, we will reach relative ∝ growth and the function will be ‘cut’ by a log of maximal growth.

The finitesimals of physical scales. H, t, cc.

In the S=T homology a mind stops motion into form, hence converts T into S. And this happens in an asymmetric manner according to the choice of upper or lower scales of 5D: Huminds are built to see the larger whole as slow space and the smaller parts as time, because of 5D metrics (smaller, faster clocks; slower, larger wholes). And so we distinguish finitesimals of time in lower scales (angular momentum), and finitesimals of space associated to larger scales (c²), with an intermediate k-finitesimal of Boltzmann in the thermodynamic scale.

In praxis the existence of minimal quanta would only become evident with the discovery in mathematical physics of the minimal quanta of energy, which cannot be cancelled, as it becomes the 'minimal' amount that gives origin to virtual particles (Heisenberg's residual h/2). Yet again this minimal Planckton that cannot be eliminated was interpreted with the weird mental point of view that it was an 'uncertainty of humind's measure'. A bizarre way of complicating reality, which amounts to saying that the first cell is an uncertain measure of life, the first atom an uncertain measure of matter. And so the Planckton, minimal quanta of energy, became the origin of one of the most arrogant, deluded interpretations of reality humind's imagination has devised; ever since an ass breeder saw a bush burning and thought it was G. Bush talking to him, the Copenhagen interpretation of quantum physics in terms of mathematical creationism has been going strong, along with Mosaic creationism, for the throne of humind's philosophy of the Universe. Back to reality: in any system we shall find a minimal quanta that does not go away. In physical systems it is the 'Planckton' (H-Planck constant), the first quanta of angular perception, the first dimotional spin of light space-time, our minimal part in the ∆-3 scale of timespace, as scale IS the absolute reality and appears always associated to a minimal space or time parameter in which a scale starts to exist. The same finitesimal is found in the thermodynamic molecular scale, as absolute zero cannot be reached, and in the cosmological scale as c², the quanta=area of entropic light motion.

It is then essential, to fully grasp the workings of the Universe, to understand in depth 0' finitesimals and the relationship between the elements of the Planes of spacetime, its fractal points and the lines that add them in time or space, and analysis in both directions: down, as derivatives, and up, as new dimensions, called integrals.

Finitesimal Quanta, as the limit of populations in space and the minimal action=dimotion in time.

In the idealized view of mathematics this amounts to a sum of smallish areas, taken either as finitesimals of space (Riemann integral) or finitesimals of time (Lebesgue integral), with different applications to calculate their sum through an interval of events or populations. It is then only needed to be aware that what mathematicians call a limit h→0 IS a limit that stops when h=0'; that is, when h finds its real finitesimal value in the 'units of measure' we use to calculate. This finitesimal might be as small as an atom, which from the human scale, in terms of the Avogadro number, reduces to 10⁻²⁴ parts; but regardless of its tiny size, the concept we want to stress is that for a limit to exist h must stop at 0'.

The method of integrals (or rather its reality) becomes then a clear proof of the discontinuity of scalar spacetime systems, which are made of steps, since the procedure of integration calculates the value of a Y(S) parameter for each minimal X(T)=1/t finitesimal 'stœp of time quanta', measured by its frequency in time.

It is then NOT real to pretend that this 1/t = minimal time event must reach absolute zero. As Leonardo noticed, an instant (zero) has no time, just as a point with no parts has no space, and does not exist. Time exists because it is a minimal interval of motion, separated often by a stop state of space and a dimotion of perceptive information, conforming a discontinuous series of stœps, S=T, the basic beat of reality; which is what we integrate, moving up and right, X x Y, to find an area, and then sum them up.

Let us suppose that a curve above the x-axis forms the graph of the function y = f(x). We attempt to find the area S of the segment bounded by the line y = f(x), by the x-axis and by the straight lines drawn through the points x = a and x = b parallel to the y-axis.

To solve this integral we proceed as follows. We divide the interval [a, b] into n parts, not necessarily equal. We denote the length of the first part by Δx1, of the second by Δx2, and so forth up to the final part Δxn. In each segment we choose points ξ1, ξ2, ···, ξn and set up the sum: Sn = f(ξ1)Δx1 + f(ξ2)Δx2 + ··· + f(ξn)Δxn.

The magnitude Sn is obviously equal to the sum of the areas of the rectangles shaded in figure:

The finer we make the subdivision of the segment [a, b], the closer Sn will be to the area S. If we carry out a sequence of such constructions, dividing the interval [a, b] into successively smaller and smaller parts, then the sums Sn will approach S.

The possibility of dividing [a, b] into unequal parts makes it necessary for us to define what we mean by "successively smaller" subdivisions. We assume not only that n increases beyond all bounds but also that the length of the greatest Δxi in the nth subdivision approaches 0'. Thus the calculation of the desired area has been reduced to finding the limit: S = lim (max Δxi → 0') [f(ξ1)Δx1 + f(ξ2)Δx2 + ··· + f(ξn)Δxn].

We note that when we first set up the problem, we had only an empirical idea of what we mean by the area of our curvilinear figure, but we had no precise definition. But now we have obtained an exact definition of the concept of area. It is the limit:

So beyond the intuitive notion of area, we now have an exact definition, on the basis of which we can calculate the area numerically.

We have assumed that: ƒ(x)≥0’. If f(x) changes sign, then in figure, the limit will give us the ¬Algebraic sum of the areas of the segments lying between the curve y = f(x) and the x-axis, where the segments above the x-axis are taken with a plus sign and those below with a minus sign.
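
The limit of such sums, including the signed areas below the x-axis, can be approximated numerically; a minimal sketch with an illustrative f and interval of our choosing (not from the text):

```python
import random

# Riemann sum over an unequal (random) partition of [a, b], with points
# xi_i chosen inside each segment, as in the definition above.

def riemann_sum(f, a: float, b: float, n: int, seed: int = 0) -> float:
    """Sum f(xi_i) * dx_i over a random partition of [a, b] into n parts."""
    rng = random.Random(seed)
    cuts = sorted(rng.uniform(a, b) for _ in range(n - 1))
    points = [a] + cuts + [b]
    total = 0.0
    for left, right in zip(points[:-1], points[1:]):
        xi = (left + right) / 2          # any point of the segment works
        total += f(xi) * (right - left)
    return total

# f(x) = x changes sign on [-1, 2]: segments below the axis count as
# negative, so the limit is the algebraic area (2^2 - (-1)^2)/2 = 1.5
approx = riemann_sum(lambda x: x, -1.0, 2.0, 10_000)
```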

Definite integral. The entropic limits of a domain.

The need to calculate the integral Sum limit arises in many other problems in which a new dimension is reached by the sum of finitesimal paths. For example, suppose that a point is moving along a straight line with variable velocity v = f(t). How are we to determine the distance s covered by the point in the time from t = a to t = b?

Let us assume that the function f(t) is continuous in the sense aforementioned (S≈T; h→0'); that is, in small intervals of time the velocity changes only slightly. We divide the interval [a, b] into n parts, of length Δt1, Δt2, ···, Δtn. To calculate an approximate value for the distance covered in each interval Δti, we will suppose that the velocity in this period of time is constant, equal throughout to its actual value at some intermediate point ξi. The whole distance covered will then be expressed approximately by the sum: s ≈ f(ξ1)Δt1 + f(ξ2)Δt2 + ··· + f(ξn)Δtn,

and the exact value of the distance s covered in the time from a to b will be the limit of such sums for finer and finer subdivisions; that is, it will be the limit: s = lim (max Δti → 0') ∑ f(ξi)Δti.
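
A numeric sketch of that limit, for an illustrative velocity of our choosing, v = t² on [0, 3], whose exact distance is 3³/3 = 9 (names and sample sizes are assumptions, not from the text):

```python
# Distance as the limit of sums v(xi_i) * dt_i: finer subdivisions of the
# time interval give values closer to the exact distance.

def distance(v, a: float, b: float, n: int) -> float:
    """Approximate s by summing v at an intermediate point of each of
    n equal time intervals, times the interval length."""
    dt = (b - a) / n
    total = 0.0
    for i in range(n):
        xi = a + (i + 0.5) * dt          # intermediate point of the i-th interval
        total += v(xi) * dt
    return total

coarse = distance(lambda t: t**2, 0.0, 3.0, 10)        # rough subdivision
fine = distance(lambda t: t**2, 0.0, 3.0, 100_000)     # much closer to 9
```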

Whereas the limit ∆t will be the minimal step in space and frequency of time of a single event.

Since, as the word says, a real limit is each relative S(0')=T(0') of change.

Ideal mathematicians, under the obsession with perfect measure, treat those finitesimals as real zeros, because they will discharge them to obtain a finite solution in ∆+1. We have discussed the falsehood, in philosophical terms, of such an approach. But, and this is the paradoxical marvel of the Universe, for the ∆+1 system all ∆-1 finitesimals matter nothing and are 'expendable', as a citizen is expendable for the state or army that 'doesn't count corpses'.

We shall try to give some idea of these concepts. For this purpose we consider the following example.

We wish to calculate the area bounded by the parabola with equation y = x², by the x-axis and by the straight line x = 1. Elementary mathematics will not furnish us with a means of solving this problem. But here is how we may proceed. We divide the interval [0, 1] along the x-axis into n equal parts at the points: x = 1/n, 2/n, ···, (n − 1)/n,

and on each of these parts construct the rectangle whose left side extends up to the parabola. As a result we obtain a system of shaded rectangles, the sum Sn of whose areas is given by: Sn = (1/n)[(0/n)² + (1/n)² + ··· + ((n − 1)/n)²] = (1² + 2² + ··· + (n − 1)²)/n³.

Let us express Sn in the following form: Sn = 1/3 + ¡n, where ¡n = −1/(2n) + 1/(6n²).

The quantity ¡n, which depends on n, possesses a remarkable property: if n is increased beyond all bounds, then ¡n approaches 0', the ∆-1 finitesimal. This property may also be expressed as follows: if we are given an arbitrary positive number ' (the ε of classic calculus), then it is possible to choose an integer N sufficiently large that for all n greater than N the number ¡n will be smaller than the given ' in absolute value.

Whereas ' is the 'real physical value' of the finitesimal that 'exists', where our 5D calculus stops; while in the axiomatic method ε can be chosen at will, as small as the idealist mathematician wishes, which is NOT what reality shows. ' will be h in quantum physics; it will be the residual k-temperature and the minimal c² part of a mass or minimal unit of vacuum spacetime in the galaxy; it will be a cell in an organism, an atom in a matter state. Or else, what would our counting be made of? Entelechies?

This said, as ' is so small, and for the larger ∆+1 ¡ndifferent, it is OK to discharge the residual finitesimals to obtain an approximate result. Since absolute precision for ∆+1, the plane of the observer, is not needed, we discharge in specific methodological calculus the remaining smallish sum of some 0's to calculate a result that, to be more accurate, should be adjusted with a '±' symbol or an ≈, not an = identity.

So we obtained the area below the parabola as 1/3rd by discharging a finitesimal term of the sum. This 'discharged' quantity however is real: it is the toll we pay, but it never gets to zero. In the measure of a fractal coast, the coast grows in size and precision with smaller steps, but we must stop the steps at a certain scale or else we would spend an infinite time measuring it. Information thus becomes idealized to make it possible to measure in finite time.
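
The identity Sn = 1/3 + ¡n, with ¡n = −1/(2n) + 1/(6n²), can be checked exactly with rational arithmetic; a minimal sketch (function names are illustrative):

```python
from fractions import Fraction

# Left-rectangle sum for y = x^2 on [0, 1], computed exactly, and the
# remainder term discharged to obtain the idealized area 1/3.

def parabola_rectangle_sum(n: int) -> Fraction:
    """Sum of rectangle areas (k/n)^2 * (1/n) for k = 0..n-1."""
    return sum(Fraction(k, n) ** 2 * Fraction(1, n) for k in range(n))

def remainder(n: int) -> Fraction:
    """The negative finitesimal term: -1/(2n) + 1/(6n^2)."""
    return Fraction(-1, 2 * n) + Fraction(1, 6 * n**2)

# For every n the identity S_n = 1/3 + remainder(n) holds exactly,
# and the remainder shrinks toward 0' as n grows:
identity_holds = all(parabola_rectangle_sum(n) == Fraction(1, 3) + remainder(n)
                     for n in (2, 10, 100))
```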

Formal symbols of 5D calculus from existential algebra: -¡n, finitesimal, ¡ndifferent, ¡nfinitesimal

It is then necessary, as always in 5D, to slightly change the symbols of 5D calculus, adapted to existential algebra; not that I think huminds will ever upgrade their chips, but once we are gone, faster than you think, more rigorous AI robots will likely adopt it.

The finitesimal is then defined as -¡n, the minimal amount we discharge, that is, subtract (reason why it has a negative symbol) from the real result to obtain the idealized mathematical mirror of the event or spatial population, as for ∆+1 it is indistinguishable, which we call ¡ndifferent (a simpler word, with an ethical component, because -¡n is real and for itself it matters, even if for the ∆+1 world it is expendable). It can also be called a finitesimal (preferred) or -¡n≈finitesimal if you like. The symbol -¡ also means it is the unit of ∆-¡, the minimal quantity of the whole; and finally the symbol ' means they 'exist', they are 'real': 0' does have parts, it is a non-euclidean point of its own.

It would be easy to give many examples of practical problems leading to the calculation of such an imperfect limit. We will discuss some of them later, but for the moment the examples already given sufficiently indicate the importance of this idea, which adapts classic calculus to reality. Now each of the classic finitesimals used in calculus is, as Leibniz put it, 'a world in its own', and its study in real, philosophical terms reveals important properties of the structure of space-time and its modes of change. The following list of finitesimals can then be assessed in its properties under those conditions:

It is clear that xn, yn, and zn are -¡n≈finitesimals, the first of them approaching 0' through decreasing values, the second through increasing negative values, while the third takes on values which oscillate around 0'. Further, un → 1, while υn does not have a limit at all, since with increasing n it does not approach any constant number but continually oscillates, taking the values 1 and −1; so this ¡nfinitesimal is the whole, changing its direction of existence. All of them will be subject to further scrutiny when/if my finitesimal lifetime allows me to publish a few papers on mathematical physics, as they appear in multiple physical equations. The most important of them is Xn, the fundamental finitesimal, which we shall often comment on (the minimal part of an N whole, the dimensionless angle/curvature of a motion), and its closely connected −1/n², for accelerated motions.

Finally, notice that in the example -¡n makes the area slightly smaller than 1/3rd, since the −1/(2n) term dominates 1/(6n²); which, in reverse fashion, if we consider the inner region of the curved parabola, makes it slightly larger than 2/3rds. This is a general rule for the internal volume of curved surfaces, always slightly larger than the polygonal, lineal form enclosed. I.e. the hexagon inscribed in a circle of perimeter π=3.14… has a perimeter of 3; which again has important consequences for the real, vital structure of organs, as the smaller parts are 'lineal' and fit in the larger curved enclosures, leaving a safety space to their walls, which often has apertures when constructed, as most circles are, with 3 diameters; the gap, (π−3)/π ≈ 4.5%, is the ideal amount of outer reality perceived through those apertures left by the 3 diameters that construct the porous membrain (close to the percentage of energy and matter in the Universe we observe).

Finitesimals treated with other operands.

As we shall study how operands are treated with calculus, it is customary to consider also the reverse action of treating finitesimals with the polynomial operands; and since 'finitesimals' are now real fractal points with parts, it is obvious that all the laws, properties and operands of polynomials work with 'finitesimals': the so-called laws of operations with limits of classic mathematics.

It is then not necessary (but possible) to prove with the axiomatic ideal method, consistent in itself within the limitations we include, that if the variables xn and yn approach finite limits, then their sum, difference, product, and quotient also approach limits which are correspondingly equal to the sum, difference, product, and quotient of these limits. This fact may be expressed as: lim (xn ± yn) = lim xn ± lim yn; lim (xn·yn) = lim xn · lim yn; lim (xn/yn) = lim xn / lim yn (for lim yn ≠ 0).

The only case that deserves further analysis is when a quotient of two finitesimal 0's of different value is considered. Here it is impossible to state in advance whether the ratio xn/yn will approach a limit, and if it does, what that limit will be, since the answer depends entirely on the character of the approach of xn and yn to 0'; it might result in either a 0' or an ∝, thus proving laterally the falsehood of Cantorian equal infinities, as N², regardless of Cantor's musings, IS larger than N but has paradoxically LESS information as a set of numbers (because in 5D larger spatial forms have paradoxically less information, stored in the faster time cycles of smaller beings, SxT=C). Now, if we just care for the 'size', then the previous examples, Xn=1/n & Yn=1/n², mean that Yn/Xn = 1/n → 0' and Xn/Yn = n → ∝.

While xn/zn, with zn = (−1)ⁿ/n, gives (−1)ⁿ, which does not approach any limit, because the sign of zn oscillates; and in quotients the dominant element is the denominator, which tends to be the predator that imposes its properties on the whole.
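These three behaviors can be checked numerically; a minimal Python sketch (the names x, y, z are just illustrative stand-ins for the sequences xn = 1/n, yn = 1/n² and zn = (−1)ⁿ/n):

```python
# Sketch: the ratio of two sequences that both tend to 0' can itself
# tend to 0', grow without bound, or approach no limit at all.
def x(n): return 1 / n            # xn = 1/n  -> 0'
def y(n): return 1 / n**2         # yn = 1/n² -> 0' (faster)
def z(n): return (-1)**n / n      # zn -> 0' with oscillating sign

N = 10**6
print(y(N) / x(N))                # yn/xn = 1/n -> 0'
print(x(N) / y(N))                # xn/yn = n, grows without bound
print(x(N) / z(N), x(N + 1) / z(N + 1))  # (-1)^n: jumps between 1 and -1
```

The last line shows why xn/zn has no limit: consecutive terms keep the same size but flip sign forever.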

DIFFERENT DIMENSIONAL MOTIONS OF SPACE-TIME CHANGE

We then need to consider in how many dimensions finitesimal change, and its aggregated account into a continuous ∆+1 parameter of the whole change of the event, can be observed, and how it can be diversified into temporal, scalar or spatial change. Let us then consider two examples of dual dimensional, holographic change, in Ts-speed and St-volume, which were the first 2 themes solved historically, to see how calculus methods can be used equally for quanta=frequency=steps of time, or quanta=populations=finitesimals of space, by virtue of S=T:

There are many different parameters of change in space and time in human sciences, due to the lack of clear-cut unifying concepts of space and time.

But in all of them we need to find a ‘finitesimal quantity’, either in time or in space, to measure ‘changes of speeds and frequencies of time motion for each spatial step’, ∆s/∆t, or changes in volumes of space and populations of simultaneous space-beings. The difference between both analyses is one of ‘persistence of change’ or ‘simultaneity of change’, studied in space, vs. sequential time changes. In spatial analysis we often calculate a ‘whole’ domain in which populations have a gradient of change in their parameters, even if they co-exist together.

In time, this ‘gradient’ of change or ‘acceleration’ is calculated at ‘each instant of time’, for a single point, and thus its change, has lesser ‘dimensionality in space’.

So the study of change in space tends to have more ‘volume and dimensionality’ and ‘simultaneity’, whereas the study of pure time changes (locomotion, entropy) is analyzed with the being reduced to a time point, or even losing its spatial simultaneity through entropic processes.

Adding a new dimension of ‘width-energy-population-intensity-density-pressure.’

Let us give an example and resolve it in terms of space-quanta (method of limits), which is the first ‘basic understanding’ of calculus in terms of its finitesimal units:

Quanta of space.

A spatial use of the limit concept calculates not a time but a space volume, forebear of differential calculus:

Example 2. A reservoir with a square base of side a and vertical walls of height h is full to the top with water (figure 1). With what force is the water acting on one of the walls of the reservoir?

We divide the surface of the wall into n horizontal strips of height h/n. The pressure exerted at each point of the vessel is equal, by a well-known law, to the weight of the column of water lying above it. So at the lower edge of each of the strips the pressure, expressed in suitable units, will be equal respectively to: h/n, 2h/n, 3h/n, …, nh/n.

We obtain an approximate expression for the desired force P if we assume that the pressure is constant over each strip. Thus the approximate value of P is equal to: P ≈ a(h/n)(h/n) + a(h/n)(2h/n) + … + a(h/n)(nh/n) = (ah²/n²)(1 + 2 + … + n) = (ah²/2)(1 + 1/n).

To find the true value of the force, we divide the side into narrower and narrower strips, increasing n without limit. With increasing n the magnitude 1/n in the above formula will become smaller and smaller and in the limit we obtain the exact formula:

P = ah²/2

Leibniz rightly considered 1/n the ‘finitesimal unit’, whereas we consider 1 the whole and 1/n its minimal fraction, usually 1 of its 10¹⁰ elements (1/10¹⁰): the standard value of finitesimal units.

In the example again the finitesimal limit is extremely small. How much? We should consider statistical mechanics to find it is the size of the molecules of water, which form bidimensional layers of liquid to shape the 3D volume, and are about 10²⁰ times smaller than the whole, in terms of Avogadro’s moles.

The error ε is thus as small as the factor (1 + 1/n): P = (ah²/2)(1 + 1/n) ≈ (ah²/2) × 1.00000000000000000001 for n ≈ 10²⁰.
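The strip sum above can be verified numerically; a minimal Python sketch (P_approx is a hypothetical helper name; unit weight-density is assumed, as in the text):

```python
# Sketch of the strip approximation: a wall of width a and height h,
# cut into n horizontal strips of height h/n; the pressure at the lower
# edge of strip k is k*h/n (unit weight-density assumed, as in the text).
def P_approx(a, h, n):
    strip = h / n
    return sum(a * strip * (k * strip) for k in range(1, n + 1))

a, h = 2.0, 3.0
exact = a * h**2 / 2                  # the limit P = ah²/2
print(P_approx(a, h, 10))             # overshoots by the factor (1 + 1/n)
print(P_approx(a, h, 10**5))          # ≈ exact
print(exact)
```

With n = 10 the sum is exactly (ah²/2)(1 + 1/10); as n grows, the finitesimal error 1/n vanishes into the ∆+1 whole.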

And this is a general rule in most cases: the finitesimal error is as small as 1/n, where n is the quanta of the scale. So when we do ∆+1 calculations, as in most cases, it is irrelevant. But theoretically it is important, and in fact it will give us a ‘realist’ concept for the uncertainty principle of Heisenberg.

Hence it is unnoticeable, truly ¡n=finitesimal, yet still important to understand the idealization of mathematical rules, as it means that, theoretically, the correct concept is a differential equation, where the finitesimal is ‘real’.

The idea of the method of limits is thus simple, accurate and amounts to the following. In order to determine the exact value of a certain magnitude, we first determine not the magnitude itself but some approximation to it. However, we make not one approximation but a whole series of them, each more accurate than the last. Then from examination of this chain of approximations, that is from examination of the process of approximation itself, we uniquely determine the exact value of the magnitude, by ignoring the finitesimal error.

The same practical problem can be resolved with the differential used as an approximate value for the increment in the function. For example, suppose we have the problem of determining the volume of the walls of a similar closed cubical box whose interior dimensions are 10 × 10 × 10 cm and the thickness of whose walls is 0.05 cm. If great accuracy is not required, we may argue as follows. The volume of all the walls of the box represents the increment Δy of the function y = x³ for x = 10 and Δx = 0.1 (the walls add 0.05 cm on each side). So we find approximately: Δy ≈ 3x²Δx = 3 · 10² · 0.1 = 30 cm³.
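A quick numerical check of this differential estimate against the exact increment (a sketch; variable names are illustrative):

```python
# Sketch: the differential dy = 3x²·Δx as an estimate of the true
# increment Δy = (x + Δx)³ − x³ for the box-wall volume in the text.
x, dx = 10.0, 0.1            # interior edge 10 cm; walls add 0.05 cm per side
dy = 3 * x**2 * dx           # differential estimate: 30 cm³
exact = (x + dx)**3 - x**3   # true increment: 30.301 cm³
print(dy, exact, exact - dy)
```

The discarded finitesimal remainder, 3x(Δx)² + (Δx)³ = 0.301 cm³, is the error the differential ignores.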

Speed and acceleration: 2D TT

We used a simple spatial case of a gradient with a clear equation, P=ah²/2, to compare it with a case of time change in which the gradient, also caused by gravitational ‘weight’, is not constrained by a wall, hence the force is released to become a time dimotion. Not surprisingly in such a case, as was established experimentally by Galileo, the distance s covered in the time t by a body falling freely in a vacuum is expressed in terms of TT-acceleration by a similar formula:   s=gt²/2

Where g is a constant that measures the acceleration on Earth, equal to 9.81 m/sec².

What is the velocity of the falling body at each point in its path?

Let the body be passing through the point A at the time t and consider what happens in the short interval of time of length Δt; that is, in the time from t to t + Δt. The distance covered will be increased by a certain increment Δs. The original distance is s1 = gt²/2.

From the increased distance we find the increment: Δs = g(t + Δt)²/2 − gt²/2 = gtΔt + g(Δt)²/2.

This represents the distance covered in the time from t to t + Δt. To find the average velocity over the section of the path Δs, we divide Δs by Δt: υav = Δs/Δt = gt + gΔt/2.

Letting Δt approach 0’, we obtain an average velocity which approaches as close as we like to the true velocity at the point A. On the other hand, we see that the second summand on the right-hand side, gΔt/2, becomes vanishingly small with decreasing Δt, so that the average υav approaches the value gt, a fact which it is convenient to write as follows: υ = lim Δs/Δt = gt, as Δt → 0’.
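The shrinking of the average velocity toward gt can be watched numerically; a small Python sketch (illustrative names):

```python
# Sketch: the average velocity Δs/Δt over a shrinking interval Δt
# approaches the instantaneous velocity v = g·t for s = g·t²/2.
g = 9.81
def s(t): return g * t**2 / 2

t = 2.0
for dt in (1.0, 0.1, 0.001):
    v_avg = (s(t + dt) - s(t)) / dt   # equals g·t + g·dt/2
    print(dt, v_avg)
print(g * t)                          # the limit: 19.62
```

Each shrinking of Δt halves nothing but the discarded summand gΔt/2, which is exactly the ‘finitesimal error’ of the previous reservoir example in its time version.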

While both formulae are never compared in classic physics, it is worth noticing that they are mimetic by virtue of S=T: now the spatial gradient, ‘h-eight’, becomes the temporal gradient, t-ime; the intensity of the gradient, given for P by the a-mass of liquid, is now given by the g-force of the mass of the Earth; and the outcomes are inverse: a dynamic parameter of time change, Pressure, vs. a static parameter of spatial length, distance.

Pressure is an ST energy density. So a pressure gradient is an energy density gradient, while acceleration is a TT-double time motion. And yet, as both are ultimately ‘holographic’ ST, TT dual functions, their equations in ‘scalar terms’ are the same; a key concept for all the similar equations we will find regardless of the ‘holographic dimotion’ we integrate or derivate.

THE UPPER LIMIT: ¡MMENSITIES:

The relative ∝: the +¡mmensity.

It is then customary in calculus to teach the inverse concept of an infinite magnitude, which we also reduce to a relative Immensity, ∝; as infinities lose meaning and become entropic, uncertain in the borders of ∆, or beyond the domain of existence in time and space of the function we deal with.

Following the rules of ¡nglish that change slightly the wording of 5D, we thus substitute the word infinity with +¡mmensity. Immensity, as we all know, is not infinite, but it is immeasurable, which is the meaning that matters here, as such largeness becomes the +¡ whole world for the -¡nfinitesimal that finds it ‘infinite’, because it can no longer measure it.

We do talk of an ¡mmensely large magnitude, defined as a variable xn (n = 1, 2, … ∝) with the property that, after choice of an ¡mmensely large positive number M, the limit of measure for the -¡n=finitesimal, it is not possible to find a number N > M that we can ‘count in reality’, within the limits of time and space of our ¡n≈finitesimal existence, such that for all n > N, xn > M.

M+¡ thus becomes the +¡mmense value, ∝, that limits the world of xn. Such a magnitude xn is said to approach +immensity. If it is positive (negative) from some value on, this fact is expressed thus: xn → + ∝ (xn → − ∝).

For example, for n = 1, 2, … ∝: lim log(1/n) = −∝; lim n² = +∝; lim tan(π/2 + 1/n) = −∝.
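These three limits can be sampled numerically for a large n (a sketch; a finite n only approximates the ∝ behavior, which is precisely the point of the relative immensity):

```python
import math
# Sketch: sampling the three examples at a large (but finite) n.
n = 10**6
print(math.log(1 / n))                 # log(1/n) -> -immensity
print(n**2)                            # n² -> +immensity
print(math.tan(math.pi / 2 + 1 / n))   # tan(π/2 + 1/n) -> -immensity
```

No finite sample ever ‘reaches’ ∝; the machine, like any -¡n observer, only certifies that the magnitudes outgrow every bound M it can count.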

It is easy to see then that if a magnitude +¡M is ¡mmense, then -¡n = 1/M is immensely small, and conversely.

Something that, as 5D mathematics is experimental, moving into the realm of ‘reality’, not an ‘ideal mathematical entelechy’, must have an experimental consequence in the study of ‘real immense creatures’. And indeed, we immediately notice that the largest species feed on ¡ndifferent ¡n≈finitesimals that are paradoxically ¡mmensely small. I.e. the largest mammals, whales, feed on the smallest animals, krill and plankton; the largest cosmological bodies, gravitational black holes, feed on the smallest quanta, ‘gravitons≈neutrinos?’, etc.

Which justifies the ∆±¡ structure of nested Universes, where the largest beings are made of the smallest parts.

This also shows that when we talk of an ¡ndifferent ¡n≈finitesimal, we are mostly referring to a quanta of entropic feeding, which is indifferent to the being. I.e. we reduce even further our food to its ¡ndifferent amino acids that we will then reform to our specific information.

The topological view is simple; the limits of the domain of a function for its finitesimal parts, are the membrain and singularity they cannot reach, as the whole for its inner parts is an open ball transited by the finitesimal.

The question of the ‘boundary conditions’ is thus essential in calculus: the membrane determines the volume which is integrated as the space-time area, surrounded by the being, that is meaningful to its territory.

The area integrated on a function has then 2 S=T meanings. Either it is a measure of its vital energy between the singularity at the 0’ point and its membrane, the limits of the domain in space (albeit elongated in the lineal Cartesian frame); or, if we integrate a motion, it is taken between its initial condition, where it receives its momentum, and its final stop.

When a point moves back and forth within its world as it performs repetitive dimotions of existence, we can integrate the path to extract information about its motion. For the larger ∆+1 world, the finitesimal integrated along the path is often a point of energy or information shared between the membrain and singularity, as the initial point and boundary condition of its world seen as a topological open ball, where the membrain is its ‘birth-seed’ state and the singularity its final point of death-entropy. It is thus the quanta that the membrain ‘sends to the singularity’ for it to perceive or feed, which moves in a path of minimal action=consumption of energy between both. And finally, the finitesimal might represent an ‘ex-foliated’ unit of angular momentum, a skin layer of vital space subtracted piece by piece, finitesimal by finitesimal, for the T.œ to communicate in the outer world.

The limit is called the definite integral of the function f(x) taken over the interval [a, b], and it is denoted by:

The expression f(x)dx is called the integrand; a and b are the limits of integration, a the lower limit, b the upper limit. And very often they are the initial and final point, the 0’ singularity and membrain that cancel a worldcycle. And so in 5D, a and b are NOT part of the integral: if the finitesimal is a time path, it does not exist on those 2 limits, which we might consider to belong to ∆±1; or, if it is a volume, they represent the membrain and singularity of an open ball, of a ‘different substance’, not to be integrated.

RECAP. Following the transformation of sciences into slightly different stiences, we change the concept of the infinite for a relative infinite, ∝, and the infinitesimal for a finitesimal; the first being the whole of an ¡-plane of reality, the second its minimal part.

The connection between differential and integral calculus.

The problem considered then reduces to calculation of the definite integral:

Another example is the problem of finding the area bounded by the parabola y = x².

Here the problem reduces to calculation of the integral:

We were able to calculate both these integrals directly, because we have simple formulas for the sum of the first n natural numbers and for the sum of their squares. But for an arbitrary function f(x), we are far from being able to add up the sum Σ f(ξi)Δxi (that is, to express the result in a simple formula) if the points ξi and the increments Δxi are given to suit some particular problem. Moreover, even when such a summation is possible, there is no general method for carrying it out; various methods, each of a quite special character, must be used in the various cases.

So we are confronted by the problem of finding a general method for the calculation of definite integrals. Historically this question interested mathematicians for a long period of time, since there were many practical aspects involved in a general method for finding the area of curvilinear figures, the volume of bodies bounded by a curved surface, and so forth.

We have already noted that Archimedes was able to calculate the area of a segment and of certain other figures. The number of special problems that could be solved, involving areas, volumes, centers of gravity of solids, and so forth, gradually increased, but progress in finding a general method was at first extremely slow. The general method could not be discovered until sufficient theoretical and computational material had been accumulated through the demands of practical life.

The work of gathering and generalizing this material proceeded very gradually until the end of the Middle Ages; and its subsequent energetic development was a direct consequence of the rapid growth in the productive powers of Europe resulting from the breakup of the former (feudal) methods of manufacturing and the creation of new ones (capitalistic).

The accumulation of facts connected with definite integrals proceeded alongside the corresponding investigations of problems related to the derivative of a function. The reader already knows that this immense preparatory labor was crowned with success in the 17th century by the work of Newton and Leibniz. It is in this sense that Newton and Leibniz are the creators of the differential and integral calculus.

One of the fundamental contributions of Newton and Leibnitz consists of the fact that they finally cleared up the profound connection between differential and integral calculus, which provides us, in particular, with a general method of calculating definite integrals for an extremely wide class of functions.

To explain this connection, we turn to an example from mechanics.

We suppose that a material point is moving along a straight line with velocity v = f(t), where t is the time. We already know that the distance σ covered by our point in the time between t = t1 and t = t2 is given by the definite integral of f(t) taken over that interval: σ = ∫[t1, t2] f(t) dt.

Now let us assume that the law of motion of the point is known to us; that is, we know the function s = F(t) expressing the dependence on the time t of the distance s calculated from some initial point A on the straight line. The distance σ covered in the interval of time [t1, t2] is obviously equal to the difference: σ= F(t2) – F(t1)

In this way we are led by physical considerations to the equality: F(t2) − F(t1) = ∫[t1, t2] f(t) dt,

which expresses the connection between the law of motion of our point and its velocity.

From a mathematical point of view the function F(t) may be defined as a function whose derivative for all values of t in the given interval is equal to f(t), that is:

F'(t)= ƒ(t).    Such a function is called a primitive for f(t).

We must keep in mind that if the function f(t) has at least one primitive, then along with this one it will have an infinite number of others; for if F(t) is a primitive for f(t), then F(t) + C, where C is an arbitrary constant, is also a primitive. Moreover, in this way we exhaust the whole set of primitives for f(t), since if F1(t) and F2(t) are primitives for the same function f(t), then their difference ϕ(t) = F1(t) − F2(t) has a derivative ϕ′(t) that is equal to 0’ at every point in the given interval, so that ϕ(t) is a constant.*

From a physical point of view the various values of the constant C determine laws of motion which differ from one another only in the fact that they correspond to all possible choices for the initial point of the motion.

We are thus led to the result that for an extremely wide class of functions f(x), including all cases where the function f(x) may be considered as the velocity of a point at the time x, we have the following equality: ∫[a, b] f(x) dx = F(b) − F(a),   (30)

where F(x) is an arbitrary primitive for f(x).

This equality is the famous formula of Newton and Leibnitz, which reduces the problem of calculating the definite integral of a function to finding a primitive for the function and in this way forms a link between the differential and the integral calculus.

Many particular problems that were studied by the greatest mathematicians are automatically solved by this formula, stating that the definite integral of the function f(x) on the interval [a, b] is equal to the difference between the values of any primitive at the left and right ends of the interval. It is customary to write the difference (30) thus:

Example 1. The equality (x³/3)′ = x² shows that the function x³/3 is a primitive for the function x². Thus, by the formula of Newton and Leibniz: ∫[a, b] x² dx = b³/3 − a³/3.
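The Newton-Leibniz shortcut can be contrasted with the direct summation it replaces; a Python sketch on [0, 1] (riemann is a hypothetical helper that uses midpoints for the ξi):

```python
# Sketch: the definite integral of x² over [0, 1] computed two ways:
# a Riemann sum with n rectangles vs. the primitive x³/3 at the ends.
def riemann(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))  # midpoint ξi

f = lambda x: x**2
F = lambda x: x**3 / 3              # a primitive of f
print(riemann(f, 0.0, 1.0, 10**4))  # ≈ 1/3
print(F(1.0) - F(0.0))              # 1/3 by the Newton-Leibniz formula
```

Ten thousand finitesimal additions on one side; two evaluations of the primitive at the membrane-limits a and b on the other.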

Example 2. Let c and c′ be two electric charges, on a straight line at distance r from each other. The attraction F between them is directed along this straight line and is equal to:

F=a/r²   (a = kcc′, where k is a constant). The work W done by this force, when the charge c remains fixed but c′ moves along the interval [R1, R2], may be calculated by dividing the interval [R1, R2] into parts Δri.

On each of these parts we may consider the force to be approximately constant, so that the work done on each part is equal to (a/ri²)Δri. Making the parts smaller and smaller, we see that the work W is equal to the integral: W = ∫[R1, R2] (a/r²) dr.

The value of this integral can be calculated at once, if we recall that: (−a/r)′ = a/r².

So that: W = a(1/R1 − 1/R2).

In particular, the work done by the force F as the charge c′, initially at a distance R1 from c, moves out to Immensity, is equal to: W = a/R1.

From the arguments given above for the formula of Newton and Leibnitz, it is clear that this formula gives mathematical expression to an actual tie existing in the objective world. It is a beautiful and important example of how mathematics gives expression to objective laws.

We should remark that in his mathematical investigations, Newton always took a physical point of view. His work on the foundations of differential and integral calculus cannot be separated from his work on the foundations of mechanics.

The definite integral

Returning now to the definite integral, let us consider a question of fundamental importance. For what functions f(x), defined on the interval [a, b], is it possible to guarantee the existence of the definite integral, namely a number to which the sum sn = Σ f(ξi)Δxi tends as limit as max Δxi → 0? It must be kept in view that this number is to be the same for all subdivisions of the interval [a, b] and all choices of the points ξi.

Functions for which the definite integral, namely the limit (29), exists are said to be integrable on the interval [a, b]. Investigations carried out in the last century show that all continuous functions are integrable.

But there are also discontinuous functions which are integrable. Among them, for example, are those functions which are bounded and either increasing or decreasing on the interval [a, b].

The function that is equal to 0’ at the rational points in [a, b] and equal to unity at the irrational points may serve as an example of a nonintegrable function, since for an arbitrary subdivision the integral sum sn will be equal to 0’ or unity, depending on whether we choose the points ξi as rational numbers or irrational.
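The dependence of the integral sums on the choice of the points ξi can be simulated; a sketch that simply tags each sample point as rational or irrational (a marker trick for illustration, not real number theory):

```python
import math
# Sketch (a marker trick): each sample point ξi is tagged rational or
# irrational; f returns 0 on rationals, 1 on irrationals, as in the
# text's nonintegrable example.
def f(x, is_rational):
    return 0 if is_rational else 1

n = 1000
dx = 1 / n
s_rational   = sum(f(i / n, True) * dx for i in range(n))   # all ξi rational
s_irrational = sum(f(i / n + math.sqrt(2) / 10**6, False) * dx
                   for i in range(n))                       # all ξi irrational
print(s_rational, s_irrational)   # 0.0 vs ≈1.0: the sums never agree
```

Refining the subdivision changes nothing: one choice of ξi always gives 0’, the other always gives unity, so no common limit exists.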

Let us note that in many cases the formula of Newton and Leibnitz provides an answer to the practical question of calculating a definite integral. But here arises the problem of finding a primitive for a given function; that is, of finding a function that has the given function for its derivative. We now proceed to discuss this problem. Let us note by the way that the problem of finding a primitive has great importance in other branches of mathematics also, particularly in the solution of differential equations.

As we stated before, integrals are mostly useful when we are studying a ‘defined’ full S<ST>T system with a membrane or contour closing the surface, as integrals are more concerned with ‘space’ and ‘derivatives’ with time. And further on, there are those which integrate space-time systems, the double and triple integrals.

It is the calculus of ALL types of vital spaces, enclosed by time functions, with a ‘scalar’ point of view: a parameter that measures what the point of view extracts, in symbiosis with the membrane, from the vital space it encloses. Alas, this quantity absorbed and ab=used by the point of view on the vital space would be called ‘Energy’, the vital space ‘field’, the membrane ‘frequency’, the finitesimal ‘quanta or Universal constant’, and the scalar point of view ‘active magnitude’.

The fundamental language of physics is differential equations, which allow us to measure the content of vital space of a system. The richness and variety of ‘world species’ will define many variations on the theme. Sometimes there will not be a central point of view, and we talk of a ‘liquid state’, where volumes will not have a ‘gradient’, but ‘Pressure’, the controlling parameter of the time membrane, will be equal, or related, to the gradient of the external world p.o.v. of the Earth (gravitational field).

Then we shall integrate along 3 parameters: the density that defines the liquid, the height that defines the gradient, and the volume enclosed. Liquids, due to the simplicity of lacking an internal POV, would be the first physical application of Leibniz’s findings, by his students, the Bernoulli family. Next, a violin player would find the differential equation of waves – the essential equation of the membranes of present time of all systems. The 3rd type of equations, those of the central point of view, would have to wait for a mathematician, Poisson – later refined by Einstein in his General Relativity.

This is the error of Newton. All cycles are finite, as they close into themselves. All worldcycles of life and death are finite, as they end as they began, in the dissolution of death. All entropic motions stop. All time vortices, once they have absorbed all the entropy of their territory, become wrinkled and die. Newton died; his ‘time duration’ did not extend to infinity.

But those minds measure from their self-centered point of view only a part of the Universe, and the rest remains obscure. So all of them display the paradox of the ego, as they confuse the whole Universe with their world, and see themselves larger than all that they don’t perceive. Hence, as Descartes wittily warned the reader in his first sentences, ‘every human being thinks he is gifted with intelligence’.

The mean value theorem.

A differential expresses the approximate value of the increment of the function in terms of the increment of the independent variable and of the derivative at the initial point. So for the increment from x = a to x = b, we have:

ƒ(b) – ƒ(a)≈ ƒ'(a) (b-a).

It is possible to obtain an exact equation of this sort if we replace the derivative f′(a) at the initial point by the derivative at some intermediate point, suitably chosen in the interval (a, b). More precisely: If y = f(x) is a function which is differentiable on the interval [a, b], then there exists a point ξ, strictly within this interval, such that the following exact equality holds:

ƒ(b)-ƒ(a)=ƒ'(ξ)(b-a)

The geometric interpretation of this “mean-value theorem” (also called Lagrange’s formula or the finite difference formula) is extraordinarily simple. Let A, B be the points on the graph of the function f(x) which correspond to x = a and x = b, and let us join A and B by the chord AB.

Now let us move the straight line AB, keeping it constantly parallel to itself, up or down. At the moment when this straight line cuts the graph for the last time, it will be tangent to the graph at a certain point C. At this point (let the corresponding abscissa be x = ξ), the tangent line will form the same angle of inclination α as the chord AB. But for the chord we have:

tan α = (ƒ(b) – ƒ(a)) / (b – a).       On the other hand, at the point C: tan α = ƒ’(ξ).

This equation is the mean-value theorem, which has the peculiar feature that the point ξ appearing in it is unknown to us; we know only that it lies somewhere in the interval (a, b).
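For a concrete function the point ξ can be computed; a sketch with f(x) = x³ on [0, 2], where f′(ξ) = 3ξ² is solvable by hand:

```python
import math
# Sketch: locating the mean-value point ξ for f(x) = x³ on [0, 2],
# where f'(ξ) = 3ξ² must equal the slope of the chord AB.
f = lambda x: x**3
a, b = 0.0, 2.0
chord_slope = (f(b) - f(a)) / (b - a)   # tan α of the chord: 4.0
xi = math.sqrt(chord_slope / 3)         # solve 3ξ² = 4: ξ = 2/√3 ≈ 1.1547
print(xi, a < xi < b)                   # ξ lies strictly inside (a, b)
```

For a general f the theorem only guarantees that such a ξ exists somewhere in (a, b) without locating it; here the cubic is simple enough to solve explicitly.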

Its interpretation in ∆st is that ƒ'(ξ) corresponds to the value of a finitesimal lying between both.

FIRST, the fact that membranes must determine the beginning and end point of any function for it to be meaningful and solvable. And indeed, only because we know where the domain starts and ends are we sure to find a mean point.

If we consider then a T.œ mean value theorem, where ƒ(b) > ƒ(a) if we are deriving in space, where f(b)=Max. S represents the parameter of the membrane, ƒ(a) will represent the singularity, and so we shall find in between a finitesimal part of the vital energy of the T.œ, with a mean value within that of Max. S (membrane) and Min. S (singularity). And vice versa, if we are deriving in search of the minimal quanta of time, ƒ(a) > ƒ(b), where ƒ(a) represents the time speed of the singularity and ƒ(b) the time speed of the membrane. And the mean value will be that of the finitesimal.

But in spite of this indeterminacy, the formula has great theoretical significance and is part of the proof of many theorems in analysis.

The immediate practical importance of this formula is also very great, since it enables us to estimate the increase in a function when we know the limits between which its derivative can vary. For example:

|sin b – sin a| = |cos  ξ| (b-a) ≤ b-a.

Here a, b and ξ are angles, expressed in radian measure; ξ is some value between a and b; ξ itself is unknown, but we know that |cos  ξ |≤1

Another immediate expression of the theorem, which allows us to derive a general method for calculating the limits and approximations of polynomials with derivatives, is:

For arbitrary functions ϕ(x) and ψ(x) differentiable in the interval [a, b], provided only that ψ′(x) ≠ 0 in (a, b), the equation (ϕ(b) − ϕ(a)) / (ψ(b) − ψ(a)) = ϕ′(ξ)/ψ′(ξ) holds, where ξ is some point in the interval (a, b).

From the mean value theorem it is also clear then that a function whose derivative is everywhere equal to 0’ must be a constant; at no part of the interval can it receive an increment different from 0’. Analogously, it is easy to prove that a function whose derivative is everywhere positive must everywhere increase, and if its derivative is negative, the function must decrease.

And so the ‘classic function of the mean-value theorem’ allows us to introduce an essential element of ∫∂ which will open up the ∆st calculus of worldcycles of existence: the standing points of a function.

Maxima and minima: the 3 standing points of a world cycle. The mean value holds for the region between the limiting points of the curve – which must be taken, in a higher step-timespace, as two sections of a bi-podal spherical line, part of the membrane of a 3D form. It gives us then a value for the vital energy to be expressed with a scalar; and the initial and final points of the segment become the maximum and minimum of the function in F(x) values.

It is then, between those two limits, a question of finding the points of the vital energy, among them the singularity, Max. S x t, to have a well-defined T.œ in its membrane (maximal and minimal values), volume of energy (mean value) and maximal point of the singularity.

We can then dissect the sphere in antipodal points, related to the identity neutral number of the 0-1 sphere of time probabilities, which the largest whole maximizes in its antipodal points. If we consider the antipodal points the emergent and final death points, which imperfect motions still close into 0’-sums, the maximal middle point will be the singularity, Max. S x Max. t.

Dimensional integration. Dimensions of form that become motions and vice versa.

Now the key to fully grasping the enormous variety of integral and derivative results obtained in all sciences is to understand that all space forms can be treated as instants in time, or events of motion, and all motions in time can be seen as fixed present moments in space.

These series of combinations of time and space, S>T>S>T, which leave a trace of steps and frequencies whose whole integration emerges as a new ∆+1 scale of reality, are at the core of all fractal, reproductive processes of reality.

For example, the S-T duality is at the core of the Galilean paradox of relativity (eppur si muove, eppur non si muove), of Einstein’s relativity, and of Zeno’s paradox.

So we can consider motion in time as reproduction of form in adjacent topologies of discontinuous space.

We can consider the stop and go motions of films, picture by picture, integrating those ‘spatial pictures’ into a time ‘motion picture’.

We consider the wave-particle paradox: waves move by reproduction of form, and particles collapse by integration of that form in space into a time-particle.

In those cases integration happens because a system that moves in time, reproduces in space. And vice versa, steps in space become a memory of time.

Now it is also important to study case by case and distinguish properly what we are truly seeing, a population in space or an event in time, as it often happens that humans confuse them, as in quantum physics, where motion is so fast that time cycles appear as forms of space. We shall then unveil many errors, where a particle in time is seen as a force in space (confusion of the electroweak, transformative force as a spatial force, and so on).

All systems can be integrated, as populations in space, to create synchronous superorganisms, and as world cycles in time, creating existential cycles of life and death. The population integral will however be positive, and the integral in time will be 0’.

Since systems of populations in space do have volume, yet the whole motion in time can be integrated as closed paths of time, or conservative motions that are 0’ sums; and this allows us to resolve what is time integration and what is space integration.
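The 0’-sum of a closed time path vs. the positive integral of a population in space can be illustrated with one full period of a sine wave (a sketch; sin stands in for the repetitive dimotion, sin² for a positive density):

```python
import math
# Sketch: one full period of a sine wave as a closed time path (0'-sum)
# vs. sin² as a positive population density integrated in space.
n = 10**5
dt = 2 * math.pi / n
time_cycle = sum(math.sin((i + 0.5) * dt) * dt for i in range(n))
population = sum(math.sin((i + 0.5) * dt)**2 * dt for i in range(n))
print(time_cycle)   # ≈ 0': the closed worldcycle cancels
print(population)   # ≈ π: a positive volume of population
```

The same curve gives a null result when read as a conservative closed path and a positive one when read as a density occupying space.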

Consider, to fully grasp this, the reproduction of a wave, which constantly reproduces its form as it advances in space, and cannot be localised (Heisenberg uncertainty) because it is a present wave of time, as light moves NOT in the least space but in the least time. Now, consider a seminal wave – you – which reproduces in time, but becomes a herd of cells that, integrated, emerges into a larger scale. In both cases the final result is in space, and so it is positive.

So each case must be studied to conclude whether we are observing a time event or a spatial organism.

In that regard the most important and hence first view of the Rashomon Effect on ∫∂ is this:

∫≈∂ are time=space beats/steps in any d²

It follows then that we can skip the memorial, step-by-step creation of the spatial form, as something no longer needed when we are interested only in integrating the space; for that reason the integral works merely as an integral of a volume or a surface, whose creation in time has already happened.

But we still have to find a quanta of that ‘creation’ now a mere ‘population in space’.

WHY DIFFERENTIALS=FINITESIMALS NOT INFINITESIMAL 0s ARE THE REAL THING

Finitesimals as Differential of a function. The stœps of motion and form that shape the flows of timespace.

The essential connection between ∆ST actions and dimotions performed by a larger whole and algebra occurs in calculus, where the whole is the integral and the minimal indistinguishable part its (in)finitesimal. Again the key concept here is one of perception. So in fact we talk NOT of an infinitesimal limit but rather, in the thought of Leibniz, of a 1/n quanta, so small that the whole doesn’t distinguish it, and so it can be discarded. Themes we shall analyze in depth when studying calculus.

This deep fact – that small steps are ‘lineal’ and longer ones are curved and ultimately 0’-sum closed paths – is the justification for the use of differentials instead of derivatives in most applications of calculus to reality. Because while the limit to infinity does not exist, there is a fundamental paradox between lineal approximations, open free steps in the small realm, and curved closed worlds in the upper realm, which makes every ‘finitesimal being’ feel happy and unbounded, tracing ‘zig-zag’ stop-and-go motions, where the motion is always a lineal step (try to walk in curved fashion)… but during the formal observation, in stop-mode, of the next step to run, the ∆+1 enclosing whole will deviate your absolute motion into a cyclical step. And so while small steps are differentials, their sum, with length-motion and height-perception represented by the X and Y components of the tangent, finally gives us a closed curve, or one whose limits of validity – its mathematical domain – are imposed by the higher ∆+1 world.

Thus when we observe reality in any scale at the maximal detail of its stœps, it appears exactly as the two sides of the tangent of a derivative, if we consider the ‘absolute frame of coordinates’, where X is the measure of steps of motion in lineal continuous time, and Y the coordinate of form and perception – which we already defined for evolution in biology, vital topology in ¬E Geometry and wave-particle states in quantum physics in other papers, and which turn out to be in mathematics the ideal form of ST, space-time, form and motion states.

So we see electrons in a stop-and-go motion, deviating their path at each stage even if they finally trace their natural flow of time, and we see Brownian motion in particles that try to go straight but are constantly deviated by the larger world. And the Earth looks flat, but Elcano went to the west and returned through the east; and we think we shall live forever when we are young looking at the future, but when we are old we only see the past. And so in time, space and scale the paradoxes of curved order and small freedoms carry the exist¡ential momentum through its worldline, which always becomes a worldcycle for those who go beyond the ‘shallow’ 4D continuum into the sudden stops and discontinuities of moving on, to assess the new direction to take in front of insurmountable larger walls.

Differentials in essence are ‘lineal’ rates of change in small ‘intervals’ of any function that is curved, and whose exact, ideal, non-lineal rate of change over a long stretch is difficult to calculate. And in reality the differential is used everywhere instead of the ideal derivative. The justification in 5D is the concept of a finitesimal minimal quanta, and the fractal nature of points and stœps, the minimal quanta of change. That is, change is never infinitesimal: any change implies a minimal ¡-1 unit of the being, either its frequency step or its reproductive cell, etc. So that ‘quanta’ of change, which is better measured by the ‘diameter’, ‘height’ or length of the spherical, tall or flat form (cell, atom, individual), is a differential.

The maths of it are well known to any student. As it is so essential to 5D ‘experimental mathematics’, we shall bring it here for further comment.

Let us then consider a function S = ƒ(t) that has a derivative. The increment of this function, ∆s = ƒ(t+∆t) – ƒ(t), corresponding to the increment Δt, has the property that the ratio Δs/Δt, as Δt → 0, approaches a finite limit, equal to the derivative:   ∆s/∆t → ƒ'(t)

This fact may be written as an equality:   ∆s/∆t = ƒ'(t) + a

where the value of a depends on Δt in such a way that as Δt → 0, a also approaches 0’; since in ∆st the minimal step of any entity always has a lineal form.

Thus the increment of a function may be represented in the form:

∆s=ƒ'(t)∆t + a∆t     where a → 0, if Δt → 0.

The first summand on the right side of this equality depends on Δt in a very simple way, namely it is proportional to Δt. It is called the differential of the function at the point t, corresponding to the given increment Δt, and is denoted by:   ds = ƒ'(t)∆t

The second summand has the characteristic property that, as Δt → 0, it approaches 0’ more rapidly than Δt, as a result of the presence of the factor a.

It is therefore said to be a finitesimal of higher order than Δt and, in case f′(t) ≠ 0, it is also of higher order than the first summand.

By this we mean that for sufficiently small Δt the second summand is small in itself and its ratio to Δt is also arbitrarily small.
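
A minimal numeric sketch of the decomposition above, using the sample function s = ƒ(t) = t² (an illustrative assumption): the increment Δs splits exactly into the differential ƒ'(t)Δt plus a second summand a·Δt whose factor a vanishes with Δt:

```python
def f(t):
    return t * t   # sample function s = f(t) = t^2, so f'(t) = 2t

t = 3.0
f_prime = 2 * t
for dt in (0.1, 0.01, 0.001):
    ds = f(t + dt) - f(t)         # the true increment Delta-s
    differential = f_prime * dt   # principal part: f'(t) * Delta-t
    a = (ds - differential) / dt  # the factor a of the second summand
    print(dt, differential, a)    # a shrinks with dt (here a = dt exactly)
```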

Practical stience only needs to measure a differential either in space, ds = BD (part of the increment BC = BD + DC), or in time, as a fraction of the unit worldcycle, ƒ(x) = cos²x + sin²x = 1, which becomes a minimal lineal st-ep or action, ƒ(t) = S step.

In the graph, decomposition of ΔS into two summands: the first (the principal part) depends linearly on ΔT and the second is negligible for small ΔT. The segment BC = ΔS, where BC = BD + DC, BD = tan β · ΔT = f′(t) Δt = dS, and DC is a finitesimal of higher order than Δt.

For symmetry in the notation it is customary to denote the increment of the independent variable by dx – in our case dt – and to call it also a differential. With this notation the differential of the function is:   ds = ƒ'(t) dt

The derivative is the ratio, f′(t) = ds/dt of the differential of the function, normally a ‘whole spatial view’ to the differential of the independent variable, normally a temporal step or minimal change-motion in time.
The differential of a function originated historically in the concept of an “indivisible”, similar to our concept of a finitesimal and so much more appropriate for ∆st than the abstraction of an infinitesimal with ∆t->0, since time is discrete and there is always a minimal step of change, or reproductive step in a motion of reproduction of information.

Differentials of calculus are practical infinitesimals, and their knowledge for any function acts as an ∂st limit.

On the other hand, for any group that we can take as vital space-time, the differential finds us a middle point.

Rightly then the indivisible, and later the differential of a function, were represented as actual infinitesimals, as something in the nature of an extremely small constant magnitude, which however was not 0’.

According to this definition the differential is a finite magnitude, measurable in space, for each increment Δt, and is proportional to Δt. The other fundamental property of the differential is that it can only be recognized in motion, so to speak: if we consider an increment Δt approaching its finitesimal limit, then the difference between ds and Δs will be arbitrarily small even in comparison with Δt – till it becomes 0’. The error of interpretation in classic calculus is that it is this difference that approaches 0, as finally the function will also be lineal – not ∆t, which will become a ‘quanta’, as quantum physicists would later discover.

As this is the real model, the substitution of the differential in place of small increments of the function forms the basis of most of the REAL applications of what is now called ‘finitesimal analysis’ to the study of nature.

The ¬Algebraic/graphic duality.

In view of our deeper departure from the ultimate essence of Analysis, which is to study steps of space-time – that is, to put ¬Algebraic S=T symmetries in motion – the ¬Algebraic vs. graphic interpretations of calculus respond to yet another symmetry of spatial vs. temporal methods, considered in our posts on @nalytic geometry and ¬Algebra.

It does show more clearly what we mean by those ‘steps’ as basically the ‘tangent’ of the curve is in most cases a space-time step expressed by the general function: X(s) = ƒ(t)

Obviously as s and t are ill defined, it was only understood for lineal space-distance and time-motion.  And so the ‘geometrical’ abstract concept remains, void of all experimental meaning… as a… tangent… it was…

Spatial:geometric view.

We are led to investigate a precisely analogous limit by another problem, this time a geometric one, namely the problem of drawing a tangent to an arbitrary plane curve.

Let the curve C be the graph of a function y = f(x), and let A be the point on the curve C with abscissa x0 (figure 10). Which straight line shall we call the tangent to C at the point A? In elementary geometry this question does not arise. The only curve studied there, namely the circumference of a circle, allows us to define the tangent as a straight line which has only one point in common with the curve.

To define the tangent, let us consider on the curve C (figure up) another point A′, distinct from A, with abscissa x0 + h. Let us draw the secant AA′ and denote the angle which it forms with the x-axis by α. We now allow the point A′ to approach A along the curve C. If the secant AA′ correspondingly approaches a limiting position, then the straight line T which has this limiting position is called the tangent at the point A. Evidently the angle α formed by the straight line T with the x-axis, must be equal to the limiting value of the variable angle β.

The value of tan β is easily determined from the triangle ABA′ (figure up):   tan β = [ƒ(x₀ + h) − ƒ(x₀)] / h

It is then clear that h is the frequency quanta of time or, if we are inversely using the ∫∂ method to measure space populations, the minimal unit. And so the ultimate concept here is that h NEVER goes to 0. And the clear proof is that if it were to arrive at 0’, x/h = ∞.
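
The secant construction can be sketched numerically; assuming ƒ = sin and x₀ = 1 purely for illustration, each computed slope tan β uses a finite h, approaching the tangent value cos(1) without h ever being 0:

```python
import math

def secant_slope(f, x0, h):
    """Slope of the secant AA': tan(beta) = (f(x0 + h) - f(x0)) / h."""
    return (f(x0 + h) - f(x0)) / h

x0 = 1.0
for h in (1.0, 0.1, 0.01, 0.001):
    slope = secant_slope(math.sin, x0, h)  # every slope uses a finite quanta h
    print(h, slope)
print(math.cos(x0))  # the limiting tangent slope, ~0.5403
```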

So infinitesimals do NOT exist, and it only bears witness to the intuitive intelligence of Leibniz that he so much insisted on a quantity for h = 1/n… (and to the lack of it in the 7.5 billion infinitesimals of Humanity, our collective organism, who memorise this h→0 that gave me so much abstract pain as a kid – one of those errors I annotated mentally, along with the absurd concept of a non-E point with no breath (or else how do you fit many parallels?), the limit of c-speed (how did Einstein prove that experimentally?) and other ‘errors’ that ∆st does solve in all sciences).

But for other curves such a definition will clearly not correspond to our intuitive picture of “tangency.”

Thus, of the two straight lines L and M in figure below, the first is obviously not tangent to the curve drawn there (a sinusoidal curve), although it has only one point in common with it; while the second straight line has many points in common with the curve, and yet it is tangent to the curve at each of these points.

And yet such a curve is ultimately the curve of a wave, and we know waves are differentiable. So the tangent IS NOT the ultimate meaning of the ∫∂ functions – time/space beats are. The question then is what kind of st beat shall we differentiate in such a transversal wave?

A different dimension, normally as waves are the 2nd dimension of energy, as in the intensity of an electric flow… a mixture of a population and a motion; or ‘momentum’ (the derivative of energy)…

And so the next stage into the proper understanding of ∫∂ operations is what ‘kind of dimensional space-time change-steps’ we are measuring.

∆ view: The inversion of the finitesimal calculus of ∆-1 is the integral calculus of 5D.

The transition to ∆nalysis: new operations

The mathematical method of limits was evolved as the result of the persistent labor of many generations on problems that could not be solved by the simple methods of arithmetic, ¬Algebra, and elementary geometry.

The inverse properties of Space problems and temporal problems.

What were the problems whose solution led to the fundamental concepts of analysis, and what were the methods of solution that were set up for these problems? Let us examine some of them.

The mathematicians of the 17th century gradually discovered that a large number of problems arising from various kinds of motion with consequent dependence of certain variables on others, and also from geometric problems which had not yielded to former methods, could be reduced to two ST types:

Temporal examples of problems of the first type are: find the velocity at any time of a given nonuniform motion (or more generally, find the rate of change of a given magnitude), and draw a tangent to a given curve. These problems led to a branch of analysis that received the name “differential calculus.”

Spatial examples: The simplest examples of the second type of problem are: find the area of a curvilinear figure (the problem of quadrature), or the distance traversed in a nonuniform motion, or more generally the total effect of the action of a continuously changing magnitude (compare the second of our two examples). This group of problems led to another branch of analysis, the “integral calculus.”

Thus 2 S=T problems are singled out: the temporal problem of tangents and the spatial problem of quadratures.

Now the reader will observe that, unlike the age of Arithmetic and ¬Algebra, which stays in the same ‘locus/form’, here we observe a key property of analysis: the transformation of a temporal, cyclical question into a lineal, spatial solution.

I.e. the solution of acceleration/speed by a lineal tangent, through an approximation; and the calculus of a cyclical, spatial area by the addition of squares. And the deep philosophical truth behind it, which only Kepler seemed to have realized at the time:

‘All lines are approximations or parts of a larger worldcycle.’

And so we can consider in terms of modern fractal mathematics, that ‘the infinitesimal is the fractal unit, quanta or step’ of the larger world cycle, and as a general rule:

‘All physical processes are part of a conservative 0’-sum world cycle’.

Which explains ultimately the conservation of energy and motion, as motions become ultimately world cycles, either closed paths in a single plane, or world cycles balanced through ∆±1 planes.

Such is the simple dual ÐST justification of Analysis, as always based in ∆…  finitesimals and St… the inverse properties of ∫∂.

Are there other operands of mathematics beyond those of calculus and hence other dimensional motions and complex minds beyond humind’s maximal understanding of the game of exist¡ence reflected in those texts?

Yes… We have not analyzed the most important of all operands, the symbol of equality, which mathematical praxis uses so merrily, even if logicians and rigorous mathematicians have rightly studied the fact that equality as an identity is the rarest of all occurrences – the more so in a world in which fractal points hold an invisible inner mind-world, which can only be equalized with the restricted view of Euclid’s points with no breath.

Basically we need to apply the ‘pentalogic’ of the 5 Dimotions to =, which really means <=>, a feedback equation describing any of the ‘changes’ between SS, ST, TT, Ts, St states (<, >, =, «, »). As only A is identical to A when in the identical position of A, A is ultimately unique, and all A = B must be written as A <=> B.

Differentials – any St-Dimensional Steps

The disquisition of which ‘minimalist finitesimal’ allows us to differentiate an S≈T ¬Algebraic symmetry brings us to the ‘praxis’ of calculus techniques that overcome by ‘approximations’ the quest for the finitesimal quanta in space or time susceptible of calculus manipulation; this gave birth to the praxis of finding differentials, which are the minimal F(Y) quanta to work with and obtain accurate results (hence normally a spatial finitesimal of change under a time-dependent function). This was the origin of the calculus of differentials.

As always in praxis, the concept is based in the duality between huminds that measure with fixed rulers, lineal steps, over a cyclical, moving Universe. So Minds measure Aristotelian, short lines, in a long, curved Universe.

So the question comes to which minimalist lineal step of a mind is worthy to make accurate calculus of those long curved Universal paths.

It is then obvious that the derivative of a lineal motion has more subtle elements than its simplest ¬Algebraic form, the x ÷ lineal operation of ‘reproductive speed’; and so the concept of a differential, to measure the difference between steady-state lineal reproduction and the variations observed in a curve, appeared as the strongest tool of approximation of both types of functions.

As we have considered, most differential equations will be of the form F(s) ≈ g(t), where s and t are any of the 5 Dimensions of Space ($, S, §, ∫, •) or 5 Dimensions of time (t, T, ð, ∂, O), whose change with respect to each other we are bound to study… showing how a spatial whole depends on the change and form of a worldcycle, we shall consider generally that y→s and x→t…

The result of this change will be a much more generic concept of speed of change in any of the dimensions of entropy, motion, iteration, information or form that define the Universe, letting us introduce its 3 fundamental parameters, S/t = speed, t/s = density and s x t = momentum/force/energy, in a natural way, with its multiple different meanings, Ðisomorphic to each other – as we repeat the s and t of the general function.

Space finitesimals vs. Time finitesimals

We must ‘differentiate’ when differentiating (:

-Space finitesimals, which are the minimal quantity of a closed energy cycle or simultaneous form of space, easier to understand, as they are ‘quanta’ with an apparent ‘static form’, which can be ‘added’, if they are a lineal wave of motion-reproduction, along the path; or can be integrated (added through different areas and volumes), to give us a 3D reality.

-Time finitesimals, which are the minimal period for any action of the being, and will trace a history of synchronicities as the actions find regular clocks, which interact among themselves to allow the being to perform ALL the 5D actions it needs to survive. So we walk (A(a)), but then eat energy (Å(e)), and we do not often do them together. Actions have different periodicities for EACH species that performs the 5 actions. So to ‘calculate’ all those periodicities in a single all-encompassing function we have to develop a 5D variable system of equations.

– Spacetime finitesimals. But more interesting is the fact that Nature works simultaneously integrating populations in space and synchronising their actions in time. So we observe also space-time finitesimals where the synchronicity consists in summoning the actions of multiple quanta that perform in the same moment the same ‘D-motion’, which is ‘reinforced’ becoming a resonant action.

And for the calculus of those space-time finitesimals the best way to go around is by ‘gathering the sum of ∆-1 quanta’ into a ‘larger ∆º quanta’ treated as a new ‘1’ adding up its force. Even if most of them are just complex ensembles of the simplest actions of  many cellular parts – steady state motions, reproduction of new dimensions and vortex of curvature and information absorption.

All functions of analysis thus can be considered operations on actions of space-time.

Groups of Finitesimals and their synchronous actions thus meet at ∆º in the mirror of mathematical operations, through the localisation of a ‘theoretical’ tangent ≈ infinitesimal of the nano-scale (∂s/∂t proper) or an ‘observable’ differential, a larger finitesimal, which is the real element, as any finitesimal is a fractal micro-point that has a fractal volume, expressed in the differential.

Then we gather them, in time or space and study their ‘inverse’ action in space or time.

So the first distinction we must make is between finitesimals expressed as functions of time frequencies and finitesimals expressed as areas of space, and the actions described on them. In practice, though, most finitesimals are spatial parts whose frequency of action is described by the ƒ(x)=t function.

The 3 parts of T.œ.

Every event and form must be analysed in ternary fashion, and so it happens with integrals and derivatives, which often represent integrals of space-time quanta belonging to the vital energy of the system, constrained in time or space by the singularity and outer membrane; or they might be of time quanta. So how can we differentiate them?

Thus we establish a correspondence between the 5 Dimotions, TT, St, Ts, ST and SS, and their integrals, such that:

TT: Both elements are time like. So in TT-dimotions we integrate a frequency of Time dimotion, the finitesimal over a time duration:

SS: Both elements are space-like. So we integrate a quanta of population over a volume of space:

Ts: If we call energy, Ts, a motion with a little form, we are thus considering more complex equations to integrate, and we can use the obvious example of physical systems, in which energy becomes the integration of existential momentum, mv, a combined ST parameter of space and time, along a path of time, giving us E = ½mv². So in general any Ts form will be integrated in the form:

St: Inversely for changes in the information of systems, which are perceived as spatial forms with a little bit of motion, the integration will be of the form:

ST: And finally for systems that experience a reproductive process in space-time, a single integration will not suffice, so we will require a double or triple integral of the system either in space or time, requiring a more profound analysis for each case:
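
For the Ts case above, a small numerical sketch (assuming the standard physical example given in the text, E as the integral of momentum mv over velocity) shows the summed momentum quanta recovering E = ½mv²:

```python
def kinetic_energy(m, v_final, n=100_000):
    """Sum momentum quanta m*v over finite velocity steps dv: E = ∫ m v dv."""
    dv = v_final / n
    return sum(m * (k + 0.5) * dv * dv for k in range(n))

m, v = 2.0, 3.0
E = kinetic_energy(m, v)
print(E)                # ≈ 9.0, quanta by quanta
print(0.5 * m * v * v)  # closed form: E = 1/2 m v^2 = 9.0
```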

∫S∂t = ∆e  becomes the integral of the inner spatial quanta of the open ball, surrounded by the membrane of temporal cycles, which conserves its Energy and by the sum of all T.œs that of the Universe. Its calculus, after finding a ‘continuous derivative’, surrounded by the membrane is then an integral: ∫ Sp x ðƒ = Ke.

And inversely. If we consider a single quanta of space or a single frequency of time, a moment of lineal or angular momentum, the result is a derivative.

So Analysis becomes the fundamental method to study travels upwards and downwards of the 5th dimension.

In general if we call a spatial quanta a unit of lineal momentum of each scale and a time cycle a unit of angular momentum, the metric merely means the principle of conservation of lineal and angular momentum.

Thus analysis studies the process which allows, by multiplication of ‘social numbers’, either populations in space or frequencies of time, a system to ‘grow in size’, which is the ultimate meaning of travelling through the 5th dimension. For example, when a wave increases its frequency, it increases the quantity of time cycles of the system. When a wheel speeds up, it increases the speed of its clocks. And vice versa, when a system increases its quanta, growing in mass, or increasing its entropy (degrees of motion in molecular space), it also grows through the 5th dimension.

And the integration along space and time of those growths is what we call the total Energy and Information of the system – what physicists call the integral of momentum.

So we shall only bring here some examples of analysis concerned with the definitions of the fundamental parameters of the fractal Universe, that is, the conservation principles and balances of systems, which can be summarized in 2 fundamental laws:

Continuity of functions

All this understood, we can then return to the inflationary nature of languages, which in the case of mathematics means that, without a mirror reflection in reality, it tries to introduce false concepts of infinity and continuity with pedantic axiomatic methods – the origin of the concept of absolute continuity of a function – when the true concept is the ‘stop and step’ nature of motions, and the dark, non-perceived regions between continuous points, or finitesimals. So it is irrelevant whether the finitesimal is a natural number to talk of discontinuity, as the system will have contiguous finitesimals of 1-number size. We then talk of measure more than continuity, and of errors of measure from an upper ∆º mind.

Intuitively, a function f(t) approaches a limit L as t approaches a value p if, whatever size error can be tolerated, f(t) differs from L by less than the tolerable error for all t sufficiently close to p.

Just as for limits of sequences, the formalization of these ideas is achieved by assigning symbols to “tolerable error” (ε) and to “sufficiently close” (δ). Then the definition becomes: A function f(t) approaches a limit L as t approaches a value p if for all ε > 0 there exists δ > 0 such that |f(t) − L| < ε whenever |t − p| < δ. (Note carefully that first the size of the tolerable error must be decided upon; only then can it be determined what it means to be “sufficiently close.”)
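
The ε–δ definition above can be sketched as a search procedure; assuming the sample function f(t) = 3t + 1 with limit L = 7 at p = 2 (an illustrative choice, not from the text), we halve δ until every sampled point within δ of p keeps the error below the tolerated ε:

```python
def find_delta(f, p, L, eps, start=1.0):
    """Halve delta until |f(t) - L| < eps at sample points within delta of p."""
    delta = start
    while True:
        samples = [p + delta * s for s in (-0.99, -0.5, 0.5, 0.99)]
        if all(abs(f(t) - L) < eps for t in samples):
            return delta
        delta /= 2

f = lambda t: 3 * t + 1   # illustrative function: its limit at p = 2 is L = 7
for eps in (0.1, 0.01, 0.001):
    # first the tolerable error eps is fixed; only then is delta determined
    print(eps, find_delta(f, 2.0, 7.0, eps))
```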

But what exactly is meant by phrases such as “error,” “prepared to tolerate,” and “sufficiently close”?

Again it is the relative ¡-1 quanta of the system studied. The ‘error’ of measure will then become ESSENTIAL to the explanation of the Uncertainty principle of Heisenberg, which indeed can be obtained from theory of measure and error, by pure mathematical methods.

So in ideal mathematics, having defined the notion of limit in this context, with no limit to the infinitesimal size of the error, it is straightforward to define continuity of a function. Continuous functions preserve limits; that is, a function f is continuous at a point p if the limit of f(t) as t approaches p is equal to f(p). And f is continuous if it is continuous at every p for which f(p) is defined. Intuitively, continuity means that small changes in t produce small changes in f(t)—there are no sudden jumps.

But as that small change will always in detail be an ε-quanta, in great detail there are quantum jumps. In fact, as there is always an ε-quanta in any process, in space or time, in form and motion (as we have shown when considering the nature of motion as reproduction of form in adjacent spaces), there will always be a quantum jump for all motions. And motion will be the reproduction of form in quantum jumps of ε nature.

1/n: the (in)finitesimal (in)finite

With the convention that ƒ  (x) is normally a function of time frequencies, ƒ (t), of motions of time, whose synthonies of synchronicity in space are expressed by an ¬Algebraic equation, we bring the following understanding:

The infinitesimal quanta in any scale is the departure point to build any function; as such it must have a minimal size, and ƒ'(t) is normally a good measure of it.

The study of the infinitesimal as perceived from the finite point of view is the view of fractals: when, in detail, we observe the closed worldcycles that separate and make each infinitesimal a whole.

A derivative is the finitesimal of the function observed, and so when we go even further and study it enlarged into our scalar view, in maximal information, we are in the fractal view of reality.

So as we expand our view the fractal view becomes more real, till finally the enclosures observed ∆-1 become fractal and we recognise its self-similarities: ∆-1 ≤ ∆º.

For each derivative, thus, a function shows its 1/n finitesimal (not necessarily the function 1/x itself, which is the derivative of the logarithm).

It follows that functions which grow ginormously have a ‘quanta of time’ reproduced, and so their minimal derivative finitesimal is the function itself, eˣ.
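
A quick numeric check of that claim (assuming base-e exponentials, as the text implies): the finite-difference slope of eˣ, divided by eˣ itself, tends to 1, i.e. the derivative finitesimal of the exponential is the function itself:

```python
import math

x = 1.5
fx = math.exp(x)
for h in (1e-2, 1e-4, 1e-6):
    slope = (math.exp(x + h) - fx) / h  # finite-difference derivative at x
    print(h, slope / fx)                # ratio -> 1: derivative = function
```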

In the next graph we see inverse equations of exponentials and logarithms.

Exponentials express decay, with a ‘negative’ exponent, better than exponential growth.

Mathematics is a reflection of nature, a small mirror of its ∆º±i structure; so for exponential growth we need Nature to provide unlimited energy, which happens only in the 0’-1 generational dimension of the being, or inversely in its 4D entropy age of death.

On the other hand, the limit of logarithmic growth maps out real growth better in logistic curves, being a good function to express ∆§cales.
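
To contrast the two behaviours just described, here is a minimal sketch (with illustrative rates and an assumed carrying capacity of 100 standing in for the ∆+1 limit): exponential decay runs unbounded toward 0’, while logistic growth saturates at its upper bound:

```python
import math

def exponential_decay(n0, rate, t):
    """Unbounded decay toward 0': N(t) = N0 * e^(-rate*t)."""
    return n0 * math.exp(-rate * t)

def logistic(n0, rate, carrying, t):
    """Growth bounded by a carrying capacity K (the upper-world limit)."""
    return carrying / (1 + (carrying / n0 - 1) * math.exp(-rate * t))

for t in (0, 5, 10, 50):
    print(t, round(exponential_decay(100, 0.5, t), 4),
             round(logistic(1, 0.5, 100, t), 4))
# decay heads toward 0; logistic growth saturates at the carrying capacity
```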

So numbers reflect those processes in their inverse exponential/logarithm mathematical graphs and numerical series.

ST: As with the three coordinate systems, self-centred in an ∆º pov, which reflect each of the three ‘topologies of space-time’ (cylindrical: lineal; polar: cyclical; Cartesian: hyperbolic), the infinitesimal 0-1 scale and the infinite 1-∞ scale, divided by the ‘1’ ∆º relative element, represent perfectly the ∆-scalar nature of superorganisms.

∆º±1: Further on, as we can ‘reduce’ each relative Immensity to those 3 Planes, and represent all timespace phenomena with the different families of numbers that close ¬Algebra (entropic, positive numbers; informative, negative numbers; present space-time, complex bidimensional numbers; s/t ir-ratio-nal numbers, etc.), mathematics becomes essentially the most realist language to represent the scalar, organic, ternary Universe.

The 0’-1 scale is equivalent to the 1-∞ scale for the lower ∆-1 Universe, where 1=∆º, the whole and 1-∞ is the ∆+1 eternal world.

And this is the symmetry to grasp the consequences of the o-1-∞ fundamental graph of the fifth dimension. Let us see how with a simple example:

The mirror symmetries between the 0’-1 universe and the 1-∞ are interesting as they set two different ‘limits’, an upper uncertain bound for the 1-∞ universe, in which the 1-world, ∆º exists, and a lower uncertain bound for the 0’-1 Universe, where the 1 does not see the limit of its lower bound. Are those unbounded limits truly infinite?

This is the next question where we can consider the homology of both the microscopic and macroscopic worlds.

Of course the axiomatic method ‘believes’ in infinity – we deal with the absurdities of Cantorian transinfinities in the articles on numbers. But as we consider maths, after Lobachevski, Gödel and Einstein, an experimental science, we are more interested in the homologies of ∆±1. For one thing, while 0 can be approached by infinite infinitesimal ‘decimals’, so it seems it can never be reached, we know since the ‘ultraviolet catastrophe’ that the infinitesimal is a ‘quanta’, a ‘minimum’, a ‘limit’. And so we return to Leibniz’s rightful concept of a 1/n minimal part of ‘n’, the whole ‘1’.

This implies by symmetry that on the upper bound, the world-universe in which the 1 is inscribed will have also a limit, a discontinuity with ∆+2, which sets up all infinities in the upper bound also as finite quanta, ‘wholes of wholes’.

So the ‘rest’ of infinities must be regarded within the rest of the ‘theory of information languages’ and its inflationary nature, inflationary information. What is then the ‘practical limit’ for most infinities and infinitesimals? In ÐST, the standard limit is the perfect game of 3 x 3 + 0(±1) elements, where the o-mind triples, as it is the ∆-1 ‘god of the infinitesimals it rules subconsciously, as your brain rules your cells’, the ∆º consciousness of the whole, and the ∆+1 infinitesimal of the larger world.

An o-1 time-mirrored quantum world of probabilities of existence, whose indistinguishable infinitesimals, seen through the surface limit of their statistical description in the thermodynamic scale of atomic beings, end in the 1-unit of our human cellular space, where thermodynamic considerations are reduced to a temperature gradient towards the homeostatic, mass-based forces of our human level of existence, ∆º.

So we consider as usual the kaleidoscopic, multiple function of analysis, and the multiple meanings of its inverse ∆±1 operations, derivatives and integrals; since as usual the potency of ∆st is in the search for whys, not in the discovery of new equations, which humans always exhaust by the Monkey Method of trials and errors – sweat and transpiration more than the inspiration of pure logic thought…

Physical equations in differential form, a general overview of their main species. History

Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Newton listed three kinds of differential equations: those involving two derivatives (or fluxions), one of space and one of time, and only one undifferentiated quantity (a space or time parameter); those involving two derivatives and two quantities of space and time; and those involving more than two derivatives.
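Newton’s second kind, dy/dx = ƒ(x, y), can be sketched with the simplest method of summing finitesimal ST-steps. This is a minimal illustration using Euler’s standard method (a later technique, not Newton’s own notation); the names `euler` and `f` are illustrative, not from the text.

```python
# A minimal sketch of Newton's second kind, dy/dx = f(x, y), solved by
# Euler's method: summing finitesimal steps dy = f(x, y) * dx.

def euler(f, x0, y0, x_end, n):
    """Integrate dy/dx = f(x, y) from x0 to x_end in n lineal steps."""
    h = (x_end - x0) / n          # the finitesimal step
    x, y = x0, y0
    for _ in range(n):
        y += f(x, y) * h          # add the quanta of change
        x += h
    return y

# dy/dx = y with y(0) = 1 has the exact solution e^x, so at x = 1
# the sum of lineal steps approaches e = 2.71828...
approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 100000)
print(approx)
```

The lineal steps only approximate the curved whole, which is precisely the duality of lineal short-range vs. cyclical long-range perception discussed below for differentials.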

His analysis thus was right on the spot, as he referred changes to change in space or time, thus ∫∂ with ST-eps – a fact later forgotten and today thoroughly missed with the ‘view’ of time as a single dimension of space (1D lineal motion confused with 4D entropy in philosophy of science).

It is still a good classification of partial differential equations as ‘time-like’ (∂x, ∂²x, ∂³x), ‘space-like’ (∂y, ∂²y, ∂³y) or ‘space-time-like’ (∂x∂y, ∂y∂x) – the main variations that represent T, TT, TTT; S, SS, SSS; ST, TS steps, which are the main 5D, 4D and 1,2,3D changes of the Universe.

And it speaks of the enormous range of real phenomena ∫∂ functions can describe as the essential operands of mathematical physics and any ∆st phenomena.

What allowed all those ∆st phenomena to enter the world of quantitative mathematics was the invention of the pendulum clock to measure time in lineal fashion and the telescope to measure space. Both gave birth to the 2nd age of science, the mathematical/scientific method, added to the experimental Aristotelian method, which now the isomorphic ÐST age of stience completes.

In 1609 appeared the “New astronomy” of Kepler, containing his first and second laws for the motion of the planets around the sun.

In 1609 too Galileo directed his recently constructed telescope, though still small and imperfect, toward the night sky; the first glance in a telescope was enough to destroy the ideal celestial spheres of Aristotle and the dogma of the perfect form of celestial bodies. The surface of the moon was seen to be covered with mountains and pitted with craters. Venus displayed phases like the Moon, Jupiter was surrounded by four satellites and provided a miniature visual model of the solar system. The Milky Way fell apart into separate stars, and for the first time men felt the staggeringly immense distance of the stars. No other scientific discovery has ever made such an impression on the civilised world.

It also killed an equally valid method of thought represented by the Greeks and Leonardo: the idealised understanding of the canonical perfect ÐST game of existence, of which we were all impure platonic forms, bound to dissolve, unlike the perfect game of the ∞ Universe, which is immortal.

Man never went back because alas! what really mattered was ballistics, mechanisms, power. Idealism died away:

The further development of navigation, and consequently of astronomy, and also the new development of technology and mechanics necessitated the study of many new mathematical problems. The novelty of these problems consisted chiefly in the fact that they required mathematical study of the laws of motion in a broad sense of the word. And now we had machines to measure it better than the artistic Sp-eye-T=words of the human space-time mind.

Points of constraint, balance and limits of integrals.

Any equation with a real, determined solution must be a complete T.œ. Hence it will have limits either in space (membrane and singularity of the open ball), or in time, initial and final conditions, bridged by an action in the ‘least time’ possible.

This is the key ∆st law that applies to the search for solutions in both ODE and PDEs.

Maximise its ðƒ/Sp, density of information/mass, and its Sp/ðƒ, density of energy, and hence reach a balance at ðƒ=Sp.

This simple set of equations – Max. ðƒ x Sp -> Tƒ=Sp; Max. Tƒ/Sp and Max. Sp/Tƒ – defines therefore the fluctuation points of systems that constantly move between the two extremes of informative and spatial states across a preferred point of balance, Sp=Tƒ, as this is the Max. Sp x Tƒ place.

Thus integrals, Lagrangians and Hamiltonians are variations of those themes. The motion of springs; the law of least time etc. all are vibrations along a point of balance, Tƒ=Sp, and 2 maximal inverse limits.

The different time-space beats.

This of course must be done because reality is bidimensional and a dimension of space is accompanied by a dimension of time, generating, as in the previous graphs, the motions=changes, S≈T≈S≈T, that shape reality.

And it is the justification of why differential equations, which make systems dependent on such pairs of variables, happen.

But then it follows we shall be able to apply pentalogic and find a use for the pair ∫∂ as expression of an inverse beating for each pair of dimensions of space-time.

And decompose both space-time forms and time-space events in S>T<S beats.

And in the process of doing so, learn further insights about the symmetries between space and time.

PRODUCT AND INVERSE DIVISION INTEGRALS

The most abundant of all operands, the merging product, requires therefore a more complex rule than a direct sum, which acts by ‘superposition’ of EQUAL BEINGS.

It is also susceptible of being operated on by calculus and ‘derivatives’, as now we involve for the first time both a scalar level (since multiplication tends to happen in the lower scale of the being) and different states of time and space. So we no longer operate, as in additions, with the same type of T.œs in the same plane.

The merging product thus requires its own rule, which, interestingly enough, shows how the product is indeed a merging operation: the derivative of a product of functions first merges each function with the rate of change of the other and then, once both are merged, superposes them by addition:

The Product Rule, used to find the derivative of a product of two functions, is thus more complex than that of the sum, even though it also keeps, as in polynomials, the distributive property – which shows once again that the product is a ‘democratic merging’ that can go both ways.

So h'(x) = [ƒ(x) x g(x)]’ = ƒ(x) • g'(x) + ƒ'(x) • g(x).
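The product rule above can be checked numerically: a symmetric finite difference measures h'(x) directly, and it matches ƒ(x)•g'(x) + ƒ'(x)•g(x). This is a hedged sketch; the choice of ƒ = x² and g = x³ is an arbitrary illustration, not from the text.

```python
# Numeric check of the product rule: [f*g]' = f*g' + f'*g.

def num_deriv(fn, x, h=1e-6):
    """Symmetric finite-difference derivative, the 'finitesimal' step."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

f = lambda x: x**2            # illustrative, f'(x) = 2x
g = lambda x: x**3            # illustrative, g'(x) = 3x^2

x = 1.7
lhs = num_deriv(lambda t: f(t) * g(t), x)   # h'(x) measured directly
rhs = f(x) * 3 * x**2 + 2 * x * g(x)        # f*g' + f'*g
print(lhs, rhs)                              # both approximate 5*x^4
```

Note how each whole is indeed ‘merged’ with the quanta of change of the other before the two merged terms superpose by addition.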


In that sense it keeps with the ‘rule’ of merging at the lower ‘plane level’ of its infinitesimal parts – in this case taking, instead of the spatial elements of X and Y, their ‘temporal’ quanta of change, ƒ'(x) and g'(x), MERGING them with the other wholes before a ‘superposition’=addition can be effected.

In the product rule, thus, derivatives act in inverse fashion to power laws, searching for the infinitesimal.

While power wholes (integrals) search for the wholeness; and as we know, the two directions of space-time differ in curvature, quantity of information and entropic motion.


So an external operation that reduces a whole which is NOT integrated as such, but is a lineal product of two wholes, ƒ(x) and g(x) – a COUPLE – mixes the infinitesimals of one with the other whole before herding them, in a process of ‘genetic mixing’: the parts of the first are shared with the second whole and the parts of the second with the first whole.

This law of Existential ¬Ælgebra, simplified ad maximal as usual in mathematical mirrors, is surprisingly enough also the origin of genetic ‘reproduction’, which occurs at two levels, mixing the ‘parts’ – the genes of the whole – in both directions, to then raise the mixing to the ∆º level of the G and F gender couple.

Then what will come out of that genetic multiplication is its division into two equal parts, showing how the interaction of inverse operands does not cancel reality but merely completes a dimotion moving ahead the eternal time space universe.

So if a power followed by a logarithm grows a finitesimal seed into a whole herd, the multiplication followed by a division of the reproduced new layer of mixed ‘axons, genes’ or parts, brings the replication of identical forms.

While the simplest definition of a division is, as usual in huminds, an entropic destructive feeding action, the complex view from the perspective of information is a genetic mitosis. And both are reflected in the derivative of a division, which for two equal functions results in the 0 constant, while vice versa its integral can give us any constant value – so it does not give us any information.

While in most cases it is NOT a positive communicative act but a perpendicular, negative reducing game, where the DOMINANT element is the ‘predator’, the larger denominator, that cuts the function, multiplying its infinitesimal ƒ'(x) parts, from which it will deduct the lesser parts absorbed by the ƒ(x) function, and then cut it at the ‘lower’ level of its potential elements, g(x)²:

[ƒ(x)/g(x)]’ = [g(x) • ƒ'(x) – ƒ(x) • g'(x)] / g(x)²

So the numerator, the victim, shared by the denominator, the predator so to speak, is first absorbed in its ƒ'(x) parts, g(x) • ƒ'(x), subtracting the g'(x) parts that the prey has absorbed in the ‘fight’, ƒ(x) • g'(x); and the result is then shared among the parts, g(x)², of the whole, as entropic feeding.
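The quotient rule, [ƒ/g]' = (g•ƒ' − ƒ•g') / g², can also be verified numerically against a direct finite-difference measurement. A hedged sketch; the functions are chosen for illustration only.

```python
# Numeric check of the quotient rule: [f/g]' = (g*f' - f*g') / g^2.

def num_deriv(fn, x, h=1e-6):
    """Symmetric finite-difference derivative."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

f = lambda x: x**2 + 1        # the 'prey' numerator,      f'(x) = 2x
g = lambda x: x**3 + 2        # the 'predator' denominator, g'(x) = 3x^2

x = 1.3
measured = num_deriv(lambda t: f(t) / g(t), x)
formula  = (g(x) * 2 * x - f(x) * 3 * x**2) / g(x)**2
print(measured, formula)      # the two values agree
```

Swapping ƒ and g changes the sign of the numerator, confirming the text’s point that this is not a commutative process.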

So we can consider the derivative of a divisive function as an ‘idealized’ expression of the process of killing and feeding of a system, whereas the predator absorbs the infinitesimal parts of the other being, and feeds its cellular, i-1 elements with it. Which obviously is NOT a commutative process.

We love to bring vital interpretations to abstract math, but as we apply such rules to particular cases, interpretation varies, even if we can reduce all cases to sub-equations of the fractal generator, S≈T.

Unlike an ‘abstract’ dimensional explanation of the rules of power laws, existential algebra brings a vital analysis of ∫∂ operands in terms of biological processes, showing why calculus of change is the king of mathematical mirrors on real st-ep motions and actions – the reason why its use is so widespread.

So the fundamental law of operands to vitalize them is this:

By pentalogic all differential operands can become an action in one of the 5d dimensional vowels (a,e,i,o,u) that define the five dimensions of existence, as vital quanta-actions of the being.

This is the logic concept that truly vitalizes the operands of ¬Algebra.

So those properties tell us new things about the meaning of ∫∂.

Finally the chain rule, which is truly the one that encloses all others, used in the case of a function of a function, or composite function, writes:

h'(x) = [ƒ(g(x))]’ = ƒ'[g(x)] • g'(x)

And this is truly an organic rule, as we are not taking derivatives of ‘parts’ loosely connected by ± and x÷ herds and lineal dimensional growth; rather the ‘function’ is a function of a function – a functional – as all ∆+1 is made of ∆º elements, which are themselves functions of xo fractal points.

So this is the most useful of all those rules to better mirror reality. And we see how the derivative, the process of change, digs in at two levels: at the ∆º=g(xo) level, which becomes g'(xo), and at the whole level, which becomes ƒ'[g(xo)]; which tells us we can indeed go deeper with ∫∂ between organic Planes – what we shall learn in more depth when considering partial derivatives, second derivatives and multiple integrals.

We are getting, so to speak, into the infinitesimal of the parts of a whole from its ∆+2 perspective, and this rule encloses all others, because it breaks the whole into the multiplication of its parts – truly descending a scale – separating the whole and the derivated parts into loose parts and finitesimals, now multiplied.
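The chain rule’s two-level descent, ƒ'[g(x)] • g'(x), can also be checked numerically: the change of the whole measured directly equals the product of the change at the ∆º inner level and the change at the whole level. A hedged sketch with arbitrary illustrative functions.

```python
# Numeric check of the chain rule: [f(g(x))]' = f'(g(x)) * g'(x).

def num_deriv(fn, x, h=1e-6):
    """Symmetric finite-difference derivative."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

g = lambda x: x**2 + 1        # the inner part,  g'(x) = 2x
f = lambda u: u**3            # the outer whole, f'(u) = 3u^2

x = 0.9
measured = num_deriv(lambda t: f(g(t)), x)   # change of the whole, directly
formula  = 3 * g(x)**2 * 2 * x               # f'(g(x)) * g'(x), two levels
print(measured, formula)                     # the two agree
```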

And what will the parts do when they see their previous finitesimals now camping by themselves but ‘at sight’? Get them to ‘produce’ an operative ‘action’ ON them (a,e,i,o,u actions are ALL subject to the previous operands).

And what will come of that multiplication? Normally it will capture them all again, and then normally it will not re=produce on them (one of the operand actions possible under pentalogic) but divide and feed on them – the last operation to treat:

And its inverse, which is NOT a positive communicative act but often a perpendicular, negative reducing game, also consequently differs.

In that sense the most important add-on that ∆st will bring to the use of differentials in existential ¬ælgebra is its temporal use as the ‘minimal action in time’ of a being, a far more expanded notion than the action of physics (which however will be related to the lineal actions of motion in 1D).

Finally, in the next stage of ¬Algebra, @nalytic geometry allowed a clearer representation of those polynomials in more detail, as usual through its 3 AGES of evolution, through its PLANES of complexity and through its pentalogic ‘Rashomon effect’; that is, how analysis operates independently to extract information from the 5 Dimotions of a being.

S=T: ANALYSIS ON SPACE

A 2nd consideration on the pentalogic should be the analysis of SPACE and the trans-form-ations between space=form and motion=time states. SO FIRST we shall remember what space is made of – namely ¬E points:

Dimensions and analysis are possible because points have volume.

A Universe of fractal spaces point-particles have an inner volume of information as Non-Euclidean points which gauge information in the stillness of a mind syntax, language mirror of the Universe. As points have volumes, lines are waves and planes topological networks, which ensemble in ternary a(nti)symmetries to form the topological super organisms of reality across 3 time ages, 3 topological forms and 3 Planes. It is this physical T.œ which we shall study in mathematical physics, explaining the meaning in 5D of the main mathematical laws of physics, which are enhanced by the understanding of an enhanced geometry and logic of time, born of the fractal cyclical structure of both, a priori elements of reality that the language of mathematics and its operands so accurately mirror.

The fundamental truth derived from this simple analysis of derivatives is profound. First it connects them immediately with the pure geometric nature of dimensions, which in non-Euclidean geometry (graph) are relevant in as much as they represent motions in time but also dimensions in space.

In that regard, it is important to understand that in the fractal Universe a dimension always has ‘inner breadth’, as the points grow when we see them closer. So it is very simple to consider a single-dimensional being simply as one whose preferential X-dimension is much larger than the others, though the others still exist, as the particle-point in detail is big:

1D being: X>>Y ≈ Z, for example a string, a lineal momentum…

And then a two dimensional being one whose two D are larger than Z:

2D Being: X ≈ Y >> Z; for example a graphene sheet; a plane wave.

Whereas a 3-Dimotion being has volume, with motion in the 3; for example a spherical being, an entropic explosion.

A derivative then merely ‘annihilates’ one dimension or one motion in space or time – we have here to split dimotions, as humans do, even if it is not the proper unit of the Universe, which is always bidimensional. I.e. even in a motion there is a particle that moves, so you have a point-dimension for the particle and one for the motion in time…

So indeed analysis IS the main mathematical instrument to study the 5D Universe and its ternary mirror symmetries between Planes, topologies and modes of time-change. And we can consider a general formula for analysis, as a specific version of the fractal generator:

∂(Bodywave of vital energy) = Membrane; ∂Membrane = Singularity path and its inverse, better known as line integrals, surface and volume integrals.

Because analysis is mainly used in mathematical physics, in praxis, the previous relationship is connected to the 3 elements of a physical system:

Field (entropic, locomotion source) < wave (reproductive body) > Particle.

So we make double derivatives to obtain the field (Laplacians), and single derivatives to relate particles and waves – ‘one-dimensional species.’ (Fourier series). And those are the all pervading analytical functions of the 3 parts of the being:

Spherical harmonics and electron orbitals are the same, because our light space-time in particle state are photons that form the electronic nebulae. So both are homologic.

The result is spherical harmonics (graph), a set of functions used to represent change on bidimensional membranes – the surface of a sphere – the higher-dimensional homology of Fourier series (periodic, single-variable functions on the circle) – namely electronic motions, as electrons act as the membrain of the atom. In terms of abstract quantum physics we say that spherical harmonics are the eigenfunctions of the angular part of the Laplacian, representing solutions to partial differential equations in which the Laplacian appears. Since the Laplacian appears frequently in physical equations (e.g. the heat equation, Schrödinger equation, wave equation, Poisson equation and Laplace equation), they are ubiquitous in gravity, electromagnetism/radiation and quantum mechanics.

The orbitals of the hydrogen atom in quantum mechanics are in fact totally indistinguishable from spherical harmonics, showing indeed that we are all topologic beings, and mathematical functions fit the simplest forms of spacetime, as the electron is – a dense function of ‘light spacetime’ particle-points.
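The circle’s lower-dimensional analogue of those spherical harmonics, the Fourier series mentioned above, can be sketched in a few lines: partial sums of the odd sine harmonics of the square wave converge to the ±1 membrane of the cycle. A hedged illustration; the square wave is a standard textbook example, chosen here only to show periodic single-variable functions on the circle.

```python
# Fourier partial sums of the square wave:
# sq(t) = (4/pi) * sum over odd k of sin(k*t)/k, which converges to +-1.
import math

def square_wave_partial(t, n_terms):
    """Sum the first n_terms odd harmonics of the square wave."""
    total = 0.0
    for i in range(n_terms):
        k = 2 * i + 1                      # odd harmonics only
        total += math.sin(k * t) / k
    return 4.0 / math.pi * total

val = square_wave_partial(math.pi / 2, 200)
print(val)   # approaches 1.0 as more harmonics are superposed
```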

The intimate connection between the 3 elements of the being is perfectly explained by the dual ∫∂ functions.

In that regard, variations over the same theme respond to the ternary structure of all T.œs:

In the graph, when deriving and integrating, most operations refer to a ‘limited’ system, in which first we extract the finitesimal part-element and then we integrate it to obtain a whole; so most likely the system described will depart from a time-changing variable quanta and integrate it to obtain a ‘static whole-spatial view’.

But variations on the same theme happen by the natural symmetry of space and time states.

So we can also start with a quanta of space integrated over time to get a spatial area or volume.

What we shall always need to find ‘single solutions’ are the parameters that describe, in time or space, the 3 elements of the T.œ: so we shall start with initial or final conditions (definite integrals), and define, mostly in space as a whole, the enclosure or membrane that limits the domain of the function (which might include as a different limit the singularity).

All in all, the analytical approach will try to achieve a quantitative description of the unit/variable of ‘change’ – the ‘finitesimal quanta of space’ (interval, area, volume) or the ‘steps of time’ (frequency) – and then integrate it over the superorganism of space or interval of time we wish to study, often because it forms a whole or a 0’-sum worldcycle.

Galilean Paradox. Lineal vs. cyclical view.

In that regard, the S=T symmetry will once more become essential to the technical apparatus of analysis as it has done in all other sub disciplines.

Of the 3 key ‘dualities’, that between lineal perception in the short range and cyclical perception in the large is the key to obtaining solutions, as the mind of measure is lineal, made of small steps that approximate larger cyclical wholes. It is in essence the method of differential equations, where the differential dy = ƒ'(x) ∆x + α∆x approaches the lineal derivative, ƒ'(x) ∆x, for short increases, and so we can get away with neglecting the smaller element that curves the solution over longer distances.

Finally, the third Galilean paradox, between continuity and discontinuity, is also at the heart of analysis (and of most forms of dual knowledge). Analysis has accepted as a dogma the continuity of the real number, and so it considers continuity a necessary condition for differentiability; but we disagree: in a discontinuous Universe, continuity has a looser definition (as the axiomatic method is not the proof of mathematical statements; experience also matters). So continuity is defined by a simpler rule: that the term α∆x of the discontinuity between the lineal and cyclical views of an infinitesimal derivative does indeed diminish faster the closer we are to the point ‘a’ at which the differential equation is defined. In brief, continuity means no big jumps and no big changes in the direction of a function and the T.œ it reflects.
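The claim that the ‘curvature’ remainder α∆x diminishes faster than the lineal term ƒ'(x)∆x can be seen numerically: as the step ∆x shrinks, the ratio of the remainder to ∆x itself goes to zero. A minimal sketch; the curve ƒ(x) = x² is an arbitrary illustration.

```python
# The lineal-vs-cyclical duality in dy = f'(x)*dx + a*dx: the remainder
# (dy minus the lineal step) shrinks faster than dx itself.
f = lambda x: x**2              # illustrative curve, f'(x) = 2x
x = 1.0
ratios = []
for dx in (0.1, 0.01, 0.001):
    dy        = f(x + dx) - f(x)    # the true, 'cyclical' change
    lineal    = 2 * x * dx          # the lineal derivative step
    remainder = dy - lineal         # the alpha*dx correction
    ratios.append(remainder / dx)   # this ratio tends to 0 with dx
print(ratios)
```

For this curve the remainder is exactly dx², so the ratio equals dx: halving the step halves the relative error, which is the loose, experimental definition of continuity given above.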

PENTALOGIC ON DERIVATIVES

Pentalogic of finitesimal change.

Leibniz defined the finitesimal as 1/x, where X is the whole. In terms of topological curvature and motion, 1/x also defines in Leibniz’s work the curvature in space (as the osculating circle, 1/r, of a cycle), which by virtue of S=T IS the minimal unit of cyclical time of a whole. Yet in lineal terms curvature, which is dimensionless, can also be seen as the ‘angle’ of change in the direction of a motion, each quanta of time or step of space. So an angle changes in lineal, discrete quanta (1st Dimotion of change); it changes in curvature in a continuous, longer measure; it becomes an acceleration, in S=T motion. Then a finitesimal of curvature becomes the fundamental concept that carries derivatives of motion into geometry in the gravitational theory of Einstein – not yet understood conceptually in Relativity theory, as the S=T relativity principle of 5D philosophy is ignored.

Further on, a finitesimal represents an ∆-1 unit of change in scale; and finally, the ‘ultimate finitesimal’, is the @-mind singularity of the system.

So finitesimals can represent all forms of change of ∆@st: as minimal scalar parts, minimal steps of motion, discrete angular changes, curvature of an accelerated motion seen as form, and the mind singularity… And their limit is absolute change or entropic change, which cannot be calculated because the function breaks and there is no derivative – or rather the derivative is the whole being, which changes so much that it is no more…

The actions it describes.

The minimal units for any T.Œ are its a,e,i,o,u actions of existence: its accelerations, energy feedings, information processing, offspring reproduction and universal evolution. So the immediate question about mathematical mirrors and their operations is what actions they reflect. We have treated the theme extensively in the ¬Algebraic post, concluding that, mathematics being a mostly spatial, social more than organic language, its operations are perfect to mirror simple systems of huge social numbers=herds, and as such to describe the simpler accelerations=motions, which are reproductions between two continuous Planes of the fifth dimension, and informative processes, where the quanta perceived are truly finitesimal ∆-i elements pegged together into the mirror images of the singularity. So we talk of motions, simple reproductions and vortices of information, and time>space processes of deceleration of motion into form, as the key actions reflected by mathematical operations.

It follows that when we study the more complex systems and actions of reality – reproduction and the social evolution of networks into organisms – mathematics will provide limited information and miss properties for which illogical biological and verbal languages are better.

And it follows that physical and chemical systems are the best described with mathematical equations, either in ¬Algebraic or analytic terms, which fuse together when we try to describe the most numerous, simpler systems of particles and atoms (simpler because, by casting upon them only mathematical mirrors, we are limited to obtaining mathematical properties).

Let us now consider the ∫∂ operations for the different dimotions of reality on the main functions with fundamental roles in ∆st and its derivatives by dividing them in 3 great ∆st ‘groups’:

@: ∫∂ of identity elements – forms that do not change expressed with the concept of an identity number, as 0 is the identity of sum and 1 of product. But they also have a clear meaning as the interval 0’-1 of the generation ‘seed’ dimension from ∆-1 to ∆º.

And indeed, the surprising result that ∫0 dx = C does suggest that the 0’-point is a fractal point that ‘has volume’ – or else how, integrating the nothingness of existence, shall we get a ‘constant’, which is a social number? And if we do start from a 0-1 unit, its ‘integral’ sum will give us a reproductive group, or ‘social number’.

And if we integrate the full ‘1 being’, we shall get a new dimension, the variable plus the constant, which also suggests a little-understood process related to the operations of derivatives and integrals: the switching caused by operations on motions of sets (our definition of analysis), which changes a spatial state into a time state and vice versa. So the spatial 1-form-whole becomes a time-variable X, while the variable X becomes a spatial derivative constant.

Since a constant number does NOT change, a time variable gives us the spatial identity number.

Finally, the deepest thought on those seemingly well-known operations regards the subtle difference between them: the derivative localises a single ‘finitesimal solution’, or minimal ∆-1 past part of the system…

But the inverse, ‘integral’ or ‘future 5th Dimensional arrow’ of social wholes opens up the possibility of multiple constant solutions to add to the variables, as the future is open to subtle variations (∫) but the past is fixed by the infinitesimal identity number (∂).

Of course if we instead consider the integral not in time but as a fixed spatial path, this concept of future vanishes and we get a determined single solution to the integral where the constant is just the starting point.

Another way of seeing it, though, is to consider the identity element @, the constant mind that does NOT change.

Let us now study the seemingly simple ‘equations of change’ for the basic functions. Yes, of course, the scholar will find no interest in them. What can we really find in something so ‘obvious’ as ∂x² = 2x?

But we shall remind again the reader of 2 of my favorite masters’ quotes: ‘genius targets what nobody sees’ and ‘simplicity is genius’ (Leonardo). Since the beauty of 5D spacetime consists in seeing relationships that nobody cares to wonder about, which are found in the simplest realities that hold fundamental laws of time=change.

∫∂ of POLYNOMIAL GROWTH: POLYNOMIAL ACTIONS vs. DERIVATIVES

A polynomial can be understood as a regular T.œ, with greater symmetry between parts and wholes, expressed as a quantitative sum of its parts, xª. That is, in space it represents a ‘line’, ‘square’ or ‘cubic’ form.

And so as the fundamental form of change happens when S=T, in the form of reproduction, the first insight of 5D calculus on the outcome of polynomial change is this:

When we change a square, X², change does NOT happen, as it would seem natural, only at an X rate – that is, along the frontal line of the system – but at a 2X rate, in 2 of the 4 sides of the square. So change is NOT a motion in a single direction that would reproduce the square at an X rate; it happens in 2 sides of the square.

And as a consequence it preserves the form of the square.

We thus realize by this simple analysis of ∂X²=2X that the main form of change, which ultimately appears in most realities, is ‘reproductive change’, NOT mere locomotion (ultimately itself a form of reproduction)…

Again, when we consider the cube, X³, the rate of change is 3X²; so change now happens in 3 sides of the cube, and it is also a reproductive growth that PRESERVES the form of the cube. While in a line, ∂nX=n, change happens in one of the sides of the line, preserving the form of the line, reproducing it – something that can be perceived as the motion of the forehead that advances the line.
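The line/square/cube progression above can be checked numerically: the measured rate of change of xⁿ matches n•xⁿ⁻¹, i.e. growth along n ‘sides’ of the form. A minimal sketch; the point x = 2 is an arbitrary illustration.

```python
# 'Reproductive change' in polynomials: the change rate of x^n measured
# numerically matches n * x^(n-1), growth along n 'sides' of the form.

def num_deriv(fn, x, h=1e-6):
    """Symmetric finite-difference derivative."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

x = 2.0
for n in (1, 2, 3):                   # line, square, cube
    measured = num_deriv(lambda t: t**n, x)
    sides    = n * x**(n - 1)         # n 'sides' reproduce the form
    print(n, measured, sides)         # the pairs agree for each n
```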

Inversely, we can consider that to generate through an integral, from the unit of change, the finitesimal, a polynomial volume from 1 to 4 Dimensions adds from the 3 sides of the cube, the two sides of the square and the 1 side of the line. (As we know from algebra, a quintic cannot be solved in terms of radicals, because the Universe is made of 4 Dimotions plus dissolving death=entropy; so it is not worth bothering with more complex ones. While the fourth dimotion of social evolution, or motion of a cube, makes the system grow in scales of the fifth dimension, so it is a different case from the 3 basic polynomials that happen in a single scale.)

Now this might also seem simple but we realize that if we are creating a whole cube from 3 faces, they are becoming intertwined, webbed together as they penetrate each other to form the cube. So they act in fact as ‘networks’ of points with ‘dark spaces’ through which the other two sides penetrate. And the result is not merely a cube but a network of 3 different ‘orthogonal’, networks, which is the idealized concept of a physiological trinity. The same can be said of the process of creation of a square from two ‘orthogonal’ dimotions of time-space. But, and this is also interesting to meditate in depth, the line is NOT webbing from the 2 extremes but merely reproducing and hence moving as a wave, which is NOT connected with the ‘steps’ left behind. Thus the line constantly moves because it does not ‘entangle’ with the memorial tail of its previous stœps of reproduction. While the square and the cube become entangled ending its reproduction when the area is filled with memorial persistence.

We conclude another startling consequence of all this:

– Only lineal inertia is eternal locomotion because the line does not ‘last’ and hence can be displaced ad eternal, but a holographic or trinity dimotion becomes a reproductive action that stops in its final completion of the higher whole form, once it is integrated.

Then it might, as a quartic, develop a new reproductive motion, whereas the cube is a point of a larger scale.

So the derivative gives us the ‘units of reproduction’ of the polynomial in space, its minimal growth parts; and the integral reproduces a whole by webbing a number of parts that entangle to form the ∫X whole.

Trilogic on polynomial derivatives for space and time.

Once all this is resolved we can consider how the trilogic of calculus widens the possibilities of a simple polynomial derivative, considering the simplest case of its application to space and time.

Exponential derivatives and integral

The first result, already considered, is the polynomial’s ‘reduced’ dimension obtained by searching for its infinitesimal – which however is, for simple polynomials, quite larger compared to a direct xⁿ⁻¹ reduction.

Further on, the logarithm IS clearly the 5D social scaling operation and its derivative is indeed the absolute finitesimal, 1/n.

And inversely, the inverse of the maximal growth, eˣ, is the absolute decay of e¯ˣ.

It is worth talking of those 3 correlated results from the philosophical point of view: the maximal expansion of an event is an absolute future-to-past, ∆+1 << ∆-1 entropic death expressed by the exponential:

The minimal process of growth (Log) is an infinitesimal; the maximal process of decay (e¯ª) is equivalent to the whole, in a single quanta of time. We state in the general law that death happens in a single quanta of time, in which the entire network that pegged together the being disappears.

1D (singularity) + 2D (Holographic principle) = 3D (vital energy).

In practice this means the ‘synchronicity in time of the clocks of the 3 parts of the being’ and the superposition of the solutions that belong to each of the 3 elements of any T.œ

1. EXPONENTIAL DECAYS.

Finally, between the logarithmic and the exponential functions the rate of change (derivative) spans from the absolute maximal, eª, which is its own derivative, to the absolute minimal, 1/a, the log derivative, which is the definition of an infinitesimal part (Leibniz). The function thus peaks, converting an ∆-1 first unit into an ∆º whole at the summit of an existential worldcycle, which then starts an inverse function of decay with a -1/x diminution and a final fast collapse in the 3rd age<<death moment at eˆ-a speed.

Finitesimal actions.

The logarithm’s derivative thus adds as its ratio of change only an infinitesimal, so it tends always to a balanced static form (y=c).

The quantity a system absorbs to create an action is generally defined as a ‘finitesimal’, not an infinitesimal. The infinite does not exist in a single continuum, but through multiple discontinuities, as all systems are limited in space and time, both within a single membrane and within the Planes of the 5th dimension (as information and energy do not flow between those Planes without loss of entropy).

A finitesimal is the quantity of energy, motion, information etc. used by a T.œ for an action of space-time IN any of the 5 Dimensions of the being, ‘put in motion’ to that aim.

4D: Thus entropy has a negative exponential, which shows the rhythm of decay of the system. And in this case there is no need for a ‘logarithmic’ limit, since for the predator the dead body is ‘unlimited ¡-1 energy’; though once the ‘relative infinite’ number of its ¡-1 parts are absorbed, the ‘e-function’ will have a cut-off.

How the function combines with other functions shows then how the superposition and merging-product processes combine as Dimotions with the reproductive growth and decay processes, whose results are intuitive: the product=merging of two powers of reproductive growth, when their ‘base=toe’ is identical, ‘superposes=adds’ that growth.

So the combination of ± exponential and logarithmic curves is also the best way to graph, as a bell curve, the worldcycle of existence in lineal terms.

4th dimension: Entropy: S∂ polynomial death dimension of decay.

Polynomials do not evolve reality towards an impossible infinite growth. Their inverse is the decay process of exponential extinction, e-x.

Wholes are physiological networks, which we analyse mathematically in their parts, mostly performing a motion of space-time, an action that exchanges bits and bites of time and space. So the logarithm is an operation that reflects the processes of minimal transmission and gathering of information and energy, of bidimensional holographic quanta… the reason why it is so pervasive in the concepts of entropy and information.

5th dimension: ∫T…

This is understood better by observing that the inverse function does in fact model growth in the different models of biology and physics, limited by a carrying capacity, a straight flat line:

The logarithmic function has as its derivative an infinitesimal, 1/x, which makes it interesting as it models better the curve of growth from 0 to 1 in the emergent, fast, explosive ∆-1 seed state, while the inverse eˆ-x models the decay-death process.
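The growth-to-carrying-capacity curve described here is the standard logistic model, dP/dt = r·P·(1 − P/K). A small Euler-integration sketch shows the fast near-exponential emergence from the ∆-1 ‘seed’ state and the flattening at the ‘straight flat line’ K; the values of K, r and the time step are illustrative assumptions, not data:

```python
K, r, dt = 100.0, 1.5, 0.01   # carrying capacity, growth rate, time step
P = 0.1                        # the 'seed' state: almost nothing
history = [P]
for _ in range(2000):          # Euler steps of dP/dt = r*P*(1 - P/K)
    P += r * P * (1 - P / K) * dt
    history.append(P)

print(history[0], history[-1])  # starts at 0.1, saturates just below K
```

The population never overshoots K: as P approaches the carrying capacity, the growth term (1 − P/K) shrinks toward zero.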

Integrals and derivatives, which have a much slower growth than polynomials, on the other hand model much better the organic growth and ‘wholeness’ of a system integrated in space, as they integrate its ‘indivisible’ finitesimal quanta.

Thus integrals do move up a social growth in new ∆+1 5D planes. And its graphs are a curved geometry, which takes each lineal step (differential) upwards, but as it creates a new whole, part of its energy growth sinks and curves to give birth to the mind-singularity @, the wholeness that warps the whole, and converts that energy into still, shrunk mind-mappings of information, often within the 3D particle-head.

We will retake the analysis of the more complex st-eps on 3, 4 and 5D, since most of the complex processes related to the 3rd dimension, as a mixture of S and T inner Planes, will require more complex double or triple derivatives and integrals – only the 4D decay, the entropic explosion, can be satisfied as the decay of the single ∆-1 finitesimal with a single variable.

1. Entropy Equations

One example: the law of decay of radium says that the rate of decay is proportional to the amount of radium present. Suppose we know that at a certain time t = t0 we had R0 grams of radium. We want to know the amount of radium present at any subsequent time t.

Let R(t) be the amount of undecayed radium at time t. The rate of decay is given by the value of – (dR/dt). Since this is proportional to R, we have:

-dR/dt = kR, where k is a constant. In order to solve our problem, it is necessary to determine a function from the differential equation. For this purpose we note that the function inverse to R(t) satisfies the equation: -dt/dR = 1/kR, since dt/dR = 1/(dR/dt). From the integral calculus it is known that this equation is satisfied by any function of the form: t = -(1/k) ln R + C,

where C is an arbitrary constant. From this relation we determine R as a function of t. We have:

R = C1 eˆ-kt, with C1 = eˆkC.

From the whole set of solutions we select the one which for t = t0 has the value R0. This solution is obtained by setting C1 = R0 eˆkt0, so that R = R0 eˆ-k(t-t0).

From the mathematical point of view, the equation -dR/dt = kR is the statement of a very simple law for the change with time of the function R; it says that the rate of decrease -(dR/dt) of the function is proportional to the value of the function R itself. Such a law for the rate of change of a function is satisfied not only by the phenomena of radioactive decay but also by many other physical phenomena.

We find exactly the same law for the rate of change of a function, for example, in the study of the cooling of a body, where the rate of decrease in the amount of heat in the body is proportional to the difference between the temperature of the body and the temperature of the surrounding medium, and the same law occurs in many other physical processes. Thus the range of application of those equations is vastly wider than the particular problem of the radioactive decay from which we obtained the equation.
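The radium example can be worked through directly: integrating -dR/dt = kR with a naive Euler scheme reproduces the analytic solution R = R0 eˆ-k(t-t0). The constants k and R0 below are illustrative choices, not physical values for radium:

```python
import math

k, R0, t0 = 0.3, 10.0, 0.0   # decay constant, initial amount, start time

def R_exact(t):
    # Analytic solution of -dR/dt = kR with R(t0) = R0.
    return R0 * math.exp(-k * (t - t0))

# Euler integration of the same law: at each step R loses k*R*dt.
dt, R, t = 0.001, R0, t0
while t < 5.0:
    R -= k * R * dt
    t += dt

print(R, R_exact(5.0))  # numerical result tracks the analytic one
```

The same loop, with R read as heat difference rather than radium, models Newton’s cooling mentioned next: only the interpretation of the variable changes, not the law.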

1D: PERCEPTIVE ACTIONS: Derivatives as angles of perception.

The 3rd type of functions is concerned not with ∆±1 past-to-future-to-past d=evolutions but with present sinusoidal wave repetitions of the same time cycle; hence the change is cyclical, repetitive, and so those functions are very useful for the 3rd reproductive dimotion in space, but also for a time dimotion or cycle:

Both functions thus are clearly inverse not only in Γst but also in the ∆±1 Planes – the negative symbol being one of convention regarding the chosen ± direction of the cyclical, sinusoidal motion.

Here though the interest resides in comparing both types of present vs. ∆ past-future functions: the present derivative is self-repetitive, as we return to the sine after 4 quadrant derivatives; and we return to the present, considering also the generational cycle, after 4 ages of life. So we can model a sinusoidal function as a worldcycle of existence in its 4 quadrants.
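The 4-quadrant cycle of derivatives can be verified numerically: sin → cos → −sin → −cos → sin, so the 4th derivative returns the original function. A minimal sketch with a central-difference approximation at an arbitrary sample point:

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference estimate of df/dx."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The four 'quadrants' of the cycle: each function's derivative is the next.
cycle = [math.sin, math.cos,
         lambda u: -math.sin(u), lambda u: -math.cos(u)]

x = 0.7
errors = []
for i, f in enumerate(cycle):
    nxt = cycle[(i + 1) % 4]   # after 4 steps the cycle closes on sin itself
    errors.append(abs(derivative(f, x) - nxt(x)))

print(max(errors))  # tiny: differentiation closes a 4-step cycle
```

The modular index (i + 1) % 4 is exactly the statement that the derivative sequence is periodic with period 4.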

SS: Sinusoidal functions of angles, sines and cosines, relate to the SS-trigonometric perceptive function of organisms, with a growing distortion and a ‘blind’ spot for its inverse 5th dimotion of existence (whose performance is the denial of its self).

St: Thus sinusoidal functions are also good to measure St-motions dominant in information. I.e. wave forms.

One widely realized role of a derivative, as the tangential division of the height in the dimension of information by the distance-lineal motion to the observer, is to measure the angle of a being which recedes in spacetime until reaching non-perception as a relative finitesimal outside the territorial mind-world of the observer; this connects derivatives directly with the 1D first dimotion of perceptive existence. The being might still be of a certain size, but as a fractal point it has receded in the mental space of the world of the perceiver.

While, as always, S=T – that is, there is a symmetry between discrete numbers and continuous motions – Leibniz, with his geometric interpretation and far more profound understanding of finitesimals, which he rightly defined as 1/n, represents the first step into the future of the discipline, its renovator and deepest interpreter; while Newton, who can be considered merely an automaton mathematician, a specialized brain, as most modern scientists are – indeed the father of the wrong view of science – understood nothing of it.

Indeed, Leibniz, the closest predecessor of this blog IS the genius, Newton the talent.

The Ðimotion of cyclical perception is expressed by the negative ¡ number, with its cyclical rules of summation, which implies the sum of the angle of perception of the self-centered number in its argument.

Thus, next in the entangled representation of reality through those ‘basic operands’ comes the duality of addition and subtraction, with its attached physical meanings of superposition and fusion of ‘parts’ into ‘whole numbers’, or its entropic inverse operand of negative subtraction.

It starts to be then obvious that all operands have its inverse function to maintain the balance of the Universe.

Addition by superposition, in ever tighter spaces, of similar clone species is the simple ¬Algebraic expression of the social dimotions, both in its positive 5Ð and its negative entropic 4Ð, whose addition of decaying ¡-1 T.œs is so fast that it can be expressed as a negative exponential growth; which in this manner would complete the 3 ‘Planes’ of addition: +, x, xª…

Moreover, addition can happen in sequential time or adjacent space, forming growing probabilities or populations. So as the simplest mode of operands extends its diversification through space or time, it will mean different things. If we consider the happening of an event or full worldcycle as 1, probabilities will represent parts of the whole event. If we project it into space, it will be a population of similar events, entering the region of maximal frequency. Both will be mathematically projected as a bell curve. S=T: the same function for the addition of events and populations, in time or space.
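The claim that sums of identical events in time, or of identical clone contributions in space, project onto the same bell curve is, in standard probability, the central limit theorem. A small sketch summing identical ±1 ‘events’; the sample sizes and seed are arbitrary:

```python
import random

random.seed(1)
# Each trial sums 100 identical +-1 events; the totals pile into a
# bell curve centered on 0 with variance equal to the number of steps.
trials = [sum(random.choice((-1, 1)) for _ in range(100))
          for _ in range(20000)]

mean = sum(trials) / len(trials)
var = sum((s - mean) ** 2 for s in trials) / len(trials)
print(mean, var)  # mean near 0, variance near 100
```

Reading the 20000 trials as repeated events in time or as a population of clones in space gives the same histogram, which is the S=T statement of the paragraph above.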

The first marvel of the Universe is the simplicity of its original principles, made complex by the differentiation across the symmetries of scale, topology or time. Indeed, something so simple as the sum and inverse subtraction IS still the most important operands of the Universe, which gives us new numbers, social gatherings of identical beings, which herd together into parallel flows adopting most likely a bidimensional ST superposition on laminar states that keep adding the 3rd dimension of the being. Like the simplest first masterpieces of Bach, the architectonical Universe is a simple principle before organicism twists its form, in which beings which are equal come together.

Superposition of bidimensional holographic fields is so important that the whole of quantum physics is based on this superposition principle. The sum thus is still the master of operands. But for sums to happen, the beings must be externally identical, to be perceived as parts of a quantified mass, each of them of the same value. Addition thus is the ultimate proof of the social nature of the Universe.

Beyond those lineal superpositions there remain the infinitesimal ‘curved’ exponential changes that happen between two planes, where linearity is lost.

We notice immediately that 4 changes turn a sin x cycle into itself, as 3 ages and an entropic death close a 0’-sum worldcycle of any species back to its initial point – starting in this case with the simplest cyclical clock-motions, which, as they do NOT move in space and repeat their form in time, are in fact not operated by ∫∂ measures of change.

Next, from the bottom of that list, come the functions of perception, sine and cosine angles; and the result has some ‘metaphysical’ meaning. Indeed, the rate of change of our informative angle measure (the sine) becomes the cosine, the rate of change of our motion; in other words, we SWITCH from sin-stop states to cos-moving states, in stœps. We go from stop-sin to step-cos; but the inverse doesn’t hold. That is, if we go from motion-cos to stop-sin, this will be perceived from the perspective of the cos-motion as a ‘negative’ reduction of motion: −sine.

1D: cyclical clocks, angular momentum

In the graph, in the simplest physical systems 1D is merely the angular momentum of its cyclical clocks of time, maximised in the membrane that encloses the system. Strictly speaking it does not change but becomes the ‘present function’ of a repetitive frequency clock without a derivative of change, as the time-space steps seem not to vary. When we introduce a torque, change happens, called ‘acceleration’, the second dimension of time motion in physics, which we shall later study when analysing in 5D, with the Galilean Px., Newton’s laws. Here we shall just briefly explain why in lineal time, as humans only use t to measure change, 1D is the invariant one and its derivative is 0’.

What about ‘higher’, more complex, cyclical and scalar Dimensions? The answer is that as we change the form of the dimensions, we have to change the operands we use; and specifically when we study the Dimensions of change, which are the ones differential/integral equations quantify, those equations must adapt, as mirrors of reality, to the form of the dimensions of space-time they describe – not the other way around.

So as 1D is a steady-state rotary motion, strictly speaking it does NOT change in space-time locomotion (which is what humans, with their lineal single time, express in derivatives). Hence basically the derivative of those angular momentums is 0’. It is conserved.

Let us recall briefly those classic definitions and maths:

Angular momentum is a vector that represents the product of a body’s rotational inertia and rotational velocity about a particular axis. In the simple case of revolution of a particle in a circle about a center of rotation, the particle remaining always in the same plane and having always the same distance from the center, we discard the vector nature of angular momentum, and treat it as a scalar proportional to moment of inertia, I and angular speed, ω:

L = Iω: Angular momentum = moment of inertia × angular velocity, and its time derivative is

dL/dt = (dI/dt)ω + I(dω/dt); since I is constant, dI/dt = 0, so dL/dt = 0 + I(dω/dt), which reduces to dL/dt = Iα.

Therefore, angular momentum is constant,  dL/dt=0 when no torque is applied. And this is the essence of its conservation law, a specific case of the conservation of the 5Dimensions of space-time of the Universe:

‘In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque. Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant’.
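The conservation argument can be sketched in a few lines: with I constant, dL/dt = Iα = τ, so with zero torque L stays fixed, and an applied torque τ over a time T changes L by exactly τ·T. The numerical constants are illustrative:

```python
# L = I*omega; with no torque, dL/dt = 0 and L is conserved.
I = 2.0          # moment of inertia (constant, so dI/dt = 0)
omega = 3.0      # angular speed
L0 = I * omega   # initial angular momentum: 6.0

# Apply a constant torque tau for a total time of steps*dt = 4 seconds:
# dL/dt = I*alpha = tau, so L should grow by exactly tau * 4.
tau, dt, steps = 0.5, 0.001, 4000
for _ in range(steps):
    alpha = tau / I          # alpha = d(omega)/dt
    omega += alpha * dt
L1 = I * omega

print(L0, L1)   # 6.0 before; 6.0 + 0.5*4 = 8.0 after the torque
```

Setting tau = 0 in the loop leaves omega, and hence L, unchanged at every step, which is the conservation law in its simplest discrete form.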

But when a torque is applied in a single present plane, or, much more relevant to our inquiry, when a system is submitted to the organising or disorganising entropic force of a higher or lower plane of existence, and acceleration exists, a vortex of time-space happens and we enter into the social dimensions of evolution – the 5th Dimension of the mind.

ODES = TIME-LIKE VS. PDES = SPACE-LIKE EQUATIONS: MATHEMATICAL PHYSICS.

A differential equation is a mathematical equation that relates some function with its derivatives. Let us apply simple trilogic to its meaning.

∆: A differential equation basically gives us the duality of its finitesimal local states and its global whole organic forms, but at the simplest level, most often with a single field state or network. Thus it is a good representation of the parts and wholes of 5D; but for more complex systems, discrete, organic analysis fares better.

T: ODEs

In applications, the functions usually represent physical quantities, and the derivatives represent their rates of change for a given dimotion: the first derivative is its ‘time speed’ and the 2nd its ‘time acceleration’, which are the two derivatives that give us all the maximal and minimal standing points of the ‘function of existence’ of the change in time we study.

Once we find the rate of change, or finitesimal, of the dimotion of the space parameter – which overwhelmingly belongs to those two meaningful ones – we integrate through an interval of existence of a worldcycle, and thus the system is solved.

Philosophically then, the interest of 5D calculus is to analyze the geometry of the solution, which will indicate to us what type of SS«Ts<ST>St»TT organ or event of a function of existence we are studying.

And if the solution is possible, it will give us even deeper insights into the structure of existential algebra, whose illogic is mirrored by the laws of mathematical algebra.

When we work with a single parameter we are ‘obviously’ working on change in lineal time.

This is the concept behind any ODE, which is therefore clearly a time-like equation, as it probes mostly a single dimotion of the function of existence of a system and its rate of change.

S: PDEs.

On the other hand, a PDE works on several parameters, which tend to be the 3 coordinates humans establish for space; hence it is the analysis of a motion in space as a simultaneous whole – in the simplest forms as a herd in motion with continuity equations. Hence it is more often concerned with locomotions of the herd or system through space, and its changes of position in space.

ST: True PDEs. The following concept is essential to understand 5D Calculus. While a mere locomotion in space might appear as a PDE because of the use of ‘vectors in x, y, z’ coordinates, which artificially multiply the variables for the position in space (a single parameter, SS, in terms of 5D) of a given point-T.œ-function, there is a clear case in which true PDEs appear, of enormous importance for mathematical physics and for any analysis of reality in terms of the ‘fractal generator’ of SS»St≤ST≥Ts«TT supœrganisms:

Those are functions in which the 3 present elements of a supœrganism, Ts≤ST≥St, appear in the same equation. To distinguish them from PDEs of coordinates of position, we call them ‘True PDEs’.

And we can distinguish roughly two types of such functions:

– Spatial functions in which the 3 dimensions of change are NOT coordinates external to a point (locomotion) but internal to the point (area and volume). For example, the area of a rectangle is a function S=xy of its base x and its height y; the volume of a rectangular parallelepiped is a function v=xyz of its three dimensions. Because the 3 classic dimensions are, in vital topology, related to their functions – width≈reproduction≈ST, length≈locomotion≈sT, height≈information≈St – even though science normally uses them merely for fixed volumes, they do have, in certain cases of ‘hydrodynamics’, ‘cellular growth’, or social growth of populations, a potential use for dynamic study when properly considered in ‘mental spaces’, where the 3 dimotions of change are analyzed as qualitative dimensions (themes of an advanced 5D algebra course we will not attempt for a long time).

– Timespace functions proper, classic in physics, when considering formulae related to energy (ST) expressed by its 2 independent space-time parameters, E = hƒ = kT = mv, in its multiple parameters and variables. The well-known formula pV = RT expresses the dependence of the volume V of a definite amount of gas on the pressure p and the absolute temperature T.
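The internal-dimension case above (v = xyz) can be checked numerically: each partial derivative holds the other two dimensions fixed, giving ∂v/∂x = yz, ∂v/∂y = xz, ∂v/∂z = xy. A minimal sketch with a central-difference approximation at an arbitrarily chosen point:

```python
def volume(x, y, z):
    return x * y * z   # v = xyz, the rectangular parallelepiped

def partial(f, args, i, h=1e-6):
    """Central difference in the i-th variable, the others held fixed."""
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

x, y, z = 2.0, 3.0, 5.0
dv_dx = partial(volume, (x, y, z), 0)   # should be y*z = 15
dv_dy = partial(volume, (x, y, z), 1)   # should be x*z = 10
dv_dz = partial(volume, (x, y, z), 2)   # should be x*y = 6
print(dv_dx, dv_dy, dv_dz)
```

The same helper applies unchanged to the gas formula read as V(p, T) = RT/p, since a partial derivative is always a one-variable derivative with the remaining parameters frozen.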

And so we find that with those basic types of differential equations we can calculate most Fractal generators in their dynamic change in most stiences.

∆¡: Differential vs. ∆<¡: simple calculus.

The difference between the simpler calculus of integrals and derivatives vs. differential equations is thus one of growing complexity, from ‘words’ into ‘sentences’ of time dimotion; which means further extension in time, in spatial population and in the probing through scales, with those derivatives of motion or of information (Fourier transforms, etc.).

Because such ST-analyses of superorganisms, or T, TT analyses of time events, are the fabric of which reality is built, even if analysis concentrates on simpler systems and herds, differential equations play a prominent role in many disciplines, including 5D stience, concerned with the Dimotions of spacetime.

Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution.

Physical equations are related to the 3 elements of all the existential entities of the Universe. It must then be understood that, within the general ƒ(x)≈f(t) and y=S isomorphism between mathematical equations and ST-eps (not always the case, as symmetric steps can repeat themselves with the same parameters in SSS and TTT derivatives, as we have seen in our intro to ODEs), partial differential equations will be combinations of analyses of systems in their ‘primary’ differential finitesimals of space and time, then aggregated in more complex St SYSTEMS, giving us an enormous range of possible PDE studies, which we shall strive to order according to the concept that there is a geometric symmetry between ¬Algebra (s≈t symmetries), geometry (S-wholes, sums of t-dynamic points) and analysis (st-eps).

So it is good guidance, for all ¬Algebra equations, to make a comment on their significance in the vital ternary geometry of a T.œ, or complex event between T.œs across different planes, ∆§, studied with those equations.

Calculus and physics: Partial Differential equations as ∆@st-equations.

Physical events and processes occurring in a space-time system always consist of the changes, during the passage of its finite time, of certain physical magnitudes related to the points of its vital space.

This simple definition of space-time processes is at the heart of the whole differential calculus, which with slight changes of interpretation apply to all GST.

Any of those ST processes can ideally be described by functions of four ST independent variables, S(x, y) and T(z, ƒ), where x and y are the coordinates of a point of space, and z and ƒ those of time.

So ideally, in a world in which humans had not distorted bidimensional time cycles, the way we work with mathematical equations would be slightly changed. As we are not reinventing the human mind of 7 billion people – we are not that arrogant – we will just feel happy trying to explain a few of those processes of bidimensional space and time here.

In the study of the phenomena of nature, partial differential equations are encountered just as often as ordinary ones. As a rule this happens in cases where an event is described by a function of several variables. From the study of nature there arose that class of partial differential equations that is at the present time the most thoroughly investigated and probably the most important in the general structure of human knowledge, namely the equations of mathematical physics.

∆ST symmetries.

Each partial differential equation represents a different finitesimal of scale, time and space, in the second level

Let us first consider oscillations in any kind of medium. In such oscillations every point of the medium, occupying in equilibrium the position (x, y, z), will at time t be displaced along a vector u(x, y, z, t), depending on the initial position of the point (x, y, z) and on the time t. In this case the process in question will be described by a vector field. But it is easy to see that knowledge of this vector field, namely the field of displacements of points of the medium, is not sufficient in itself for a full description of the oscillation. It is also necessary to know, for example, the density ρ(x, y, z, t) at each point of the medium, the temperature T(x, y, z, t), and the internal stress, i.e., the forces exerted on an arbitrarily chosen volume of the body by the entire remaining part of it.

These quantities can be described by functions of four independent variables, x, y, z, and t, where x, y, and z are the coordinates of a point of the space, and t is the time.

Physical quantities may be of different kinds.

∆: Some are completely characterized by their numerical values, e.g., temperature, density, and the like, and are called scalars.

S=T: Others have direction and are therefore vector quantities: velocity, acceleration, the strength of an electric field, etc. Vector quantities may be expressed not only by the length of the vector and its direction but also by its “components” if we decompose it into the sum of three mutually perpendicular vectors, for example parallel to the coordinate axes.

In mathematical physics a scalar quantity or a scalar field is presented by one function of four independent variables, whereas a vector quantity defined on the whole space or, as it is called, a vector field is described by three functions of these variables. We can write such a quantity either in the form:

u(x, y, z, t), where the boldface type indicates that u is a vector, or in the form of three functions: ux(x, y, z, t), uy(x, y, z, t), uz(x, y, z, t),

where ux, uy, and uz denote the projections of the vector on the coordinate axes.

In addition to vector and scalar quantities, still more complicated entities occur in physics, for example the state of stress of a body at a given point. Such quantities are called tensors; after a fixed choice of coordinate axes, they may be characterized everywhere by a set of functions of the same four independent variables.

In this manner, the description of widely different kinds of physical phenomena is usually given by means of several functions of several variables. Of course, such a description cannot be absolutely exact.

For example, when we describe the density of a medium by means of one function of our independent variables, we ignore the fact that at a given point we cannot have any density whatsoever. The bodies we are investigating have a molecular structure, and the molecules are not contiguous but occur at finite distances from one another. The distances between molecules are for the most part considerably larger than the dimensions of the molecules themselves. Thus the density in question is the ratio of the mass contained in some small, but not extremely small, volume to this volume itself. The density at a point we usually think of as the limit of such ratios for decreasing volumes. A still greater simplification and idealization is introduced in the concept of the temperature of a medium. The heat in a body is due to the random motion of its molecules. The energy of the molecules differs, but if we consider a volume containing a large collection of molecules, then the average energy of their random motions will define what is called temperature.

Similarly, when we speak of the pressure of a gas or a liquid on the wall of a container, we should not think of the pressure as though a particle of the liquid or gas were actually pressing against the wall of the container. In fact, these particles, in their random motion, hit the wall of the container and bounce off it. So what we describe as pressure against the wall is actually made up of a very large number of impulses received by a section of the wall that is small from an everyday point of view but extremely large in comparison with the distances between the molecules of the liquid or gas. It would be easy to give dozens of examples of a similar nature. The majority of the quantities studied in physics have exactly the same character. Mathematical physics deals with idealized quantities, abstracting them from the concrete properties of the corresponding physical entities and considering only the average values of these quantities.

Such an idealization may appear somewhat coarse but, as we will see, it is very useful, since it enables us to make an excellent analysis of many complicated matters, in which we consider only the essential elements and omit those features which are secondary from our point of view.

I.e. the Poisson and Laplace equations are all over the place, as they represent the ideal form of the most efficient ‘sinks’ that make an ∆-1 herd of T.œs fall into a ‘door’ to a larger/smaller nested T.œ scale of the fifth dimension (charges and masses sink); the Laplace equation, ∇²ƒ = 0, represents the way in which a herd of ∆-1 fractal points can ‘fill in’ a spherical membrain, which is the fundamental mode in which membrains have motion even if we see them as fixed forms (a fixed form ultimately is a πS surface in which each point that appears static is merely a πd, self-turning diameter=point). So it is all-pervading in nature. In fact electrons, which are herds of ‘ultradense light photons’ trapped in the ‘event horizon’ of the black-hole-proton on the quantum scale, can be modeled just as the spherical harmonics of their ∆-1 ‘field’ of photonic points; and so on and so on.
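The ‘fill in’ behaviour of the Laplace equation ∇²ƒ = 0 can be sketched with Jacobi relaxation on a small grid: each interior point converges to the average of its 4 neighbours (the mean-value property of harmonic functions), while the boundary ‘membrane’ stays fixed. Grid size, boundary values and iteration count below are illustrative choices:

```python
# Jacobi relaxation for Laplace's equation on an n-by-n grid.
n = 20
f = [[0.0] * n for _ in range(n)]
for j in range(n):          # fixed boundary: top edge held at 1, rest at 0
    f[0][j] = 1.0

for _ in range(2000):
    g = [row[:] for row in f]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # each interior point relaxes to the average of its neighbours
            g[i][j] = 0.25 * (f[i-1][j] + f[i+1][j] + f[i][j-1] + f[i][j+1])
    f = g

# At convergence every interior value equals its neighbour average.
i, j = n // 2, n // 2
resid = abs(f[i][j] - 0.25 * (f[i-1][j] + f[i+1][j] + f[i][j-1] + f[i][j+1]))
print(f[i][j], resid)   # interior value between the boundary extremes; residual tiny
```

Replacing the zero right-hand side with a source term turns the same loop into a Poisson solver, which is the ‘sink’ case mentioned above.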

So a beautiful way to ‘read physics’, as I used to do a few decades ago when my mental skills were at their height and my memory intact, was just to see the classic equations of physics as ∆ST functions=forms in dimotion. This will be the future of physics: to entangle the abstraction of those PDE and ODE equations, enumerated in the following list, in terms of what they mean for the vital life-death dimotional cycles of its T.œs.

Simple examples.

Differential equations are very common in science and engineering, as well as in many other fields of quantitative study, because what can be directly observed and measured for systems undergoing changes are their rates of change. The solution of a differential equation is, in general, an equation expressing the functional dependence of one variable upon one or more others; it ordinarily contains constant terms that are not present in the original differential equation. Another way of saying this is that the solution of a differential equation produces a function that can be used to predict the behaviour of the original system, at least within certain constraints.

Differential equations are classified into several broad categories, and these are in turn further divided into many subcategories. The most important categories are ordinary differential equations and partial differential equations. When the function involved in the equation depends on only a single variable, its derivatives are ordinary derivatives and the differential equation is classed as an ordinary differential equation. On the other hand, if the function depends on several independent variables, so that its derivatives are partial derivatives, the differential equation is classed as a partial differential equation. The following are examples of ordinary differential equations:

In these, y stands for the function, and either t or x is the independent variable. The symbols k and m are used here to stand for specific constants.

Whichever the type may be, a differential equation is said to be of the nth order if it involves a derivative of the nth order but no derivative of an order higher than this.

The equation:

is an example of a partial differential equation of the second order. The theories of ordinary and partial differential equations are markedly different, and for this reason the two categories are treated separately.

Instead of a single differential equation, the object of study may be a simultaneous system of such equations. The formulation of the laws of dynamics frequently leads to such systems. In many cases, a single differential equation of the nth order is advantageously replaceable by a system of n simultaneous equations, each of which is of the first order, so that techniques from linear ¬Algebra can be applied.

An ordinary differential equation in which, for example, the function and the independent variable are denoted by y and x is in effect an implicit summary of the essential characteristics of y as a function of x.

These characteristics would presumably be more accessible to analysis if an explicit formula for y could be produced. Such a formula, or at least an equation in x and y (involving no derivatives) that is deducible from the differential equation, is called a solution of the differential equation. The process of deducing a solution from the equation by the applications of ¬Algebra and calculus is called solving or integrating the equation.

It should be noted, however, that the differential equations that can be explicitly solved form but a small minority. Thus, most functions must be studied by indirect methods. Even the existence of a solution must be proved when there is no possibility of producing it for inspection. In practice, methods from numerical analysis, involving computers, are employed to obtain useful approximate solutions.
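A minimal sketch of what such a numerical approximation looks like is Euler's method, the simplest of them; the test equation y′ = y and all numbers below are illustrative choices, not taken from the text:

```python
import math

def euler(f, y0, t0, t1, steps):
    """Approximate y(t1) for y' = f(t, y), y(t0) = y0, with Euler steps."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)   # follow the tangent line over a small step h
        t += h
    return y

# y' = y with y(0) = 1 has the exact solution y(t) = e^t,
# so the approximation should converge to e as the steps grow
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100000)
print(approx)   # close to e = 2.71828...
```

The tangent-following step is exactly the ‘sum of lineal steps’ idea: the curved solution is rebuilt from many small straight stœps.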

PDES

Abstract. Each stœp of a method of solution is grounded in a real property of the 5D ∆ST symmetries and conservation laws of the Universe. These are the 3 Galilean paradoxes: between ∆+1 curved closed worldcycles and the sum of lineal steps, which gives birth to the most used method, lineal approximation; the equivalence between Space and Time in all stœps of dimotions, which gives birth to the method of separation of variables in differential equations and, more broadly, allows us to move around relative space and time parameters in equations joined by an operand of ‘equivalence’ (≈, not =); and the 2 conservation laws of the Universe: conservation of those ‘beats’ of existence, S=T in relative present, eternal balance, justifying the equivalence operands; and conservation of the ‘volume of space-time’ of each plane of the Universe, by virtue of the 5D metric equation SxT=C, which justifies the solution of differential equations by separation of scales and harmonizes those scales, allowing constant but balanced transfers of larger bites of energy exchanged for smaller bits of information, St¡=1=Ts¡.

As in both cases, because S=T and ∆-1=∆0, we can separate parameters and make them equivalent to a common constant.

The Simplest Equations of Mathematical Physics. Solutions based in 5D laws.

The object of mathematical physics is to study the relations existing among these idealized elements, these relations being described by sets of functions of several independent variables.

The elementary connections and relations among physical quantities are expressed by the laws of mechanics and physics. Although these relations are extremely varied in character, they give rise to more complicated ones, which are derived from them by mathematical argument and are even more varied. The laws of mechanics and physics may be written in mathematical language in the form of partial differential equations, or perhaps integral equations, relating unknown functions to one another. To understand what is meant here, let us consider some examples of the equations of mathematical physics.

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.

PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.

Both ordinary and partial differential equations are broadly classified as linear and nonlinear; since linear equations are small steps of a non-lineal larger ∆ST period, we can approximate all non-lineal systems by lineal equations.

A differential equation is linear if the unknown function and its derivatives appear to the power 1 (products of the unknown function and its derivatives are not allowed) v. nonlinear of higher powers. The characteristic property of linear equations is that their solutions form an affine subspace of an appropriate function space, which results in much more developed theory of linear differential equations. Homogeneous linear differential equations are a further subclass for which the space of solutions is a linear subspace i.e. the sum of any set of solutions or multiples of solutions is also a solution. The coefficients of the unknown function and its derivatives in a linear differential equation are allowed to be (known) functions of the independent variable or variables; if these coefficients are constants then one speaks of a constant coefficient linear differential equation.

There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.

Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.

A partial differential equation (PDE) is an equation involving functions and their partial derivatives; for example, the wave equation

Some partial differential equations can be solved exactly

In general, partial differential equations are much more difficult to solve analytically than are ordinary differential equations. They are mostly solved using the fundamental illogic laws of 5D metrics, even if huminds don’t know they are using them; that is (:

An integral transform (finding the ∆-whole or exact solution); a separation of variables (using the S=T=C metric); lineal procedures (using the ∑|¡-1=O¡+1 law of small lineal steps that add to a Time cycle); or, when all else fails (which it frequently does), numerical methods such as finite differences (using the fact that continuity is really a sum of discrete stœps).
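As an illustration of the last of those methods, here is a minimal finite-difference sketch for the heat equation u_t = u_xx; the grid sizes and the test problem (u(x,0) = sin x on [0, π], whose exact solution is e⁻ᵗ sin x) are our own hypothetical choices:

```python
import math

def heat_fd(nx=50, nt=2000, t_end=1.0):
    """Explicit finite-difference scheme for u_t = u_xx on [0, pi],
    u(0) = u(pi) = 0, u(x, 0) = sin(x); exact solution e^{-t} sin(x)."""
    dx = math.pi / nx
    dt = t_end / nt
    assert dt / dx**2 <= 0.5   # stability condition of the explicit scheme
    u = [math.sin(i * dx) for i in range(nx + 1)]
    for _ in range(nt):
        # replace the continuous second derivative by a discrete stœp
        u = [0.0] + [u[i] + dt / dx**2 * (u[i+1] - 2*u[i] + u[i-1])
                     for i in range(1, nx)] + [0.0]
    return u

u = heat_fd()
print(u[25], math.exp(-1))   # u(pi/2, 1) is close to e^{-1}
```

Each update literally treats continuity as a sum of discrete steps in space and time, which is why the method converges as the steps shrink.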

Let us make some comments on those methods of 5D calculus (:

The methods of solution of Differential equations combined with 5D calculus. S=T.

As we said somewhere, all resumes in the function of present existence, S=T.

This is the method of solution of PDEs: to reduce them to two ODEs, such that S=T, hence S=c, T=c, and we solve two equations. Simple, isn’t it? Simplicity is genius, said L1. L3 agrees (:

Consider the most famous equation of quantum physics, the Schrodinger equation, describing nonrelativistic quantum phenomena:

−(ℏ²/2m) ∇²Ψ + V(s)Ψ = iℏ ∂Ψ/∂t

where m is the mass of a subatomic particle, ℏ is Planck’s constant (divided by 2π), V is the potential energy of the particle, and |Ψ(s, t)|² is the probability density of finding the particle at s at time t.

These equations have partial derivatives with respect to time. As a first step toward solving such PDEs, let us separate the time variable. We will denote the function by the generic symbol Ψ(s,t).

The separation of variables starts with separating the s and t dependence into factors:

Ψ(s, t) ≡ S(s)T(t).

This factorization permits us to separate the two operations of space differentiation and time differentiation. As an illustration, we separate the time and space dependence for the Schrodinger equation. Substituting for Ψ, we get:   −(ℏ²/2m) ∇²(ST) + V(s)(ST) = iℏ ∂(ST)/∂t

Dividing both sides by ST yields:

−(ℏ²/2m) (1/S) ∇²S + V(s) = iℏ (1/T) dT/dt

Now comes the crucial step, the central argument of the separation of variables.

The LHS of the equation is a function of the space position alone, and the RHS is a function of time alone. Since s and t are independent variables, the only way that the equation can hold is for both sides to be constant, say α:

−(ℏ²/2m) (1/S) ∇²S + V(s) = α

iℏ (1/T) dT/dt = α

We have reduced the original time-dependent Schrodinger equation, a PDE, to an ODE involving only time and a PDE involving only the space position variables. Most problems of elementary mathematical physics have the same property, i.e., S=T, which by virtue of 5D metrics is the constant state of present; hence S → α, T → α.

All this physicists do, but don’t know why it works for so many equations (:

The time-dependent PDEs of mathematical physics can be reduced to 2 ODEs in the time variable and the space variable by virtue of S=T = Constant, the equation of present.
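A small numerical check of that reduction, under assumed units of our own choosing (ℏ = 1, 2m = 1, V = 0, so the equation reads i ∂Ψ/∂t = −∂²Ψ/∂s²) and the trial separated product Ψ = sin(s)·e^(−it); both separated sides then equal α·Ψ with α = 1:

```python
import cmath, math

def psi(s, t):
    # separated trial solution psi(s, t) = S(s) T(t), S = sin(s), T = e^{-i t}
    return math.sin(s) * cmath.exp(-1j * t)

def second_s(f, s, t, h=1e-4):
    # central second difference approximating d2f/ds2
    return (f(s + h, t) - 2 * f(s, t) + f(s - h, t)) / h**2

def first_t(f, s, t, h=1e-6):
    # central difference approximating df/dt
    return (f(s, t + h) - f(s, t - h)) / (2 * h)

s, t = 0.7, 0.3
lhs = -second_s(psi, s, t)       # space side of the equation
rhs = 1j * first_t(psi, s, t)    # time side of the equation
print(abs(lhs - rhs))            # both sides agree: alpha * psi with alpha = 1
```

The two independent variables only meet through the constant α, which is the whole point of the separation argument.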

Thus as usual our interest is not in repeating the well known ‘pro’ methods of study of PDEs and ODEs but the whys we can learn from its merging with 5D as we already noticed in our introduction.

As geometry defines form in space and, by virtue of the S=T equivalence, motion in time, the geometry of differential equations and its solutions illuminates to a great degree the function, or part of an organic whole, that the differential equation studies. And it greatly reduces the number of ‘real differential equations’ we find in nature, helping also to consider which solutions are valid and which restrictions and limits must be imposed.

Second order.

Partial differential equations of second order are amenable to analytical solution if they can be written as a lineal equation. Such PDEs are of the form:

And once more, according to 5D they can be classified into 3 topological variations, which miraculously huminds, without knowing vital topology, have called exactly… believe it or not if you are an expert on 5D (that is, if you are me, I and myself:) as elliptic, hyperbolic, or parabolic. Whereas a hyperbolic equation is one dominated by the ST body-wave topology, that is, where the mixed ∂s∂t term is larger than the product of the ∂s∂s and ∂t∂t terms; elliptic when the ∂s∂s and ∂t∂t terms dominate the mixed ∂s∂t term; and parabolic when both are equal.

In the language of classic math, we say that linear second-order PDEs are then classified according to the properties of the matrix Z of their second-order coefficients

as elliptic, hyperbolic, or parabolic.

If Z is a positive definite matrix, i.e., vᵀZv > 0 for every nonzero vector v, the PDE is said to be elliptic. Laplace’s equation and Poisson’s equation are examples. Boundary conditions are used to give the constraint on ∂Ω for the solution that holds in Ω.

If det(Z) < 0, the PDE is said to be hyperbolic. The wave equation is an example of a hyperbolic partial differential equation. Initial-boundary conditions are used to give the data on ∂Ω for the solution that holds in Ω.

If det(Z) = 0, the PDE is said to be parabolic. The heat conduction equation and other diffusion equations are examples. Initial-boundary conditions are used to give the data on ∂Ω for the solution that holds in Ω.
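The three cases can be sketched as a tiny classifier for the two-variable case, taking Z = [[A, B], [B, C]] for an equation A·u_xx + 2B·u_xy + C·u_yy + (lower order terms) = 0; the helper name is ours, not a standard library call:

```python
def classify(A, B, C):
    """Classify A u_xx + 2B u_xy + C u_yy + ... = 0 via det(Z), Z = [[A,B],[B,C]]."""
    det = A * C - B * B
    if det > 0:
        return "elliptic"      # e.g. Laplace's equation
    if det < 0:
        return "hyperbolic"    # e.g. the wave equation
    return "parabolic"         # e.g. the heat conduction equation

print(classify(1, 0, 1))    # Laplace:  u_xx + u_yy = 0
print(classify(1, 0, -1))   # wave:     u_xx - u_tt = 0
print(classify(1, 0, 0))    # heat:     u_t = u_xx (no second time derivative)
```

The sign of det(Z) is all that matters for the classification, which is why the three topological variations exhaust the possibilities.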

Separation of variables by scales.

Finally, the 3rd fundamental method, besides lineal approximations and the separation of S=T values, takes into account the conservation of the volume of space-time between scales, but also the fact that smaller scales have faster time speeds and smaller parts, perceived as relative information compared to the bites of energy of a larger scale. So the equivalence is between a ‘pool’ of bits of information of a smaller ∆-1 scale and its equivalent volume of bites of energy of a larger ∆0 plane. This equivalence is essential to everything in the Universe, as it allows the coupling of languages and energy, making for example the physical economy equivalent to the financial economy, the language equivalent to the action, and mathematics a mirror of reality; but as information is smaller and more abundant, the equivalence tends to be inflationary; themes that appear in all the planes of space-time and in the interaction of minds and languages of information (SS, St) with energy bites and entropic boosts (TT, Ts).

Let us then show the case, with the previous example of Schrodinger’s equation for the orbit of an electron in an atom.

∂²(ρψ)/∂ρ² = – (ε + 2/ρ) ρψ

Here, ε is the electron energy, ρ is the radial coordinate, and ψ is the electron wave amplitude.

At small distances from the nucleus, ψ may oscillate rapidly, but at large distances, ψ must decrease exponentially with ρ. This is because, at large distances, a bound electron’s potential energy is greater than its total energy. With a negative kinetic energy, the wave number k becomes imaginary and the normal oscillatory exp{ikρ} term becomes exp{–Kρ}, where K = k/i.

We separate the large-scale exponential from the small-scale oscillations by making this substitution:

ρψ = g(ρ) exp{–βρ}

Here, β is an arbitrary constant, and g(ρ) is the unknown small-scale function of distance. After a lot of math, our differential equation becomes:

∂²g/∂ρ² – 2β ∂g/∂ρ + (β² + ε + 2/ρ) g = 0

Believe it or not, this is progress. Let’s choose β² = –ε, reducing our equation to:

∂²g/∂ρ² – 2β ∂g/∂ρ + 2g/ρ = 0

If not for the ρ in the denominator of the third term, this would be a simple equation. But we can solve this with a Taylor series.

Let: g(ρ) = Σₖ aₖ ρᵏ

This technique will work if the coefficients ak approach zero for large k. Putting the Taylor series into our equation yields:

0 = Σₙ {(n+1)n aₙ₊₁ – 2βn aₙ + 2aₙ} ρⁿ⁻¹

Here the sum is from n=1 to n=+∞. The above equation is valid for all values of ρ. This can only be true if the coefficient of each power of ρ is zero. This is an important rule for polynomials that is well worth remembering. After rearranging, we obtain:

for all n > 0:  aₙ₊₁ = aₙ · 2(βn – 1)/[n(n+1)]

With any choice of a₁, we can recursively calculate a₂, a₃, … in terms of β, which is related to the electron’s energy. For the electron to be bound to the nucleus, β must then equal a finitesimal, 1/n for some integer n, which makes sense of why electrons in atoms have quantized energies, which are its finitesimal parts (ultradense light photons trapped in the potential well of the atom).

This is the basis of the Periodic Table, chemistry, biology, solid state physics, digital electronics, and everything else we know about atoms; and the insight of 5D is to consider those quantized energies the finitesimal bites of electrons – its dense bosons of light.
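The termination argument can be checked directly by iterating the recursion with exact rational arithmetic; the choice a₁ = 1 and the sampled β values are illustrative:

```python
from fractions import Fraction

def coefficients(beta, n_max=12):
    """Taylor coefficients a_1..a_{n_max} from a_{n+1} = a_n*2*(beta*n - 1)/(n*(n+1))."""
    a = Fraction(1)            # free choice of a_1
    coeffs = [a]
    for n in range(1, n_max):
        a = a * 2 * (Fraction(beta) * n - 1) / (n * (n + 1))
        coeffs.append(a)
    return coeffs

# beta = 1/3 (a finitesimal 1/n): the factor (beta*n - 1) vanishes at n = 3,
# so a_4 and every later coefficient are exactly zero and g is a polynomial
c = coefficients(Fraction(1, 3))
print(c[:5])
```

For a β that is not of the form 1/n the coefficients never cut off and the series fails the large-k condition, which is exactly why only the quantized values survive.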

So the question of Partial Differential equations ‘reduces’ to the study of ODEs, which we shall consider now in detail; to then plunge into the experimental praxis of the main differential equations of the Universe, which will reflect the basic topologies of St<ST>Ts and ∆±¡ laws of existential algebra, regarding locomotion (Ts) and transfers of energy and information between planes and physiological networks (St¡-1=Ts¡).

To be Cont’ed

ODEs

As ALMOST all equations of mathematical physics deal with locomotion, and locomotion deals with stœps of space-time, in which a function changes from S-tate to s-Tate, stopping, gauging motion in a particle-point state, and moving, reproducing in a contiguous region in a step-wave state, the fundamental equation of all algebraic systems is S=T.

However the fun is in which ‘scale and dimotion we consider the S and T instants of the being’. Because the Universe is all about Stœps changing dimotions and scales, there are many variations of an S=t state. The being might be informing itself in S or reproducing in T but in doing so it might be acting any of the 5 Dimotions of existence, so S is any of the 5 Dimensions of Space and T any of the 5 motions of time. The being might be going up and down the 5th dimension, perceiving form in height, moving in length, reproducing in width, and in doing so it will zig-zag as it switches modularly between S and T states.

This is what we see in an equation of algebra, where the two sides of the being tend to represent two different states and can represent a switch on modular dimotion in the sequence of actions of the being, which is allowed by the rules of existential algebra to change constantly actions as you change from moving to perceiving to eating to fuking, beautiful word if you were not a repressed enzymen forbid in your culture to achieve your maximal state as human being – the present moment of orgasm, when S=T hangs into eternity.

Stœps then, going to more mundane facts, are the substance of equations. But the simplest stœps, when there is no change of scale, no change of dimotion, are the easiest to solve. When we start to change scale, we enter the realm of ODEs, but as long as we stay in the same Dimotion, ODEs are easy to solve.

ODEs are relatively ‘easy’ functions, as they are overwhelmingly of the form S(y)=X(t), where a parameter of a ‘spatial state’ of present is integrated over a present-to-future interval of change to determine the spatial parameter we are measuring, which might range from population growths and diminutions, to reproductive frequencies of a worldcycle or parts of it, to parameters of energy ‘storage’, etc.

All of them reduce to the analysis of a number of ‘parameters of change’, of which, in the ternary Universe, the first two are the only relevant ones: the rate of change or ‘speed’, and the ‘rate of the rate’ of change or acceleration (unless we are working with polynomial approximations, where the concept of derivative is an abstraction of the polynomial degree, studied somewhere else).

We now give exact definitions. An ordinary differential equation of order n in one unknown function y is a relation of the form:

F(x, y, y′, ···, y⁽ⁿ⁾) = 0     (17)

between the independent variable x and the quantities y, y′, ···, y⁽ⁿ⁾.

The order of a differential equation is the order of the highest derivative of the unknown function appearing in the differential equation. Thus the equation in example 1 is of the first order, and those in examples 2, 3, 4, 5, and 6, are of the second order.

A function ϕ(x) is called a solution of the differential equation (17) if substitution of ϕ(x) for y, ϕ′(x) for y′, · · ·, ϕ(n) (x) for y(n) produces an identity.

Problems in physics and technology often lead to a system of ordinary differential equations with several unknown functions, all depending on the same argument and on their derivatives with respect to that argument.

For greater concreteness, the explanations that follow will deal chiefly with one ordinary differential equation of order not higher than the second and with one unknown function. With this example one may explain the essential properties of all ordinary differential equations and of systems of such equations in which the number of unknown functions is equal to the number of equations.

Motion equation, particle equations, electric equation: the limited range of ODEs.

Now ODEs are not very much in use when we come out of the particle-point state into fields, because humans measure fields with 3 relative coordinates, and so we enter the realm of PDEs. Let us then consider briefly the main fields of ODEs.

Let a material point of a mass m be moving along the horizontal axis Ox in a resisting medium, for example in a liquid or a gas, under the influence of the elastic force of two springs, acting under Hooke’s law (figure 1), which states that the elastic force acts toward the position of equilibrium and is proportional to the deviation from the equilibrium position. Let the equilibrium position occur at the point x = 0. Then the elastic force is equal to –bx where b > 0.

We will assume that the resistance of the medium is proportional to the velocity of motion, i.e., equal to –a(dx/dt), where a > 0 and the minus sign indicates that the resisting medium acts against the motion. Such an assumption about the resistance of the medium is confirmed by experiment.

From Newton’s basic law that the product of the mass of a material point and its acceleration is equal to the sum of the forces acting on it, we have:   md²x/dt²= – bx –a(dx/dt) (6)

Thus the function x(t), which describes the position of the moving point at any instant of time t, satisfies the differential equation (6). We will investigate the solutions of this equation in one of the later sections.
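A minimal numerical sketch of equation (6), with illustrative constants of our own (m = 1, b = 1, a = 0.2) and a semi-implicit Euler integrator, shows the behaviour the resisting medium imposes on the motion:

```python
def damped_oscillator(m=1.0, b=1.0, a=0.2, x=1.0, v=0.0, dt=1e-3, t_end=30.0):
    """Integrate m x'' = -b x - a x' with semi-implicit Euler steps."""
    xs = []
    for _ in range(int(t_end / dt)):
        v += dt * (-b * x - a * v) / m   # update velocity from the total force
        x += dt * v                      # then position from the new velocity
        xs.append(x)
    return xs

xs = damped_oscillator()
early = max(abs(p) for p in xs[:1000])   # amplitude near the start
late = max(abs(p) for p in xs[-1000:])   # amplitude near t = 30
print(early, late)                       # the later amplitude is much smaller
```

The shrinking amplitude is the numerical trace of the –a(dx/dt) term: the medium steadily drains the vital energy of the oscillation.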

If, in addition to the forces mentioned, the material point is acted upon by still another force, F outside of the system, then the equation of motion takes the form:   md²x/dt²= – bx –a(dx/dt) + F (6′)

Example 3. A mathematical pendulum is a material point of mass m, suspended on a string whose length will be denoted by l. We will assume that at all stages the pendulum stays in one plane, the plane of the drawing (figure 2). The force tending to restore the pendulum to the vertical position OA is the force of gravity mg, acting on the material point. The position of the pendulum at any time t is given by the angle ϕ by which it differs from the vertical OA. We take the positive direction of ϕ to be counterclockwise. The arc A A′ = lϕ is the distance moved by the material point from the position of equilibrium A. The velocity of motion ν will be directed along the tangent to the circle and will have the following numerical value:

v = l dϕ/dt.

To establish the equation of motion, we decompose the force of gravity mg into two components Q and P, the first of which is directed along the radius OA′ and the second along the tangent to the circle. The component Q cannot affect the numerical value of the rate ν, since clearly it is balanced by the resistance of the suspension OA′. Only the component P can affect the value of the velocity ν. This component always acts toward the equilibrium position A, i.e., toward a decrease in ϕ, if the angle ϕ is positive, and toward an increase in ϕ, if ϕ is negative. The numerical value of P is equal to –mg sin ϕ, so that the equation of motion of the pendulum is:

m dv/dt = – mg sin ϕ,  or:  d²ϕ/dt² = – (g/l) sin ϕ

It is interesting to note that the solutions of this equation cannot be expressed by a finite combination of elementary functions. The set of elementary functions is too small to give an exact description of even such a simple physical process as the oscillation of a mathematical pendulum. Later we will see that the differential equations that are solvable by elementary functions are not very numerous, so that it very frequently happens that investigation of a differential equation encountered in physics or mechanics leads us to introduce new classes of functions, to subject them to investigation, and thus to widen our arsenal of functions that may be used for the solution of applied problems.

Let us now restrict ourselves to small oscillations of the pendulum for which, with small error, we may assume that the arc AA′ is equal to its projection x on the horizontal axis Ox and sin ϕ is equal to ϕ. Then ϕ ≈ sin ϕ = x/l and the equation of motion of the pendulum will take on the simpler form:

d²x/dt² = – (g/l) x     (8)

Later we will see that this equation is solvable by trigonometric functions and that by using them we may describe with sufficient exactness the “small oscillations” of a pendulum.
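As a quick numerical check, with hypothetical values g = 9.8 and l = 1, that a trigonometric function, x(t) = cos(√(g/l)·t), indeed satisfies equation (8):

```python
import math

g, l = 9.8, 1.0              # illustrative gravity and pendulum length
w = math.sqrt(g / l)

def x(t):
    return math.cos(w * t)   # trial solution of the linearized equation (8)

def x_dd(t, h=1e-4):
    # central second difference approximating d2x/dt2
    return (x(t + h) - 2 * x(t) + x(t - h)) / h**2

t = 0.4
print(x_dd(t), -(g / l) * x(t))   # the two sides of (8) agree
```

The same check fails for the full nonlinear equation with –(g/l) sin ϕ, which is precisely why its solutions leave the class of elementary functions.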

Example 4. Helmholtz’ acoustic resonator consists of an air-filled vessel V, the volume of which is equal to ν, with a cylindrical neck F. Approximately, we may consider the air in the neck of the container as a cork of mass:

m = ρsl (9)

where ρ is the density of the air, s is the area of the cross section of the neck, and l is its length. If we assume that this mass of air is displaced from a position of equilibrium by an amount x, then the pressure of the air in the container with volume ν is changed from the initial value p by some amount which we will call Δp.

We will assume that the pressure p and the volume ν satisfy the adiabatic law pνᵏ = C. Then, neglecting magnitudes of higher order, we have:

Δp = – (kp/ν) Δν (10)

(In our case, Δν = sx.) The equation of motion of the mass of air in the neck may be written as:

md²x/dt²= ∆p • s (11)

Here Δp · s is the force exerted by the gas within the container on the column of air in the neck. From (10) and (11) we get:

ρsl d²x/dt² = – (kps²/ν) x (12)

where ρ, p, ν, l, k, and s are constants.

Example 5. An equation of the form 6 also arises in the study of electric oscillations in a simple oscillator circuit. The circuit diagram is given in figure. Here on the left we have a condenser of capacity C, in series with a coil of inductance L, and a resistance R. At some instant let the condenser have a voltage across its terminals. In the absence of inductance from the circuit, the current would flow until such time as the terminals of the condenser were at the same potential. The presence of an inductance alters the situation, since the circuit will now generate electric oscillations.

To find a law for these oscillations, we denote by ν(t), or simply by ν, the voltage across the condenser at the instant t, by I(t) the current at the instant t, and by R the resistance. From well-known laws of physics, I(t)R remains constantly equal to the total electromotive force, which is the sum of the voltage across the condenser and the inductance, –L(dI/dt). Thus:

IR = – ν – L(dI/dt) (13)

We denote by Q(t) the charge on the condenser at time t. Then the current in the circuit will, at each instant, be equal to dQ/dt. The potential difference ν(t) across the condenser is equal to Q(t)/C. Thus I = dQ/dt = C(dν/dt) and equation (13) may be transformed into:

LC d²ν/dt² + RC dν/dt + ν = 0 (14)

Example 6. The circuit diagram of an electron-tube generator of electromagnetic oscillations is shown in figure. The oscillator circuit consisting of a capacitance C, across a resistance R and an inductance L, represents the basic oscillator system.

The coil L′ and the tube shown in the center of figure 5 form a so-called “feedback.” They connect a source of energy, namely the battery B, with the L-R-C circuit; K is the cathode of the tube, A the plate, and S the grid. In such an L-R-C circuit “self-oscillations” will arise. For any actual system in an oscillatory state the energy is transformed into heat or is dissipated in some other form to the surrounding bodies, so that to maintain a stationary state of oscillation it is necessary to have an outside source of energy. Self-oscillations differ from other oscillatory processes in that to maintain a stationary oscillatory state of the system the outside source does not have to be periodic. A self-oscillatory system is constructed in such a way that a constant source of energy, in our case the battery B, will maintain a stationary oscillatory state. Examples of self-oscillatory systems are a clock, an electric bell, a string and bow moved by the hand of the musician, the human voice, and so forth.

The current I(t) in the oscillatory L-R-C circuit satisfies the equation:

LC d²ν/dt² + RC dν/dt + ν = M(dIa/dt) (15)

Here ν = ν(t) is the voltage across the condenser at the instant t, Ia(t) is the plate current through the coil L′; M is the coupling coefficient between the coils L and L′. In comparison with equation (13), equation (15) contains the extra term M(dIa/dt).

We will assume that the plate current Ia(t) depends only on the voltage between the grid S and the cathode of the tube (i.e., we will neglect the reactance of the anode), so that this voltage is equal to the voltage ν(t) across the condenser C. The character of the functional dependence of Ia on ν is given in figure. The curve as sketched is usually taken to be a cubical parabola, and we write an approximate equation for it by:

Substituting this into the right side of equation (15), and using the fact that I = C(dν/dt), we get for ν the equation:

In the examples considered, the search for certain physical quantities characteristic of a given physical process is reduced to the search for solutions of ordinary differential equations.

Physical systems reduce the number of space-time parameters we measure essentially to 3, corresponding to scale, space and time: Mass-density (mass-energy ratio per volume), Space-Length (space ratio to time frequency) and Time (frequency of steps).

So the number of symmetries of space-time to find is relatively limited: ∆ρ ≈ Sl ≈ Tƒ

Where ∆ρ codes for any scalar active magnitude, which can be mass (gravitational scale), charge (quantum scale) or even Temperature (thermodynamic scale). So in principle the final reduction of the equations of physics deals with only those 3 elements, and yet it has a ginormous volume of information. Let us then consider the key equations that we can elaborate with the 3 elements, first noticing this parallelism with the ∆, S and T elements of any REAL T.œ and its symmetry between the 3 parts of its simultaneous space and its limits of duration in time.

What mass, heat or charge measures then is the potential capacity of the internal vital energy to move and expand, as the result of being ‘enclosed’ by the membrane of the higher T.œ system. It also follows that solutions to systems without ‘membrane constraints’ or ‘singularity’ centres for the active magnitude, which define either a closed 0-1 or a 1-∞ relative ‘equal’ region of measure, will not normally be meaningful.

We have spoken earlier of the fact that, as a rule, every differential equation has not one but an infinite set of solutions. Let us illustrate this first of all by intuitive considerations based on the examples given in equations (2-6). In each of these, the corresponding differential equation is already fully defined by the physical arrangement of the system. But in each of these systems there can be many different motions. For example, it is perfectly clear that the pendulum described by equation (8) may oscillate with many different amplitudes. To each of these different oscillations of the pendulum there corresponds a different solution of equation (8), so that infinitely many such solutions must exist. It may be shown that equation (8) is satisfied by any function of the form:

x = C₁ cos(√(g/l) t) + C₂ sin(√(g/l) t) (18)

where C₁ and C₂ are arbitrary constants.

It is also physically clear that the motion of the pendulum will be completely determined only in case we are given, at some instant t0, the (initial) value x0 of x (the initial displacement of the material point from the equilibrium position) and the initial rate of motion:

x′₀ = (dx/dt)|t=t₀. These initial conditions determine the constants C₁ and C₂ in formula (18).

In exactly the same way, the differential equations we have found in other examples will have infinitely many solutions.

In general, it can be proved, under very broad assumptions concerning the given differential equation (17) of order n in one unknown function that it has infinitely many solutions. More precisely: If for some “initial value” of the argument, we assign an “initial value” to the unknown function and to all of its derivatives through order n – 1, then one can find a solution of equation (17) which takes on these preassigned initial values. It may also be shown that such initial conditions completely determine the solution, so that there exists only one solution satisfying the initial conditions given earlier. We will discuss this question later in more detail. For our present aims, it is essential to note that the initial values of the function and the first n – 1 derivatives may be given arbitrarily. We have the right to make any choice of n values which define an “initial state” for the desired solution.

If we wish to construct a formula that will, if possible, include all solutions of a differential equation of order n, then such a formula must contain n independent arbitrary constants, which will allow us to impose n initial conditions. Such solutions of a differential equation of order n, containing n independent arbitrary constants, are usually called general solutions of the equation. For example, a general solution of (8) is given by formula (18), containing two arbitrary constants; a general solution of equation (3) is given by formula (5).
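The role of the two arbitrary constants can be checked numerically. A minimal Python sketch, assuming for simplicity ω = 1 (a stand-in for the pendulum equation (8), so that x″ = −x): any choice of C1, C2 solves the equation, and the initial state x(0) = C1, x′(0) = C2 picks out one solution.

```python
import math

# General solution of x'' = -x (the pendulum equation with omega = 1,
# an assumed stand-in for equation (8)): x(t) = C1*cos(t) + C2*sin(t).
def x(t, C1, C2):
    return C1 * math.cos(t) + C2 * math.sin(t)

def x_second_derivative(t, C1, C2, h=1e-4):
    # central finite difference for x''(t)
    return (x(t + h, C1, C2) - 2 * x(t, C1, C2) + x(t - h, C1, C2)) / h**2

# Any choice of the constants gives a solution: x'' + x ~ 0.
for C1, C2 in [(1.0, 0.0), (0.3, -2.0), (5.0, 4.0)]:
    residual = x_second_derivative(1.0, C1, C2) + x(1.0, C1, C2)
    assert abs(residual) < 1e-5

# Initial conditions single out one solution: x(0) = C1, x'(0) = C2.
C1, C2 = 0.7, -1.2
assert abs(x(0.0, C1, C2) - C1) < 1e-12
xp0 = (x(1e-6, C1, C2) - x(-1e-6, C1, C2)) / 2e-6
assert abs(xp0 - C2) < 1e-6
```

So the two-parameter family is the general solution, and fixing the initial displacement and velocity fixes the member of the family.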

We will now try to formulate in very general outline the problems confronting the theory of differential equations. These are many and varied, and we will indicate only the most important ones.

If the differential equation is given together with its initial conditions, then its solution is completely determined. The construction of formulas giving the solution in explicit form is one of the first problems of the theory. Such formulas may be constructed only in simple cases, but if they are found, they are of great help in the computation and investigation of the solution.

The theory should provide a way to obtain some notion of the behavior of a solution: whether it is monotonic or oscillatory, whether it is periodic or approaches a periodic function, and so forth.

Suppose we change the initial values for the unknown function and its derivatives; that is, we change the initial state of the physical system. Then we will also change the solution, since the whole physical process will now run differently. The theory should provide the possibility of judging what this change will be. In particular, for small changes in the initial values, will the solution also change by a small amount, and will it therefore be stable in this respect; or may it be that small changes in the initial conditions give rise to large changes in the solution, so that the latter is unstable?
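The stable/unstable dichotomy can be seen in the two simplest model equations, sketched here in Python under the assumption that we compare their exact exponential solutions: for du/dt = −u a small perturbation of the initial value dies out, while for du/dt = +u the same perturbation is amplified exponentially.

```python
import math

# Two model equations with known exact solutions:
#   stable:   du/dt = -u  ->  u(t) = u0 * exp(-t)
#   unstable: du/dt = +u  ->  u(t) = u0 * exp(+t)
def solve(u0, rate, t):
    return u0 * math.exp(rate * t)

u0, eps, t = 1.0, 1e-6, 20.0

# Stable case: a small change eps in the initial value shrinks with time.
gap_stable = abs(solve(u0 + eps, -1.0, t) - solve(u0, -1.0, t))
assert gap_stable < eps

# Unstable case: the same eps is amplified by a factor exp(20), about 5e8.
gap_unstable = abs(solve(u0 + eps, +1.0, t) - solve(u0, +1.0, t))
assert gap_unstable > 100.0
```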

We must also be able to set up a qualitative, and where possible, quantitative picture of the behavior not only of the separate solutions of an equation, but also of all of the solutions taken together.

In machine construction there often arises the question of making a choice of parameters characterizing an apparatus or machine that will guarantee satisfactory operation. The parameters of an apparatus appear in the form of certain magnitudes in the corresponding differential equation. The theory must help us make clear what will happen to the solutions of the equation (to the working of the apparatus) if we change the differential equation (change the parameters of the apparatus).

Finally, when it is necessary to carry out a computation, we will need to find the solution of an equation numerically. And here the theory will be obliged to provide the engineer and the physicist with the most rapid and economical methods for calculating.
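One standard rapid-and-economical method of the kind the paragraph asks for is the classical 4th-order Runge-Kutta scheme; a self-contained Python sketch (not a method named in the text, just a common workhorse chosen as illustration), tested on du/dt = u, u(0) = 1, whose exact solution at t = 1 is e.

```python
import math

# Classical 4th-order Runge-Kutta for du/dt = f(t, u).
def rk4(f, t0, u0, t_end, n_steps):
    h = (t_end - t0) / n_steps
    t, u = t0, u0
    for _ in range(n_steps):
        k1 = f(t, u)
        k2 = f(t + h / 2, u + h * k1 / 2)
        k3 = f(t + h / 2, u + h * k2 / 2)
        k4 = f(t + h, u + h * k3)
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return u

# du/dt = u, u(0) = 1: exact answer at t = 1 is e = 2.71828...
approx = rk4(lambda t, u: u, 0.0, 1.0, 1.0, 100)
assert abs(approx - math.e) < 1e-8
```

With only 100 steps the global error is already below 1e-8, which is why such one-step methods are the default engineering tool when no closed formula exists.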

THE SS-TT LIMITS: FUNCTIONS WHICH ARE INFINITELY DIFFERENTIABLE.

An interesting question for existential algebra and the scalar structure of space-time is that of functions ¡mmensely differentiable (∝), as many times as the system can perceive. Essentially there are only 2, for a good reason:

SS-functions of trigonometric @-mind measure, which fluctuate from sine to cosine, related to the angular dimensionless perception of a larger Universe into a smaller whole. And its inverse:

TT: eˣ functions, which are their own derivative, related to the maximal decay-reproduction of a system.

The importance of absolute differentiability for existential algebra is obvious: we can obtain a finitesimal of a finitesimal without limit only in functions of pure SS-perception and pure TT-entropy dissolution, which means there are no limits of size for a mirror mind of a larger whole Universe, hence no limits for a nested ∆± being. On the other side, reproductive decay and/or reproductive explosions (in praxis limited by the carrying capacity of a domain) have no limit either, so virtually the number of parts of a system can be any relative ∞.

These concepts bring us a better understanding of the 3rd class of functions with multiple derivatives, up to a limit of k-1: polynomials, whose interest in calculus resides in their capacity to approach any other more complex sinusoidal or exponential function by obtaining key derivative values for all their polynomial factors.

Polynomial ¬algebra as an approximation to analysis.

We have said multiple times that only the first and second derivatives are meaningful in the context of the 5 Dimotions happening between two planes of existence. But there is a system in which multiple derivatives make sense – polynomial approximation to complex sinusoidal and exponential functions. Why? The answer is that such functions must be taken not as continuous, ‘smooth’ functions in space, but rather as summations, hence series in time. It is then that polynomial series make sense and each derivative appears as a term of the time sequence.

The difference between polynomials/logarithms vs. derivatives and integrals is this:

Derivatives & integrals often transcend planes, relating wholes and parts, studying the change of complex organic structures through their internal changes in ages and form.

Polynomials are better suited for simpler systems – Planes of social herds and dimensional volumes of space – with a ‘lineal’ social structure of simple growth.

So polynomials work better for single planes, their Planes of social growth, or square and cubic surfaces of space.

So the region in which polynomials are better suited, with powers ≤3, is the central ‘Newtonian region’. But both approaches must be similar in their results, as they essentially observe the same phenomena with a different focus, which is the reason for the existence of Taylor’s and Newton’s approximations through derivatives of any polynomial:

So polynomials are the rough approximation to the more subtle methods of finding dimensional change proper of analysis – even if huminds found the unfocused polynomials first. So today we call McLaurin & Taylor’s formulae of multiple derivatives approximations to polynomials, when the opposite is the truth:

Polynomials are the approximation to the more complex organic ages and processes of organic growth. This reverses the usual concept of the McLaurin series, where we approach simple polynomial social growth with derivatives, taking as more precise the simpler spatial mind-view – the polynomial (§@) – than the subtle temporal view (∆∂) – the derivative.

In the graph, the McLaurin and Taylor series show the limits of a ‘working function’: they diverge from the function once we move further away from the point of view {ƒ'(o)} of the frame of reference, past the correct ‘degree’ of usefulness of the series – save for the exponential of ‘entropic decay’, whose derivative is the same function. So we can often use the limits of similarity between the series and the real function to find its ‘domain of finite’ use, closing its world cycle; i.e. the sine series approaches the function only in the domain of a 2π cycle.
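The finite ‘domain of use’ of a truncated series can be checked directly. A Python sketch (the 8-term cutoff is an arbitrary assumption for illustration): the truncated McLaurin series of sin(x) tracks the function within roughly one 2π cycle and diverges wildly beyond it.

```python
import math

# Truncated McLaurin series of sin(x), keeping terms up to x^15.
def sin_series(x, n_terms=8):
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

# Inside roughly one 2*pi cycle the truncated series tracks sin(x)...
assert abs(sin_series(2.0) - math.sin(2.0)) < 1e-8
assert abs(sin_series(math.pi) - math.sin(math.pi)) < 1e-5

# ...but far outside that domain the same 8-term polynomial blows up.
assert abs(sin_series(4 * math.pi) - math.sin(4 * math.pi)) > 1000.0
```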

So derivatives & integrals can transcend planes, relating wholes and parts, studying the change of complex organic structures through their internal changes in form and even ages (variational calculus); while polynomials are better suited for simpler systems – Planes of social herds and dimensional volumes of space – with a ‘lineal’ social structure of simple growth.

And that is the ultimate reason why Galois could prove, through permutations of their coefficients – which are lineal operators of sums and multiplications – that power-5 polynomials could not be resolved, as they were prying into non-lineal regions of ∆±i space-time.

Now, departing from the general rule that ƒ(x) is a function of ‘time motions’ – as all variables are by definition time motions – and the Y function its spatial view as ‘a whole’, we can take ƒ(x)=t=S=Y as a general rule of interpretation; and since so often we have a function of the type ƒ(x)=ƒ(t)=0, we consider the polynomial a representation of a world cycle.

And from that we can differentiate factors through ∆ Planes, such as ∆±1 = 0’-1 probability sphere (∆<0) and polynomial (Xª=∆ª).

It is then obvious that one of the key equations of the Universe – the equation that relates polynomials and derivatives, the space and time views of complex symmetric bundles – must be reinterpreted in the light of those disomorphisms between the mathematical mirror and the 5D³ Universe.

Further on, derivatives are thus just ‘approximations’ to the last of the operands between planes, the power/logarithm dual curve of growth, but they have no physical relevance there. The more interesting graph, though, is the one above the polynomial formulae, as only ‘analysis’ emerges from one scale to another, from wholes into parts, without great distortion; and so it becomes truly the most important of all operands and branches of ¬Algebra.

The concept behind such use of higher derivatives is simple: they represent an approximation to a local point, as derivatives penetrate into finitesimals with higher accuracy.

Approximations to a local point.

In ¬Algebra, the 3rd or higher derivatives are used to improve the accuracy of an approximation to the function:

f(x0 + h) = f(x0) + f′(x0)h + f″(x0)h²/2! + f‴(ξ)h³/3!

Thus Taylor’s expansion of a function around a point involves higher order derivatives, and the more derivatives you consider, the higher the accuracy. This also translates to higher order finite difference methods when considering numerical approximations.
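The claim that each extra derivative improves the accuracy can be verified numerically. A minimal sketch, taking f = eˣ (an assumption chosen because every derivative of eˣ is again eˣ, so the terms are easy to write down) and comparing the order-1, order-2 and order-3 approximations of f(x0 + h):

```python
import math

# Taylor approximations of f(x0 + h) for f = exp around x0, orders 1..3.
# For f = exp every derivative at x0 equals exp(x0).
x0, h = 0.0, 0.1
exact = math.exp(x0 + h)

errors = []
for order in (1, 2, 3):
    approx = sum(math.exp(x0) * h**k / math.factorial(k)
                 for k in range(order + 1))
    errors.append(abs(exact - approx))

# Each added derivative term shrinks the error substantially.
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 1e-5
```

The errors fall from about 5e-3 (order 1) to about 4e-6 (order 3), which is the same mechanism behind higher-order finite-difference schemes.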

Now what this means is obvious: beyond the accuracy of the three derivatives canonical to an ∆º±1 supœrganism, information, as it passes the potential barrier between Planes of the 5th≈∆-dimension, suffers a loss of precision; so beyond the third derivative we can only obtain approximations, by using higher derivatives or, in a likely less focused=exact procedure, the equivalent polynomials, clearer expressions of ‘dimensional growth’.

So their similitude first of all proves that both high derivatives and polynomials are representations of growth across planes and Planes, albeit losing accuracy.

However, in the correct fifth-dimensional perspective, the derivative-integral game is more accurate, as it ‘looks at the infinitesimal’ to then integrate the proper quanta.

Taylor’s Formula

The function:

y = a0 + a1x + a2x² + ··· + anxⁿ

where the coefficients ak are constants, is a polynomial of degree=dimension n. In particular, y = ax + b is a polynomial of the first degree and y = ax² + bx + c is a polynomial of the second dimension. Dimensional polynomials have the particularity that they are mostly 2-manifolds, symmetric in that x=y; that is, both dimensions square, D1 x D2.

To achieve this feat, S=t, tt or ss steps must be considered.

Polynomials may be considered IN THAT SENSE as the simplest of all poly-dimensional functions. In order to calculate their value for a given x, we require only the operations of addition, subtraction, and multiplication; not even division is needed. Polynomials are continuous for all x and have derivatives of arbitrary order. Also, the derivative of a polynomial is again a polynomial, of degree lower by one, and the derivatives of order n + 1 and higher of a polynomial of degree n are equal to 0’. Yet the derivative diminishes more slowly than the simple square of the function; so if we consider the derivatives the parts of the polynomial, the product of those parts would be more than the whole.

It is then that we can increase the complexity, establishing for example ratios of polynomials. If to the polynomials we adjoin functions of the form:

ƒ(x) = P(x)/Q(x), the ratio of two polynomials,

for the calculation of which we also need division, and also the functions √x and ∛x and, finally, arithmetical combinations of these functions, we obtain essentially all the functions whose values can be calculated by such methods.

But what does a polynomial describe? All other functions are easier to get through approximations:

On an interval containing the point a, let there be given a function f(x) with derivatives of every order. The polynomial of first degree:

p1(x) = ƒ(a) + ƒ'(a)(x − a)

has the same value as f(x) at the point x = a and also, as is easily verified, the same derivative as f(x) at this point. Its graph is a straight line, which is tangent to the graph of f(x) at the point a. It is possible to choose a polynomial of the second degree, namely:

p2(x) = ƒ(a) + ƒ'(a)(x − a) + ƒ″(a)(x − a)²/2,

which at the point x = a has with f(x) a common value and a common first and second derivative. Its graph at the point a will follow that of f(x) even more closely. It is natural to expect that if we construct a polynomial which at x = a has the same first n derivatives as f(x) at the same point, then this polynomial will be a still better approximation to f(x) at points x near a. Thus we obtain the following approximate equality, which is Taylor’s formula:

ƒ(x) ≈ ƒ(a) + ƒ'(a)(x − a) + ƒ″(a)(x − a)²/2! + ··· + f(n)(a)(x − a)ⁿ/n!     (25)

The right side of this formula is a polynomial of degree n in (x − a). For each x the value of this polynomial can be calculated if we know the values of f(a), f′(a), ···, f(n)(a).

For functions which have an (n + 1)th derivative, the right side of this formula, as is easy to show, differs from the left side by a small quantity which approaches 0’ more rapidly than (x − a)ⁿ. Moreover, it is the only possible polynomial of degree n that differs from f(x), for x close to a, by a quantity that approaches 0’, as x → a, more rapidly than (x − a)ⁿ. If f(x) itself is an ¬Algebraic polynomial of degree n, then the approximate equality (25) becomes an exact one.
Finally, and this is particularly important, we can give a simple expression for the difference between the right side of formula (25) and the actual value of f(x). To make the approximate equality (25) exact, we must add to the right side a further term, called the “remainder term”:

Rn+1 = f(n+1)(ξ)(x − a)ⁿ⁺¹/(n + 1)!

This term has the peculiarity that the derivative appearing in it is to be calculated in each case not at the point a but at a suitably chosen point ξ, which is unknown but lies somewhere in the interval between a and x.
So we can make use of the generalized mean-value theorem quoted earlier. Differentiating the functions ϕ(u) and ψ(u) with respect to u (recalling that the value of x has been fixed), we find that the equality of this last expression with the original quantity (27) gives Taylor’s formula in the form (26).

In the form (26), Taylor’s formula not only provides a means of approximate calculation of f(x) but also allows us to estimate the error.
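The error estimate can be tested numerically. A Python sketch, assuming f = sin expanded at a = 0 (so that every derivative is bounded by 1 and the remainder bound simplifies to |x|ⁿ⁺¹/(n + 1)!): the actual error of the degree-n Taylor polynomial indeed stays below the remainder bound.

```python
import math

# Degree-n Taylor polynomial of sin at a = 0 (only odd powers survive),
# and the Lagrange remainder bound |sin(x) - T_n(x)| <= |x|^(n+1)/(n+1)!
# since every derivative of sin is bounded by 1.
def taylor_sin(x, n):
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n // 2 + 1) if 2*k + 1 <= n)

x, n = 1.2, 5
actual_error = abs(math.sin(x) - taylor_sin(x, n))
bound = abs(x)**(n + 1) / math.factorial(n + 1)

assert actual_error <= bound   # the remainder term really bounds the error
```

Here the actual error is about 7e-4 against a guaranteed bound of about 4e-3, which is exactly the kind of a-priori control formula (26) gives.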

And so with Taylor we close this introduction to derivatives and differentials, enlightened with the basic elements that relate them to the 5 dimensions of space-time, specifically to the ∆-1 finitesimals.

RECAP.

Calculus runs in parallel with polynomials, which raise the dimensional Planes of a system in a different, ‘more lineal’, social, inter-planar way.

So polynomials and limits are what ¬Algebra is to calculus: space to time, and lineal ¬Algebra to curved geometries.

The vital interpretation of that amazing growth of polynomials is, though, far scarier.

Power laws, by the very fact of ‘being lineal’ and maximising the growth of a function, ARE NOT REAL in the positive sense of infinite growth – a fantasy only taken seriously by our economists of greed and infinite usury debt interest… where the eª exponential function first appeared.

The fact is that in reality such exponentials only portray the decay-destruction of a mass of cellular/atomic beings ALREADY created by the much smaller processes of ‘re=product-ion’, which is the second dimension, mostly operated with multiplication (of scalars or anti-commutative cross vectors).

So the third dimension of operands is a backwards motion – a lineal motion into death – and only because it reverses the growth of sums and multiplications do the properties of polynomials make sense. Taylor’s formula resumes the main space-time symmetries and their development – polynomials on the left, derivatives on the right – filling in the content of ¬Algebra in the measure of space-time systems.

At a given point it can then be understood as a differential value, and we can then consider Dc, the polynomial, vs. ∫∂ep, the differential. Lineal functions in the short-distance view, which become curved in larger, more accurate spatial views, make us think that the ƒ(t) time function is, step by step, building the ƒ(y) spatial worldcycle, which did all those step curvatures.

And for that reason it was born from arithmetic and its basic social operands (±, x÷).

But as the Universe in social space has only 3 dimensions before a discontinuity of ∆-planes is reached, ever since the infamous Fermat THEOREM we know that beyond quadratic x² equations systems are not always possible, solutions are not always found, and the humind wasted (as it does today with string theory) incredible amounts of ink to ‘invent solutions’ to unreal 5D Universes (no solutions of quintics).

In the graph, both in ¬Algebra and geometry there are ‘inflationary solutions’ of more than 3 dimensions in space, which are meaningless in nature, due to the discontinuity between planes beyond 3 st dimensions.

Ad maximal we can work with 3 spatial dimensions and put together the 3 dimensions of time symmetric to them, as relativity does in its metric S² = ∑(x, y, z)² − (ct)².

That is really how far polynomials truly make sense, with specific, extremely symmetric, simpler solutions for certain species of higher-degree polynomials, normally reducible to products of i≤3 equations, as the young geniuses Abel and Galois found.

Hence the importance acquired not so much by what can be resolved in ¬Algebra but by what CANNOT – that is, systems which do not reflect nature’s holographic, bidimensional nature and hence have no automatic ‘radicals’ – which occupied most of the historic development of classic ¬Algebra (resolution of polynomials, i>2).

The fact that most methods of resolution in ¬Algebra imply the reduction of a polynomial to products of holographic bidimensional equations becomes then one of the strongest proofs of the Holographic principle of SxT dual dimensions.

And then there is the analysis, made partially in number theory and @nalytic geometry, of the meaning of its operands, which should be the true focus of ¬Algebra, as the operations allowed between space and time dimensions.

In that sense the main error of ¬Algebra is precisely the obscuring of its meaning due to the elimination of most of its real content through the abstraction of its ‘letters’ and the methods of resolving equations by placing all the elements of the equation on one side, leaving the right side at 0’, which eliminates the symmetry between the two possible sides of the equation.

This explanation of the most frequently found dual operands of scalar operations between planes, parts and wholes will not have been clearly understood unless you have read the entire article on ∆nalysis – something impossible to do, as I have not written it yet (: sorry for the tease. What I want to convey is that ¬Algebra can get ever more complex as it gets further in depth into the dualities, paradoxes, symmetries, Planes, hidden variables and multiple social groups that interact, interconnect, network, web and finally create reality; even if ultimately, as in a Fourier transform, it can be decomposed into its minimal units of formal motion, the ststtss beats of reality, which NEVER cease to move through s-t symmetric steps.

Yet to understand Fourier series and any other sinusoidal function in depth, we must introduce the higher scale of the derivative and integral operands, which entangle all Dimotions combined in ∆ST – calculated in polynomial form as the sum of a power series, that is, the growth of parts into wholes, in the way of Newton or ‘scalar calculus’; or through PDEs and ODEs, the next stage, as a sum of ‘stœps’ of dimotions=changes, in the way of Leibniz.

Example 2. The Laplacian. Steady state bodies.

Death is a simple process of decay in which a system breaks into its ∆«∆-2 particles; and as ½ of reality are processes of death, albeit happening in a much shorter time, processes of death such as the previous simple e⁻ˣ law of exponential decay are overwhelmingly common in Nature.

But we must distinguish ‘death’ that happens in a disordered, single manner from the ∆-2»∆º relationship between a field or ‘limbic system’ of locomotion and a particle or ‘sink’, the informative central vortex.

In those systems we then find a ‘second derivative’, as a system – the limb/field herd – is raised to the scale of the particle/sink or vice versa. And as such a ‘gradient’ appears, in which the sink-source and the field are related by a second derivative, which in time is perceived as a minimal time process, hence an ‘acceleration’, and in space as a process of maximal ‘curvature’ and ‘smoothness’ of the field, which loses all its ‘network form’ in a higher plane (constants of form tending to 0’). Hence the equation of a Laplacian:

The Laplace operator is a second-order differential operator in n-dimensional Euclidean space, defined as the divergence (∇·) of the gradient (∇f). Thus if f is a twice-differentiable real-valued function, then the Laplacian of f is defined by Δf = ∇·∇ƒ = ∇²ƒ

where the latter notation derives from formally writing ∇ = (∂/∂x1, …, ∂/∂xn). As a second-order differential operator, the Laplace operator maps C¡ functions to C¡⁻² functions for ¡ ≥ 2.

The Laplacian Δf(p) of a function f at a point p, is (up to a factor) the rate at which the average value of f over spheres centered at p deviates from f(p) as the radius of the sphere grows. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable.
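The sphere-average property can be verified numerically in 2D, where the mean of f over a small circle of radius r around p deviates from f(p) by (r²/4)·Δf(p) to leading order. A sketch with the assumed test function f(x, y) = x² + y², whose Laplacian is 4 everywhere:

```python
import math

# 2D check of: mean over circle - f(p) ~ (r^2/4) * Laplacian(f)(p),
# using f(x, y) = x^2 + y^2, whose Laplacian is 4 at every point.
def f(x, y):
    return x**2 + y**2

def circle_mean(f, px, py, r, n=1000):
    # average of f over n equally spaced points of the circle of radius r
    total = sum(f(px + r * math.cos(2 * math.pi * k / n),
                  py + r * math.sin(2 * math.pi * k / n))
                for k in range(n))
    return total / n

px, py, r = 0.3, -0.7, 0.01
deviation = circle_mean(f, px, py, r) - f(px, py)
laplacian_estimate = 4 * deviation / r**2
assert abs(laplacian_estimate - 4.0) < 1e-6
```

So the Laplacian really is, up to the factor r²/4, the rate at which the circle average pulls away from the central value.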

The Laplacian occurs in differential equations that describe many physical phenomena, such as electric and gravitational potentials, the diffusion equation for heat and fluid flow, wave propagation, and quantum mechanics. The Laplacian represents the flux density of the gradient flow of a function. For instance, the net rate at which a chemical dissolved in a fluid moves toward or away from some point is proportional to the Laplacian of the chemical concentration at that point; expressed symbolically, the resulting equation is the diffusion equation; the most general expression of entropic disorder of a whole into its parts that spread in a background space. And inversely the operator gives a constant multiple of the mass density when it is applied to a given gravitational potential. So solutions of the equation Δf = 0, now called Laplace’s equation, are the so-called harmonic functions, and represent the possible gravitational fields in free space.

So Laplace’s equation is a second-order partial differential equation, written as:

∇²φ = 0 or Δφ = 0

where Δ = ∇² is the Laplace operator and φ is a scalar function.

In electrostatics, separation of variables in Laplace’s equation – Poisson’s equation for which f(S) = 0 – leads to these ODEs:

d²X/dx² + α1X = 0,

d²Y/dy² + α2Y = 0,

d²Z/dz² − (α1 + α2)Z = 0.

The solutions to these equations are trigonometric or hyperbolic (exponential) functions, determined from the boundary conditions (conducting surfaces). The unsymmetrical treatment of the three coordinates—the plus sign in front of the first two constants and a minus sign in front of the third—is not dictated by the above equations.

There is a freedom in the choice of sign in these equations. However, the boundary conditions will force the constants to adapt to values appropriate to the physical situation at hand, and still leave a Z=ST, more complex dimension: as the 3 dimensions have different form, height-information and length-motion are the orthogonal S and T states, whose product gives us the more complex reproductive width/depth.
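How boundary values pin down the solution of Laplace’s equation can be seen with a standard numerical technique not named in the text, Jacobi relaxation, sketched here in Python: each interior grid point is repeatedly replaced by the average of its 4 neighbours, and with the harmonic function u = x + y imposed on the boundary the interior must relax to u = x + y as well.

```python
# Jacobi relaxation for the 2D Laplace equation on an N x N grid
# with boundary values taken from the harmonic function u = x + y.
N = 11  # grid points per side, spacing 1/(N-1)
u = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        if i in (0, N - 1) or j in (0, N - 1):
            u[i][j] = i / (N - 1) + j / (N - 1)  # boundary: x + y

for _ in range(2000):  # enough sweeps to converge on this small grid
    new = [row[:] for row in u]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                + u[i][j-1] + u[i][j+1])
    u = new

# the centre point (x, y) = (0.5, 0.5) must carry the harmonic value 1.0
assert abs(u[5][5] - 1.0) < 1e-6
```

Changing only the boundary data changes the whole interior solution, which is the discrete version of the statement above.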

Complex Examples

In the first group of examples, let u be an unknown function of x, and c and ω are known constants.

Inhomogeneous first-order linear constant coefficient ordinary differential equation:

du/dx = cu + x²

Homogeneous second-order linear ordinary differential equation:

d²u/dx² − x du/dx + u = 0

Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator:

d²u/dx² + ω²u = 0

Inhomogeneous first-order nonlinear ordinary differential equation:

du/dx = u² + 4

Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L:

L d²u/dx² + g sin u = 0

In the next group of examples, the unknown function u depends on two variables x and t or x and y.

Homogeneous first-order linear partial differential equation:

∂u/∂t + t ∂u/∂x = 0

Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation:

∂²u/∂x² + ∂²u/∂y² = 0

Third-order nonlinear partial differential equation, the Korteweg–de Vries equation:

∂u/∂t = 6u ∂u/∂x − ∂³u/∂x³

Existence of solutions

Solving differential equations is not like solving ¬Algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest.

For first-order initial value problems, it is easy to tell whether a unique solution exists. Given any point (x0, y0) in the xy-plane, define some rectangular region Z, such that Z = [l, m] × [n, p] and (x0, y0) is in the interior of Z. If we are given a differential equation dy/dx = g(x, y) and an initial condition y(x0) = y0, then there is a unique solution to this initial value problem if g(x, y) and ∂g/∂y are both continuous on Z. This unique solution exists on some interval with its center at x0.

However, this only helps us with first-order initial value problems. Suppose we had a linear initial value problem of the nth order, such that

fn(x) dⁿy/dxⁿ + ··· + f1(x) dy/dx + f0(x) y = g(x)

with initial conditions given at x0 for y and its first n − 1 derivatives. For any non-0’ fn(x), if {f0, f1, …, fn} and g are continuous on some interval containing x0, y is unique and exists.
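When the continuity condition on ∂g/∂y fails, uniqueness can break down. A classic textbook counterexample, sketched in Python as an illustration (it is not an example from this text): for g(x, y) = 3y^(2/3), ∂g/∂y blows up at y = 0, and the initial condition y(0) = 0 is satisfied by two different solutions, y = 0 and y = x³.

```python
# Non-uniqueness demo: dy/dx = 3*y^(2/3) with y(0) = 0 is solved by
# BOTH y = 0 and y = x^3, because dg/dy is not continuous at y = 0.
def g(x, y):
    return 3.0 * abs(y) ** (2.0 / 3.0)

def check_solution(y, dy, xs, tol=1e-6):
    # verify dy/dx = g(x, y(x)) pointwise at the sample points xs
    return all(abs(dy(x) - g(x, y(x))) < tol for x in xs)

xs = [0.0, 0.5, 1.0, 2.0]
assert check_solution(lambda x: 0.0, lambda x: 0.0, xs)        # y = 0
assert check_solution(lambda x: x**3, lambda x: 3 * x**2, xs)  # y = x^3
# Same equation, same initial condition y(0) = 0, two distinct solutions.
```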

A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times.

A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations.

A differential ¬Algebraic equation (DAE) is a differential equation comprising differential and ¬Algebraic terms, given in implicit form.

The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve approximation of the solution of a differential equation by the solution of a corresponding difference equation.

The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods.

Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena.

As an example, consider propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.

In physics:

Classical mechanics:

So long as the force acting on a particle is known, Newton’s second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton’s second law to obtain an ordinary differential equation, which is called the equation of motion.
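A minimal sketch of turning Newton's second law into an equation of motion and integrating it, for the simplest assumed case of a constant force (the numbers m, F, x0, v0 are arbitrary illustration values): m·x″ = F is rewritten as the first-order system (x, v) and stepped forward, then compared against the exact motion x(t) = x0 + v0·t + F·t²/(2m).

```python
# Newton's second law m*x'' = F as a first-order system (x, v),
# integrated with small Euler steps and checked against the exact
# equation of motion for a constant force.
m, F = 2.0, 10.0
x, v = 1.0, 3.0          # initial position and velocity
x0, v0 = x, v
dt, steps = 1e-4, 10000  # integrate up to t = 1.0

for _ in range(steps):
    a = F / m            # acceleration from Newton's second law
    x += v * dt
    v += a * dt

t = dt * steps
exact = x0 + v0 * t + F * t**2 / (2 * m)
assert abs(x - exact) < 1e-3
```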

Electrodynamics:

Maxwell’s equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These fields in turn underlie modern electrical and communications technologies. Maxwell’s equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. They are named after the Scottish physicist and mathematician James Clerk Maxwell, who published an early form of those equations between 1861 and 1862.

General relativity:

The Einstein field equations (EFE; also known as “Einstein’s equations”) are a set of ten partial differential equations in Albert Einstein’s general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy. First published by Einstein in 1915 as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).

Quantum mechanics:

In quantum mechanics, the analogue of Newton’s law is Schrödinger’s equation (a partial differential equation) for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple ¬Algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system’s wave function (also called a “state function”).

Other important equations:

Euler–Lagrange equation in classical mechanics

Hamilton’s equations in classical mechanics

Newton’s law of cooling in thermodynamics

The wave equation

The heat equation in thermodynamics

Laplace’s equation, which defines harmonic functions

Poisson’s equation

The geodesic equation

The Navier–Stokes equations in fluid dynamics

The Diffusion equation in stochastic processes

The Convection–diffusion equation in fluid dynamics

The Cauchy–Riemann equations in complex analysis

The Poisson–Boltzmann equation in molecular dynamics

The shallow water equations

Universal differential equation

The Lorenz equations whose solutions exhibit chaotic flow.

What do all those functions, without going into detail, have in common? Essentially, each studies one of the key dimotions of existence for a domain that represents one of the 3 ‘parts’ of an organism, its:

|-sT-field/limbs < ST-body-wave > O-§t (particle-head)

The entropic Poisson equations will be the best class for field analysis, the Lagrange and Hamilton equations the best class for body-wave analysis… and so on.

We shall consider them in our future papers on mathematical physics…

PARTIAL DIFFERENTIAL EQUATIONS

In mathematics, a partial differential equation is an equation relating a function of several variables to its partial derivatives. A partial derivative of a function of several variables expresses how fast the function changes when one of its variables is changed, the others being held constant (compare ordinary differential equation). The partial derivative of a function is again a function, and, if f(x, y) denotes the original function of the variables x and y, the partial derivative with respect to x—i.e., when only x is allowed to vary—is typically written as fx(x, y) or ∂f/∂x. The operation of finding a partial derivative can be applied to a function that is itself a partial derivative of another function to get what is called a second-order partial derivative. For example, taking the partial derivative of fx(x, y) with respect to y produces a new function fxy(x, y), or ∂²f/∂y∂x. The order and degree of partial differential equations are defined the same as for ordinary differential equations.
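These definitions translate directly into computation. A minimal Python sketch (the helper names `partial_x`, `partial_y` and the sample function are illustrative, not from the text) approximates a partial derivative by varying one variable while holding the other constant:

```python
# Central-difference approximation of partial derivatives,
# holding the other variable constant as in the definition above.

def partial_x(f, x, y, h=1e-6):
    # vary only x; y is held fixed
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # vary only y; x is held fixed
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

f = lambda x, y: x**2 * y        # example function
# Analytically: fx = 2xy, fy = x**2
print(partial_x(f, 2.0, 3.0))     # close to 12
print(partial_y(f, 2.0, 3.0))     # close to 4
```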

In general, partial differential equations are difficult to solve, but techniques have been developed for simpler classes of equations called linear, and for classes known loosely as “almost” linear, in which all derivatives of an order higher than one occur to the first power and their coefficients involve only the independent variables.

RECAP. The commonly used distinctions of O/PDEs include 3 dualities, which we put in correspondence with the 3 elements ∆ST according to pentalogic. So IF the equation studies:

T by its Number of Dimotions can be Ordinary (1 Dimotion)/Partial (multiple dimotions): An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term “ordinary” is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.

Partial differential equations (PDEs), in turn, are equations that involve rates of change with respect to continuous variables. The position of a rigid body is specified by six parameters, but the configuration of a fluid is given by the continuous distribution of several parameters, such as the temperature, pressure, and so forth. The dynamics for the rigid body take place in a finite-dimensional configuration space; the dynamics for the fluid occur in an infinite-dimensional configuration space. This distinction usually makes PDEs much harder to solve than ordinary differential equations (ODEs), but here again, there will be simple solutions for linear problems. Classic domains where PDEs are used include acoustics, fluid dynamics, electrodynamics, and heat transfer.

S-Topology, according to its form, can be Linear/Non-linear=cyclical (entangled by product).

It follows from what we have said of ¬Ælgebra that ODEs and lineal PDEs are those in which the rate of change of the being adds to the function but does NOT entangle through multiplication with it.

This is really what makes non-lineal PDEs so difficult to solve: the entanglement, which will happen in other Planes of reality, makes it almost impossible to get all the information needed and multiplies its solutions, a theme of 5D analysis.

Only the simplest differential equations are solvable by explicit formulas, and most have multiple solutions, implying the future is pentalogic – it can go different ways. Knowing which ones are solvable then helps to understand the philosophy of time:

T=S symmetries.

A Cauchy problem in mathematics asks for the solution of a partial differential equation that satisfies certain conditions that are given on a hypersurface in the domain. A Cauchy problem can be an initial value problem (Time symmetry) or a boundary value problem (space-symmetry or Cauchy boundary condition) or it can be neither of them.

The Cauchy problem consists of finding the unknown functions, and solutions will only exist if there is an initial finite time (singularity related, as the will of the system and its dimotions) or finite space (membrane related), hence a formed T.œ structure for the space-time event/being studied.

∆±¡: Equation order. Differential equations are described by their order, determined by the term with the highest derivatives. An equation containing only first derivatives is a first-order differential equation, an equation containing the second derivative is a second-order differential equation, and so on. Each order then represents a scale of reality. And since most systems just extend through 3±¡ planes, differential equations that describe natural phenomena almost always have only first and second order derivatives in them.

Also a scalar division is that between Inhomogeneous/Homogeneous equations, which considers whether scaling by multiplication is conserved.

Since a homogeneous function is one with multiplicative scaling behaviour: if all its arguments are multiplied by a factor, then its value is multiplied by some power of this factor, f(αx) = αᵏ f(x), for some constant k and all real numbers α. The constant k is called the degree of homogeneity.

Lineal, affine functions of the type y = Ax + c are not HOMOGENEOUS, which again brings us the duality of ± in the same plane and x/ in different planes of existence.
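The scaling test is mechanical; a short sketch (the helper name and sample functions are illustrative, not from the text) checks f(αx, αy) = αᵏ f(x, y) for a homogeneous quadratic and shows that an affine function fails it:

```python
# A function is homogeneous of degree k if f(a*x, a*y) == a**k * f(x, y).

def is_homogeneous(f, k, samples=((1.0, 2.0), (3.0, 0.5)), alphas=(2.0, 0.5)):
    return all(abs(f(a*x, a*y) - a**k * f(x, y)) < 1e-9
               for (x, y) in samples for a in alphas)

quadratic = lambda x, y: x**2 + y**2     # homogeneous, degree 2
affine    = lambda x, y: 3*x + 1         # affine: the constant breaks scaling

print(is_homogeneous(quadratic, 2))  # True
print(is_homogeneous(affine, 1))     # False
```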

We can then with those simple concepts understand intuitively many properties of physical equations and parameters by the type of ‘rates of change that take place’.

I.e. products are NOT reproductions but entanglements in a lower plane. So lineal equations will study NON-entangled additions in a single plane, and follow the superposition principle. They are the only solvable ones, as we have all the parameters.

Most ODEs that are encountered in physics are linear, as they deal with the 2nd Dimotion, lineal locomotion, and, therefore, most special functions may be defined as solutions of linear differential equations.

Partial differential equations

A partial differential equation is a differential equation that contains unknown multivariable functions and their partial derivatives, used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs, which in general will correspond to systems that are not particle/head controlled, and hence hierarchical with definitive ‘stillness’ in position and single @ristotelian logic. This basically leaves two types of PDEs: those related to entropic, memoryless states, which will tend to be ‘lineal’ as a superposition of non-entangled elements, and those related to complex fluids that interact among its particles and have a complex, variable internal structure, which tend to be non-lineal and partial and hence irresolvable. For example:

Lineal ODE: The position of a rigid body (ð§) is specified by a few parameters and yields a lineal ODE;

but the configuration of a fluid is given by several parameters, such as the temperature, pressure, and so forth. Classic domains where such PDEs are used include acoustics, fluid dynamics, electrodynamics, and heat transfer: the heat equation, the wave equation, Laplace’s equation, Helmholtz equation, Klein–Gordon equation, and Poisson’s equation.
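The heat equation just mentioned is the classic lineal PDE, and its superposition-friendly structure is what makes it numerically tractable. A minimal sketch (grid size, time step and the initial ‘heat spike’ are arbitrary choices, not from the text) advances the 1-D heat equation ∂u/∂t = k ∂²u/∂x² by explicit finite differences:

```python
# Explicit finite differences for the 1-D heat equation u_t = k * u_xx.
# Stability of this explicit scheme requires r = k*dt/dx**2 <= 1/2.

def heat_step(u, r):
    # one time step; boundaries held at 0 (Dirichlet conditions)
    return [0.0] + [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
                    for i in range(1, len(u) - 1)] + [0.0]

n = 21
u = [0.0] * n
u[n // 2] = 1.0            # initial heat spike in the middle
r = 0.25                   # k*dt/dx**2, within the stability bound

for _ in range(100):
    u = heat_step(u, r)

print(max(u))  # the spike has diffused and decayed, symmetrically
```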

Non-linear differential equations, finally, are those in which products of the unknown function and its derivatives are allowed and the degree is > 1. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos, as they are BOTH co-existing in several Planes and interacting in its parts on a single scale.

So even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations are hard problems, as the Navier–Stokes differential equations of fluids show.

Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.
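This approximation can be checked numerically. The sketch below (step sizes, amplitudes and the unit choice g/l = 1 are arbitrary, not from the text) integrates both the nonlinear pendulum θ″ = −sin θ and its linear approximation θ″ = −θ; for a small amplitude the two trajectories stay close, for a large one they drift apart:

```python
import math

# Compare the nonlinear pendulum theta'' = -sin(theta) with its linear
# approximation theta'' = -theta (units chosen so g/l = 1).

def simulate(theta0, linear, dt=0.001, t_end=10.0):
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        accel = -theta if linear else -math.sin(theta)
        omega += accel * dt      # semi-implicit Euler: velocity first,
        theta += omega * dt      # then position with the new velocity
    return theta

small = abs(simulate(0.1, False) - simulate(0.1, True))   # tiny gap
large = abs(simulate(2.0, False) - simulate(2.0, True))   # visible gap
print(small, large)
```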

So the conclusion is obvious: Nature with its infinite monads and Planes IS NOT ALWAYS reflected in a mathematical mirror, which cannot be the origin of Nature (false creationist theories).

Generally speaking the techniques of differentiation distinguish between ODE ordinary equations with a single ST variable, which probe in depth on either space or time through consecutive derivatives, but have a limited use as reality only allows 3 multiple derivatives into the single time or space dimension (beyond 3 the results are essentially not related to the direct experience of how space-time systems evolve through Planes). Multiple derivatives though are the tool to approximate two of the 3 great fields of observance of the scalar Universe, through mathematical mirrors, which we can write as a generator equation:

∆-i: Fractal Mathematics (discontinuous analysis of finitesimals) < Analysis – Integrals and differential equations (∆º±1: continuous=organic space) < ∆+i: Polynomials (diminishing information on wholes).

It is important in that sense to understand the different focus of the 3 approaches of mathematical mirrors to observe reality. We shall study them in the usual order in which they were born: first ODEs, then PDEs and finally fractals.

Now the mathematical elements of analysis are all well known and standard. Leibniz started them with the symbol ∫, which means summation.

Ordinary differential equations

An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term “ordinary” is used in contrast with the term partial differential equation which may be with respect to more than one independent variable.

They are basically analysis of single ‘steps’/symmetries of S≈T systems, but ODEs can go ‘deeper’ into the spatial or temporal structure of the system by establishing multiple derivatives on the original parameter; thus they are perfect systems to ‘study’ the ternary dimensions of ‘integral’ space (1D distance, 2D area and 3D volume) and ‘derivative’ time (steady motion, acceleration and deceleration).

And as the symmetries between those 3D of space and time are not clearly understood, ∆st can bring some insights in its analysis.

Notice finally that the best use of mathematical equations and its operations are the simplest actions of motion as reproductions of information in its 3 states/varieties (potentials, waves and particles); but for complex social and reproductive processes very few internal characteristics can be extracted with mathematical tools.

And yet even in those simple cases, exact solutions are not always possible, regardless of the dogmatic myths of mathematical accuracy. This happens as usual because humans measure ‘lineal distances’ and reality is curved, so we approximate lineal quanta/finitesimals and then add them to find the whole curved state, making use of one of the 3 ‘primary Galilean dualities’ between continuity and discontinuity, linearity and cyclicality, large and small.

So what are the key elements for finding ‘solutions’, that is, descriptions of the full T.œ, its state and simpler actions of 1D-motion/reproduction in space, and topological ≤≥ change from lineal to cyclical form? Basically to have enough data about the ‘boundary conditions’ of the vital energy open ball (that is, a parameter for the singularity if it exists, and for the membrane that encloses the system). As both are 1D or 2D, hence lineal forms of the type A+Bx, it is then possible to measure and find determined solutions.

Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
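Euler’s method is the simplest of those numerical methods: follow the tangent line for one small step, then repeat. A sketch (the test equation y′ = y and the step count are illustrative choices), whose exact solution is eᵗ:

```python
import math

# Euler's method for y' = y with y(0) = 1; the exact solution is e**t.

def euler(f, y0, t_end, n):
    t, y = 0.0, y0
    h = t_end / n
    for _ in range(n):
        y += h * f(t, y)      # follow the tangent for one small step
        t += h
    return y

approx = euler(lambda t, y: y, 1.0, 1.0, 100_000)
print(approx, math.e)          # the estimate converges to e as n grows
```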

ODEs are thus symmetric to simple Space-time steps, which correspond themselves to the simplex 3 actions of 1D, 2D and some possible 4D simple entropy deaths and some simple 3D reproductive steps (3D however when combining space and time parameters, and most combined steps of several dimensions and 5D worlds, will require PDEs).

Let us illustrate by a simple example. Consider a material particle of mass m moving along an axis Ox, and let x denote its coordinate at the instant of time t. The coordinate x will vary with the time, and knowledge of the entire motion of the particle is equivalent to knowledge of the functional dependence of x on the time t. Let us assume that the motion is caused by some force F, the value of which depends on the position of the particle (as defined by the coordinate x), on the velocity of motion v = dx/dt and on the time t, i.e., F = F(x, dx/dt, t).

According to the laws of mechanics, the action of the force F on the particle necessarily produces an acceleration ω = d²x/dt² such that the product of ω and the mass m of the particle is equal to the force, and so at every instant of the motion we have the equation:

(1)   m d²x/dt² = F(x, dx/dt, t)

Where we find the first key ‘second derivative’ for the dimension of time acceleration, which requires a first insight on the Nature of physical systems and its dimensions in space vs. time.

In space, the dimensions appear to us in an easy hierarchical system of growth: 1D lines (2D if we consider them waves of Non-E fractal points with a 0’-1 unit circle dimension for each point), 2D areas and 3D volumes; but in time the 3 arrows depart from 1D steady state motion and can be considered as opposite directions, when volumes of space grow through the scattering arrow of entropy, diminishing its speed, vs. the acceleration of speed that diminishes space as the system collapses into a singularity:

So the 3D of classic space, ‘volume’ actually belongs to the entropic arrow of decelerating time that creates space-volume, vs. the opposite arrow of imploding time vortices that diminish space volume and increases speed, Vo x Ro = k.

So what seems in space a natural growth of volume in space, in time has a different order:

Entropic ≈decelerating volume < steady state ≈ distance-motion > Informative, cyclical area ≈ accelerated motion.

This different ‘order’ of dimensions when perceived in simultaneous space and cyclical time is the main dislocation in the way the mind perceives both (which is sorely painful when we consider the order of a world cycle, always starting in the ∆-1 scale of maximal information to decline as it grows and reproduces into less perfect, more entropic volumes of iterative forms that finally decline and die in the arrow of entropy; which the mind that has a spatial-volume inclined nature of ever-growth, does not understand).

This is the differential equation that must be satisfied by the function x(t) describing the behavior of the moving particle. It is simply a representation of laws of mechanics. Its significance lies in the fact that it enables us to reduce the mechanical problem of determining the motion of a particle to the mathematical problem of the solution of a differential equation.
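Such a reduction is also the basis of numerical mechanics: once m x″ = F(x, dx/dt, t) is written down, the motion can be stepped forward in time. A sketch (the spring force F = −kx and all parameter values are illustrative choices, not from the text) integrates the equation and compares the result with the exact solution x(t) = cos t for m = k = 1:

```python
import math

# Integrate m * x'' = F(x, v, t) step by step for a spring force F = -k*x.
# With m = k = 1 and x(0) = 1, v(0) = 0 the exact motion is x(t) = cos(t).

def integrate(force, m, x0, v0, dt, steps):
    x, v, t = x0, v0, 0.0
    for _ in range(steps):
        a = force(x, v, t) / m
        v += a * dt               # semi-implicit Euler: velocity first,
        x += v * dt               # then position with the new velocity
        t += dt
    return x

x_final = integrate(lambda x, v, t: -x, 1.0, 1.0, 0.0, 1e-4, 10_000)
print(x_final, math.cos(1.0))     # close agreement at t = 1
```

The force signature F(x, v, t) mirrors the general F(x, dx/dt, t) of equation (1), so damping or driving terms could be added without changing the integrator.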

Later the reader will find other examples showing how the study of various physical processes can be reduced to the investigation of differential equations.

The theory of differential equations began to develop at the end of the 17th century, almost simultaneously with the appearance of the differential and integral calculus. At the present time, differential equations have become a powerful tool in the investigation of natural phenomena. In mechanics, astronomy, physics, and technology they have been the means of immense progress. From his study of the differential equations of the motion of heavenly bodies, Newton deduced the laws of planetary motion discovered empirically by Kepler. In 1846 Leverrier predicted the existence of the planet Neptune and determined its position in the sky on the basis of a numerical analysis of the same equations.

To describe in general terms the problems in the theory of differential equations, we first remark that every differential equation has in general not one but infinitely many solutions; that is, there exists an infinite set of functions that satisfy it. For example, the equation of motion for a particle must be satisfied by any motion induced by the given force F(x, dx/dt, t), independently of the starting point or the initial velocity. To each separate motion of the particle there will correspond a particular dependence of x on time t. Since under a given force F there may be infinitely many motions, the differential equation (1) will have an infinite set of solutions.

Every differential equation defines, in general, a whole class of functions that satisfy it. The basic problem of the theory is to investigate the functions that satisfy the differential equation. The theory of these equations must enable us to form a sufficiently broad notion of the properties of all functions satisfying the equation, a requirement which is particularly important in applying these equations to the natural sciences. Moreover, our theory must guarantee the means of finding numerical values of the functions, if these are needed in the course of a computation. We will speak later about how these numerical values may be found.

If the unknown function depends on a single argument, the differential equation is called an ordinary differential equation. If the unknown function depends on several arguments and the equation contains derivatives with respect to some or all of these arguments, the differential equation is called a partial differential equation.

The theory of partial differential equations has many peculiar features which make them essentially different from ordinary differential equations but can be reduced always to time-like vs. space-like dualities.

The following are examples of important partial differential equations that commonly arise in problems of mathematical physics.

Benjamin-Bona-Mahony equation

Biharmonic equation

Boussinesq equation

Cauchy-Riemann equations ∂u/∂x = ∂v/∂y       ∂v/∂x = −∂u/∂y

Chaplygin’s equation

Euler-Darboux equation

Heat conduction equation

Helmholtz differential equation

Klein-Gordon equation

Korteweg-de Vries-Burgers equation

Korteweg-de Vries equation

Krichever-Novikov equation

Laplace’s equation

Lin-Tsien equation

Sine-Gordon equation

Spherical harmonic differential equation

Tricomi equation

Wave equation

FUNCTIONS OF SEVERAL VARIABLES. GEOMETRICAL VIEW.

In practice it is often necessary to deal also with functions depending on two, three, or in general many variables. For example, the area of a rectangle is a function S=xy of its base x and its height y. The volume of a rectangular parallelepiped is a function V=xyz of its three dimensions. The distance between two points A and B is a function:

AB = √((x₂−x₁)² + (y₂−y₁)² + (z₂−z₁)²)

of the six coordinates of these points. The well-known formula pv = nRT expresses the dependence of the volume v of a definite amount of gas on the pressure p and absolute temperature T.

Functions of several variables, like functions of one variable, are in many cases defined only on a certain region of values of the variables themselves. For example, the function

U = ln (1-x²-y²-z²)  is defined only for values of x, y and z that satisfy the condition x²+y²+z² < 1

(For other x, y, z its values are not real numbers.) The set of points of space whose coordinates satisfy this inequality obviously fills up a sphere of unit radius with its center at the origin of coordinates. The points on the boundary are not included in this sphere; the surface of the sphere has been so to speak “peeled off.” Such a sphere is said to be open. The function U is defined only for such sets of three numbers (x, y, z) as are coordinates of points in the open sphere G. It is customary to state this fact concisely by saying that the function U is defined on the sphere G.

Let us give another example. The temperature of a nonuniformly heated body V is a function of the coordinates x, y, z of the points of the body. This function is not defined for all sets of three numbers x, y, z but only for such sets as are coordinates of points of the body V.
Finally, as a third example, let us consider a function u built from ϕ, a function of one variable defined on the interval [0, 1]. Obviously the function u is defined only for sets of three numbers (x, y, z) which are coordinates of points in the cube: 0≤x≤1, 0≤y≤1, 0≤z≤1.
We now give a formal definition of a function of three variables. Suppose that we are given a set E of triples of numbers (x, y, z) (points of space). If to each of these triples of numbers (points) of E there corresponds a definite number u in accordance with some law, then u is said to be a function of x, y, z (of the point), defined on the set of triples of numbers (on the points) E, a fact which is written thus: u= F(x,y,z)

In place of F we may also write other letters: f, ϕ, ψ.
In practice the set E will usually be a set of points, filling out some geometrical body or surface: sphere, cube, annulus, and so forth, and then we simply say that the function is defined on this body or surface. Functions of two, four, and so forth, variables are defined analogously.

Implicit definition of a function.

Let us note that functions of two variables are a useful means for the definition of functions of one variable. Given a function F(s, t) of two variables, let us set up the equation: F(s,t)=0

In general, this equation will define a certain set of points (s, t) of the surface on which our function is equal to 0’. Such sets of points usually represent curves that may be considered as the graphs of one or several one-valued functions t = ϕ(s) or s = ψ(t) of one variable. In such a case these one-valued functions are said to be defined implicitly by the equation F(s, t) = 0. For example, the equation:

s² + t² − r² = 0   gives an implicit definition of two functions of one variable:

s = +√(r² − t²) and s = −√(r² − t²)

But it is necessary to keep in mind that an equation of this form may fail to define any function at all. For example, the equation: t²+s²+1=0  obviously does not define any real function, since no pair of real numbers satisfies it.
Geometric representation. Functions of two variables may always be visualized as surfaces by means of a system of space coordinates. Thus the function:   z=ƒ(s,t)
is represented in a three-dimensional rectangular coordinate system by a surface, which is the geometric locus of points M whose coordinates s, t, z satisfy the equation z = ƒ(s, t).

There is another, extremely useful method of representing this function, which has found wide application in practice. Let us choose a sequence of numbers z1, z2, ···, and then draw on one and the same plane Ost the curves:   ƒ(s,t) = z1; ƒ(s,t) = z2, ···

which are the so-called level lines of the function f(s, t). From a set of level lines, if they correspond to values of z that are sufficiently close to one another, it is possible to form a very good image of the variation of the function f(s,t), just as from the level lines of a topographical map one may judge the variation in altitude of the locality.

Figure shows a map of the level lines of the function z = s² + t², the diagram at the right indicating how the function is built up from its level lines. In Chapter III, figure 50, a similar map is drawn for the level lines of the function z = st.
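For z = s² + t² the level lines are concentric circles of radius √z, which is easy to verify computationally; the following sketch (helper names are illustrative, not from the text) samples points on the circle of radius √c and checks that they all lie on the level line z = c:

```python
import math

# Level lines of f(s, t) = s**2 + t**2: the level z = c is the circle
# of radius sqrt(c) centred at the origin.

def f(s, t):
    return s**2 + t**2

def on_level_line(c, n=12):
    r = math.sqrt(c)
    # sample n points evenly spaced around the circle of radius r
    points = [(r * math.cos(2 * math.pi * k / n),
               r * math.sin(2 * math.pi * k / n)) for k in range(n)]
    return all(abs(f(s, t) - c) < 1e-12 for (s, t) in points)

print(on_level_line(1.0), on_level_line(4.0))   # both True
```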

Partial derivatives and differential.

Let us make some remarks about the differentiation of functions of several variables. As an example we take an arbitrary function of two variables: z=ƒ(x,y)

If we fix the value of y, that is, if we consider it as not varying, then our function of two variables becomes a function of the one variable x. The derivative of this function with respect to x, if it exists, is called the partial derivative with respect to x and is denoted thus: ∂z/∂x or ∂ƒ/∂x or ƒ’x(x,y)

The last of these three notations indicates clearly that the partial derivative with respect to x is in general a function of x and y. The partial derivative with respect to y is defined similarly.

The general case for space change through any volume.

When we generalise the case to any combination of space or time dimensions, the same method can be used to obtain the ginormous quantity of possible changes in multiple Dimensional analysis.

Thus, in order to determine the function that represents a given physical process, we try first of all to set up an equation that connects this function in some definite way with its derivatives of change of various orders and dimensions.

The method of obtaining such an equation, which is called a differential equation, often amounts to replacing increments of the desired functions by their corresponding differentials.
As an example let us solve a classic problem of change in 3 pure dimensions of Euclidean space, which by convention we shall call Sxyz.

In a rectangular system of coordinates Oxyz, we consider the surface obtained by rotation of the parabola whose equation (in the Oyz plane) is z = y². This surface is called a paraboloid of revolution. Let v denote the volume of the body bounded by the paraboloid and the plane parallel to the Oxy plane at a distance z from it. It is evident that v is a function of z (z > 0).

To determine the function v, we attempt to find its differential dv. The increment Δv of the function v at the point z is equal to the volume bounded by the paraboloid and by two planes parallel to the Oxy plane at distances z and z + Δz from it.
It is easy to see that the magnitude of Δv is greater than the volume of the circular cylinder of radius √z and height Δz but less than that of the circular cylinder with radius √(z+∆z) and height Δz. Thus:

πz ∆z < ∆v ≤ π (z +∆z) ∆z.        And so:

∆v = π (z + θ∆z) ∆z, where θ is some number depending on Δz and satisfying the inequality 0 < θ < 1.
So we have succeeded in representing the increment Δv in the form of a sum, the first summand of which is proportional to Δz, while the second is an infinitesimal of higher order than Δz (as Δz → 0). It follows that the first summand is the differential of the function v:

dv=πz ∆z    or dv=πz dz

since Δz = dz for the independent variable z.  The equation so obtained relates the differentials dv and dz (of the variables v and z) to each other and thus is called a differential equation.  If we take into account that:

dv/dz =v’     where v′ is the derivative of v with respect to the variable z, our differential equation may also be written in the form: v’=π z

To solve this very simple differential equation we must find a function of z whose derivative is equal to πz.

A solution of our equation is given by v = πz²/2 + C, where for C we may choose an arbitrary number. In our case the volume of the body is obviously 0’ for z = 0 (see figure 22), so that C = 0. Thus our function is given by v = πz²/2.
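The result v = πz²/2 can be checked by carrying out the summation of the thin slices dv = πz dz numerically; a sketch (the height Z = 2 and the subdivision count are arbitrary choices):

```python
import math

# Volume under the paraboloid z = y**2 up to height Z, by summing the
# thin cylindrical slices pi*z*dz used in the derivation above.

def paraboloid_volume(Z, n=100_000):
    dz = Z / n
    # midpoint rule over the slices
    return sum(math.pi * ((i + 0.5) * dz) * dz for i in range(n))

Z = 2.0
print(paraboloid_volume(Z), math.pi * Z**2 / 2)   # Riemann sum vs pi*Z**2/2
```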

Geometrically the function f(x, y) represents a surface in a rectangular three-dimensional system of coordinates. The corresponding function of x for fixed y represents a plane curve (figure) obtained from the intersection of the surface with a plane parallel to the plane Oxz and at a distance y from it. The partial derivative ∂z/∂x is obviously equal to the trigonometric tangent of the angle between the tangent to the curve at the point (x, y) and the positive direction of the x-axis.

More generally, if we consider a function z = f(x1, x2, . . ., xn) of the n variables x1, x2, . . ., xn, the partial derivative ∂z/∂xi is defined as the derivative of this function with respect to xi, calculated for fixed values of the other variables.

We may say that the partial derivative of a function with respect to the variable xi is the rate of change of this function in the direction of the change in xi. It would also be possible to define a derivative in an arbitrary assigned direction, not necessarily coinciding with any of the coordinate axes, but we will not take the time to do this.

It is sometimes necessary to form the partial derivatives of these partial derivatives, that is, the so-called partial derivatives of second order. For functions of two variables there are four of them: ∂²z/∂x², ∂²z/∂x∂y, ∂²z/∂y∂x, ∂²z/∂y². However, if these derivatives are continuous, then it is not hard to prove that the second and third of these four (the so-called mixed derivatives) coincide: ∂²z/∂x∂y = ∂²z/∂y∂x.
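The coincidence of the mixed derivatives can be verified numerically with a central-difference stencil; a sketch (the sample function f = x³y + y², whose mixed derivative is 3x², is an arbitrary smooth example, not from the text):

```python
# Numerical check that the mixed partials f_xy and f_yx coincide for a
# smooth function, here f(x, y) = x**3 * y + y**2.

def f(x, y):
    return x**3 * y + y**2

def mixed_xy(f, x, y, h=1e-4):
    # central differences in both variables at once: approximates f_xy
    return (f(x+h, y+h) - f(x-h, y+h) - f(x+h, y-h) + f(x-h, y-h)) / (4*h*h)

# Analytically f_xy = f_yx = 3*x**2, so at x = 2 the value is 12.
print(mixed_xy(f, 2.0, 1.0))
```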

For example, in the case of the first function considered, the two mixed derivatives are seen to coincide.
For functions of several variables, just as was done for functions of one variable, we may introduce the concept of a differential.
For definiteness let us consider a function:

z = ƒ(x,y) of two variables. If it has continuous partial derivatives, we can prove that its increment ∆z, corresponding to the increments Δx and Δy of its arguments, may be put in the form:

∆z = (∂ƒ/∂x) ∆x + (∂ƒ/∂y) ∆y + α √(∆x² + ∆y²)

where ∂f/∂x and ∂f/∂y are the partial derivatives of the function at the point (x, y) and the magnitude α depends on Δx and Δy in such a way that α → 0 as Δx → 0 and Δy → 0.
The sum of the first two components:

dz = (∂ƒ/∂x) ∆x + (∂ƒ/∂y) ∆y

is linearly dependent on Δx and Δy and is called the differential of the function. The third summand, because of the presence of the factor α, tending to 0’ with Δx and Δy, is an infinitesimal of higher order than the magnitude √(∆x² + ∆y²) describing the change in x and y.

Let us give an application of the concept of differential. The period of oscillation of a pendulum is calculated from the formula:

T = 2π √(l/g)

where l is its length and g is the acceleration of gravity. Let us suppose that l and g are known with errors respectively equal to Δl and Δg. Then the error in the calculation of T will be equal to the increment ΔT corresponding to the increments of the arguments Δl and Δg. Replacing ΔT approximately by dT, we will have:

dT = (∂T/∂l) ∆l + (∂T/∂g) ∆g = (π/√(lg)) ∆l − π √(l/g³) ∆g

The signs of Δl and Δg are unknown, but we may obviously estimate ΔT by the inequality:

|∆T|/T ≤ ½ (|∆l|/l + |∆g|/g)

Thus we may consider in practice that the relative error for T is half the sum of the relative errors for l and g.
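This error rule is easy to test numerically; a sketch (the length, gravity and assumed measurement errors are illustrative values, not from the text) compares the worst-case shift of T with the differential estimate:

```python
import math

# T = 2*pi*sqrt(l/g); the differential gives the relative-error estimate
# |dT|/T <= (1/2)*(|dl|/l + |dg|/g).

def period(l, g):
    return 2 * math.pi * math.sqrt(l / g)

l, g = 1.0, 9.8
dl, dg = 0.001, 0.01           # assumed measurement errors

# worst case: l too large and g too small push T in the same direction
exact_shift = abs(period(l + dl, g - dg) - period(l, g))
estimate = period(l, g) * 0.5 * (dl / l + dg / g)

print(exact_shift, estimate)    # the estimate matches to first order
```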
For symmetry of notation, the increments of the independent variables Δx and Δy are usually denoted by the symbols dx and dy and are also called differentials. With this notation the differential of the function u = f(x, y, z) may be written thus:

du = (∂ƒ/∂x) dx + (∂ƒ/∂y) dy + (∂ƒ/∂z) dz

Partial derivatives play a large role whenever we have to do with functions of several variables, as happens in many of the applications of analysis to technology and physics.

The ternary parts of a T.œ: its calculus.

We have already studied the process of integration for functions of one variable defined on a one-dimensional region, namely an interval. But the analogous process may be extended to functions of two, three, or more variables, defined on corresponding regions.

For example, let us consider a surface z = ƒ(x, y) defined in a rectangular system of coordinates, and on the plane Oxy let there be given a region G bounded by a closed curve Γ. It is required to find the volume bounded by the surface, by the plane Oxy and by the cylindrical surface passing through the curve Γ with generators parallel to the Oz axis (figure 33). To solve this problem we divide the plane region G into subregions by a network of straight lines parallel to the axes Ox and Oy and denote by G1, G2… Gn

Those subregions which consist of complete rectangles. If the net is sufficiently fine, then practically the whole of the region G will be covered by the enumerated rectangles. In each of them we choose at will a point:

and, assuming for simplicity that Gi denotes not only the rectangle but also its area, we set up the sum: Vn = f(ξ1, η1)G1 + f(ξ2, η2)G2 + ··· + f(ξn, ηn)Gn. (47)

It is clear that, if the surface is continuous and the net is sufficiently fine, this sum may be brought as near as we like to the desired volume V. We will obtain the desired volume exactly if we take the limit of the sum (47) for finer and finer subdivisions (that is, for subdivisions such that the greatest of the diagonals of our rectangles approaches 0’): V = lim Σ f(ξi, ηi)Gi. (48)
From the point of view of analysis it is therefore necessary, in order to determine the volume V, to carry out a certain mathematical operation on the function f(x, y) and its domain of definition G, an operation indicated by the left side of equality (48). This operation is called the integration of the function f over the region G, and its result is the integral of f over G. It is customary to denote this result in the following way: ∫∫G f(x, y) dx dy. (49)
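The limiting process just described can be imitated numerically. A minimal sketch, assuming the sample surface z = x·y over the unit square, whose volume integral is exactly 1/4:

```python
def double_integral(f, a, b, c, d, n):
    """Riemann sum over an n-by-n net of rectangles, each sampled at its
    midpoint (the arbitrarily 'chosen point' of the text)."""
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            xi = a + (i + 0.5) * hx
            yj = c + (j + 0.5) * hy
            total += f(xi, yj) * hx * hy   # value of f times the area G_i
    return total

# volume under the assumed surface z = x*y over the unit square; exact value 1/4
V = double_integral(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0, 100)
```

Refining the net (increasing n) drives the sum toward the exact volume, which is the content of formula (48).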

Similarly, we may define the integral of a function of three variables over a three-dimensional region G, representing a certain body in space. Again we divide the region G into parts, this time by planes parallel to the coordinate planes. Among these parts we choose the ones which represent complete parallelepipeds and enumerate them: G1, G2, …, Gn.

In each of these we choose an arbitrary point (ξi, ηi, ζi)

and set up the sum: f(ξ1, η1, ζ1)G1 + f(ξ2, η2, ζ2)G2 + ··· + f(ξn, ηn, ζn)Gn, (50)

where Gi denotes the volume of the parallelepiped Gi. Finally we define the integral of f(x, y, z) over the region G as the limit: ∫∫∫G f(x, y, z) dx dy dz, (51)

to which the sum (50) tends when the greatest diagonal d(Gi) approaches 0’.

Let us consider an example. We imagine the region G is filled with a nonhomogeneous mass whose density at each point in G is given by a known function ρ(x, y, z). The density ρ(x, y, z) of the mass at the point (x, y, z) is defined as the limit approached by the ratio of the mass of an arbitrary small region containing the point (x, y, z) to the volume of this region as its diameter approaches 0’. To determine the mass of the body G it is natural to proceed as follows. We divide the region G into parts by planes parallel to the coordinate planes and enumerate the complete parallelepipeds formed in this way: G1, G2, …, Gn

Assuming that the dividing planes are sufficiently close to one another, we will make only a small error if we neglect the irregular regions of the body and define the mass of each of the regular regions Gi (the complete parallelepipeds) as the product: ρ(ξi, ηi, ζi)Gi,

where (ξi, ηi, ζi) is an arbitrary point of Gi. As a result the approximate value of the mass M will be expressed by the sum: M ≈ Σ ρ(ξi, ηi, ζi)Gi,

and its exact value will clearly be the limit of this sum as the greatest diagonal d(Gi) approaches 0’; that is: M = ∫∫∫G ρ(x, y, z) dx dy dz.

The integrals (49) and (51) are called double and triple integrals respectively.
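The same construction works for the triple integral. A minimal sketch of the mass computation, assuming an illustrative density ρ = x + y + z on the unit cube, whose exact mass is 3/2:

```python
def mass_by_summation(rho, n):
    """Sum ρ(ξi, ηi, ζi)·Gi over an n×n×n net of small cubes of the unit cube,
    each sampled at its midpoint."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
                total += rho(x, y, z) * h ** 3   # density times the small volume
    return total

# assumed density ρ = x + y + z; exact mass over the unit cube is 3/2
M = mass_by_summation(lambda x, y, z: x + y + z, 20)
```

As the net of dividing planes becomes finer, the sum tends to the triple integral of the density, exactly as in the limit defining (51).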

Let us examine a problem which leads to a double integral. We imagine that water is flowing over a plane surface. Also, on this surface the underground water is seeping through (or soaking back into the ground) with an intensity f(x, y) which is different at different points. We consider a region G bounded by a closed contour (figure 34) and assume that at every point of G we know the intensity f(x, y), namely the amount of underground water seeping through per minute per cm2 of surface; we will have f(x, y) > 0 where the water is seeping through and f(x, y) < 0 where it is soaking into the ground. How much water will accumulate on the surface G per minute?

If we divide G into small parts, consider the rate of seepage as approximately constant in each part, and then pass to the limit for finer and finer subdivisions, we will obtain an expression for the whole amount of accumulated water in the form of an integral.

Double (two-fold) integrals were first introduced by Euler. Multiple integrals form an instrument which is used every day in calculations and investigations of the most varied kind.

It would also be possible to show, though we will not do it here, that calculation of multiple integrals may be reduced, as a rule, to iterated calculation of ordinary one-dimensional integrals.
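The reduction to iterated one-dimensional integrals mentioned above can be sketched directly, with an assumed integrand x² + y on the unit square (exact value 5/6):

```python
def integrate_1d(g, a, b, n=200):
    """One-dimensional midpoint-rule integral."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x, y: x * x + y

# integrate over x first (the inner integral), then over y (the outer one)
inner = lambda y: integrate_1d(lambda x: f(x, y), 0.0, 1.0)
iterated = integrate_1d(inner, 0.0, 1.0)
# exact value: ∫∫ (x² + y) dx dy over the unit square = 1/3 + 1/2 = 5/6
```

Two nested one-dimensional integrations reproduce the double integral, which is the practical content of the remark in the text.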

Contour and surface integrals.

Finally, we must mention that still other generalizations of the integral are possible. For example, the problem of defining the work done by a variable force applied to a material point, as the latter moves along a given curve, naturally leads to a so-called curvilinear integral, and the problem of finding the general charge on a surface on which electricity is continuously distributed with a given surface density leads to another new concept, an integral over a curved surface:

For example, suppose that a liquid is flowing through space and that the velocity of a particle of the liquid at the point (x, y) is given by a function P(x, y), not depending on z. If we wish to determine the amount of liquid flowing per minute through the contour Γ, we may reason in the following way. Let us divide Γ up into segments Δsi. The amount of water flowing through one segment Δsi is approximately equal to the column of liquid shaded in figure 35; this column may be considered as the amount of liquid forcing its way per minute through that segment of the contour. But the area of the shaded parallelogram is equal to:

Pi · Δsi · cos αi, where αi is the angle between the direction x̄ of the x-axis and the outward normal of the surface bounded by the contour Γ; this normal is the perpendicular n̄ to the tangent, which we may consider as defining the direction of the segment Δsi. By summing up the areas of such parallelograms and passing to the limit for finer and finer subdivisions of the contour Γ, we determine the amount of water flowing per minute through the contour Γ; it is denoted thus: ∫Γ P cos α ds

and is called a curvilinear integral. If the flow is not everywhere parallel, then its velocity at each point (x, y) will have a component P(x, y) along the x-axis and a component Q(x, y) along the y-axis. In this case we can show by an analogous argument that the quantity of water flowing through the contour will be equal to: ∫Γ (P cos α + Q cos β) ds, where β is the corresponding angle between the outward normal and the y-axis.

When we speak of an integral over a curved surface G for a function f(M) of its points M(x, y, z), we mean the limit of sums of the form: Σ f(Mi) Δσi

for finer and finer subdivisions of the region G into segments whose areas are equal to Δσi.

General methods exist for transforming multiple, curvilinear, and surface integrals into other forms and for calculating their values, either exactly or approximately.

Several important and very general formulas relating an integral over a volume to an integral over its surface (and also an integral over a surface, curved or plane, to an integral around its boundary) have a very wide application. They are yet another striking proof of the constant trans-form-ations of S≈T DIMENSIONS and of the interaction between the parts of the system, in this case between the membrane that encircles the vital space and that vital space itself, whose parameters ARE ALWAYS CLOSELY RELATED; we can consider the membrane just the last ‘cover’ of maximal size of that inner 3D vital energy (unlike the quite distinct singularity, which ‘moves’ across ∆±i Planes and tends to be quite different in form, parameters and substance).

Let us give an example: imagine, as we did before, that over a plane surface there is a horizontal flow of water that is also soaking into the ground or seeping out again from it. We mark off a region G, bounded by a curve Γ, and assume that for each point of the region we know the components P(x, y) and Q(x, y) of the velocity of the water in the direction of the x-axis and of the y-axis respectively.

Let us calculate the rate at which the water is seeping from the ground at a point with coordinates (x, y). For this purpose we consider a small rectangle with sides Δx and Δy situated at the point (x, y).

As a result of the velocity P(x, y) through the left vertical edge of this rectangle, there will flow approximately P(x, y)Δy units of water per minute into the rectangle, and through the right side in the same time will flow out approximately P(x + Δx, y)Δy units. In general, the net amount of water leaving a square unit of surface as a result of the flow through its left and right vertical sides will be approximately: [P(x + Δx, y) − P(x, y)] / Δx.

If we let Δx approach 0’, we obtain in the limit: ∂P/∂x.

Correspondingly, the net rate of flow of water per unit area in the direction of the y-axis will be given by: ∂Q/∂y.

This means that the intensity of the seepage of ground water at the point with coordinates (x, y) will be equal to: ∂P/∂x + ∂Q/∂y
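The finite-difference argument above can be checked directly. A minimal sketch with assumed velocity components P = x²y and Q = y³, whose seepage intensity is 2xy + 3y²:

```python
def seepage_intensity(P, Q, x, y, h=1e-6):
    """Central-difference version of the seepage intensity ∂P/∂x + ∂Q/∂y."""
    dP_dx = (P(x + h, y) - P(x - h, y)) / (2 * h)
    dQ_dy = (Q(x, y + h) - Q(x, y - h)) / (2 * h)
    return dP_dx + dQ_dy

# assumed components: P = x²y and Q = y³, so the intensity is 2xy + 3y²
div = seepage_intensity(lambda x, y: x * x * y, lambda x, y: y ** 3, 1.0, 2.0)
# at (1, 2) the analytic value is 2·1·2 + 3·4 = 16
```

Shrinking h reproduces the limiting process by which the text obtains ∂P/∂x and ∂Q/∂y.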

But in general, as we saw earlier, the quantity of water coming out from the ground will be given by the double integral of the function expressing the intensity of the seepage of ground water at each point, namely: ∫∫G (∂P/∂x + ∂Q/∂y) dx dy. (52)

But, since the water is incompressible, this entire quantity must flow out during the same time through the boundaries of the contour Γ. The quantity of water flowing out through the contour Γ is expressed, as we saw earlier, by the curvilinear integral over Γ: ∫Γ (P cos α + Q cos β) ds. (53)

The equality of the magnitudes (52) and (53) gives, in its simplest two-dimensional case: ∫∫G (∂P/∂x + ∂Q/∂y) dx dy = ∫Γ (P cos α + Q cos β) ds.
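The equality of the double integral over the region and the flux through its boundary can be verified numerically. A minimal sketch for the assumed field (P, Q) = (x, y) on the unit disk, where ∂P/∂x + ∂Q/∂y = 2 everywhere, so both sides equal 2π:

```python
import math

# boundary flux of the assumed field (P, Q) = (x, y) out of the unit circle,
# as a sum over small boundary segments Δs_i
n = 10_000
flux = 0.0
for i in range(n):
    t = 2 * math.pi * (i + 0.5) / n
    ds = 2 * math.pi / n                  # arc length of the segment
    nx, ny = math.cos(t), math.sin(t)     # outward normal (cos α, cos β)
    P, Q = math.cos(t), math.sin(t)       # field evaluated on the circle
    flux += (P * nx + Q * ny) * ds

# double integral of ∂P/∂x + ∂Q/∂y = 2 over the unit disk: 2·(area) = 2π
divergence_integral = 2 * math.pi
```

The segment sum over the boundary and the double integral over the interior agree, as the incompressibility argument in the text requires.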

A key formula that mirrors a widespread phenomenon in the external world, which in our example we interpreted in a readily visualized way as preservation of the volume of an incompressible fluid.

This can be generalised to express the connection between an integral over a multidimensional volume and an integral over its surface. In particular, for a three-dimensional body G, bounded by the surface Γ: ∫∫∫G (∂P/∂x + ∂Q/∂y + ∂R/∂z) dx dy dz = ∫∫Γ (P cos α + Q cos β + R cos γ) dσ,

where dσ is the element of surface.

It is interesting to note that the fundamental formula of the integral calculus: ∫[a, b] F′(x) dx = F(b) − F(a) (54)

may be considered as a one-dimensional case. The equation (54) connects the integral over an interval with the “integral” over its “null-dimensional” boundary, consisting of the two end points.

Formula (54) may be illustrated by the following analogy. Let us imagine that in a straight pipe with constant cross section s = 1 water is flowing with velocity F(x), which is different for different cross sections (figure 36). Through the porous walls of the pipe, water is seeping into it (or out of it) at a rate f(x), which is also different for different cross sections.

If we consider a segment of the pipe from x to x + Δx, the quantity of water seeping into it in unit time must be compensated by the difference F(x + Δx) – F(x) between the quantity flowing out of this segment and the quantity flowing into it along the pipe.

So the quantity seeping into the segment is equal to the difference F(x + Δx) – F(x), and consequently the rate of seepage per unit length of pipe (the ratio of the seepage over an infinitesimal segment to the length of the segment) will be equal to: lim [F(x + Δx) − F(x)]/Δx = F′(x) = f(x).

More generally, the quantity of water seeping into the pipe over the whole section [a, b] must be equal to the amount lost by flow through the ends of the pipe. But the amount seeping through the walls is equal to ∫[a, b] f(x) dx, and the amount lost by flow through the ends is F(b) − F(a). The equality of these two magnitudes produces formula (54).
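The pipe argument is the one-dimensional check of the fundamental formula. A minimal sketch with an assumed flow F(x) = sin x, whose seepage rate is f = F′ = cos x:

```python
import math

a, b, n = 0.0, 1.0, 10_000
F = math.sin    # amount of water flowing through the cross section at x
f = math.cos    # seepage rate per unit length, f = F'

# water seeping through the walls over [a, b]: a fine midpoint sum of f
h = (b - a) / n
seeped_through_walls = sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# water lost by flow through the two ends of the pipe
lost_through_ends = F(b) - F(a)
```

The two quantities coincide, which is exactly the balance that produces formula (54).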

GREEN’S THEOREM

Then there is of course the fact that a system in space-time in which there is a displacement in time will be equivalent to a system in which this time motion is seen as fixed space. Such cases mean that we can integrate lines with motion into planes, and surfaces with motion into volumes.

The result is:

-Green’s theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C. It is named after George Green and is the two-dimensional special case of the more general Kelvin–Stokes theorem.

-Stokes’ theorem, which says that the integral of a differential form ω over the boundary ∂Ω of some orientable manifold Ω is equal to the integral of its exterior derivative dω over the whole of Ω:   ∫∂Ω ω = ∫Ω dω

In general such integrals also follow the geometrical structure of a system built with an external membrane, which absorbs the information of the system, an internal 0’-point or singularity that focuses it, and a vital space between them. These relationships allow us to define the basic laws of integrals in time and space that relate line integrals to surface integrals and surface integrals to volume integrals, of which the best known example are the laws of electromagnetism, written not as Maxwell did, in terms of derivatives (curls and gradients), but in terms of integrals.

So we can in this manner represent the laws of electromagnetism and fully grasp the meaning of magnetism, the external membrane, and charge, the central point of view, with the interactions between both: the electromagnetic waves, induced currents and magnetic fluxes.

Thus the best examples in physics of this relationship are the 4 equations of Maxwell:

Two of them (Faraday’s and Ampère’s laws) describe the wave interactions between both elements, while the other 2 define the membrane of an electromagnetic field, the magnetic field (Gauss’ law of magnetism), and the central point of view or charge (Gauss’ law):

So we can consider that the Tƒ element of the electromagnetic field (the charge or 0-point) and the membrane (the closed outer path), either in their integral or inverse differential equations, together with the wave interaction between them, easily deduced from Stokes’ theorem or expressed inversely in differential form, give us the full description of an electromagnetic system in terms of the generator:

Sp-membrane (magnetic field-gauss’ law of magnetism) < ST (Faraday/Ampere’s laws of interaction between Sp and Tƒ) >Tƒ (Gauss Law of the central point).

And those interactions are integrals of the quanta of the ∆-1 field from which the electric charge and the magnetic field that integrates them arise.

Density integrals. The meaning of Tƒ/Sp and Sp/Tƒ: information and energy densities:

Some General Remarks on the Formation and Solution of Differential Equations

As we have expressed many times, equations do NOT have solutions till the whole information on their ternary T.œ is given. In time this means knowing an initial condition and an end, through which the function will run under the principle of completing its action in the least possible time: Max. S T (min. t), which for any combination of Space and Time dimensions implies completing the action in the minimal possible time.

Conditions of this type arise in all equations of all stiences.

In the symmetry of space, though, the boundary of the T.œ must be expressed as lineal conditions of the 2D membrane and the 1D singularity, which can be superposed, 1D + 2D, to give the 3D solution of the vital space both enclose, normally through a product operator: 1D x 2D = 3D.

In any case, once each equation is determined by its space or time constrictions, certain solutions can be found, which form a sœT of possible frequencies or areas that are efficient parameters for the MIND equation to describe real T.œs (expressed here in the semantic inversion of sets and toes).

The key to those solutions and their approximations is the fact that singularity and membrane conditions are expressed as scalars and lineal functions, while the ternary vital energy solutions have cyclical form.

A bit on the ‘numbers’

Now the fundamental concept behind analysis is the ∂∫ duality of ‘derivatives’ and ‘integrals’, related at first sight to the concepts of ‘time’ and ‘space’ (you derivate in time, you integrate in space), and to the concept of scalar ‘evolution’ from parts into wholes (you derivate to obtain a higher scalar wholeness).

i.e. you derivate a past space into present speed, adding a time-motion, and then derivate again into acceleration – future time – to obtain the ‘most thorough single parameter of the being in time’: its acceleration, which encodes also its speed and distance.

On the other hand you integrate in space, and so it is also customary to consider the first and second integral, which will also bring the ternary scale of volume of a system.

And it is a tribute to the simplicity of the Universe that further ‘derivatives’ are not really needed to perceive the system in its full time and space parameters. Further derivations and integrations (they happen in the search of curvature, tensions and jerks, rates of acceleration) are really menial details, appearing only in some combined, multiple space-time systems.
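The chain distance → speed → acceleration described above can be sketched with finite differences (the position function and step size below are assumed for illustration):

```python
# assumed position of a uniformly accelerated body: x(t) = x0 + v0·t + a·t²/2
x0, v0, a = 5.0, 2.0, 3.0
x = lambda t: x0 + v0 * t + 0.5 * a * t * t

h, t = 1e-4, 1.0
speed = (x(t + h) - x(t - h)) / (2 * h)            # first derivative: v0 + a·t
accel = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2  # second derivative: a
# a third derivative (the 'jerk') would be 0 here, which illustrates why
# further derivatives rarely add information for simple motions
```

Two derivatives already exhaust the time parameters of this motion: the acceleration encodes the rest.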

DETERMINISM IN SOLUTION TO ODEs.

Lineal vs cyclic; dis≈continuity; 1st vs 2nd order; partial vs ordinary; ∂ vs ∫; 3 states of matter & its freedoms.

In the philosophy of science of mathematical physics some concepts come back once and again, based on dualities.

And so the qualitative description of all those entropic, reproductive/moving and informative/Tiƒ time-like vortices became ‘only’ mathematical.

It is interesting at this stage to consider that the whole world of ∫∂ mathematics has two approaches, which humans, as always being one-dimensional, did not find complementary but argued over: the method of Newton, based on infinite series (the arithmetic, temporal pov), and that of Leibniz, using spatial, geometric concepts (tangents), which, being more evident and simpler, is the one that stood.

First, trivially speaking, the existence of such 2 canonical, time-space ways to come to ∫∂ is a clear proof that both researchers found ∫∂ independently. Next, their argument about who was right and better shows how one-minded humanity is. And third, the dominance of Leibniz’s methods for being visual, geometrical, spatial tells a lot about the difficulty humans have to understand time, causality and the concepts of infinity, limit, discontinuity, continuity, and other key elements of ∆nalysis, which we shall argue in our mathematical posts on… ∆nalysis.

Limits in closed spacetime: Expansion of curved sinusoidal functions in a trigonometric series

On the basis of what has been said there arises the fundamental question: Which functions of period 2π/α can be represented as the sum of a trigonometric series – that is, what kind of ‘social forms’ evolve from parts into sinusoid wholes?

This question was raised in the 18th century by Euler and Bernoulli in connection with Bernoulli’s study of the vibrating string. Here Bernoulli took the point of view suggested by physical considerations that a very wide class of continuous functions, including in particular all graphs drawn by hand, can be expanded in a trigonometric series.

The answer was a true intuition of a genius, as it meant that all motions of a double pendulum, as the ‘hand’ is, all composite SHM motions, were combinations of cyclical wave motions.

This opinion received harsh treatment from many of Bernoulli’s contemporaries, and would be proved only after Fourier and Dirichlet showed that every continuous function of period 2π/α which for any one period has a finite number of maxima and minima can be expanded in a unique trigonometric Fourier series, uniformly convergent to the function. The essential element of a ‘sinusoidal series’, which we anticipated is related to the SHM, the simplest representation of a world cycle, is to have a perfect balance of minima and maxima, which represent the ‘turning points’ or changes of ‘ages’ of the world cycle, which can then be decomposed into smaller ones.

The figure illustrates a function satisfying Dirichlet’s conditions. Its graph is continuous and periodic, with period 2π, and has one maximum and one minimum in the period 0 ≤ x ≤ 2π. Hence it is a sum of cyclical ‘SHM’ worldcycles expressed as a composite of 3 Dimotions: the up and down motion of the point, or amplitude; the perpendicular field of ∆-1 entropy, which the point shapes in its form to carry the wave; and its temporal frequency in the vibration. This forms a ‘ternary’ structure of reality, which can be ‘socially added’ if multiple similar vibrations take place upwards (5D social evolution) or downwards (dissolution of the total wave into its parts) to form a complex 5Dimotional system, which is the highest ‘state’ of a sinusoidal mathematical mirror of the pentalogic Universe – and hence the ‘proper limit’ of study of the trigonometric function in non-Ælgebra.

Fourier coefficients. The calculus of 5Dimotions.

At this stage, though it belongs to Analysis, a question is posed: is the series analytic? Hence differentiable without limit? And what does the possible constant differentiability of the trigonometric functions of ‘perception’ mean, in terms of metaphysics?

Let us discuss the case which belongs to more complex 5D ¬Algebra.

The maths of it, then, are well known. If we consider functions of period 2π, which simplifies the formulas, any continuous function f(x) of period 2π satisfying Dirichlet’s condition may be expanded into a trigonometric series, symmetric in its ‘height and length’ cos/sin functions, a sum of multiple simpler waves, which is uniformly convergent: f(x) = a0/2 + (a1 cos x + b1 sin x) + (a2 cos 2x + b2 sin 2x) + ···. (22)
We pose the problem: to compute the coefficients ak and bk of the series for a given function f(x).
To this end we note the following equations, all integrals being taken over a period, from −π to π: ∫ cos kx · cos lx dx = 0 (k ≠ l), ∫ sin kx · sin lx dx = 0 (k ≠ l), ∫ cos kx · sin lx dx = 0, while ∫ cos² kx dx = ∫ sin² kx dx = π (k ≥ 1) and ∫ 1 dx = 2π. (23)

These integrals are easy to compute by reducing the products of the various trigonometric functions to their sums and differences and their squares to expressions containing the corresponding trigonometric functions of double the angle.

The first equation states that the integral, over a period of the function, of the product of two different functions from the sequence 1, cos x, sin x, cos 2x, sin 2x, ··· is equal to 0’ (the so-called orthogonality property of the trigonometric functions). On the other hand, the integral of the square of each of the functions of this sequence is equal to π. The first function, identically equal to one, forms an exception, since the integral of its square over the period is equal to 2π. It is this fact which makes it convenient to write the first term of the series (22) in the form a0/2.
Now we can easily solve our problem. To compute the coefficient am, we multiply the left side and each term on the right side of the series (22) by cos mx and integrate term by term over a period 2π, as is permissible since the series obtained after multiplication by cos mx is uniformly convergent. By (23) all integrals on the right side, with the exception of the integral corresponding to cos mx, will be 0’, so that: am = (1/π) ∫ f(x) cos mx dx, the integral being taken over a period. (24)

Similarly, multiplying the left and right sides of (22) by sin mx and integrating over the period, we get an expression for the coefficients: bm = (1/π) ∫ f(x) sin mx dx. (25)

and we have solved our problem. The numbers am and bm computed by formulas (24) and (25) are called the Fourier coefficients of the function f(x).
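Formulas (24) and (25) can be applied numerically. A minimal sketch, assuming a test function whose coefficients are known in advance (f = cos 2x + 3 sin x, so a2 = 1 and b1 = 3):

```python
import math

def fourier_coeffs(f, m, n=20_000):
    """Numerical version of formulas (24) and (25): the integrals over a
    period are approximated by midpoint sums."""
    h = 2 * math.pi / n
    xs = [-math.pi + (i + 0.5) * h for i in range(n)]
    a_m = sum(f(x) * math.cos(m * x) for x in xs) * h / math.pi
    b_m = sum(f(x) * math.sin(m * x) for x in xs) * h / math.pi
    return a_m, b_m

f = lambda x: math.cos(2 * x) + 3 * math.sin(x)
a2, b2 = fourier_coeffs(f, 2)   # expect a2 = 1, b2 = 0
a1, b1 = fourier_coeffs(f, 1)   # expect a1 = 0, b1 = 3
```

The orthogonality relations (23) are what makes each coefficient pick out exactly one wave of the sum.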
Let us take as an example the function f(x) of period 2π illustrated in figure 13. Obviously this function is continuous and satisfies Dirichlet’s condition, so that its Fourier series converges uniformly to it.
It is easy to see that this function also satisfies the condition f(−x) = −f(x). The same condition also clearly holds for the function F1(x) = f(x) cos mx, which means that the graph of F1(x) is symmetric with respect to the origin. From geometric arguments it is clear that its integral over a period is 0’,

so that am = 0 (m = 0, 1, 2, ···). Further, it is not difficult to see that the function F2(x) = f(x) sin mx has a graph which is symmetric with respect to the axis Oy, so that: bm = (2/π) ∫ f(x) sin mx dx, the integral now taken over [0, π].

But for even m this graph is symmetric with respect to the center π/2 of the segment [0, π], so that bm = 0 for even m. For odd m = 2l + 1 (l = 0, 1, 2, ···) the graph of F2(x) is symmetric with respect to the straight line x = π/2, so that: b2l+1 = (4/π) ∫ f(x) sin (2l + 1)x dx, the integral taken over [0, π/2]. But, as can be seen from the sketch, on the segment [0, π/2] we have simply f(x) = x, so that by integration by parts we get: b2l+1 = 4(−1)^l / (π(2l + 1)²).

Thus we have found the expansion of our function in a Fourier series.
Convergence of the Fourier partial sums to the generating function. In applications it is customary to take as an approximation to the function f(x) of period 2π the sum sn(x) of the first n terms of its Fourier series, and then there arises the question of the error of the approximation. If the function f(x) of period 2π has a derivative f(r)(x) of order r which for all x satisfies the inequality |f(r)(x)| ≤ M,

Then the error of the approximation may be estimated as follows: |f(x) − sn(x)| ≤ cr·M/n^r,

where cr is a constant depending only on r. We see that the error converges to 0’ with increasing n, the convergence being more rapid the more derivatives the function has.
For a function which is analytic on the whole real axis there is an even better estimate, as follows: |f(x) − sn(x)| ≤ c·q^n,

where c and q are positive constants depending on f and q < 1. It is remarkable that the converse is also true, namely that if the previous inequality holds for a given function, then the function is necessarily analytic. This fact, which was discovered at the beginning of the present century, in a certain sense reconciles the controversy between Bernoulli and his contemporaries. We can now state: If a function is expandable in a Fourier series which converges to it, this fact in itself is far from implying that the function is analytic; however, it will be analytic, if its deviation from the sum of the first n terms of the Fourier series decreases more rapidly than the terms of some decreasing geometric progression.
A comparison of the estimates of the approximations provided by the Fourier sums with the corresponding estimates for the best approximations of the same functions by trigonometric polynomials shows that for smooth functions the Fourier sums give very good approximations, which are, in fact, close to the best approximations. But for nonsmooth continuous functions the situation is worse: among these, for example, occur some functions whose Fourier series diverges on the set of all rational points.
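The geometric-decay estimate for analytic functions can be watched numerically. A minimal sketch, assuming the analytic test function f(x) = Σ 2^−k cos kx (its closed form follows from summing a geometric series), whose partial-sum error is exactly the tail 2^−n:

```python
import math

# f(x) = Σ_{k≥1} 2^(-k) cos(kx) is analytic; summing the geometric series
# gives the closed form f(x) = (cos x / 2 - 1/4) / (5/4 - cos x)
def f_exact(x):
    return (math.cos(x) / 2 - 0.25) / (1.25 - math.cos(x))

def partial_sum(x, n):
    return sum(2.0 ** -k * math.cos(k * x) for k in range(1, n + 1))

xs = [2 * math.pi * j / 400 for j in range(400)]
errors = [max(abs(f_exact(x) - partial_sum(x, n)) for x in xs)
          for n in (4, 8, 12)]
# the maximal error equals the tail Σ_{k>n} 2^(-k) = 2^(-n): geometric decay,
# the c·q^n behaviour characteristic of analytic functions
```

Each extra block of 4 terms divides the error by 16, the geometric progression the estimate predicts.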

THE TOPOLOGICAL VIEW.

Multiple Integrals in topology.

As the Universe is a kaleidoscopic mirror of symmetries between all its elements, this dominance of analysis on ∆-scaling must also add the use of analysis on a single plane, in fact the most common, whereas the essential consideration is the ∆§ocial decametric and e-π ternary scaling, with minimal distortion (which happens in the Lorentzian limits between Planes).

This key distinction in GST (∆§ well-behaved scaling versus ∆±i ‘distorted’ emerging and dissolution, which does change the form of the system) has special relevance in analysis, as for very long the ‘continuity’ of the function studied, without such distortions, was necessary; so analysis was restricted to ∆(±1 – 0) intervals and ‘broke’ when jumping two Planes, as in processes of entropy (death-feeding). But with improved approximation techniques, functionals and operators (which assume a whole scale of ∞ parts as a function of functions in the operator of the larger scale) and renormalisation in double and triple integrals and derivatives, by praxis, without understanding the scalar theory behind it, this hurdle today has been largely overcome.

And it has always amused me that humans can get so far in all disciplines by trial and error, when a ‘little bit of thought on first principles’ could make things much easier. It seems though-thought beings are scarce in our species and highly disregarded, as the site’§ight§how (allow me a bit of cacophony and repetition, the trade mark of this blog and the Universe 🙂 As usual I shall also repeat: I welcome comments and offers of serious help from specialists and Universities, since nothing would make me happier than unloading tons of now-confusing analysis before I get another health crisis and all goes to waste in the eternal entropic arrow of two derivatives, aka death.

Existence and Uniqueness of the Solution of a Differential Equation; Approximate Solution of Equations

The question of existence and uniqueness of the solution of a differential equation. We return to the differential equation (17) of arbitrary order n. Generally it has infinitely many solutions, and in order to pick out some one specific solution from all the possible ones, it is necessary to attach to the equation supplementary conditions, the number of which should be equal to the order n of the equation. Such conditions may be of extremely varied character, depending on the physical, mechanical, or other significance of the original problem.

For example, if we have to investigate the motion of a mechanical system beginning with some specific initial state, the supplementary conditions will refer to a specific (initial) value of the independent variable and will be called initial conditions of the problem. But if we want to define the curve of a cable in a suspension bridge, or of a loaded beam resting on supports at each end, we encounter conditions corresponding to different values of the independent variable, at the ends of the cable or at the points of support of the beam. We could give many other examples showing the variety of conditions to be fulfilled in connection with differential equations.
We will assume that the supplementary conditions have been defined and that we are required to find a solution of equation:

that satisfies them.

The first question we must consider is whether any such solution exists at all. It often happens that we cannot be sure of this in advance. Assume, say, that equation (17) is a description of the operation of some physical apparatus and suppose we want to determine whether periodic motion occurs in this apparatus. The supplementary conditions will then be conditions for the periodic repetition of the initial state in the apparatus, and we cannot say ahead of time whether or not there will exist a solution which satisfies them.
In any case the investigation of problems of existence and uniqueness of a solution makes clear just which conditions can be fulfilled for a given differential equation and which of these conditions will define the solution in a unique manner.

But the determination of such conditions and the proof of existence and uniqueness of the solution for a differential equation corresponding to some physical problem also has great value for the physical theory itself. It shows that the assumptions adopted in setting up the mathematical description of the physical event are on the one hand mutually consistent and on the other constitute a complete description of the event.
The methods of investigating the existence problem are manifold, but among them an especially important role is played by what are called direct methods. The proof of the existence of the required solution is provided by the construction of approximate solutions, which are proved to converge to the exact solution of the problem. These methods not only establish the existence of an exact solution, but also provide a way, in fact the principal one, of approximating it to any desired degree of accuracy.

For the rest of this section we will consider, for the sake of definiteness, a problem with initial data, for which we will illustrate the ideas of Euler’s method and the method of successive approximations.

Euler’s method of broken lines.

Consider in some domain G of the (x, y) plane the differential equation: dy/dx = ƒ(x, y). (34)

As we have already noted, equation (34) defines in G a field of tangents. We choose any point (x0, y0) of G. Through it there will pass a straight line L0 with slope f(x0, y0). On the straight line L0 we choose a point (x1, y1), sufficiently close to (x0, y0); in figure 9 this point is indicated by the number 1.

We draw the straight line L1 through the point (x1, y1) with slope f(x1, y1) and on it mark the point (x2, y2); in the figure this point is denoted by the number 2. Then on the straight line L2, corresponding to the point (x2, y2), we mark the point (x3, y3), and continue in the same manner with x0 < x1 < x2 < x3 < ···. It is assumed, of course, that all the points (x0, y0), (x1, y1), (x2, y2), ··· are in the domain G. The broken line joining these points is called an Euler broken line.

One may also construct an Euler broken line in the direction of decreasing x; the corresponding vertices on our figure are denoted by –1, –2, –3.

It is reasonable to expect that every Euler broken line through the point (x0, y0) with sufficiently short segments gives a representation of an integral curve l passing through the point (x0, y0), and that with decrease in the length of the links, i.e., when the length of the longest link tends to 0’, the Euler broken line will approximate this integral curve.

Here, of course, it is assumed that the integral curve exists. In fact it is not hard to prove that if the function f(x, y) is continuous in the domain G, one may find an infinite sequence of Euler broken lines, the length of their largest links tending to 0’, which converges to an integral curve l. However, one usually cannot prove uniqueness: there may exist different sequences of Euler broken lines that converge to different integral curves passing through one and the same point (x0, y0). M. A. Lavrent’ev has constructed an example of a differential equation of the form (29) with a continuous function f(x, y) such that in any neighborhood of any point P of the domain G there pass not one but at least two integral curves.

In order that through every point of the domain G there pass only one integral curve, it is necessary to impose on the function f(x, y) certain conditions beyond that of continuity. It is sufficient, for example, to assume that the function f(x, y) is continuous and has a bounded derivative with respect to y on the whole domain G. In this case it may be proved that through each point of G there passes one and only one integral curve and that every sequence of Euler broken lines passing through the point (x0, y0) converges uniformly to this unique integral curve, as the length of the longest link of the broken lines tends to 0’. Thus for sufficiently small links the Euler broken line may be taken as an approximation to the integral curve of equation (34).
From the preceding it can be seen that the Euler broken lines are so constituted that small pieces of the integral curves are replaced by line segments tangent to these integral curves. In practice, many approximations to integral curves of the differential equation (34) consist not of straight-line segments tangent to the integral curves, but of parabolic segments that have a higher order of tangency with the integral curve. In this way it is possible to find an approximate solution with the same degree of accuracy in a smaller number of steps (with a smaller number of links in the approximating curve).

The method of successive approximations.

We now describe another method, the method of successive approximations, which is as widely used as the method of Euler broken lines. We assume again that we are required to find a solution y(x) of the differential equation (34) satisfying the initial condition: y(x0) = y0

For the initial approximation to the function y(x), we take an arbitrary function y0(x). For simplicity we will assume that it also satisfies the initial condition, although this is not necessary. We substitute it into the right side ƒ(x, y) of the equation for the unknown function y and construct a first approximation y1 to the solution y from the following requirements: dy1/dx = ƒ(x, y0(x)), y1(x0) = y0. Since the right side of the first of these equations is a known function of x, the function y1(x) may be found by integration: y1(x) = y0 + ∫ (from x0 to x) ƒ(t, y0(t)) dt

It may be expected that y1(x) will differ from the solution y(x) by less than y0(x) does, since in the construction of y1(x) we made use of the differential equation itself, which should introduce a correction into the original approximation. One would also think that if we improve the first approximation y1(x) in the same way, then the second approximation: y2(x) = y0 + ∫ (from x0 to x) ƒ(t, y1(t)) dt

will be still closer to the desired solution.

Let us assume that this process of improvement has been continued indefinitely and that we have constructed the sequence of approximations: y0(x), y1(x), …, yn(x), …

Will this sequence converge to the solution y(x)?

More detailed investigations show that if f(x, y) is continuous and ƒ’y is bounded in the domain G, the functions yn(x) will in fact converge to the exact solution y(x) at least for all x sufficiently close to x0 and that if we break off the computation after a sufficient number of steps, we will be able to find the solution y(x) to any desired degree of accuracy.
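The successive approximations can likewise be sketched numerically on a grid, replacing each integration by the trapezoidal rule (the test problem dy/dx = y, y(0) = 1, whose iterates approach y = e^x, is our assumption):

```python
import math

# Successive approximations (Picard iteration) on a grid:
# y_{n+1}(x) = y0 + integral from x0 to x of f(t, y_n(t)) dt,
# with the integral approximated by the trapezoidal rule.
def picard(f, x0, y0, x_end, n_iter, m=200):
    h = (x_end - x0) / m
    xs = [x0 + k * h for k in range(m + 1)]
    ys = [y0] * (m + 1)            # initial approximation y_0(x) = y0
    for _ in range(n_iter):
        g = [f(x, y) for x, y in zip(xs, ys)]
        new = [y0]
        acc = 0.0
        for k in range(m):
            acc += 0.5 * h * (g[k] + g[k + 1])   # trapezoidal rule
            new.append(y0 + acc)
        ys = new
    return xs, ys

xs, ys = picard(lambda x, y: y, 0.0, 1.0, 1.0, n_iter=15)
print(ys[-1])   # the approximations converge to e at x = 1
```

For dy/dx = y the n-th iterate reproduces the first n terms of the Taylor series of e^x, up to the quadrature error of the grid.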

Exactly in the same way as for the integral curves of equation (34), we may also find approximations to integral curves of a system of two or more differential equations of the first order. Essentially the necessary condition here is that these equations can be solved for the derivatives of the unknown functions. For example, suppose we are given the system (37): dy/dx = ƒ1(x, y, z), dz/dx = ƒ2(x, y, z). Assuming that the right sides of these equations are continuous and have bounded derivatives with respect to y and z in some domain G in space, it may be shown that through each point (x0, y0, z0) of the domain G, in which the right sides of the equations in (37) are defined, there passes one and only one integral curve:

y = ϕ(x), z = ψ(x) of the system (37). The functions f1(x, y, z) and f2(x, y, z) give the direction numbers at the point (x, y, z) of the tangent to the integral curve passing through this point. To find the functions ϕ(x) and ψ(x) approximately, we may apply the Euler broken-line method or other methods similar to the ones applied to equation (34).
The process of approximate computation of the solution of ordinary differential equations with initial conditions may be carried out on computing machines. There are electronic machines that work so rapidly that if, for example, the machine is programmed to compute the trajectory of a projectile, this trajectory can be found in a shorter time than it takes for the projectile to hit its target (cf. Chapter XIV).
The connection between differential equations of various orders and a system of a larger number of equations of first order. A system of ordinary differential equations, when solved for the derivative of highest order of each of the unknown functions, may in general be reduced, by the introduction of new unknown functions, to a system of equations of the first order, solved for all the derivatives. For example, consider the differential equation (38): d²y/dx² = ƒ(x, y, dy/dx). We set (39): dy/dx = z. Then equation (38) may be written in the form (40): dz/dx = ƒ(x, y, z)

Hence, to every solution of equation (38) there corresponds a solution of the system consisting of equations (39) and (40). It is easy to show that to every solution of the system of equations (39) and (40) there corresponds a solution of equation (38).
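The reduction can be sketched directly: replace d²y/dx² = ƒ(x, y, dy/dx) by the system dy/dx = z, dz/dx = ƒ(x, y, z) and apply Euler's method to the pair. The test equation y″ = −y with y(0) = 0, y′(0) = 1 (exact solution y = sin x) is our assumption:

```python
import math

# Reduce d^2y/dx^2 = f(x, y, dy/dx) to the first-order system
# dy/dx = z, dz/dx = f(x, y, z), and integrate it with Euler's method.
def solve_second_order(f, x0, y0, z0, x_end, h=1e-3):
    x, y, z = x0, y0, z0
    while x < x_end - 1e-12:
        # both right sides are evaluated at the old (x, y, z): a genuine Euler step
        y, z = y + h * z, z + h * f(x, y, z)
        x += h
    return y, z

y, z = solve_second_order(lambda x, y, z: -y, 0.0, 0.0, 1.0, math.pi / 2)
print(y, z)   # near sin(pi/2) = 1 and cos(pi/2) = 0
```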
Equations not explicitly containing the independent variable. The problems of the pendulum, of the Helmholtz acoustic resonator, of a simple electric circuit, or of an electron-tube generator considered in §1 lead to differential equations in which the independent variable (time) does not explicitly appear. We mention equations of this type here, because the corresponding differential equations of the second order may be reduced in each case to a single differential equation of the first order rather than to a system of first-order equations as in the paragraph above for the general equation of the second order. This reduction greatly simplifies their study.
Let us then consider a differential equation of the second order, not containing the argument t in explicit form (41): F(x, dx/dt, d²x/dt²) = 0. We apply the transformation (42): we set dx/dt = y and consider y as a function of x, so that: d²x/dt² = (dy/dx)(dx/dt) = y dy/dx.

Then equation (41) may be rewritten in the form (43): F(x, y, y dy/dx) = 0

In this manner, to every solution of equation (41) there corresponds a unique solution of equation (43). Also to each of the solutions y = ϕ(x) of equation (43) there correspond infinitely many solutions of equation (41). These solutions may be found by integrating the equation (44): dx/dt = ϕ(x), where x is considered as a function of t.
It is clear that if this equation is satisfied by a function x = x(t), then it will also be satisfied by any function of the form x(t + t0), where t0 is an arbitrary constant.

It may happen that not every integral curve of equation (43) is the graph of a single function of x. This will happen, for example, if the curve is closed. In this case the integral curve of equation (43) must be split up into a number of pieces, each of which is the graph of a function of x. For every one of these pieces, we have to find an integral of equation (44).

The values of x and dx/dt which at each instant characterize the state of the physical system corresponding to equation (41) are called the phases of the system, and the (x, y) plane is correspondingly called the phase plane for equation (41). To every solution x = x(t) of this equation there corresponds the curve: x = x(t), y = x'(t)
in the (x, y) plane; t here is considered as a parameter. Conversely, to every integral curve y = ϕ(x) of equation (43) in the (x, y) plane there corresponds an infinite set of solutions of the form x = x(t + t0) for equation (41); here t0 is an arbitrary constant. Information about the behavior of the integral curves of equation (43) in the plane is easily transformed into information about the character of the possible solutions of equation (41). Every closed integral curve of equation (43) corresponds, for example, to a periodic solution of equation (41).
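The correspondence between closed phase curves and periodic solutions can be checked numerically. A minimal sketch, assuming the linear system dx/dt = y, dy/dt = −x, whose phase curves are the circles x² + y² = const:

```python
import math

# Trace the phase point of dx/dt = y, dy/dt = -x over one period 2*pi.
def phase_trajectory(x, y, t_end, h=1e-4):
    t = 0.0
    while t < t_end - 1e-12:
        # semi-implicit (symplectic) Euler: update x first, then use the new x
        x += h * y
        y -= h * x
        t += h
    return x, y

x1, y1 = phase_trajectory(1.0, 0.0, 2 * math.pi)
print(x1, y1)   # near the starting point (1, 0): the phase curve is closed
```

A semi-implicit step is used here because the plain Euler step spirals slowly outward and would not keep the closed curve closed.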
If we subject equation (6) to the transformation (42), we obtain: dy/dx = (−ay − bx)/(my).

Setting ν = x and dv/dt = y in equation (16), in like manner we get:

Just as the state at every instant of the physical system corresponding to the second-order equation (41) is characterized by the two magnitudes (phases) x and y = dx/dt, the state of a physical system described by equations of higher order or by a system of differential equations is characterized by a larger number of magnitudes (phases). Instead of a phase plane, we then speak of a phase space.

DUALITIES: The behavior of integral curves in the large DOMAIN self-centred in the small singularity.

We now study the behavior of the integral curves “in the large”; that is, in the entire domain of definition of the given system of differential equations, without attempting to preserve the scale. We will consider a space in which this system defines a field of directions as the phase space of some physical process. Then the general scheme of the integral curves corresponding to the system of differential equations will give us an idea of the character of all processes (motions) that can possibly occur in this system:

In the figures we have constructed approximate, schematized representations of the behavior of the integral curves in the neighborhood of an isolated singular point.

Why do those matter? Obviously because singularities (@) matter. We can divide those curves into canonical representatives of extensive families that exhaust the 3 possibilities:

∑=∏: 3D communication. What first calls attention is the symmetry of the upper fig. 12, where the singularity merely acts, as in a tetraktys configuration, as the identity neutral element that communicates all the flows touching the T.œ system, which enter and leave the o-point symmetrically (hence a line of symmetry runs diagonal to the point).

It is also noticeable that the paths are ‘fast’, as the points of those paths know they will not be changed by the identity element.

ð•: 1D predation. But when the 0’-point acts as a predator that won’t let the point-prey go, the form is a spiralled, slow motion.

\$: 2D flows. Finally, as usual, we have a ternary case in which the curves do NOT touch the singularity. Curiously, these start with the points going straight, perpendicular to it; hence this case tends to apply to spatial points of vital energy with a certain ‘discerning’ view, which feed first on the field established by the singularity and then escape it, aware of what lies ahead. The 2 last cases can be compared in vital terms – not trajectories – to the behaviour of smallish ‘blind comets’ spiralling into stars that will feed on them, as opposed to symbiotic planets that herd gravitational quanta together with the star but will NOT fall into the gravitational trap.

Mathematically, the drawing of those curves is one of the most fundamental problems in the theory of differential equations: finding as simple a method as possible for constructing a scheme of the behavior of the family of integral curves of a given system of differential equations in its entire domain of definition, in order to study the behavior of those integral curves “in the large.”

And since we exist in a bidimensional Universe, this problem remains almost untouched for spaces of dimension higher than 2 (a recurrent fact of all mathematical mirrors, from Fermat’s last theorem to the proof of almost all geometrical theorems in a plane).

But the problem is still very far from being solved even for the single equation of the form dy/dx = M(x, y)/N(x, y), and even when M(x, y) and N(x, y) are polynomials; which shows how often the whys of ∆st are truly synoptic and simple, even if the detailed paths of 1D motions, the obsession of one-dimensional humans, are ignored.

In fact the only case quite resolved is… yes, you guessed it, that in which the particle has no ‘freedom of thought’, so to speak, and falls down the spiral path of entropic death and informative capture by the singularity.

THIS WILL again be a rule of ∆st: the simplest solutions are those related to death, dissolution, entropy and the one-dimensional ‘fall’.
In what follows, we will assume that the functions M(x, y) and N(x, y) have continuous partial derivatives of the first order.
If all the points of a simply connected domain G, in which the right side of the differential equation is defined, are ordinary points, then the family of integral curves may be represented schematically as a family of segments of parallel straight lines, since in this case one integral curve will pass through each point and no two integral curves can intersect. For an equation of more general form, which may have singular points, the structure of the integral curves may be much more complicated. The case in which the previous equation has an infinite set of singular points (i.e., points where the numerator and the denominator both vanish) may be excluded, at least when M(x, y) and N(x, y) are polynomials.

Thus we restrict our consideration to those cases in which the previous equation has a finite number of isolated singular points. The behavior of the integral curves near one of these singular points forms the essential element in setting up a schematized representation of the behavior of all the integral curves of the equation.

A very typical element in such a scheme for the behavior of all the integral curves of the previous equation is formed by the so-called limit cycles. Let us consider the equation (64): dρ/dϕ = ρ − 1, where ρ and ϕ are polar coordinates in the (x, y) plane.
The collection of all integral curves of the equation is given by the formula (65): ρ = 1 + C·e^ϕ, where C is an arbitrary constant, different for different integral curves. In order that ρ be nonnegative, it is necessary, when C < 0, that ϕ have values no larger than −ln |C|. The family of integral curves will consist of:
1. the circle ρ = 1 (C = 0);

2. the spirals issuing from the origin, which approach this circle from the inside as ϕ → −∞ (C < 0);

3. the spirals which approach the circle ρ = 1 from the outside as ϕ → −∞ (C > 0).

The circle ρ = 1 is called a limit cycle for equation (64). In general a closed integral curve l is called a limit cycle if it can be enclosed in a disc, all points of which are ordinary for equation (64), and which is entirely filled by nonclosed integral curves.
From equation (65) it can be seen that all points of the circle are ordinary. This means that a small piece of a limit cycle is not different from a small piece of any other integral curve.
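The limit-cycle example admits a quick numerical check. Assuming the solutions have the form ρ = 1 + C·e^ϕ (a standard integration of dρ/dϕ = ρ − 1), integrating backwards in ϕ from either side of the circle should approach ρ = 1:

```python
import math

def rho_exact(C, phi):
    """Assumed closed-form solution rho = 1 + C * e^phi of d(rho)/d(phi) = rho - 1."""
    return 1.0 + C * math.exp(phi)

def integrate_rho(rho0, phi_span, h=-1e-3):
    """Euler integration of d(rho)/d(phi) = rho - 1 from phi = 0 toward phi = -phi_span."""
    rho, phi = rho0, 0.0
    while phi > -phi_span:
        rho += h * (rho - 1.0)   # the deviation rho - 1 shrinks as phi decreases
        phi += h
    return rho

inner = integrate_rho(0.5, 10.0)    # starts inside the circle (C = -0.5)
outer = integrate_rho(1.5, 10.0)    # starts outside the circle (C = +0.5)
print(inner, outer)                  # both approach the limit cycle rho = 1
```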

Every closed integral curve in the (x, y) plane gives a periodic solution [x(t), y(t)] of the system:

(67): dx/dt = N(x, y), dy/dt = M(x, y), describing the law of change of some physical system. Those integral curves in the phase plane that as t → +∞ approximate a limit cycle correspond to motions that as t → ∞ approximate periodic motions.
Let us suppose that for every point (x0, y0) sufficiently close to a limit cycle l, we have the following situation: If (x0, y0) is taken as initial point (i.e., for t = t0) for the solution of the system (67), then the corresponding integral curve traced out by the point [x(t), y(t)] as t → +∞ approximates the limit cycle l in the (x, y) plane. (This means that the motion in question is approximately periodic.) In this case the limit cycle is called stable. Oscillations that act in this way with respect to a limit cycle correspond physically to self-oscillations. In some self-oscillatory systems there may exist several stable oscillatory processes with different amplitudes, one or another of which will be established by the initial conditions. In the phase plane for such “self-oscillatory systems,” there will exist corresponding limit cycles if the processes occurring in these systems are described by an equation of the form (67).

The problem of finding, even if only approximately, the limit cycles of a given differential equation has not yet been satisfactorily solved. The most widely used method for solving this problem is the one suggested by Poincaré of constructing “cycles without contact.” It is based on the following theorem. We assume that on the (x, y) plane we can find two closed curves L1 and L2 (cycles) which have the following properties:

1. The curve L2 lies in the region enclosed by L1.
2. In the annulus Ω between L1 and L2, there are no singular points of equation (64).
3. L1 and L2 have tangents everywhere, and the directions of these tangents are nowhere identical with the direction of the field of directions of the given equation (64).
4. For all points of L1 and L2, the cosine of the angle between the interior normals to the boundary of the domain Ω and the vector with components [N(x, y), M(x, y)] never changes sign.
Then between L1 and L2 there is at least one limit cycle of equation (64).
Poincaré called the curves L1 and L2 cycles without contact.
The proof of this theorem is based on the following rather obvious fact.

We assume that for decreasing t (or for increasing t) all the integral curves: x = x(t), y = y (t),  of equation (64) (or, what amounts to the same thing, of equations (67), where t is a parameter), which intersect L1 or L2 enter the annulus Ω between L1 and L2. Then they must necessarily tend to some closed curve l lying between L1 and L2, since none of the integral curves lying in the annulus can leave it, and there are no singular points there.

Singular Points.

Now when considering the singular points in relation to the vital energy mapped out in its cyclical trajectories by those curves, we observe there are 3 cases – absorption, crossing and the isolated point – which abstract mathematics studies as follows.

Let the point P(x, y) be in the interior of the domain G in which we consider the differential equation (47): dy/dx = M(x, y)/N(x, y).

If there exists a neighborhood R of the point P through each point of which passes one and only one integral curve of equation (47), then the point P is called an ordinary point of equation (47). But if such a neighborhood does not exist, then the point P is called a singular point of this equation. The study of singular points is very important in the qualitative theory of differential equations, which we will consider in the next section.
Particularly important are the so-called isolated singular points, i.e., singular points in some neighborhood of which there are no other singular points. In applications one often encounters them in investigating equations of the form (47), where M(x, y) and N(x, y) are functions with continuous derivatives of high orders with respect to x and y. For such equations, all the interior points of the domain at which M(x, y) ≠ 0 or N(x, y) ≠ 0 are ordinary points.

Let us now consider any interior point (x0, y0) where M(x, y) = N(x, y) = 0. To simplify the notation we will assume that x0 = 0 and y0 = 0. This can always be arranged by translating the origin of coordinates to the point (x0, y0). Expanding M(x, y) and N(x, y) by Taylor’s formula in powers of x and y and restricting ourselves to terms of the first order, we have, in a neighborhood of the point (0, 0): M(x, y) = ax + by + ϕ1(x, y), N(x, y) = cx + dy + ϕ2(x, y), where ϕ1 and ϕ2 contain only terms of order higher than the first.

Equations (45) and (46) are of this form. Equation (45) does not define either dy/dx or dx/dy for x = 0 and y = 0. If the determinant formed from the coefficients of the linear terms is different from 0’, then, whatever value we assign to dy/dx at the origin, the origin will be a point of discontinuity for the values dy/dx and dx/dy, since they tend to different limits depending on the manner of approach to the origin. The origin is a singular point for our differential equation.
It has been shown that the character of the behavior of the integral curves near an isolated singular point (here the origin) is not influenced by the terms ϕ1(x, y) and ϕ2(x, y) in the numerator and denominator, provided only that the real part of both roots of the characteristic equation (55) is different from 0’. Thus, in order to form some idea of this behavior, we study the behavior near the origin of the integral curves of the equation truncated to its first-order terms. We note that the arrangement of the integral curves in the neighborhood of a singular point of a differential equation has great interest for many problems of mechanics, for example in the investigation of the trajectories of motions near the equilibrium position.
It has been shown that everywhere in the plane it is possible to choose coordinates ξ, η, connected with x, y by the equations (51): ξ = k11x + k12y, η = k21x + k22y,

where the kij are real numbers such that equation (50) is transformed into one of the following three types:

If the roots of equation (55) are real and different, then equation (50) is transformed into the form (52). If these roots are equal, then equation (50) is transformed either into the form (52) or into the form (53), depending on whether a2 + d2 = 0 or a2 + d2 ≠ 0. If the roots of equation (55) are complex, λ = α ± βi, then equation (50) is transformed into the form (54).

We will consider each of the equations (52), (53), (54). To begin with, we note the following.
Even though the axes Ox and Oy were mutually perpendicular, the axes Oξ and Oη need not, in general, be so. But to simplify the diagrams, we will assume they are perpendicular. Further, in the transformation (51) the Planes on the Oξ and Oη axes may be changed; they may not be the same as the ones originally chosen on the axes Ox and Oy. But again, for the sake of simplicity, we assume that the Planes are not changed. Thus, for example, in place of the concentric circles, as in figure 8, there could in general occur a family of similar and similarly placed ellipses with common center at the origin.

All integral curves of equation (52) are given by:

where a and b are arbitrary constants.

The integral curves of equation (52) are graphed in figure 10; here we have assumed that k > 1. In this case all integral curves except one, the axis Oη, are tangent at the origin to the axis Oξ. The case 0 < k < 1 is the same as the case k > 1 with interchange of ξ and η, i.e., we have only to interchange the roles of the axes ξ and η. For k = 1, equation (52) becomes equation (30), whose integral curves were illustrated in figure 7.
An illustration of the integral curves of equation (52) for k < 0 is given in figure 11. In this case we have only two integral curves that pass through the point O: these are the axis Oξ and the axis Oη. All other integral  curves, after approaching the origin no closer than to some minimal distance, recede again from the origin. In this case we say that the point O is a saddle point because the integral curves are similar to the contours on a map representing the summit of a mountain pass (saddle).

All integral curves of equation (53) are given by an equation in which a and b are arbitrary constants. These are illustrated schematically in figure 12; all of them are tangent to the axis Oη at the origin.
If every integral curve entering some neighborhood of the singular point O passes through this point and has a definite direction there, i.e., has a definite tangent at the origin, as is illustrated in figures 10 and 12, then we say that the point O is a node.
Equation (54) is most easily integrated if we change to polar coordinates ρ and ϕ, putting: ξ = ρ cos ϕ, η = ρ sin ϕ.

If k > 0, then all the integral curves approach the point O, winding infinitely often around this point as ϕ → −∞ (figure 13). If k < 0, then this happens for ϕ → +∞. In these cases, the point O is called a focus. If, however, k = 0, then the collection of integral curves of (56) consists of circles with center at the point O. Generally, if some neighborhood of the point O is completely filled by closed integral curves, each surrounding the point O itself, then such a point is called a center.
A center may easily be transformed into a focus, if in the numerator and the denominator of the right side of equation (54) we add a term of arbitrarily high order; consequently, in this case the behavior of integral curves near a singular point is not given by terms of the first order.
Equation (55), corresponding to equation (45), is identical with the characteristic equation (19). Thus figures 10 and 12 schematically represent the behavior in the phase plane (x, y) of the curves:

x = x(t), y = x'(t), corresponding to the solutions of equation (6) for real λ1 and λ2 of the same sign; figure 11 corresponds to real λ1 and λ2 of opposite signs, and figures 13 and 8 (the case of a center) correspond to complex λ1 and λ2. If the real parts of λ1 and λ2 are negative, then the point (x(t), y(t)) approaches 0 for t → +∞; in this case the point x = 0, y = 0 corresponds to stable equilibrium. If, however, the real part of either of the numbers λ1 and λ2 is positive, then at the point x = 0, y = 0, there is no stable equilibrium.
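The case analysis above can be condensed into a small classifier for the linear system dx/dt = ax + by, dy/dt = cx + dy, using the roots of its characteristic equation λ² − (a + d)λ + (ad − bc) = 0. The function name and the tolerance are our assumptions; the cases mirror the node, saddle, focus and center just described:

```python
import cmath

def classify(a, b, c, d, eps=1e-12):
    """Classify the singular point at the origin of dx/dt = ax+by, dy/dt = cx+dy."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2   # roots of the characteristic equation
    if abs(l1.imag) > eps:                      # complex roots
        return "center" if abs(tr) < eps else "focus"
    if l1.real * l2.real < 0:                   # real roots of opposite sign
        return "saddle"
    return "node"                               # real roots of the same sign

print(classify(1, 0, 0, 2))    # real roots 1 and 2
print(classify(1, 0, 0, -1))   # real roots of opposite sign
print(classify(1, -1, 1, 1))   # roots 1 +/- i
print(classify(0, -1, 1, 0))   # roots +/- i
```

Zero real parts (the center case) are exactly the situation where, as noted above, the higher-order terms ϕ1, ϕ2 can change the picture.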

There are not many differential equations with the property that all their solutions can be expressed explicitly in terms of simple functions, as is the case for linear equations with constant coefficients. It is possible to give simple examples of differential equations whose general solution cannot be expressed by a finite number of integrals of known functions or, as one says, in quadratures.

For example, the general solution of an equation of the form dy/dx + ay² = x², for a > 0, cannot be expressed as a finite combination of integrals of elementary functions.

So it becomes important to develop methods of approximation to the solutions of differential equations, which will be applicable to wide classes of equations.
The fact that in such cases we find not exact solutions but only approximations should not bother us. First of all, these approximate solutions may be calculated, at least in principle, to any desired degree of accuracy. Second, it must be emphasized that in most cases the differential equations describing a physical process are themselves not altogether exact, as can be seen in all the examples discussed in §1.
An especially good example is provided by the equation (12) for the acoustic resonator. In deriving this equation, we ignored the compressibility of the air in the neck of the container and the motion of the air in the container itself. As a matter of fact, the motion of the air in the neck sets into motion the mass of the air in the vessel, but these two motions have different velocities and displacements. In the neck the displacement of the particles of air is considerably greater than in the container. Thus we ignored the motion of the air in the container, and took account only of its compression. For the air in the neck, however, we ignored the energy of its compression and took account only of the kinetic energy of its motion.
To derive the differential equation for a physical pendulum, we ignored the mass of the string on which it hangs. To derive equation (14) for electric oscillations in a circuit, we ignored the self-inductance of the wiring and the resistance of the coils. In general, to obtain a differential equation for any physical process, we must always ignore certain factors and idealize others.

For physical investigations we are especially interested in those differential equations whose solutions do not change much under arbitrarily small changes (in some sense or another) of the equations themselves. Such differential equations are called “insensitive.” These equations deserve particularly complete study.
It should be stated that in physical investigations not only are the differential equations that describe the laws of change of the physical quantities themselves inexactly defined, but even the number of these quantities is defined only approximately. Strictly speaking, there are no such things as rigid bodies. So to study the oscillations of a pendulum, we ought to take into account the deformation of the string from which it hangs and the deformation of the rigid body itself, which we approximated by taking it as a material point. In exactly the same way, to study the oscillations of a load attached to springs, we ought to consider the masses of the separate coils of the springs.

But in these examples it is easy to show that the character of the motion of the different particles, which make up the pendulum and its load together with the springs, has little influence on the character of the oscillation. If we wished to take this influence into account, the problem would become so complicated that we would be unable to solve it to any suitable approximation. Our solution would then bear no closer relation to physical reality than the solution given in §1 without consideration of these influences. Intelligent idealisation of a problem is always unavoidable.

To describe a process, it is necessary to take into account the essential features of the process but by no means to consider every feature without exception. This would not only complicate the problem a great deal but in most cases would result in the impossibility of calculating a solution.

The fundamental problem of physics or mechanics, in the investigation of any phenomenon, is to find the smallest number of quantities, which with sufficient exactness describe the state of the phenomenon at any given moment, and then to set up the simplest differential equations that are good descriptions of the laws governing the changes in these quantities. This problem is often very difficult. Which features are the essential ones and which are non-essential is a question that in the final analysis can be decided only by long experience. Only by comparing the answers provided by an idealized argument with the results of experiment can we judge whether the idealization was a valid one.

The mathematical problem of the possibility of decreasing the number of quantities may be formulated in one of the simplest and most characteristic cases, as follows.
Suppose that to begin with we characterize the state of a physical system at time t by the two magnitudes x1(t) and x2(t). Let the differential equations expressing their rates of change have the form (28): dx1/dt = ƒ1(t, x1, x2), ε·dx2/dt = ƒ2(t, x1, x2). In the second equation the coefficient of the derivative is a small constant parameter ε. If we put ε = 0, the second equation will cease to be a differential equation. It then takes the form: ƒ2(t, x1, x2) = 0

From this equation we define x2 as a function of t and x1, and we substitute it into the first equation. We then have the differential equation dx1/dt = F(t, x1) for the single variable x1. In this way the number of unknown quantities is reduced to one. We now ask under what conditions the error introduced by taking ε = 0 will be small. Of course, it may happen that as ε → 0 the value dx2/dt grows beyond all bounds, so that the right side of the second of equations (28) does not tend to 0’ as ε → 0.
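A hedged numerical sketch of this reduction: the right-hand sides below are invented for illustration (ƒ1 = −x2, ƒ2 = x1 − x2), chosen so that setting ε = 0 gives x2 = x1 and the reduced equation dx1/dt = −x1:

```python
import math

# Full system dx1/dt = -x2, eps * dx2/dt = x1 - x2, integrated by Euler's
# method with a step small enough to resolve the fast variable x2.
def full_system(eps, t_end=1.0, h=1e-5):
    x1, x2, t = 1.0, 1.0, 0.0          # start on the reduced manifold x2 = x1
    while t < t_end - 1e-12:
        x1, x2 = x1 + h * (-x2), x2 + h * (x1 - x2) / eps
        t += h
    return x1

reduced = math.exp(-1.0)               # solution of the reduced dx1/dt = -x1 at t = 1
print(full_system(1e-3), reduced)      # close for small eps
```

For ε = 10⁻³ the full system's x1(1) differs from the reduced value e⁻¹ by roughly O(ε), which is the sense in which the error of taking ε = 0 is small here.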

Generalized Solutions

The range of problems in which a physical process is described by continuous, differentiable functions satisfying differential equations may be extended in an essential way by introducing into the discussion discontinuous solutions of these equations.

In a number of cases it is clear from the beginning that the problem under consideration cannot have solutions that are twice continuously differentiable; in other words, from the point of view of the classical statement of the problem given in the preceding section, such a problem has no solution. Nevertheless the corresponding physical process does occur, although we cannot find functions describing it in the preassigned class of twice-differentiable functions. Let us consider some simple examples.

If a string consists of two pieces of different density, then in the equation:

∂²u/∂t² = a² ∂²u/∂x²     (24)

the coefficient a will be equal to a different constant on each of the corresponding pieces, and so equation (24) will not, in general, have classical (twice continuously differentiable) solutions.

Let the coefficient a be a constant, but in the initial position let the string have the form of a broken line given by the equation u|t=0 = ϕ(x). At the vertex of the broken line, the function ϕ(x) obviously cannot have a first derivative. It may be shown that there exists no classical solution of equation (24) satisfying the initial conditions:

u|t=0 = ϕ(x),   ut|t=0 = 0

(here and in what follows ut denotes ∂u/∂t).

If a sharp blow is given to any small piece of the string, the resulting oscillations are described by the equation:

∂²u/∂t² = a² ∂²u/∂x² + ƒ(x, t)

where f(x, t) corresponds to the effect produced and is a discontinuous function, differing from 0 only on the small piece of the string and during a short interval of time. Such an equation also, as can be easily established, cannot have classical solutions.

These examples show that requiring continuous derivatives for the desired solution strongly restricts the range of the problems we can solve. The search for a wider range of solvable problems proceeded first of all in the direction of allowing discontinuities of the first kind in the derivatives of highest order, for the functions serving as solutions to the problems, where these functions must satisfy the equations except at the points of discontinuity. It turns out that the solutions of an equation of the type Δu = 0 or ∂u/∂t − Δu = 0 cannot have such (so-called weak) discontinuities inside the domain of definition.

Solutions of the wave equation can have weak discontinuities in the space variables x, y, z, and in t only on surfaces of a special form, which are called characteristic surfaces. If a solution u(x, y, z, t) of the wave equation is considered as a function defining, for t = t1, a scalar field in the x, y, z space at the instant t1, then the surfaces of discontinuity for the second derivatives of u(x, y, z, t) will travel through the (x, y, z) space with a velocity equal to the square root of the coefficient of the Laplacian in the wave equation.

The second example for the string shows that it is also necessary to consider solutions in which there may be discontinuous first derivatives; and in the case of sound and light waves, we must even consider solutions that themselves have discontinuities.

The first question that comes up in investigating the introduction of discontinuous solutions consists in making clear exactly which discontinuous functions can be considered as physically admissible solutions of an equation or of the corresponding physical problem. We might, for example, assume that an arbitrary piecewise constant function is a solution of the Laplace equation or the wave equation, since it satisfies the equation outside of the lines of discontinuity.

In order to clarify this question, the first thing that must be guaranteed is that in the wider class of functions, to which the admissible solutions must belong, we must have a uniqueness theorem. It is perfectly clear that if, for example, we allow arbitrary piecewise smooth functions, then this requirement will not be satisfied.

Historically, the first principle for selection of admissible functions was that they should be the limits (in some sense or other) of classical solutions of the same equation. Thus, in example 2, a solution of equation (24) corresponding to the function ϕ(x), which does not have a derivative at an angular point, may be found as the uniform limit of classical solutions un(x, t) of the same equation corresponding to the initial conditions un|t=0 = ϕn(x), unt|t=0 = 0, where the ϕn(x) are twice continuously differentiable functions converging uniformly to ϕ(x) for n → ∞.

In what follows, instead of this principle we will adopt the following: An admissible solution u must satisfy, instead of the equation Lu = f, an integral identity containing an arbitrary function Ф.

This identity is found as follows: We multiply both sides of the equation Lu = f by an arbitrary function Ф, which has continuous derivatives with respect to all its arguments of orders up through the order of the equation and vanishes outside of the finite domain D in which the equation is defined. The equation thus found is integrated over D and then transformed by integration by parts so that it does not contain any derivatives of u. As a result we get the identity desired. For equation (24), for example, it has the form:

∫∫[D] [u(Фtt − a²Фxx) − ƒФ] dx dt = 0.

For equations with constant coefficients these two principles for the selection of admissible (or as they are now usually called, generalized) solutions, are equivalent to each other. But for equations with variable coefficients, the first principle may turn out to be inapplicable, since these equations may in general have no classical solutions (cf. example 1). The second of these principles provides the possibility of selecting generalized solutions with very broad assumptions on the differentiability properties of the coefficients of the equations. It is true that this principle seems at first sight to be overly formal and to have a purely mathematical character, which does not directly indicate how the problems ought to be formulated in a manner similar to the classical problems.
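The idea behind the integral identity can be checked in one dimension (the functions here are illustrative choices, not from the text): u(x) = |x| has no classical derivative at 0, yet the integrated-by-parts identity holds with sign(x) in the role of the derivative.

```python
import math

# Generalized (weak) derivative in one dimension: u(x) = |x| is not
# differentiable at 0, but for every smooth test function Φ vanishing
# at the ends of [-1, 1] the integration-by-parts identity
#   ∫ |x| Φ'(x) dx = -∫ sign(x) Φ(x) dx
# holds; this is the sense in which sign(x) is the generalized
# derivative of |x|.
def phi(x):                      # test function, zero at x = ±1
    return (1 - x * x) ** 2 * (x + 2)

def dphi(x):
    return -4 * x * (1 - x * x) * (x + 2) + (1 - x * x) ** 2

def quad(f, a, b, n=20000):      # midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lhs = quad(lambda x: abs(x) * dphi(x), -1, 1)
rhs = -quad(lambda x: math.copysign(1.0, x) * phi(x), -1, 1)
print(lhs, rhs)  # both ≈ -0.3333
```

The identity never evaluates a derivative of |x| at the corner; the derivatives are shifted onto the smooth test function, which is exactly how the second selection principle sidesteps points of discontinuity.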

In order that a larger number of problems may be solvable, we must seek the solutions among functions belonging to the widest possible class of functions for which uniqueness theorems still hold. Frequently such a class is dictated by the physical nature of the problem. Thus, in quantum mechanics it is not the state function ψ(x), defined as a solution of the Schrödinger equation, that has physical meaning but rather the integrals av = ∫E ψ(x) ψv(x) dx, where the ψv are certain given functions, themselves of integrable square. Thus the solution ψ is to be sought not among the twice continuously differentiable functions but among those with integrable square. In the problems of quantum electrodynamics, it is still an open question which classes of functions are the ones in which we ought to seek solutions for the equations considered in that theory.

Progress in mathematical physics during the last thirty years has been closely connected with this new formulation of the problems and with the creation of the mathematical apparatus necessary for their solution.

Particularly convenient methods of finding generalized solutions in one or another of these classes of functions are: the method of finite differences, the direct methods in the calculus of variations and functional-operator methods. These latter methods basically depend on a study of transformations generated by these problems. Here we will explain the basic ideas of the direct methods of the calculus of variations.

Let us consider the problem of defining the position of a uniformly stretched membrane with fixed boundary. From the principle of minimum potential energy, in a state of stable equilibrium the function u(x, y) must give the least value of the integral:

J(u) = ∫∫ (ux² + uy²) dx dy  (taken over the domain bounded by S)

in comparison with all other continuously differentiable functions υ(x, y) satisfying the same boundary condition υ|S = ϕ as the function u does. With some restrictions on ϕ and on the boundary S it can be shown that such a minimum exists and is attained by a harmonic function, so that the desired function u is a solution of the Dirichlet problem Δu = 0, u|S = ϕ. The converse is also true: the solution of the Dirichlet problem gives a minimum to the integral J with respect to all υ satisfying the boundary condition.

The proof of the existence of the function u, for which J attains its minimum, and its computation to any desired degree of accuracy may be carried out, for example, in the following manner (Ritz method). We choose an infinite family of twice continuously differentiable functions {υn(x, y)}, n = 0, 1, 2, …, equal to 0 on the boundary for n > 0 and equal to ϕ for n = 0. We consider J for functions of the form:

υ = υ0 + C1υ1 + C2υ2 + ⋯ + Cnυn

where n is fixed and the Ck are arbitrary numbers. Then J(υ) will be a polynomial of second degree in the n independent variables C1, C2, …, Cn. We determine the Ck from the condition that this polynomial should assume its minimum. This leads to a system of n linear ¬Algebraic equations in n unknowns, the determinant of which is different from 0. Thus the numbers Ck are uniquely defined. We denote the corresponding υ by υn(x, y). It can be shown that if the system {υn} satisfies a certain condition of “completeness” the functions υn will converge, as n → ∞, to a function which will be the desired solution of the problem.
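A one-dimensional analogue of the Ritz method can be run in a few lines (the functional below, J(v) = ∫(½v′² − v)dx over [0, 1] with v(0) = v(1) = 0, is an illustrative stand-in for the membrane integral; its Euler equation is −v″ = 1 with minimizer v = x(1 − x)/2). With the orthogonal basis sin(kπx) the linear system for the Ck becomes diagonal.

```python
import math

# Ritz method, 1D analogue (illustrative, not the membrane problem itself):
# minimize J(v) = ∫₀¹ (½ v'² - v) dx over v(0) = v(1) = 0.
# With v = Σ C_k sin(kπx), J is a quadratic polynomial in the C_k, and
# dJ/dC_k = 0 gives the (here diagonal) linear system
#   C_k · k²π²/2 = b_k,   b_k = ∫₀¹ sin(kπx) dx = (1 - (-1)^k)/(kπ).
n = 25
C = [2 * ((1 - (-1) ** k) / (k * math.pi)) / (k * math.pi) ** 2
     for k in range(1, n + 1)]

def v(x):
    return sum(c * math.sin((k + 1) * math.pi * x) for k, c in enumerate(C))

# Exact minimizer of J is v(x) = x(1 - x)/2, so v(0.5) should be 0.125.
print(v(0.5), 0.125)
```

With a non-orthogonal basis the same recipe produces a full n×n linear system, which is the general situation described in the text.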

In conclusion, we note that in this chapter we have given a description of only the simplest linear problem of mechanics and have ignored many further questions, still far from completely worked out, which are connected with more general partial differential equations.

Methods of Constructing Solutions

On the possibility of decomposing any solution into simpler solutions. Solutions of the problems of mathematical physics formulated previously may be derived by various devices, which differ for different specific problems. But at the basis of these methods there is one general idea. As we have seen, all the equations of mathematical physics are, for small values of the unknown functions, linear with respect to the functions and their derivatives. The boundary conditions and initial conditions are also linear.

If we form the difference between any two solutions of the same equation, this difference will also be a solution of the equation with the right-hand terms equal to 0. Such an equation is called the corresponding homogeneous equation. For example, for the Poisson equation Δu = − 4πρ, the corresponding homogeneous equation is the Laplace equation Δu = 0.

If two solutions of the same equation also satisfy the same boundary conditions, then their difference will satisfy the corresponding homogeneous condition: The values of the corresponding expression on the boundary will be equal to 0.

Hence the entire manifold of the solutions of such an equation, for given boundary conditions, may be found by taking any particular solution that satisfies the given nonhomogeneous condition together with all possible solutions of the homogeneous equation satisfying homogeneous boundary conditions (but not, in general, satisfying the initial conditions).

Solutions of homogeneous equations, satisfying homogeneous boundary conditions may be added, or multiplied by constants, without ceasing to be solutions.

If a solution of a homogeneous equation with homogeneous conditions is a function of some parameter, then integrating with respect to this parameter will also give us such a solution. These facts form the basis of the most important method of solving linear problems of all kinds for the equations of mathematical physics, the method of superposition.

The solution of the problem is sought in the form:

u = u0 + ∑ uk

where u0 is a particular solution of the equation satisfying the boundary conditions but not satisfying the initial conditions, and the uk are solutions of the corresponding homogeneous equation satisfying the corresponding homogeneous boundary conditions. If the equation and the boundary conditions were originally homogeneous, then the solution of the problem may be sought in the form: u = ∑ uk.

In order to be able to satisfy arbitrary initial conditions by the choice of particular solutions uk of the homogeneous equation, we must have available a sufficiently large arsenal of such solutions.
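A minimal sketch of such a superposition, under illustrative assumptions (string of length 1, wave speed a = 1, broken-line initial shape ϕ(x) = min(x, 1 − x) as in example 2):

```python
import math

# Superposition for a string with fixed ends (illustrative units:
# length 1, wave speed a = 1).  Each particular solution
#   u_k(x, t) = sin(kπx) cos(kπt)
# satisfies u_tt = u_xx and the homogeneous boundary conditions, and the
# sum u = Σ b_k u_k matches the broken-line initial shape
# φ(x) = min(x, 1-x) when the b_k are its Fourier sine coefficients.
def phi(x):
    return min(x, 1 - x)

def quad(f, a, b, n=2000):        # midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

K = 40
b = [2 * quad(lambda x, k=k: phi(x) * math.sin(k * math.pi * x), 0, 1)
     for k in range(1, K + 1)]

def u(x, t):
    return sum(bk * math.sin(k * math.pi * x) * math.cos(k * math.pi * t)
               for k, bk in enumerate(b, start=1))

print(u(0.25, 0.0), phi(0.25))  # initial shape reproduced, ≈ 0.25
```

This is precisely the "sufficiently large arsenal" at work: enough particular solutions uk are available that their combination can match an arbitrary (here even non-smooth) initial condition.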

The solutions obtained by using the equations of mathematical physics for these or other problems of natural science give us a mathematical description of the expected course or the expected character of the physical events described by these equations.

Since the construction of a model is carried out by means of the equations of mathematical physics, we are forced to ignore, in our abstractions, many aspects of these events, to reject certain aspects as nonessential and to select others as basic, from which it follows that the results we obtain are not absolutely true. They are absolutely true only for that scheme or model that we have considered, but they must always be compared with experiment, if we are to be sure that our model of the event is close to the event itself and represents it with a sufficient degree of exactness.

The ultimate criterion of the truth of the results is thus practical experience only. In the final analysis, there is just one criterion, namely practical experience, although experience can only be properly understood in the light of a profound and well-developed theory.

If we consider the vibrating string of a musical instrument, we can understand how it produces its tones only if we are acquainted with the laws for superposition of characteristic oscillations. The relations that hold among the frequencies can be understood only if we investigate how these frequencies are determined by the material, by the tension in the string, and by the manner of fixing the ends. In this case the theory not only provides a method of calculating any desired numerical quantities but also indicates just which of these quantities are of fundamental importance, exactly how the physical process occurs, and what should be observed in it.

In this way a domain of science, namely mathematical physics, not only grew out of the requirements of practice but in turn exercised its own influence on that practice and pointed out paths for further progress.

Mathematical physics is very closely connected with other branches of mathematical analysis, but we cannot discuss these connections here, since they would lead us too far afield.

CALCULUS OF VARIATIONS

The calculus of variations studies efficient solutions to paths of st-ates/ages of a physical/biological T.œ.

Examples of variational problems. The curve of fastest descent.

The problem of the brachistochrone, or the curve of fastest descent, was historically the first problem in the development of the calculus of variations. Its interest remains in that it is the simplest combination of a space and a time dimension with a curved form, faster than a lineal one, showing the least-time principle – that time, hence function and motion, objectively dominate space, form and mind:

Among all curves connecting the points M1 and M2, it is required to find that one along which a mathematical point, moving under the force of gravity from M1, with no initial velocity, arrives at the point M2 in the least time.
To solve this problem we must consider all possible curves joining M1 and M2. If we choose a definite curve l, then to it will correspond some definite value T of the time taken for the descent of a material point along it. The time T will depend on the choice of l, and of all curves joining M1 and M2 we must choose the one which corresponds to the least value of T.

The problem of the brachistochrone may be expressed in the following way.
We draw a vertical plane through the points M1 and M2. The curve of fastest descent must obviously lie in it, so that we may restrict ourselves to such curves. We take the point M1 as the origin, the axis Ox horizontal, and the axis Oy vertical and directed downward (figure 1). The coordinates of the point M1, will be (0, 0); the coordinates of the point M2 we will call (x2, y2). Let us consider an arbitrary curve described by the equation:

y = ƒ(x), 0 ≤ x ≤ x2     (1)

where f is a continuously differentiable function. Since the curve passes through M1 and M2, the function f at the ends of the segment [0, x2] must satisfy the conditions:

ƒ(0) = 0,  ƒ(x2) = y2.     (2)

If we take an arbitrary point M(x, y) on the curve, then the velocity υ of a material point at this point of the curve will be connected with the y-coordinate of the point by the well-known physical relation: ½υ² = gy, whence υ = √(2gy).

The time necessary for a material point to travel along an element ds of arc of the curve has the value:

dt = ds/υ = (√(1 + y′²)/√(2gy)) dx

and thus the total time of the descent of the point along the curve from M1 to M2 is equal to:

T = ∫[0, x2] √((1 + y′²)/(2gy)) dx.     (3)

Finding the brachistochrone is equivalent to the solution of the following minimal problem: Among all possible functions (1) that satisfy conditions (2), find that one which corresponds to the least value of the integral (3).
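As a numeric check of the least-time principle (the endpoint M2 = (π/2, 1) and g = 9.8 are illustrative choices; the curved path used is the cycloid arc through M1 and M2):

```python
import math

# Descent times from M1 = (0,0) to M2 = (π/2, 1), with g = 9.8.
g = 9.8
x2, y2 = math.pi / 2, 1.0

# Straight line: constant acceleration g·y2/L along the chord of length
# L, starting from rest, so L = ½ (g y2 / L) T² gives T = L √(2/(g y2)).
L = math.hypot(x2, y2)
t_line = L * math.sqrt(2 / (g * y2))

# Cycloid x = (k/2)(u - sin u), y = (k/2)(1 - cos u) with k = 1 passes
# through M2 at u = π.  Along it, ds = k sin(u/2) du and
# v = √(2gy) = √(2gk) sin(u/2), so dt = √(k/(2g)) du and T = π √(k/(2g)).
k = 1.0
t_cycloid = math.pi * math.sqrt(k / (2 * g))

print(t_cycloid, t_line)  # ≈ 0.710 < ≈ 0.841
```

The curved path wins even though it is longer: it trades extra arc length for higher speed early in the descent.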
2. The surface of revolution of the least area. Among the curves joining two points of a plane, it is required to find that one whose arc, by rotation around the axis Ox, generates the surface with the least area.
We denote the given points by M1(x1, y1) and M2(x2, y2) and consider an arbitrary curve given by the equation: y = ƒ(x).     (4)

If the curve passes through M1 and M2, the function f will satisfy the conditions: ƒ(x1) = y1,  ƒ(x2) = y2.     (5)
When rotated around the axis Ox this curve describes a surface with area numerically equal to the value of the integral:

S = 2π ∫[x1, x2] y √(1 + y′²) dx.     (6)

This value depends on the choice of the curve, or equivalently of the function y = f(x). Among all functions (4) satisfying condition (5) we must find that function which gives the least value to the integral (6).

Uniform deformation of a membrane.

By a membrane we usually mean an elastic surface that is plane in the state of rest, bends freely, and does work only against extension. We assume that the potential energy of a deformed membrane is proportional to the increase in the area of its surface.
In the state of rest let the membrane occupy a domain B of the Oxy plane (figure 2). We deform the boundary of the membrane in a direction perpendicular to Oxy and denote by ϕ(M) the displacement of the point M of the boundary. Then the interior of the membrane is also deformed, and we are required to find the position of equilibrium of the membrane for a given deformation of its boundary:

With a great degree of accuracy we may assume that all points of the membrane are displaced perpendicularly to the plane Oxy. We denote by u(x, y) the displacement of the point (x, y). The area of the membrane in its displaced position will be:

S = ∫∫[B] √(1 + ux² + uy²) dx dy.

If the deformations of the elements of the membrane are so small that we can legitimately ignore higher powers of ux and uy, this expression for the area may be replaced by a simpler one:

S ≈ ∫∫[B] [1 + ½(ux² + uy²)] dx dy.

The change in the area of the membrane is equal to:

ΔS = ½ ∫∫[B] (ux² + uy²) dx dy,

so that the potential energy of the deformation will have the value:

U = (μ/2) ∫∫[B] (ux² + uy²) dx dy,     (7)

where μ is a constant depending on the elastic properties of the membrane.
Since the displacement of the points on the edge of the membrane is assumed to be given, the function u(x, y) will satisfy the condition:

u|l = ϕ(M)  on the boundary l of the domain B.     (8)
In the position of equilibrium the potential energy of the deformation must have the smallest possible value, so that the function u(x, y), describing the displacement of the points of the membrane, is to be found by solving the following mathematical problem: Among all functions u(x, y) that are continuously differentiable on the domain B and satisfy condition (8) on the boundary, find the one which gives the least value to the integral (7).

Extreme values of functionals and the calculus of variations.

These examples allow us to form some impression of the kind of problems considered, but to define exactly the position of the calculus of variations in mathematics, we must become acquainted with certain new concepts. We recall that one of the basic concepts of mathematical analysis is that of a function. In the simplest case the concept of functional dependence may be described as follows. Let M be any set of real numbers. If to every number x of the set M there corresponds a number y, we say that there is defined on the set M a function y = f(x). The set M is often called the domain of definition of the function.
The concept of a functional is a direct and natural generalization of the concept of a function and includes it as a special case.
Let M be a set of objects of any kind. The nature of these objects is immaterial at this time. They may be numbers, points of a space, curves, functions, surfaces, states or even motions of a mechanical system. For brevity we will call them elements of the set M and denote them by the letter x.
If to every element x of the set M there corresponds a number y, we say that there is defined on the set M a functional y = F(x).
If the set M is a set of numbers x, the functional y = F(x) will be a function of one argument. When M is a set of pairs of numbers (x1, x2) or a set of points of a plane, the functional will be a function y = F(x1, x2) of two arguments, and so forth.

For the functional y = F(x), we state the following problem:
Among all elements x of M find that element for which the functional y = F(x) has the smallest value.
The problem of the maximum of the functional is formulated in the same way.
We note that if we change the sign in the functional F(x) and consider the functional −F(x), the maximum (minimum) of F(x) becomes the minimum (maximum) of −F(x). So there is no need to study both maxima and minima; in what follows we will deal chiefly with minima of functionals.
In the problem of the curve of fastest descent, the functional whose minimum we seek will be the integral (3), the time of descent of a material point along a curve. This functional will be defined on all possible functions (1), satisfying condition (2).
In the problem of the position of equilibrium of a membrane, the functional is the potential energy (7) of the deformed membrane, and we must find its minimum on the set of functions u(x, y) satisfying the boundary condition (8).
Every functional is defined by two factors: the set M of elements x on which it is given and the law by which every element x corresponds to a number, the value of the functional. The methods of seeking the least and greatest values of a functional will certainly depend on the properties of the set M.
The calculus of variations is a particular chapter in the theory of functionals. In it we consider functionals given on a set of functions, and our problem consists of the construction of a theory of extreme values for such functionals.
This branch of mathematics became particularly important after the discovery of its connection with many situations in physics and mechanics. The reason for this connection may be seen as follows. As will be made clear later, it is necessary, in order that a function provide an extreme value for a functional, that it satisfy a certain differential equation. On the other hand, as was mentioned in the chapters describing differential equations, the quantitative laws of mechanics and physics are often written in the form of differential equations. As it turned out, many equations of this type also occurred among the differential equations of the calculus of variations. So it became possible to consider the equations of mechanics and physics as extremal conditions for suitable functionals and to state the laws of physics in the form of requiring an extreme value, in particular a minimum, for certain quantities. New points of view could thus be introduced into mechanics and physics, since certain laws could be replaced by equivalent statements in terms of “minimal principles.” This in turn opened up a new method of solving physical problems, either exactly or approximately, by seeking the minima of corresponding functionals.
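As a minimal illustration (the curves are chosen for the example, not taken from the text), arc length over [0, 1] is a functional: it assigns one number to each differentiable curve joining (0, 0) and (1, 1).

```python
import math

# A functional assigns a number to each function of a set M.  Arc length
# over [0, 1] is a concrete example.  Here df is the derivative of the
# curve y = f(x).
def arc_length(df, n=20000):
    # L(f) = ∫₀¹ √(1 + f'(x)²) dx, midpoint rule
    h = 1.0 / n
    return sum(math.sqrt(1 + df((i + 0.5) * h) ** 2) for i in range(n)) * h

straight = arc_length(lambda x: 1.0)      # f(x) = x, joining (0,0), (1,1)
parabola = arc_length(lambda x: 2 * x)    # f(x) = x², same endpoints

print(straight, parabola)  # ≈ 1.414 < ≈ 1.479
```

The argument of the functional is the whole function, not a number; minimizing over this argument (here, finding the shortest curve) is exactly the kind of problem the calculus of variations addresses.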

The Differential Equations of the Calculus of Variations

The Euler differential equation.

The reader will recall that a necessary condition for the existence of an extreme value of a differentiable function f at a point x is that the derivative f′ be equal to 0 at this point: f′(x) = 0; or what amounts to the same thing, that the differential of the function be equal to 0 here: df = f′(x) dx = 0.
Our immediate goal will be to find an analogue of this condition in the calculus of variations, that is to say, to set up a necessary condition that a function must satisfy in order to provide an extreme value for a functional.
We will show that such a function must satisfy a certain differential equation. The form of the equation will depend on the kind of functional under consideration. We begin with the so-called simplest integral of the calculus of variations, by which we mean a functional with the following integral representation:

I(y) = ∫[x1, x2] F(x, y, y′) dx.     (9)

The function F, occurring under the integral sign, depends on three arguments (x, y, y′). We will assume it is defined and is twice continuously differentiable with respect to the argument y′ for all values of this argument, and with respect to the arguments x and y in some domain B of the Oxy plane. Below it is assumed that we always remain in the interior of this domain.

It is clear that y is a function of x: y = y(x), continuously differentiable on the segment [x1, x2], and that y′ is its derivative.
Geometrically the function y(x) may be represented on the Oxy plane by a curve l over the interval [x1, x2]:

y = y(x),  x1 ≤ x ≤ x2.     (10)

The integral (9) is a generalization of the integrals (3) and (6), which we encountered in the problem of the curve of fastest descent and the surface of revolution of least area. Its value depends on the choice of the function y(x), or in other words of the curve l, and the problem of its minimum value is to be interpreted as follows:
Given some set M of functions (10) (curves l); among these we must find that function (curve l) for which the integral I(y) has the least value.
We must first of all define exactly the set of functions M for which we will consider the value of the integral (9). In the calculus of variations the functions of this set are usually called admissible for comparison. We consider the problem with fixed boundary values. The set of admissible functions is defined here by the following two requirements:
1. y(x) is continuously differentiable on the segment [x1, x2];
2. At the ends of the segment y(x) has values given in advance: y(x1) = y1,  y(x2) = y2.     (11)

Otherwise the function y(x) may be completely arbitrary. In the language of geometry, we are considering all possible smooth curves over the interval [x1, x2], which pass through the two points A(x1, y1) and B(x2, y2) and can be represented by the equation (10). The function giving the minimum of the integral will be assumed to exist and we will call it y(x).
The following simple and ingenious arguments, which can often be applied in the calculus of variations, lead to a particularly simple form of the necessary condition which y(x) must satisfy. In essence they allow us to reduce the problem of the minimum of the integral (9) to the problem of the minimum of a function.
We consider the family of functions depending on a numerical parameter α:

ȳ(x) = y(x) + αη(x).

In order that ȳ(x) be an admissible function for arbitrary α, we must assume that η(x) is continuously differentiable and vanishes at the ends of the interval [x1, x2]:

η(x1) = 0,  η(x2) = 0.

The integral (9) computed for ȳ will be a function of the parameter α: Φ(α) = I(y + αη). Since y(x) gives a minimum to the value of the integral, the function Φ(α) must have a minimum for α = 0, so that its derivative at this point must vanish:

Φ′(0) = ∫[x1, x2] [Fy η + Fy′ η′] dx = 0.     (14)

This last equation must be satisfied for every continuously differentiable function η(x) which vanishes at the ends of the segment [x1, x2]. In order to obtain the result which follows from this, it is convenient to transform the second term in condition (14) by integration by parts; since η vanishes at the ends:

∫[x1, x2] Fy′ η′ dx = − ∫[x1, x2] (d/dx Fy′) η dx,

so that condition (14) becomes:

∫[x1, x2] [Fy − d/dx Fy′] η dx = 0.     (15)

It may be shown that the following simple lemma holds.
Let the following two conditions be fulfilled:
1. The function f(x) is continuous on the interval [a, b];
2. The function η(x) is continuously differentiable on the interval [a, b] and vanishes at the ends of this interval.

Then ƒ(x) ≡ 0 on [a, b] if for every such function η(x) the following integral is equal to 0:

∫[a, b] ƒ(x) η(x) dx = 0.

For let us assume that at some point c the function f is different from 0, and show that then, in contradiction to the condition of the lemma, a function η(x) necessarily exists for which:

∫[a, b] ƒ(x) η(x) dx ≠ 0.

Since f(c) ≠ 0 and f is continuous, there must exist a neighborhood [α, β] of c in which f will be everywhere different from 0 and thus will have a constant sign throughout:

We can always construct a function η(x) which is continuously differentiable on [a, b], positive on [α, β], and equal to 0 outside of [α, β] (figure).
Such a function η(x), for example, is defined by the equations:

η(x) = (x − α)²(β − x)²  for α ≤ x ≤ β;   η(x) = 0  elsewhere on [a, b].
Indeed, for this η we have ∫[a, b] ƒη dx = ∫[α, β] ƒη dx, and the latter of these integrals cannot be equal to 0 since, in the interior of the interval of integration, the product fη is different from 0 and never changes its sign.
Since equation (15) must be satisfied for every η(x) that is continuously differentiable and vanishes at the ends of the segment [x1, x2], we may assert, on the basis of the lemma, that this can occur only in the case:

Fy − d/dx Fy′ = 0.     (17)

This equation is a differential equation of the second order with respect to the function y. It is called Euler’s equation.
We may state the following conclusion.
If a function y(x) minimizes the integral I(y), then it must satisfy Euler’s differential equation (17). In the calculus of variations, this last statement has a meaning completely analogous to the necessary condition df = 0 in the theory of extreme values of functions. It allows us immediately to exclude all admissible functions that do not satisfy this condition, since for them the integral cannot have a minimum, so that the set of admissible functions we need to study is very sharply reduced.
Solutions of equation (17) have the property that for them the derivative [(d/dα)I(y + αη)]α=0 vanishes for arbitrary η(x), so that they are analogous in meaning to the stationary points of a function. Thus it is often said that for solutions of (17) the integral I(y) has a stationary value.
In our problem with fixed boundary values, we do not need to find all solutions of the Euler equation but only those which take on the values y1, y2, at the points x1, x2.
We turn our attention to the fact that the Euler equation (17) is of the second order. Its general solution will contain two arbitrary constants: y = Φ (x, C1, C2). These must be defined so that the integral curve passes through the points A and B, so we have the two equations for finding the constants C1 and C2:   Φ (x1, C1, C2)= y1;  Φ (x2, C1, C2)=y2

In many cases this system has only one solution and then there will exist only one integral curve passing through A and B.
The search for functions giving a minimum for this integral is thus reduced to the solution of the following boundary-value problem for differential equations: On the interval [x1, x2] find those solutions of equation (17) that have the given values y1, y2 at the ends of the interval.
Frequently this last problem can be solved by using known methods in the theory of differential equations.
We emphasize again that every solution of such a boundary-value problem can provide only a suspected minimum and that it is necessary to verify whether or not it actually does give a minimum value to the integral. But in particular cases, especially in those occurring in the applications, Euler’s equation completely solves the problem of finding the minimum of the integral. Suppose we know initially that a function giving a minimum for the integral exists, and assume, moreover, that the Euler equation (17) has only one solution satisfying the boundary conditions (11). Then only one of the admissible curves can be a suspected minimum, and we may be sure, under these circumstances, that the solution found for the equation (17) indeed gives a minimum for the integral.
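A quick numerical confirmation on the simplest functional (an illustrative choice, not one of the text's examples): for F = y′², Euler's equation (17) gives y″ = 0, and any admissible perturbation of y = x increases the integral.

```python
import math

# For F(x, y, y') = y'², equation (17) reads F_y - d/dx F_{y'}
# = -2y'' = 0, so with y(0) = 0, y(1) = 1 the suspected minimizer is
# y = x.  Every admissible perturbation y = x + α sin(πx) (the
# perturbation vanishes at both ends) must then increase
# I(y) = ∫₀¹ y'² dx.
def I(dy, n=20000):                    # midpoint rule for ∫₀¹ y'(x)² dx
    h = 1.0 / n
    return sum(dy((i + 0.5) * h) ** 2 for i in range(n)) * h

base = I(lambda x: 1.0)                # y = x, exact value I = 1
for a in (0.5, 0.1, -0.3):
    pert = I(lambda x, a=a: 1.0 + a * math.pi * math.cos(math.pi * x))
    assert pert > base                 # exact value: 1 + a²π²/2

print(base)  # ≈ 1.0
```

This mirrors the logic of the section: the Euler equation singles out one suspected curve, and the direct comparison then verifies it actually minimizes the integral.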
Example. It was previously established that the problem of the curve of fastest descent may be reduced to finding the minimum of the integral:

T = (1/√(2g)) ∫ √((1 + y′²)/y) dx, the integral taken from 0 to x2.

Euler’s equation has the form:

y(1 + y′²) = k,

where k is a constant, from which, by integrating with the substitution y = (k/2)(1 − cos u), we get: x = ± (k/2)(u − sin u) + C. Since the curve must pass through the origin, it follows that we must put C = 0.

In this way we see that the brachistochrone is the cycloid: x = (k/2)(u − sin u), y = (k/2)(1 − cos u). The constant k must be found from the condition that this curve passes through the point M2(x2, y2).
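The variational conclusion can be checked numerically. The Python sketch below (with illustrative values k = 1 and g = 9.8 chosen here, not taken from the text) integrates the descent time along the cycloid by direct quadrature and compares it with a straight chute to the same endpoint; the cycloid wins.

```python
import math

g, k = 9.8, 1.0          # illustrative gravity and cycloid parameter

# Cycloid through the origin: x = (k/2)(u - sin u), y = (k/2)(1 - cos u),
# with y measured downward.
def cycloid(u):
    return (k / 2) * (u - math.sin(u)), (k / 2) * (1 - math.cos(u))

# Descent time along the cycloid up to parameter uf, by quadrature:
# dt = ds / v with v = sqrt(2 g y), from energy conservation.
def cycloid_time(uf, n=200000):
    t, (xp, yp) = 0.0, cycloid(0.0)
    for i in range(1, n + 1):
        x, y = cycloid(uf * i / n)
        ds = math.hypot(x - xp, y - yp)
        t += ds / math.sqrt(2 * g * (y + yp) / 2)
        xp, yp = x, y
    return t

uf = math.pi                       # descend to the lowest point of the arch
x2, y2 = cycloid(uf)

# Along the cycloid dt = sqrt(k/(2g)) du, so T = sqrt(k/(2g)) * uf.
T_cyc = math.sqrt(k / (2 * g)) * uf
assert abs(cycloid_time(uf) - T_cyc) < 1e-3

# Straight chute to the same endpoint: uniform acceleration g * y2 / L.
L = math.hypot(x2, y2)
T_line = math.sqrt(2 * L * L / (g * y2))
assert T_cyc < T_line              # the cycloid is faster
```

The agreement between the quadrature and the closed form sqrt(k/(2g))·u also confirms the parametrization of the cycloid.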

The Connection Between Functions of a Complex Variable and the Problems of Mathematical Physics

Connection with problems of hydrodynamics.

The Cauchy-Riemann conditions relate the problems of mathematical physics to the theory of functions of a complex variable. Let us illustrate this from the problems of hydrodynamics.

Examples of plane-parallel flow of a fluid.

We consider several examples. Let: w = Az, where A is a complex quantity. From (29) it follows that: u + iv = Ā, the conjugate of A.
Thus the linear function (30) defines the flow of a fluid with constant vector velocity. If we set: A = u0 − iv0, then, decomposing w into its real and imaginary parts, we have: φ = u0x + v0y, ψ = u0y − v0x, so that the streamlines u0y − v0x = const. will be straight lines parallel to the velocity vector (figure 7).
As a second example we consider the function: w=Az² where the constant A is real. In order to graph the flow, we first determine the streamlines. In this case: ψ (x,y) = 2 Axy and the equations of the streamlines are: xy = const.

These are hyperbolas with the coordinate axes as asymptotes (figure 8). The arrows show the direction of motion of the particles along the streamlines for A > 0. The axes Ox and Oy are also streamlines.
If the friction in the liquid is very small, we will not disturb the rest of the flow if we replace any streamline by a rigid wall, since the fluid will glide along the wall. Using this principle to construct walls along the positive coordinate axes (in figure 8 they are represented by heavy lines), we have a diagram of how the fluid flows irrotationally, in this case around a corner:
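A short numerical check of this flow (Python, with A = 1 as an illustrative value): ψ = Im(Az²) = 2Axy is indeed constant along each hyperbola xy = const., and on the positive x-axis the velocity u − iv = 2Az is purely horizontal, consistent with replacing the axis by a rigid wall.

```python
# Stream function check for the corner flow w = A z^2 (A real); here A = 1.
A = 1.0

def w(z):                 # complex potential
    return A * z * z

# psi = Im(w) = 2 A x y, so streamlines are the hyperbolas x y = const.
for c in (0.5, 1.0, 2.0):
    psi = [w(complex(x, c / x)).imag for x in (0.5, 1.0, 3.0)]
    assert max(psi) - min(psi) < 1e-12      # constant along each hyperbola
    assert abs(psi[0] - 2 * A * c) < 1e-12  # and equal to 2*A*(x y)

# Velocity: dw/dz = 2 A z = u - i v.  On the positive x-axis v = 0,
# so the axis can be replaced by a wall without disturbing the flow.
d = 2 * A * complex(2.0, 0.0)
u, v = d.real, -d.imag
assert v == 0.0 and u > 0
```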

An important example of a flow is given by the function: w = a(z + R²/z), where a and R are positive real quantities.
The stream function will be: ψ = ay(1 − R²/(x² + y²)). In particular, taking the constant equal to 0, we have either y = 0 or x² + y² = R²; thus, a circle of radius R is a streamline. If we replace the interior of this streamline by a solid body, we obtain the flow around a circular cylinder. A diagram of the streamlines of this flow is shown in figure 9. The velocity of the flow may be defined from formula (29) by: u − iv = a(1 − R²/z²). At a great distance from the cylinder we find: u ≈ a, v ≈ 0; i.e., far from the cylinder the velocity tends to a constant value and thus the flow tends to be uniform. Consequently, formula (29) defines the flow which arises from the passage around a circular cylinder of a fluid which is in uniform motion at a distance from the cylinder.
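Assuming the standard complex potential for this flow, w = a(z + R²/z) (the explicit formula is our reconstruction of an omitted equation), the two claims, that the circle |z| = R is a streamline and that the velocity tends to a constant far away, can be verified numerically:

```python
import cmath

a, R = 2.0, 1.5   # speed at infinity and cylinder radius (illustrative)

def w(z):         # assumed complex potential for flow past a cylinder
    return a * (z + R * R / z)

def velocity(z):  # dw/dz = u - i v
    d = a * (1 - R * R / (z * z))
    return d.real, -d.imag

# The circle |z| = R is a streamline: psi = Im(w) vanishes on it.
for k in range(8):
    z = R * cmath.exp(1j * (0.3 + k))
    assert abs(w(z).imag) < 1e-12

# Far from the cylinder the velocity tends to the uniform value (a, 0).
u, v = velocity(complex(500.0, 300.0))
assert abs(u - a) < 1e-4 and abs(v) < 1e-4
```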

Applications to other problems of mathematical physics.

The theory of functions of a complex variable has found wide application not only in wing theory but in many other problems of hydrodynamics.
However, the domain of application of the theory of functions is not restricted to hydrodynamics; it is much wider than that, including many other problems of mathematical physics. To illustrate, we return to the Cauchy-Riemann conditions: ∂u/∂x = ∂v/∂y; ∂u/∂y=-∂v/∂x and deduce from them an equation which is satisfied by the real part of an analytic function of a complex variable.

If the first of these equations is differentiated with respect to x, and the second with respect to y, we obtain by addition: ∂²u/∂x² + ∂²u/∂y²=0.   This equation is known as the Laplace equation. A large number of problems of physics and mechanics involve the Laplace equation. For example, if the heat in a body is in equilibrium, the temperature satisfies the Laplace equation. The study of magnetic or electrostatic fields is connected with this equation. In the investigation of the filtration of a liquid through a porous medium, we also arrive at the Laplace equation. In all these problems involving the solution of the Laplace equation the methods of the theory of functions have found wide application.
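A finite-difference illustration of this derivation in Python: for f(z) = z³, whose real part is u = x³ − 3xy², the five-point Laplacian vanishes (up to rounding) at arbitrary points, as the Laplace equation requires for the real part of any analytic function.

```python
# u = Re(z^3) = x^3 - 3 x y^2 must satisfy u_xx + u_yy = 0.
def u(x, y):
    return (complex(x, y) ** 3).real

h = 1e-4
for x, y in [(0.3, -0.7), (1.2, 0.5)]:
    # Five-point approximation of the Laplacian at (x, y).
    lap = (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
           - 4 * u(x, y)) / h**2
    assert abs(lap) < 1e-4        # the Laplace equation holds
```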
Not only the Laplace equation but also the more general equations of mathematical physics can be brought into connection with the theory of functions of a complex variable. One of the most remarkable examples is provided by planar problems in the theory of elasticity.

The Connection of Functions of a Complex Variable with Geometry

Geometric properties of differentiable functions.

As in the case of functions of a real variable, a great role is played in the theory of analytic functions of a complex variable by the geometric interpretation of these functions. Broadly speaking, the geometric properties of functions of a complex variable have not only provided a natural means of visualizing the analytic properties of the functions but have also given rise to a special set of problems. The range of problems connected with the geometric properties of functions has been called the geometric theory of functions. As we said earlier, from the geometric point of view a function of a complex variable w = f(z) is a transformation from the z-plane to the w-plane. This transformation may also be defined by two functions of two real variables: u = u (x,y);  v= v (x,y).
If we wish to study the character of the transformation in a very small neighborhood of some point, we may expand these functions into Taylor series and restrict ourselves to the leading terms of the expansion:

u − u0 ≈ (∂u/∂x)(x − x0) + (∂u/∂y)(y − y0),
v − v0 ≈ (∂v/∂x)(x − x0) + (∂v/∂y)(y − y0),

where the derivatives are taken at the point (x0, y0). Thus, in the neighborhood of a point, any transformation may be considered approximately as an affine transformation. Let us consider the properties of the transformation effected by the analytic function near the point z = x + iy. Let C be a curve issuing from the point z; on the w-plane the corresponding points trace out the curve Γ, issuing from the point w. If z′ is a neighboring point and w′ is the point corresponding to it, then for z′ → z we will have w′ → w and:

(w′ − w)/(z′ − z) → f′(z), so that |w′ − w|/|z′ − z| → |f′(z)|.   (34)

This fact may be formulated in the following manner.
The limit of the ratio of the lengths of corresponding chords in the w-plane and in the z-plane at the point z is the same for all curves issuing from the given point z, or as it is also expressed, the ratio of linear elements on the w-plane and on the z-plane at a given point does not depend on the curve issuing from z.
The quantity |f′(z)|, which characterizes the magnification of linear elements at the point z, is called the coefficient of dilation at the point z.
We now suppose that at some point z the derivative f′(z) ≠ 0, so that f′(z) has a uniquely determined argument.* Let us compute this argument, using (34): arg f′(z) = lim [arg(w′ − w) − arg(z′ − z)]. But arg(w′ − w) is the angle β′ between the chord ww′ and the real axis, and arg(z′ − z) is the angle α′ between the chord zz′ and the real axis. If we denote by α and β the corresponding angles for the tangents to the curves C and Γ at the points z and w (figure 14), then for z′ → z we have α′ → α and β′ → β, so that in the limit we get: arg f′(z) = β − α.   (36)   This equation shows that arg f′(z) is equal to the angle ϕ through which the direction of the tangent to the curve C at the point z must be turned to assume the direction of the tangent to the curve Γ at the point w. From this property arg f′(z) is called the rotation of the transformation at the point z.
From equation (36) the reader can easily derive the following propositions.
As we pass from the z-plane to the w-plane, the tangents to all curves issuing from a given point are rotated through the same angle.
If C1 and C2 are two curves issuing from the point z, and Γ1 and Γ2 are the corresponding curves from the point w, then the angle between Γ1 and Γ2 at the point w is equal to the angle between C1 and C2 at the point z.
In this manner, for the transformation effected by an analytic function, at each point where f′(z) ≠ 0, all linear elements are changed by the same ratio, and the angles between corresponding directions are not changed.
Transformations with these properties are called conformal transformations.
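Both defining properties, a single dilation coefficient for every direction and a single rotation angle for every direction, can be observed numerically. The sketch below uses f(z) = eᶻ and compares chord ratios and chord rotations for several directions issuing from one point (the point and the step size are illustrative choices of ours):

```python
import cmath

f = cmath.exp                     # an analytic function with f'(z0) != 0
z0 = complex(0.4, -0.2)
h = 1e-6                          # small step (illustrative)

ratios, turns = [], []
for t in (0.0, 0.7, 2.1):         # several directions issuing from z0
    dz = h * cmath.exp(1j * t)
    dw = f(z0 + dz) - f(z0)
    ratios.append(abs(dw) / h)            # chord-length ratio (dilation)
    turns.append(cmath.phase(dw / dz))    # chord rotation (beta - alpha)

assert max(ratios) - min(ratios) < 1e-5   # same dilation in every direction
assert max(turns) - min(turns) < 1e-5     # same rotation in every direction
# For f = exp we have f' = f, so the dilation equals |f(z0)|:
assert abs(ratios[0] - abs(f(z0))) < 1e-5
```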
From the geometric properties just proved for transformations near a point at which f′(z0) ≠ 0, it is natural to expect that in a small neighborhood of z0 the transformation will be one-to-one; i.e., not only will each point z correspond to only one point w, but also conversely each point w will correspond to only one point z. This proposition can be rigorously proved.
To show more clearly how conformal transformations are distinguished from various other types of transformations, it is useful to consider an arbitrary transformation in a small neighborhood of a point. If we take the leading terms of the Taylor expansions of the functions u and v effecting the transformation and ignore the terms of higher order, then in a small neighborhood of the point (x0, y0) our transformation will act like an affine transformation. This transformation has an inverse if its determinant does not vanish:

Δ = (∂u/∂x)(∂v/∂y) − (∂u/∂y)(∂v/∂x) ≠ 0.

If Δ = 0, then to describe the behavior of the transformation near the point (x0, y0) we must consider terms of higher order.
In case u + iv is an analytic function, we can express the derivatives with respect to y in terms of the derivatives with respect to x by using the Cauchy-Riemann conditions, from which we get:

Δ = (∂u/∂x)² + (∂v/∂x)² = |f′(z)|²;

i.e., the transformation has an inverse when f′(z0) ≠ 0. If we set f′(z0) = r(cos ϕ + i sin ϕ), then:

∂u/∂x = r cos ϕ, ∂v/∂x = r sin ϕ,

and the transformation near the point (x0, y0) will have the form:

u − u0 = r[(x − x0) cos ϕ − (y − y0) sin ϕ],
v − v0 = r[(x − x0) sin ϕ + (y − y0) cos ϕ].

These formulas show that in the case of an analytic function w = u + iv, the transformation near the point (x0, y0) consists of rotation through the angle ϕ and dilation with coefficient r. In fact, the expressions inside the brackets are the well-known formulas from analytic geometry for rotation in the plane through an angle ϕ, and multiplication by r gives the dilation.
To form an idea of the possibilities when f′(z) = 0 it is useful to consider the function: w = zⁿ (37). The derivative of this function, w′ = nzⁿ⁻¹, vanishes for z = 0. The transformation (37) is most conveniently considered by using polar coordinates or the trigonometric form of a complex number. Let: z = ρ1(cos ϕ + i sin ϕ). Using the fact that in multiplying complex numbers the moduli are multiplied and the arguments added, we get: w = ρ1ⁿ(cos nϕ + i sin nϕ). From the last formula we see that the ray ϕ = const. of the z-plane transforms into the ray θ = nϕ = const. in the w-plane. Thus an angle α between two rays in the z-plane will transform into an angle of magnitude β = nα. The transformation of the z-plane into the w-plane ceases to be one-to-one. In fact, a given point w with modulus ρ and argument θ may be obtained as the image of each of the n points with modulus ⁿ√ρ and arguments: θ/n, (θ + 2π)/n, …, (θ + 2(n − 1)π)/n. When raised to the power n, the moduli of the corresponding points will all be equal to ρ and their arguments will be equal to: θ, θ + 2π, …, θ + 2(n − 1)π, and since changing the value of the argument by a multiple of 2π does not change the geometric position of the point, all the images on the w-plane are identical.
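A small Python check of this n-to-one behavior for n = 5 (the target point is an arbitrary illustrative choice): all n preimages map back to the same w, and an angle α at the origin is magnified to nα.

```python
import cmath, math

n = 5
w = cmath.rect(2.0, 0.9)     # target point: modulus 2, argument 0.9

# The n preimages under w = z^n: modulus 2**(1/n), arguments (0.9 + 2 pi k)/n.
preimages = [cmath.rect(2.0 ** (1 / n), (0.9 + 2 * math.pi * k) / n)
             for k in range(n)]
for z in preimages:
    assert abs(z ** n - w) < 1e-12    # all n points map to the same w

# An angle alpha between rays at the origin becomes n * alpha:
alpha = 0.3
z1, z2 = cmath.rect(1.0, 0.0), cmath.rect(1.0, alpha)
assert abs(cmath.phase(z2 ** n / z1 ** n) - n * alpha) < 1e-12
```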

Conformal transformations.

If an analytic function w = f(z) takes a domain D of the z-plane into a domain Δ of the w-plane in a one-to-one manner, then we say that it effects a conformal transformation of the domain D into the domain Δ.
The great role of conformal transformations in the theory of functions and its applications is due to the following almost trivial theorem.
If ζ = F(w) is an analytic function on the domain Δ, then the composite function F[f(z)] is an analytic function on the domain D. This theorem results from the equation: Δζ/Δz = (Δζ/Δw)·(Δw/Δz).

In view of the fact that the functions ζ = F(w) and w = f(z) are analytic, we conclude that both factors on the right side have a limit, and thus at each point of the domain D the quotient Δζ/Δz has a unique limit dζ/dz. This shows that the function ζ = F[f(z)] is analytic.
The theorem just proved shows that the study of analytic functions on the domain Δ may be reduced to the study of analytic functions on the domain D. If the geometric structure of the domain D is simpler, this fact simplifies the study of the functions. The most important class of domains in which it is necessary to study analytic functions is the class of simply connected domains. This is the name given to domains whose boundary consists of one piece, as opposed to domains whose boundary falls into several pieces (for example, the domains illustrated in figures 15b and 15c).
We note that sometimes we are interested in investigating functions on a domain lying outside a curve rather than inside it. If the boundary of such a domain consists of only one piece, then the domain is also called simply connected (figure 15d).
At the foundations of the theory of conformal transformations lies the following remarkable theorem of Riemann.
For an arbitrary simply connected domain Δ, it is possible to construct an analytic function which effects a conformal transformation of the circle with unit radius and center at the origin into the given domain in such a way that the center of the circle is transformed into a given point w0 of the domain Δ, and a curve in an arbitrary direction at the center of the circle transforms into a curve with an arbitrary direction at the point w0. This theorem shows that the study of functions of a complex variable on arbitrary simply connected domains may be reduced to the study of functions defined, for example, on the unit circle.
We will now explain in general outline how these facts may be applied to problems in the theory of the wing of an airplane. Let us suppose that we wish to study the flow around a curved profile of arbitrary shape.
If we can construct a conformal transformation of the domain outside the profile to the domain outside the unit circle, then we can make use of the characteristic function for the flow around the circle to construct the characteristic function for the flow around the profile.
Let ζ be the plane of the circle, z the plane of the profile, and ζ = f(z) a function effecting the transformation of the domain outside the profile to the domain outside the circle, where:

We denote by a the point of the circle corresponding to the edge of the profile A and construct the circulatory flow past the circle with one of the streamlines leaving the circle at a (figure 16). This function will be denoted by W(ζ):

The streamlines of this flow are defined by the equation: Ψ = const.

We now consider the function: w(z) = W[ƒ(z)], and set: w = φ + iψ.

We show that w(z) is the characteristic function of the flow past the profile with a streamline leaving the profile at the point A. First of all the flow defined by the function w(z) is actually a flow past the profile.

To prove this, we must show that the contour of the profile is a streamline curve, i.e., that on the contour of the profile: ψ(x, y) = C. This follows from ψ(x, y) = Ψ(ξ, η) and the fact that the points (x, y) lying on the profile correspond to the points (ξ, η) lying on the circle, where Ψ(ξ, η) = const.

It is also simple to show that A is a stagnation point for the flow, and it may be proved that by suitable choice of velocity for the flow past the circle, we may obtain a flow past the profile with any desired velocity.
The important role played by conformal transformations in the theory of functions and their applications gave rise to many problems of finding the conformal transformation of one domain into another of a given geometric form. In a series of simple but useful cases this problem may be solved by means of elementary functions. But in the general case the elementary functions are not enough. As we saw earlier, the general theorem in the theory of conformal transformations was stated by Riemann, although he did not give a rigorous proof. In fact, a complete proof required the efforts of many great mathematicians over a period of several decades.
In close connection with the different approaches to the proof of Riemann’s theorem came approximation methods for the general construction of conformal transformations of domains. The actual construction of the conformal transformation of one domain onto another is sometimes a very difficult problem. For investigation of many of the general properties of functions, it is often not necessary to know the actual transformation of one domain onto another, but it is sufficient to exploit some of its geometric properties. This fact has led to a wide study of the geometric properties of conformal transformations. To illustrate the nature of theorems of this sort we will formulate one of them.

Let the circle of unit radius on the z-plane with center at the origin be transformed into some domain (figure 17). If we consider a completely arbitrary transformation of the circle into the domain Δ, we cannot make any statements about its behavior at the point z = 0. But for conformal transformations we have the  following remarkable theorem.
The dilation at the origin does not exceed four times the radius r of the circle with center at w0 inscribed in the domain Δ: |f′(0)| ≤ 4r.

Various questions in the theory of conformal transformations were considered in a large number of studies by Soviet mathematicians. In these works exact formulas were derived for many interesting classes of conformal transformations, methods for approximate calculation of conformal transformations were developed, and many general geometric theorems on conformal transformations were established.

Quasi-conformal transformations.

Conformal transformations are closely connected with the investigation of analytic functions, i.e., with the study of a pair of functions satisfying the Cauchy-Riemann conditions: ∂u/∂x = ∂v/∂y; ∂u/∂y=-∂v/∂x
But many problems in mathematical physics involve more general systems of differential equations, which may also be connected with transformations from one plane to another, and these transformations will have specific geometric properties in the neighborhood of points in the Oxy plane. To illustrate, we consider the following example of differential equations:

∂u/∂x = p(x, y) ∂v/∂y;  ∂u/∂y = −p(x, y) ∂v/∂x.   (38)

In this manner, from equations (38) it follows that at every point the infinitesimal ellipse that is transformed into a circle has its semiaxes completely determined by the transformation, both with respect to their direction and to the ratio of their lengths. It can be shown that this geometric property completely characterizes the system of differential equations (38); i.e., if the functions u and v effect a transformation with the given geometric property, then they satisfy this system of equations. In this way, the problem of investigating the solutions of equations (38) is equivalent to investigating transformations with the given properties.
We note, in particular, that for the Cauchy-Riemann equations this property is formulated in the following manner.
An infinitesimal circle with center at the point (x0, y0) is transformed into an infinitesimal circle with center at the point (u0, v0).
A very wide class of equations of mathematical physics may be reduced to the study of transformations with the following geometric properties.
For each point (x, y) of the argument plane, we are given the direction of the semiaxes of two ellipses and also the ratio of the lengths of these semiaxes. We wish to construct a transformation of the Oxy plane to the Ouv plane such that an infinitesimal ellipse of the first family transforms into an infinitesimal ellipse of the second family with center at the point (u, v).
The idea of studying transformations defined by systems of differential equations made it possible to extend the methods of the theory of analytic functions to a very wide class of problems. Lavrent’ev and his students developed the study of quasiconformal transformations and found a large number of applications to various problems of mathematical physics, mechanics, and geometry. It is interesting to note that the study of quasiconformal transformations has proved very fruitful in the theory of analytic functions itself. Of course, we cannot dwell here on all the various applications of the geometric method in the theory of functions of a complex variable.

The Line Integral; Cauchy’s Formula and Its Corollaries

Integrals of functions of a complex variable.

In the study of analytic functions, the concept of the integral of a function of a complex variable plays a very important role. Corresponding to the definite integral of a function of a real variable, we here deal with the integral of a function of a complex variable along a curve. We consider in the plane a curve C beginning at the point z0 and ending at the point z, and a function f(z) defined on a domain containing the curve C. We divide the curve C into small segments (figure 18) at the points: z0, z1, …, zn = z and consider the sum:

S = f(z0)(z1 − z0) + f(z1)(z2 − z1) + ⋯ + f(zn−1)(zn − zn−1).
If the function f(z) is continuous and the curve C has finite length, we can prove, just as for real functions, that as the number of points of division is increased and the distance between neighboring points decreases to 0, the sum S approaches a completely determined limit. This limit is called the integral along the curve C and is denoted by ∫c f(z) dz. We note that in this definition of the integral we have distinguished between the beginning and the end of the curve C; in other words, we have chosen a specific direction of motion on the curve C.
It is easy to prove a number of simple properties of the integral.
1. The integral of the sum of two functions is equal to the sum of the integrals of the individual functions.
2. A constant factor may be taken out from under the integral sign.
3. If the curve C is composed of the curves C1 and C2, the integral along C is equal to the sum of the integrals along C1 and C2.
4. When the direction of the path of integration is reversed, the integral changes sign.
All these properties are obvious for the approximating sums and carry over to the integral in passing to the limit.
5. If the length of the curve C is equal to L and if everywhere on C the inequality |f(z)| ≤ M holds, then: |∫c f(z) dz| ≤ ML.

Let us prove this property. It is sufficient to prove the inequality for the sum S, since then it will carry over in the limit to the integral also. For the sum we have:

|S| ≤ Σ |f(zk)|·|zk+1 − zk| ≤ M Σ |zk+1 − zk|.

But the sum in the second factor is equal to the sum of the lengths of the segments of the broken line inscribed in the curve C with vertices at the points zk. The length of the broken line, as is well known, is not greater than the length of the curve, so that: |S| ≤ ML.
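Property 5 is easy to observe numerically. For f(z) = z² on the upper half of the unit circle we have M = 1 and L = π, while the exact value of the integral is z³/3 evaluated from 1 to −1, i.e. −2/3; a midpoint-rule sketch in Python:

```python
import cmath, math

# Midpoint-rule line integral of f along the upper unit half circle, 1 to -1.
def half_circle_integral(f, n=20000):
    total = 0j
    for j in range(n):
        t = math.pi * (j + 0.5) / n
        z = cmath.exp(1j * t)
        total += f(z) * 1j * z * (math.pi / n)   # dz = i e^{it} dt
    return total

I = half_circle_integral(lambda z: z * z)
M = 1.0              # |z^2| = 1 on the unit circle
L = math.pi          # length of the half circle
assert abs(I) <= M * L               # the M L bound holds
assert abs(I - (-2 / 3)) < 1e-6      # exact value: [z^3/3] from 1 to -1
```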

We consider the integral of the simplest function f(z) = 1. Obviously in this case: S = (z1 − z0) + (z2 − z1) + ⋯ + (zn − zn−1) = z − z0, so that ∫c dz = z − z0. This result shows that for the function f(z) = 1 the value of the integral for all curves joining the points z0 and z is the same. In other words, the value of the integral depends only on the beginning and end points of the path of integration. But it is easy to show that this property does not hold for arbitrary functions of a complex variable. For example, if f(z) = x, then a simple computation shows that

where C1 and C2 are the paths of integration shown in figure 19.
We leave it to the reader to verify these equations.
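The path dependence is easy to reproduce numerically. The sketch below (the two polyline paths from 0 to 1 + i are our illustrative choice, not necessarily those of figure 19) integrates f(z) = Re z along two different paths and gets two different answers, while for the analytic f(z) = z² the two answers coincide, as the theorem of Cauchy stated next requires:

```python
# Midpoint-rule line integral of f along a polyline given by its vertices.
def line_integral(f, pts, n=2000):
    total = 0j
    for a, b in zip(pts, pts[1:]):
        step = (b - a) / n
        for i in range(n):
            total += f(a + step * (i + 0.5)) * step
    return total

re = lambda z: complex(z).real          # f(z) = x, not analytic
I1 = line_integral(re, [0, 1, 1 + 1j])  # right along the axis, then up
I2 = line_integral(re, [0, 1j, 1 + 1j]) # up, then right
assert abs(I1 - (0.5 + 1j)) < 1e-9
assert abs(I2 - 0.5) < 1e-9             # the two paths disagree

sq = lambda z: z * z                    # analytic: path independence
J1 = line_integral(sq, [0, 1, 1 + 1j])
J2 = line_integral(sq, [0, 1j, 1 + 1j])
assert abs(J1 - J2) < 1e-6
```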
A remarkable fact in the theory of analytic functions is the following theorem of Cauchy.

If f(z) is differentiable at every point of a simply connected domain D, then the integrals over all paths joining two arbitrary points of the domain z0 and z are the same.

We will not give a proof of Cauchy’s theorem here, but refer the interested reader to any course in the theory of functions of a complex variable. Let us mention here some important consequences of this theorem.
First of all, Cauchy’s theorem allows us to introduce the indefinite integral of an analytic function. For let us fix the point z0 and consider the integral along curves connecting z0 and z: F(z) = ∫ f(t) dt, taken from z0 to z. Here we may take the integral over any curve joining z0 and z, since changing the curve does not change the value of the integral, which thus depends only on z. The function F(z) is called an indefinite integral of f(z).
An indefinite integral of f(z) has a derivative equal to f(z).
In many applications it is convenient to have a slightly different formulation of Cauchy’s theorem, as follows:
If f(z) is everywhere differentiable in a simply connected domain, then the integral over any closed contour lying in this domain is equal to 0: ∮ f(z) dz = 0. This is obvious, since a closed contour has the same beginning and end, so that z0 and z may be joined by a null path, along which the integral vanishes.
By a closed contour we will understand a contour traversed in the counterclockwise direction. If the contour is traversed in the clockwise direction we will denote it by Γ.

The Cauchy integral.

On the basis of the last theorem we can prove the following fundamental formula of Cauchy, which expresses the value of a differentiable function at interior points of a closed contour C in terms of the values of the function on the contour itself:

f(z) = (1/2πi) ∮c f(ζ)/(ζ − z) dζ.

We give a proof of this formula. Let z be fixed and ζ be an independent variable. The function: φ(ζ) = f(ζ)/(ζ − z) will be continuous and differentiable at every point ζ inside the domain D, with the exception of the point ζ = z, where the denominator vanishes, a circumstance that prevents the application of Cauchy’s theorem to the function φ(ζ) on the contour C.

We consider a circle Kρ with center at the point z and radius ρ and show that:

∮c φ(ζ) dζ = ∮Kρ φ(ζ) dζ.

To this end we construct the auxiliary closed contour Γρ, consisting of the contour C, the path γρ connecting C with the circle, and the circle Kρ, taken with the opposite orientation (figure 20). The contour Γρ is indicated by arrows. Since the point ζ = z is excluded, the function φ(ζ) is differentiable everywhere inside Γρ and thus:

∮Γρ φ(ζ) dζ = 0.
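Cauchy's formula can be tested numerically: parametrize the contour as a circle and approximate the integral by a Riemann sum, which is spectrally accurate for periodic integrands. A Python sketch with f = exp and an illustrative contour:

```python
import cmath, math

def f(z):
    return cmath.exp(z)        # illustrative analytic function

# (1/2 pi i) * integral over the circle |zeta - c| = r of f(zeta)/(zeta - z).
def cauchy_value(f, z, c=0j, r=2.0, n=4000):
    total = 0j
    for k in range(n):
        zeta = c + r * cmath.exp(2j * math.pi * k / n)
        dzeta = 1j * (zeta - c) * (2 * math.pi / n)   # d zeta = i r e^{it} dt
        total += f(zeta) / (zeta - z) * dzeta
    return total / (2j * math.pi)

z = complex(0.5, -0.3)          # a point inside the contour
assert abs(cauchy_value(f, z) - f(z)) < 1e-9
```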

Expansion of differentiable functions in a power series.

We apply Cauchy’s theorem to establish two basic properties of differentiable functions of a complex variable.
Every function of a complex variable that has a first derivative in a domain D has derivatives of all orders.
In fact, inside a closed contour our function may be expressed by the Cauchy integral formula:

f(z) = (1/2πi) ∮c f(ζ)/(ζ − z) dζ.

The function of z under the sign of integration is a differentiable function; thus, differentiating under the integral sign, we get:

f′(z) = (1/2πi) ∮c f(ζ)/(ζ − z)² dζ.

Under the integral sign there is again a differentiable function; thus we can again differentiate, obtaining:

f″(z) = (2!/2πi) ∮c f(ζ)/(ζ − z)³ dζ.

Continuing the differentiation, we get the general formula:

f⁽ⁿ⁾(z) = (n!/2πi) ∮c f(ζ)/(ζ − z)ⁿ⁺¹ dζ.
In this manner we may compute the derivative of any order. To make this proof completely rigorous, we need also to show that the differentiation under the integral sign is valid. We will not give this part of the proof.
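Assuming the standard explicit form of the general formula, f⁽ⁿ⁾(z) = (n!/2πi) ∮ f(ζ)/(ζ − z)ⁿ⁺¹ dζ, the whole chain of derivatives can be checked at once in Python; for f = exp every derivative must again equal eᶻ:

```python
import cmath, math

# f^(n)(z) = n!/(2 pi i) * integral of f(zeta)/(zeta - z)^(n+1) over a
# circle around z (assumed standard form of the general formula).
def nth_derivative(f, z, n, r=1.0, m=4000):
    total = 0j
    for k in range(m):
        zeta = z + r * cmath.exp(2j * math.pi * k / m)
        dzeta = 1j * (zeta - z) * (2 * math.pi / m)
        total += f(zeta) / (zeta - z) ** (n + 1) * dzeta
    return math.factorial(n) * total / (2j * math.pi)

z = complex(0.2, 0.4)
for n in range(5):     # every derivative of exp equals exp itself
    assert abs(nth_derivative(cmath.exp, z, n) - cmath.exp(z)) < 1e-8
```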
The second property is the following:
If f(z) is everywhere differentiable on a circle K with center at the point a, then f(z) can be expanded in a Taylor series:

f(z) = f(a) + (f′(a)/1!)(z − a) + (f″(a)/2!)(z − a)² + ⋯ + (f⁽ⁿ⁾(a)/n!)(z − a)ⁿ + ⋯,

which converges inside K.
In §1 we defined analytic functions of a complex variable as functions that can be expanded in power series. This last theorem says that every differentiable function of a complex variable is an analytic function. This is a special property of functions of a complex variable that has no analogue in the real domain. A function of a real variable that has a first derivative may fail to have a second derivative at every point.
We prove the theorem formulated in the previous paragraphs.
Let f(z) have a derivative inside and on the boundary of the circle K with center at the point a. Then inside K the function f(z) can be expressed by the Cauchy integral:

f(z) = (1/2πi) ∮K f(ζ)/(ζ − z) dζ.   (43)
Using the fact that the point z lies inside the circle and ζ is on the circumference, we get: |z − a|/|ζ − a| < 1,   (44)
so that from the basic formula for a geometric progression:

1/(ζ − z) = 1/[(ζ − a) − (z − a)] = (1/(ζ − a))·[1 + (z − a)/(ζ − a) + (z − a)²/(ζ − a)² + ⋯],   (45)

and the series on the right converges. Using (44) and (45), we can represent formula (43) as a series of integrals. We now apply term-by-term integration to the series inside the brackets. (The validity of this operation can be established rigorously.) Removing the factor (z − a)ⁿ, which does not depend on ζ, from the integral sign in each term, and using the integral formulas for the sequence of derivatives, we may write:

f(z) = Σ (f⁽ⁿ⁾(a)/n!)(z − a)ⁿ.

We have shown that differentiable functions of a complex variable can be expanded in power series. Conversely, functions represented by power series are differentiable. Their derivatives may be found by term-by-term differentiation of the series. (The validity of this operation can be established rigorously.)
Entire functions.

A power series gives an analytic representation of a function only in some circle. This circle has a radius equal to the distance to the nearest point at which the function ceases to be analytic, i.e., to the nearest singular point of the function.
Among analytic functions it is natural to single out the class of functions that are analytic for all finite values of their argument. Such functions are represented by power series converging for all values of the argument z and are called entire functions of z. If we consider expansions about the origin, then an entire function will be expressed by a series of the form: f(z) = a0 + a1z + a2z² + ⋯ + anzⁿ + ⋯. If in this series all the coefficients from a certain one on are equal to 0, the function is simply a polynomial, or an entire rational function: P(z) = a0 + a1z + ⋯ + anzⁿ. If in the expansion there are infinitely many terms that are different from 0, then the entire function is called transcendental.
Examples of such functions are: w = eᶻ, w = sin z, w = cos z.
In the study of properties of polynomials, an important role is played by the distribution of the roots of the equation: P(z) = 0, or, speaking more generally, we may raise the question of the distribution of the points at which the polynomial takes a given value A: P(z) = A. The fundamental theorem of algebra says that every polynomial takes a given value A in at least one point. This property cannot be extended to an arbitrary entire function. For example, the function w = eᶻ does not take the value 0 at any point of the z-plane. However, we do have the following theorem of Picard: Every transcendental entire function assumes every arbitrarily preassigned value an infinite number of times, with the possible exception of one value.
The distribution of the points of the plane at which an entire function takes on a given value A is one of the central questions in the theory of entire functions.
The number of roots of a polynomial is equal to its degree. The degree of a polynomial is closely related to the rapidity of growth of |P(z)| as |z| → ∞. In fact, we can write:

Pn(z) = zⁿ(an + an−1/z + ⋯ + a0/zⁿ),

and since for |z| → ∞ the second factor tends to an, a polynomial of degree n, for large values of |z|, grows like |an|·|z|ⁿ. So it is clear that for larger values of n, the growth of |Pn(z)| for |z| → ∞ will be faster and also the polynomial will have more roots. It turns out that this principle is also valid for entire functions. However, for an entire function f(z), generally speaking, there are infinitely many roots, and thus the question of the total number of roots has no meaning. Nevertheless, we can consider the number of roots n(r, a) of the equation: f(z) = a in a circle of radius r, and investigate how this number changes with increasing r.
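The counting function n(r, a) can actually be computed. A standard tool for this (classical, though not mentioned in the text) is the argument principle: the number of zeros of f inside |z| = r equals (1/2πi) ∮ f′/f dz over that circle. A Python sketch, applied to z³ − 1 and to f(z) = sin z with a = 0:

```python
import cmath, math

# Argument principle: number of zeros of f inside |z| = r equals
# (1/2 pi i) * contour integral of f'/f over that circle.
def count_zeros(f, df, r, m=8000):
    total = 0j
    for k in range(m):
        z = r * cmath.exp(2j * math.pi * k / m)
        dz = 1j * z * (2 * math.pi / m)
        total += df(z) / f(z) * dz
    return round((total / (2j * math.pi)).real)

p, dp = lambda z: z**3 - 1, lambda z: 3 * z**2  # roots: cube roots of unity
assert count_zeros(p, dp, 2.0) == 3
assert count_zeros(p, dp, 0.5) == 0

# n(r, 0) for f = sin: zeros at k*pi, so the count grows with r.
assert count_zeros(cmath.sin, cmath.cos, 1.0) == 1   # just z = 0
assert count_zeros(cmath.sin, cmath.cos, 7.0) == 5   # 0, plus/minus pi, 2 pi
```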

The rate of growth of n(r, a) proves to be connected with the rate of growth of the maximum M(r) of the modulus of the entire function on the circle of radius r. As stated earlier, for an entire function there may exist one exceptional value of a for which the equation may not have even one root. For all other values of a, the rate of growth of the number n(r, a) is comparable to the rate of growth of the quantity ln M(r). We cannot give more exact formulations of these laws here.
The properties of the distribution of the roots of entire functions are connected with problems in the theory of numbers; they have enabled mathematicians to establish many important properties of the Riemann zeta function, on the basis of which it is possible to prove many theorems about prime numbers.

On analytic representation of functions.

We saw previously that in a neighborhood of every point where a function is differentiable it may be defined by a power series. For an entire function the power series converges on the whole plane and gives an analytic expression for the function wherever it is defined. In case the function is not entire, the Taylor series, as we know, converges only in a circle whose circumference passes through the nearest singular point of the function. Consequently the power series does not allow us to compute the function everywhere, and so it may happen that an analytic function cannot be given by a power series on its whole domain of definition. For a meromorphic function an analytic expression giving the function on its whole domain of definition is the expansion in principal parts.
If a function is not entire but is defined in some circle, or if we have a function defined in some domain but we want to study it only in a circle, then the Taylor series may serve to represent it. But when we study the function in domains that are different from circles, there arises the question of finding an analytic expression for the function suitable for representing it on the whole domain. A power series giving an expression for an analytic function in a circle has as its terms the simplest polynomials aₙzⁿ. It is natural to ask whether we can expand an analytic function in an arbitrary domain in a more general series of polynomials. Then every term of the series can again be computed by arithmetic operations, and we obtain a method for representing functions that is once more based on the simplest operations of arithmetic. The general answer to this question is given by the following theorem.
An analytic function, given on an arbitrary domain, the boundary of which consists of one curve, may be expanded in a series of polynomials:

The theorem formulated gives only a general answer to the question of expanding a function in a series of polynomials in an arbitrary domain but does not yet allow us to construct the series for a given function, as was done earlier in the case of the Taylor series. This theorem raises rather than solves the question of expanding functions in a series of polynomials. Questions of the construction of the series of polynomials, given the function or some of its properties, questions of the construction of more rapidly converging series or of series closely related to the behavior of the function itself, questions of the structure of a function defined by a given series of polynomials, all these questions represent an extensive development of the theory of approximation of functions by series of polynomials. In the creation of this theory a large role has been played by Soviet mathematicians, who have derived a series of fundamental results.

Uniqueness Properties and Analytic functions.

One of the most remarkable properties of analytic functions is their uniqueness, as expressed in the following theorem.
If in the domain D two analytic functions are given that agree on some curve C lying inside the domain, then they agree on the entire domain.
The proof of this theorem is very simple. Let f₁(z) and f₂(z) be the two functions analytic in the domain D and agreeing on the curve C. The difference ϕ(z) = f₁(z) − f₂(z) will be an analytic function on the domain D and will vanish on the curve C. We now show that ϕ(z) = 0 at every point of the domain D. In fact, if in the domain D there exists a point z₀ (figure 21) at which ϕ(z₀) ≠ 0, we extend the curve C by a curve Γ to the point z₀ and proceed along Γ toward z₀ as long as the function remains equal to 0 on Γ. Let ζ be the last point of Γ that is accessible in this way. If ϕ(z₀) ≠ 0, then ζ ≠ z₀, and on a segment of the curve Γ beyond ζ the function ϕ(z), by the definition of the point ζ, will not be equal to 0. We show that this is impossible. In fact, on the part Γζ of the curve Γ up to the point ζ, we have ϕ(z) = 0. We may compute all derivatives of the function ϕ(z) on Γζ using only the values of ϕ(z) on Γζ, so that on Γζ all derivatives of ϕ(z) are equal to 0. In particular, at the point ζ: ϕ(ζ) = ϕ′(ζ) = ϕ″(ζ) = … = 0.

Let us expand the function ϕ(z) in a Taylor series at the point ζ. All the coefficients of the expansion vanish, so that we get ϕ(z) = 0 in some circle with center at the point ζ, lying in the domain D. In particular, it follows that the equation ϕ(z) = 0 must be satisfied on some segment of the curve Γ lying beyond ζ. The assumption ϕ(z₀) ≠ 0 thus gives us a contradiction.

This theorem shows that if we know the values of an analytic function on some segment of a curve or on some part of a domain, then the values of the function are uniquely determined everywhere in the given domain. Consequently, the values of an analytic function in various parts of the argument plane are closely connected with one another.
To realize the significance of this uniqueness property of an analytic function, it is only necessary to recall that the general definition of a function of a complex variable allows any law of correspondence between values of the argument and values of the function. With such a definition there can, of course, be no question of determining the values of a function at any point by its values in another part of the plane. We see that the single requirement of differentiability of a function of a complex variable is so strong that it determines the connection between values of the function at different places.
We also emphasize that in the theory of functions of a real variable the differentiability of a function does not in itself lead to any similar consequences. In fact, we may construct examples of functions that are infinitely often differentiable and agree on some part of the Ox axis but differ elsewhere. For example, a function equal to 0 for all negative values of x may be defined in such a manner that for positive x it differs from 0 and has continuous derivatives of every order. For this it is sufficient, for example, to set, for x > 0:

ƒ(x) = e^(−1/x)
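A quick numeric check (my own sketch, not part of the original text) of why this example works: e^(−1/x) vanishes faster than any power of x as x → 0+, so every derivative of ƒ at 0 equals 0, yet ƒ is not identically 0 for x > 0 and so cannot equal its Taylor series at the origin:

```python
import math

def f(x):
    """f(x) = exp(-1/x) for x > 0, and 0 for x <= 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

# exp(-1/x) is crushed faster than any power x**k near 0+,
# which is why all derivatives of f at 0 vanish even though f != 0 for x > 0
for k in (1, 5, 10):
    x = 0.01
    print(k, f(x) / x ** k)   # astronomically small for every k
```

So f is infinitely differentiable on the whole real line, agrees with the zero function on x ≤ 0, and differs from it on x > 0: exactly the behavior that is impossible for analytic functions of a complex variable.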

Analytic continuation and complete analytic functions.

The domain of definition of a given function of a complex variable is often restricted by the very manner of defining the function. Consider a very elementary example. Let the function be given by the series: ƒ(z) = 1 + z + z² + z³ + … (49). This series, as is well known, converges in the unit circle and diverges outside this circle. Thus the analytic function given by formula (49) is defined only in this circle. On the other hand, we know that the sum of the series (49) in the circle |z| < 1 is expressed by the formula: ƒ(z) = 1/(1 − z) (50). Formula (50) has meaning for all values of z ≠ 1. From the uniqueness theorem it follows that expression (50) represents the unique analytic function agreeing with the sum of the series (49) in the circle |z| < 1. So this function, given at first only in the unit circle, has been extended to the whole plane.
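The example is easy to verify numerically (a sketch of mine, with hypothetical helper names): inside the unit circle the partial sums of the series (49) agree with formula (50), while outside the circle only (50) continues to define the function:

```python
def geometric_partial(z, n=200):
    """Partial sum 1 + z + z^2 + ... + z^(n-1) of series (49)."""
    total, term = 0j, 1 + 0j
    for _ in range(n):
        total += term
        term *= z
    return total

def f(z):
    """Formula (50): the analytic continuation 1/(1 - z), valid for all z != 1."""
    return 1.0 / (1.0 - z)

# inside |z| < 1 the series and the formula agree...
z = 0.3 + 0.4j                 # |z| = 0.5 < 1
assert abs(geometric_partial(z) - f(z)) < 1e-12

# ...while at z = 2 the series diverges but (50) still gives a value
print(f(2))                    # -1.0
```

The continuation does not come from summing more terms (at z = 2 the partial sums blow up); it comes from the uniqueness theorem, which lets the closed formula take over where the series fails.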
If we have a function f(z) defined inside some domain D, and there exists another function F(z) defined in a domain Δ, containing D, and agreeing with f(z) in D, then from the uniqueness theorem the value of F(z) in Δ is defined in a unique manner.

The function F(z) is called the analytic continuation of f(z). An analytic function is called complete if it cannot be continued analytically beyond the domain on which it is already defined. For example, an entire function, defined for the whole plane, is a complete function. A meromorphic function is also a complete function; it is defined everywhere except at its poles. However there exist analytic functions whose entire domain of definition is a bounded domain. We will not give these more complicated examples.
The concept of a complete analytic function leads to the necessity of considering multiple-valued functions of a complex variable. We show this by the example of the function: Ln z = ln r + iϕ, where r = |z| and ϕ = arg z. If at some point z₀ = r₀(cos ϕ₀ + i sin ϕ₀) of the z-plane we consider some initial value of the function, Ln z₀ = ln r₀ + iϕ₀, then our analytic function may be extended continuously along a curve C. As was mentioned earlier, it is easy to see that if the point z describes a closed path C₀, issuing from the point z₀ and circling around the origin (figure 22), and then returning to the point z₀, we find at the point z₀ the original value of ln r₀ but the angle ϕ is increased by 2π. This shows that if we extend the function Ln z in a continuous manner along the path C₀, we increase its value by 2πi in one circuit of the contour C₀. If the point z moves along this closed contour n times, then in place of the original value we obtain Ln z₀ + 2nπi.

These remarks show that on the complex plane we are unavoidably compelled to consider the connection between the various values of Ln z. The function Ln z has infinitely many values. With respect to its multiple-valued character, a special role is played by the point z = 0, around which we pass from one value of the function to another. It is easy to establish that if z describes a closed contour not surrounding the origin, the value of Ln z is not changed. The point z = 0 is called a branch point of the function Ln z.
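In code the infinitely many values are easy to exhibit (an illustrative sketch; `principal` and `branches` are my own names): every value of Ln z differs from the principal logarithm by 2πik, and each one is a genuine logarithm of z:

```python
import cmath
import math

z = 1 + 1j
principal = cmath.log(z)                 # one single-valued branch of Ln z

# every other value of Ln z differs by 2*pi*i*k:
# one circuit around the branch point z = 0 adds 2*pi*i
branches = [principal + 2j * math.pi * k for k in range(-2, 3)]

for w in branches:
    # each branch is a genuine logarithm: exp(w) recovers z
    assert abs(cmath.exp(w) - z) < 1e-12

print(len(branches))                     # 5
```

Only five branches are listed here, but k runs over all integers: the function has infinitely many values, all interchangeable by circuits around z = 0.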
In general, if for a function f(z), in a circuit around the point a, we pass from one of its values to another, then the point a is called a branch point of the function f(z).
Let us consider a second example. Let: w = ⁿ√z.

As noted previously, this function is also multiple-valued and takes on n values. All the various values of our function may be derived from a single one of them by describing a closed curve around the origin, since for each circuit around the origin the angle ϕ will be increased by 2π.
In describing the closed curve (n − 1) times, we obtain from the first value all the remaining (n − 1) values. Going around the contour the nth time leads back to the original value.
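A small sketch of this (my own, with a hypothetical `nth_roots` helper): starting from one value and adding the circuit increment 2π to the argument (n − 1) times produces all n values of the n-th root:

```python
import cmath
import math

def nth_roots(z, n):
    """All n values of the n-th root of z, obtained from one value by
    letting the argument grow by 2*pi per circuit around the branch
    point z = 0, i.e. by 2*pi*k/n after division by n."""
    r, phi = abs(z), cmath.phase(z)
    root_r = r ** (1.0 / n)
    return [root_r * cmath.exp(1j * (phi + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(16, 4)        # the four fourth roots of 16: ±2, ±2i
for w in roots:
    assert abs(w ** 4 - 16) < 1e-9
```

The k-th root is exactly what k circuits of the contour produce, and the n-th circuit (k = n) reproduces k = 0, closing the cycle just as the text describes.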

Riemann surfaces for multiple-valued functions. There exists an easily visualized geometric manner of representing the character of a multiple-valued function.
We consider again the function Ln z, and on the z-plane we make a cut along the positive part of the axis Ox. If the point z is prevented from crossing the cut, then we cannot pass continuously from one value of Ln z to another. If we continue Ln z from the point z0, we can arrive only at the same value of Ln z.
The single-valued function found in this manner in the cut z-plane is called a single-valued branch of the function Ln z. All the values of Ln z are distributed on an infinite set of single-valued branches: Ln z = ln r + i(ϕ + 2kπ), k = 0, ±1, ±2, …
It is easy to show that the nth branch takes on the same value on the lower side of the cut as the (n + 1)th branch has on the upper side.
To distinguish the different branches of Ln z, we imagine infinitely many copies of the z-plane, each of them cut along the positive part of the axis Ox, and map onto the nth sheet the values of the argument z corresponding to the nth branch. The points lying on different copies of the plane but having the same coordinates will here correspond to one and the same number x + iy; but the fact that this number is mapped on the nth sheet shows that we are considering the nth branch of the logarithm.

In order to represent geometrically the fact that the nth branch of the logarithm, on the lower part of the cut of the nth plane, agrees with the (n + 1)th branch of the logarithm on the upper part of the cut in the (n + 1)th plane, we paste together the nth plane and the (n + 1)th, connecting the lower part of the cut in the nth plane with the upper part of the cut in the (n + 1)th plane. This construction leads us to a many-sheeted surface, having the form of a spiral staircase (figure 23). The role of the central column of the staircase is played by the point z = 0.

If a point passes from one sheet to another, then the complex number returns to its original value, but the function Ln z passes from one branch to another.
The surface so constructed is called the Riemann surface of the function Ln z. Riemann first introduced the idea of constructing surfaces representing the character of multiple-valued analytic functions and showed the fruitfulness of this idea.
Let us also discuss the construction of the Riemann surface for the function w=√z . This function is double-valued and has a branch point at the origin.
We imagine two copies of the z-plane, placed one on top of the other and both cut along the positive part of the axis Ox. If z starts from z₀ and describes a closed contour C containing the origin, then √z passes from one branch to the other, and thus the point on the Riemann surface passes from one sheet to the other. To arrange this, we paste the lower border of the cut in the first sheet to the upper border of the cut in the second sheet. If z describes the closed contour C a second time, then √z must return to its original value, so that the point in the Riemann surface must return to its original position on the first sheet. To arrange this, we must now attach the lower border of the second sheet to the upper border of the first sheet. As a result we get a two-sheeted surface, intersecting itself along the positive part of the axis Ox. Some idea of this surface may be obtained from figure 24, showing the neighborhood of the point z = 0.
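This passage between the sheets can be simulated (an illustrative sketch of mine, not from the text): continuing a branch of √z step by step along the unit circle, always choosing the square root closest to the previous value, one circuit of the origin flips the sign (we land on the other sheet), while two circuits restore the original value (back to the first sheet):

```python
import cmath
import math

def continue_sqrt_around_origin(turns, steps=1000):
    """Continue a branch of sqrt(z) along the unit circle |z| = 1,
    starting at z = 1 with sqrt(1) = 1, through `turns` full circuits
    of the branch point z = 0."""
    w = 1 + 0j
    for k in range(1, turns * steps + 1):
        z = cmath.exp(2j * math.pi * k / steps)
        cand = cmath.sqrt(z)             # principal square root of z
        # follow the branch continuously: pick the root nearest to
        # the previous value (the two roots are cand and -cand)
        w = cand if abs(cand - w) < abs(-cand - w) else -cand
    return w

# one circuit lands on the other sheet, two circuits come home
assert abs(continue_sqrt_around_origin(1) - (-1)) < 1e-6
assert abs(continue_sqrt_around_origin(2) - 1) < 1e-6
```

The sign flip after one loop and the return after two are exactly the pasting rule of the two-sheeted surface.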
In the same way we can construct a many-sheeted surface to represent the character of any given multiple-valued function. The different sheets of such a surface are connected with one another around branch points of the function. It turns out that the properties of analytic functions are closely connected with the geometric properties of Riemann surfaces. These surfaces are not only an auxiliary means of illustrating the character of a multiple-valued function but also play a fundamental role in the study of the properties of analytic functions and the development of methods of investigating them. Riemann surfaces formed a kind of bridge between analysis and geometry in the region of complex variables, enabling us not only to relate to geometry the most profound analytic properties of the functions but also to develop a whole new region of geometry, namely topology, which investigates those geometric properties of figures which remain unchanged under continuous deformation.

One of the clearest examples of the significance of the geometric properties of Riemann surfaces is the theory of ¬Algebraic functions, i.e., functions obtained as the solution of an equation: ƒ(z, w) = 0, the left side of which is a polynomial in z and w. The Riemann surface of such a function may always be deformed continuously into a sphere or else into a sphere with handles.

The characteristic property of these surfaces is the number of handles. This number is called the genus of the surface and of the ¬Algebraic function from which the surface was obtained. It turns out that the genus of an ¬Algebraic function determines its most important properties.

MORE COMPLEX DERIVATIVES: CURVATURE, TENSORS – ITS LIMIT OF UTILITY

Physical quantities may be of 3 kinds: s, t, st.

Now beyond 2 planes of existence the utility of derivatives diminishes, as organisms become invisible and do not organise further. So, because in the same plane we use a first derivative, and in relationships between any two planes derivatives of the 2nd degree, the maximal possible use of derivatives comes from third degree derivatives, which give us the limit of information in the form of a single parameter, 1/r², curvature.

Beyond that, planes of pure space and pure time are not perceivable, so departing from the fact that all is S with some t (energy) or T with some S (information), we can still broadly talk of dominant space-like parameters and time-like parameters, and use the Spacetime parameters only for those in which S≈t (e=i) holds quite exactly.

Pure space and pure time.

The closest description of pure space is as it emerges from ∆-1 and influences a higher ∆ scale as a ‘field of forces’.

And the closest thing to pure time is a process of ‘implosion’ that ‘forces down’ or ‘depresses’ (inverse of emergence) a system from an ∆+1, time-like implosive force. And that is the meaning of mass, the force down, the in/form/ative force coming from the ∆+1 scale.

Since pure, implosive time and pure expansive, entropic space are not observable, the best way to ‘get it’ is when the implosive time process is felt by something which is smaller inside the vortex. So we feel mass from the ∆+3 galactic scale, as Mach and Einstein mused, because inward, implosive, in-formative forces affect mostly the internal parts, not the external ones. And we feel inversely a field of expansive entropy from smaller parts, exploding us from inside out.

Then we come to energy-like (max. Se x min. Ti) and informative-like (max. Ti x min. Se) parameters.

Some are completely characterized by their numerical values, e.g., temperature, density, and the like, and are called scalars. Scalars are then to be considered parameters of closed informative functions. In the case of density this is evident.

Temperature is not so clear a time parameter. But temperature, when we properly constrain it to what and where we truly measure as temperature (not the frequency of a wave), that is, the vibrating motions of molecules of the ∆-1 scale in a closed space, is hence a time parameter. So goes for mass-energy, as energy becomes added to mass whenever we can measure it in an enclosed region of space, belonging therefore to a time-closed world. So the gluons of motion-energy enclosed in a nucleus do store most of the mass of atoms, as they are to be understood in terms of closed time-parameters from a potential point of view.

So goes for potential energy, which is stored in time cycles. So as a rule, regardless of how conceptually ‘distorted’ current science is and how unlikely a change of paradigm will be for centuries to come (we still drag, for example, the – sign of electrons since Franklin chose it), the non-distorted truth can classify all parameters and physical quantities in terms of time and space.

On the other hand, energy-like parameters will have direction as vector quantities: velocity, acceleration, the strength of an electric field, etc. The simpler those parameters, with fewer ‘scalar’ elements, the more space-like, entropy-like, field-like they will be. So again as a rule, the fewer dimensions we need to define a system, the more space-entropy-field like it will be.

Thus space-like Vector quantities may be expressed just by the length of the vector and its direction or its space-dimensional “components” if we decompose it into the sum of three mutually perpendicular vectors parallel to the coordinate axes.

While a space-time balanced process will have more ‘parameters’ to study than a simple vector, growing in dimensions till becoming 4-vectors and finally a ‘tensor’, which will be the final growth into a 5D ∆-event.

So it is easy, just looking at an equation, to know what kind of s, t, or st (e, i, exi) process we study.

For example, a classic st process, which is, let us remember, an S≈T process, is one which tends to a dynamic balance between both parameters.

So it is an oscillatory system in any ∆-scale or ‘space-time medium’. In such oscillations every point of the medium, occupying in equilibrium the position (x, y, z), will at time t be displaced along a vector u(x, y, z, t), depending on the initial position of the point (x, y, z) and on the time t.

In this case the process in question will be described by a vector field. But it is easy to see that knowledge of this vector field, namely the field of displacements of points of the medium, is not sufficient in itself for a full description of the oscillation.

It is also necessary to know, for example, the density ρ(x, y, z, t) at each point of the medium, the temperature T(x, y, z, t).

So we add to the Spe-vector field some T-parameters (closed T-vibration, density, and stress), which connect the system in a time-network like fashion to all other points of the whole.

∆-events. Finally in processes which require the study of interactions between ∆-Planes, hence 5D processes, we need even more complex elements.

For example a classic ∆-event is the internal stress, i.e., the forces exerted on an arbitrarily chosen volume of the body by the entire remaining part of it.

And so we arrive at systems defined by tensors, often with 6 dimensions (we forget the final r=evolution of thought that would make it all less distorted, of working on bidimensional space and time, as to simplify understanding). So the idea of a tensor is that the whole works dynamically into the ‘point’, described as a ‘cube’ with 6 faces, or ± elements on the point-particle-being from the 3 classic space-dimensions:

Examples of it are the mentioned stress, shown in the graph, or in the relativity description of how the ∆+3 scale of gravitational space-time influences the lower Planes with the effect of implosive mass.
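In conventional terms the stress example reads as follows (a minimal sketch with hypothetical numeric values, not from the text): a stress tensor at a point is a symmetric 3×3 array, hence 6 independent components, and multiplying it by a unit normal gives the traction, the force per unit area exerted on the corresponding face of the ‘cube’:

```python
# A stress tensor is a symmetric 3x3 array: 6 independent numbers
# (3 normal stresses on the diagonal, 3 shear stresses off it),
# matching the 6 faces / ± directions of the cube around a point.
# The numeric values below are hypothetical, for illustration only.
stress = [
    [10.0,  2.0,  0.0],
    [ 2.0,  5.0, -1.0],
    [ 0.0, -1.0,  3.0],
]

# symmetry check: sigma_ij == sigma_ji
for i in range(3):
    for j in range(3):
        assert stress[i][j] == stress[j][i]

def traction(sigma, n):
    """Force per unit area on a surface with unit normal n: t = sigma · n."""
    return [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]

print(traction(stress, [1.0, 0.0, 0.0]))   # force on the x-face
```

The printed vector is just the first column of the tensor: each face of the cube reads off one column of forces, which is why 6 independent numbers suffice for the whole.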

Thus in addition to S-vector and T-scalar quantities, more complicated entities occur in SPACE-time events, often characterized everywhere by a set of functions of the same four independent variables, where each function is a description of the ∆-1 scale, and they come together into the upper new ‘function’ of functions, or ‘functional equation’.

And so we can classify in this manner according to the ∆@st ternary method all the parameters of mathematical physics.

And they will reveal to us the real structure and ∆ST symmetries they analyse, according to the number of dimensions of complexity they use.

Yet beyond the range of ‘tensors’, which study relationships between 2 or at best a maximal of 3 Planes of reality, there is nothing to be found. So happens when we consider the number of differential equations we want to study: nothing really goes on beyond the 3rd derivative of a system that moves down 2 Planes (entropy-death events, dual emergence upwards from seed to world).

So one of the most fascinating tasks of the relationship of i-logic and the real world is to properly interpret what Einstein said he could not resolve: ‘I know when mathematics is truth but not when it is real.’

So as Lobachevski explained, we need an experimental method inserted in mathematics to know when mathematics is both, logically consistent as a language in its inner syntax but not fiction of beauty but real.

III AGE: I∆≥|3| FUNCTIONAL SPACES

The arrival of the modern age of mathematics with analytic geometry, analysis and finally the not studied age of digital thought (computers) gives us a more complex analysis of numbers, no longer as mere collections of equal forms, which is the pervading notion behind the classic Greek age, but as ‘elements’ of a topology (the more sophisticated understanding of numbers as points of a network which is the form in itself), and as ratios and scalar irrational constants, which are key points on open and closed curves that, in the 3rd age of ‘i-logic mathematics’ these texts start, will themselves be ‘fundamental elements’ of the geometrization of space-time world cycles of existence.

Yet even more fruitful is the modern field of space-time representations that works perfectly for systems of multiple ∆-Planes:

∆: Functional spaces, which are the ideal representation of two Planes of the fifth dimension where each point is a world in itself, of which the best known are the Hilbert>Banach spaces used to model Spe-Fields (the simplest functionals) and hence the lower Planes of lineal entropy of quantum physics (where each point is a lineal operator).

Thus Hilbert spaces fully study ∆ST, as they can model the whole 5th dimension elements, expanding points into vectors (field spaces), functionals (scalar spaces) and any other ‘content’ for each ∆-fractal Universe you want to represent. Hence they are essential to study both spatial quantum physics and Fourier temporal series.

And so we could truly call function spaces  ∆-Spaces.

∆S≈T. Complex functional spaces.

So if we consider both ‘symmetries together’, it is evident that the most complete modelling is done with function spaces in the complex plane, as we can represent both motions in worldcycles and across planes of the 5th dimension.

Such space-time representations are ‘complete’ (as the functional is infinite in its parts, so are the real numbers, which in fact are so numerous that the rational ones are in real space a relative 0’, showing the infinite Planes of the mathematical Universe) and show properties of world cycles, allowing the ‘generation’ of smaller and larger Planes (as in a Mandelbrot set).

i=√-1, is the root of an inverse number, and represents the information of a bidimensional system.

Where we must consider a co-ordinate system of ‘square, bidimensional’ space and time elements, where the x² coordinates correspond to space, and i²=-1 represents the inverse, mostly time Planes.

This easily explains why in special relativity, concerned with the creation of a wave of light space-time, ∆-3, we subtract -(ct)² to create a ‘light space-time’ from the pure Euclidean, underlying structure of the neutrino, ∆-4 scale.

Hence the whys of special relativity: the light space-time ‘feeds’ and subtracts that quantity from the ∆-4 neutrino/dark energy scale to ‘exist’. And we use as in quantum physics for similar reasons complex numbers to represent this.

Conclusion

The theory of analytic functions arose in connection with the problem of solving ¬Algebraic equations. But as it developed it came into constant contact with newer and newer branches of mathematics. It shed light on the fundamental classes of functions occurring in analysis, mechanics, and mathematical physics. Many of the central facts of analysis could at last be made clear only by passing to the complex domain. Functions of a complex variable received an immediate physical interpretation in the important vector fields of hydrodynamics and electrodynamics and provided a remarkable apparatus for the solution of problems arising in these branches of science. Relations were discovered between the theory of functions and problems in the theory of heat conduction, elasticity, and so forth.

General questions in the theory of differential equations and special methods for their solution have always been based to a great extent on the theory of functions of a complex variable. Analytic functions entered naturally into the theory of integral equations and the general theory of linear operators. Close connections were discovered between the theory of analytic functions and geometry. All these constantly widening connections of the theory of functions with new areas of mathematics and science show the vitality of the theory and the continuous enrichment of its range of problems.

In our survey we have not been able to present a complete picture of all the manifold ramifications of the theory of functions. We have tried only to give some idea of the widely varied nature of its problems by indicating the basic elementary facts for some of the various fundamental directions in which the theory has moved. Some of its most important aspects, its connection with the theory of differential equations and special functions, with elliptic and automorphic functions, with the theory of trigonometric series, and with many other branches of mathematics, have been completely ignored in our discussion. In other cases we have had to restrict ourselves to the briefest indications. But we hope that this survey will give the reader a general idea of the character and significance of the theory of functions of a complex variable.

HILBERT SPACES AS REFLECTION OF FRACTAL, MULTIPLE QUANTA AND ITS WAVE-SOCIAL SYSTEMS.

If we consider all systems ultimately a reflection of the ideal form, an ‘open ST-ball (the area enclosed by the curve), with a T-singularity (the center of reference of the mathematical representation) and an S-membrane (the curve), we can easily translate the findings of ∆nalysis in ÐST laws.

Thus while those problems can be stated purely as questions of mathematical technique, they have a far wider importance because they possess a broad variety of interpretations for all the species of the ∆ST world, once we apply the ternary method and study the technique in St, Ts, or ∆ problems (as usual pure S and pure T are not measurable, as S has no form and T doesn’t communicate, even though we ab. St as S and Ts as T).

– St. The simplest results will give us the value of the ST body-wave measured with a parameter of space (area, volume, etc.). I.e:

The area inside a curve, for instance, is of direct interest in land measurement: how many acres does an irregularly shaped plot of land contain? But the same technique also determines the mass of a uniform sheet of material bounded by some chosen curve or the quantity of paint needed to cover an irregularly shaped surface.
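For a plot of land given as a polygon of surveyed corner points, the ‘area inside a curve’ reduces to the shoelace formula, a discrete stand-in for the integral (my own sketch, with a hypothetical `polygon_area` helper):

```python
def polygon_area(vertices):
    """Shoelace formula: area enclosed by a simple polygon,
    given as an ordered list of (x, y) corner points."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the curve
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# sanity check on a unit square
assert polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]) == 1.0
```

The same routine, fed the boundary of an irregular plot, yields its acreage; fed the outline of a painted surface, the quantity of paint, exactly as the text says: one technique, many interpretations.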

– Ts. However if we apply analysis to the study of the symmetric, sequential, accumulative processes of time, we can do the equivalent calculus on a function of space. So ∆nalysis will be a key element to study processes of growth and decay of systems along its ∆-Planes, most often ∆±1 inverse processes, and so it connects directly with the reproductive and entropic, dying phases of space-time beings. I.e. to calculate the total fuel consumption of a rocket or the reproductive growth of a top predator population… And so on.

I.e., these techniques can be used to find the total distance traveled by a vehicle moving at varying speeds, by accumulating sequentially the ‘area’ below the curve of speed.
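The distance-from-speed case can be sketched with the trapezoid rule, accumulating the area below the speed curve interval by interval (my own sketch, with hypothetical sample data):

```python
def distance_from_speed(times, speeds):
    """Total distance as the area under the speed curve,
    accumulated sequentially by the trapezoid rule."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += 0.5 * (speeds[i] + speeds[i - 1]) * dt
    return total

# vehicle accelerating uniformly from 0 to 20 m/s over 10 s: d = 100 m
t = [0, 2, 4, 6, 8, 10]
v = [0, 4, 8, 12, 16, 20]
assert distance_from_speed(t, v) == 100.0
```

Each trapezoid is one sequential ‘quantum’ of the accumulation; in the limit of finer time steps the sum becomes the integral of speed over time.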

– ∆: And finally analysis emerged into Hilbert and Function spaces, where each point is a function in itself of the lower scale, whose sum, can be considered to integrate into a finite ‘whole one’, a vector in the case of a Hilbert or Banach space (Spe-function space):

In the graph, 3 representations of Hilbert spaces, which are made of ¬E fractal points with an inner 5th dimension (usually an Spe-vectorial field with a dot product in Hilbert spaces, which by definition are ‘complete’, because real numbers do ‘penetrate’ into their inner regions, made of finitesimal elements), such as the vibrations of a string, which in time are potential motions of the creative future encoded in its functions (2nd graph).

The 3 graphs show the 3 main symmetries of the Universe, lineal spatial forces, cyclical time frequencies and the ‘wormholes’ between the ∆ and ∆-1 Planes of the 5th dimension (ab. ∆), which structure the Universe, the first of them better described with ‘vector-points’ of a field of Hilbert space and the other 2 symmetries of time cycles/frequencies and Planes with more general function spaces.

They are part of the much larger concept of a function space, which can represent any ∆±1 dual system of the fifth dimension. They grasp the scalar structure of ∆nalysis, where points are fractal, non-euclidean, with a volume which grows when we come closer to them, so ∞ parallels can cross them (5th Non-E postulate): so point stars become worlds and point cells living beings. When those ∞ lines are considered future paths of time that the point can perform, they model ‘parallel universes’ both in time (i.e. the potential paths of the point as a vector) and space (i.e. the different modes of the volume of information of the point, described by a function, when the function represents a complete volume of inner parts, which are paradoxically larger in number than the whole – the set of sets is larger than the set; Cantor Paradox). Thus function spaces are the ideal structure to express the fractal Planes of the fifth dimension, and are used to represent the operators of quantum physics.

– •: No less important will be the use of analysis for mind-perspectives and the comprehension of the ‘Galilean paradoxes’ and symmetries between motion in time and distances/curvatures in space.

I.e.:   the mathematical technique for finding a tangent line to a curve at a given point can also be used to calculate the curvature or steepness of the curve, which becomes then, a measure in its time symmetry of the acceleration of that curve, which becomes in physical space-time a measure of the attractive force of the system

Connection of functional analysis with other branches of mathematics and quantum mechanics.

We have already mentioned that the creation of quantum mechanics gave a decisive impetus to the development of functional analysis. Just as the rise of the differential and integral calculus in the 18th century was dictated by the requirements of mechanics and classical physics, so the development of functional analysis was, and still is, the result of the vigorous influence of contemporary physics, principally of quantum mechanics. The fundamental mathematical apparatus of quantum mechanics consists of the branches of mathematics relating essentially to functional analysis. We can only briefly indicate the connections existing here, because an explanation of the foundations of quantum mechanics exceeds the framework of this post and we keep it for the 4th line.

In quantum mechanics the state of the system is given in its mathematical description by a vector of Hilbert space. Such quantities as energy, impulse, and moment of momentum are investigated by means of self-adjoint operators. For example, the possible energy levels of an electron in an atom are computed as eigenvalues of the energy operator. The differences of these eigenvalues give the frequencies of the emitted quantum of light and thus define the structure of the radiation spectrum of the given substance. The corresponding states of the electron are here described as eigenfunctions of the energy operator.
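The claim that energy levels appear as eigenvalues of the energy operator can be illustrated with a minimal numerical sketch (not the atomic problem itself, which needs the Coulomb potential; the operator, grid size and units below are illustrative assumptions): discretizing the simplest self-adjoint energy operator, −d²/dx² on an interval, turns it into a symmetric matrix whose lowest eigenvalues approximate the exact levels.

```python
import numpy as np

# Discretize the energy operator H = -d²/dx² on (0, 1), ψ(0) = ψ(1) = 0
# (a particle in a box, units ħ = 2m = 1). Its exact levels are E_k = (kπ)².
n = 800
h = 1.0 / (n + 1)
H = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

E = np.linalg.eigvalsh(H)[:3]          # lowest three computed eigenvalues
exact = (np.arange(1, 4) * np.pi)**2   # π², 4π², 9π²
print(E)
print(exact)
```

The matrix is self-adjoint (symmetric), so `eigvalsh` applies and the spectrum is real, in line with the text.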

The solution of problems of quantum mechanics often requires the computation of eigenvalues of various (usually differential) operators. In some complicated cases the precise solution of these problems turns out to be practically impossible. For an approximate solution of these problems the so-called perturbation theory is widely used, which enables us to find from the known eigenvalues and functions of a certain self-adjoint operator A the eigenvalues of an operator A1 slightly different from it.
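The perturbation idea can be sketched numerically with symmetric matrices standing in for self-adjoint operators (the matrix size, seed and ε are arbitrary assumptions): from the known eigenpairs of A alone, first-order perturbation theory predicts the eigenvalues of the slightly different operator A + εW with an error of order ε².

```python
import numpy as np

# First-order perturbation theory on matrices (stand-ins for self-adjoint
# operators): eigenvalues of A + ε·W ≈ λ_k + ε·⟨v_k, W v_k⟩, error O(ε²).
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6)); A = (B + B.T) / 2          # known operator
C = rng.standard_normal((6, 6)); W = (C + C.T) / 2          # perturbation
lam, V = np.linalg.eigh(A)

eps = 1e-3
approx = lam + eps * np.einsum('ik,ij,jk->k', V, W, V)      # λ_k + ε⟨v_k, W v_k⟩
exact = np.linalg.eigh(A + eps * W)[0]
print(np.max(np.abs(approx - exact)))  # small, of order ε²
```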

In quantum physics we use the position basis, the momentum basis, or the energy basis because they represent the singularity=position; the membrane (momentum), or its conversion through an SHF (simple harmonic function) into a lineal wave of motion; and the energy or vital space between them: the 3 elements of any system of reality (T.œ).

What this simply means is that there is a natural correspondence between:

1D: Position and singularity or active magnitude, the ‘scalar’ valuation; which is the ultimate meaning of a 0’-point particle that becomes the 0-1 dimension as a ‘Dirac function’ of value 1.

2D: The membrane or constraint, which becomes the angular momentum of the system, transformed into lineal momentum-motion as the angular momentum develops a wave of communication through a sinusoidal/Fourier function of (boson) transmission of energy and information.

3D: Which add in several ways, mostly through superposition, 1+2 = 3D, into a wave function (Schrödinger’s energy equation) that fills up the vital space (a Hilbert space that includes all the possible configurations of the wave and its enclosed particles).

It is then when the specific eigenvalues and forms that the ternary system can take are expressed as position=singularity, momentum=membrane and energy values that characterise the system (constant).

Yet the minimum measure we can obtain will be a ‘planckton’, an h-Planck of vital energy, appropriately defined as the uncertain piece we ABSORB TO OBSERVE. So the 3 dimensions can only be related by an equation of uncertainty, position x momentum ≥ ħ/2, one of the many expressions of the ∑∏ = s x t 5D metric; in this case the minimal quanta of energy of a quantum system, ħ/2, bounds the product of position-singularity and momentum.
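The uncertainty relation can be checked numerically for the state that saturates it, a Gaussian wave packet, for which Δx·Δp = ħ/2 exactly; a sketch in natural units ħ = 1 (the grid and the width σ are arbitrary choices):

```python
import numpy as np

# Gaussian wave packet ψ(x) ∝ exp(-x²/(4σ²)) saturates the bound: Δx·Δp = ħ/2.
hbar, sigma = 1.0, 0.7
x = np.linspace(-30, 30, 2**14)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(psi**2) * dx)              # normalize ∫ ψ² dx = 1

spread_x = np.sqrt(np.sum(x**2 * psi**2) * dx)   # Δx (⟨x⟩ = 0 by symmetry)
dpsi = np.gradient(psi, dx)
spread_p = hbar * np.sqrt(np.sum(dpsi**2) * dx)  # Δp = ħ·√(∫ ψ'² dx) for real ψ
print(spread_x * spread_p)                       # ≈ ħ/2 = 0.5
```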

Those simple elements appear always in mathematical equations, which are for that reason so often ternary games of the ‘operator’ ð = ∑∏.

For example, in its most general expression angular momentum expresses the value and relationship of the membrane with its singularity through a common space with a self-centred radius: m (mass of the membrane) x v (its motion) x r (radius of the vital space around the 0’ singularity center).

Quantum physics is then expressed in a complex Hilbert space just because such a space is complex enough to accommodate all the various possible states of the ∆-3 minimal scale of reality, which by the s x t = k 5D metric will have the maximal number of forms-populations and time-speeds.

To which we might also add ‘perturbations’ of other systems and Planes, which makes it all more complex; and even more so because of the pedantic probability choice of formalism that further complicates the business, as all operators, systems and ensembles have to be normalised to get the value 1. (Of those processes the Delta function is the only one worth mentioning at this stage, as we have shown it rightly gives us the value of dimension 1 for the 0’ particle.)

In conclusion, we wish to emphasize once more that functional analysis is one of the rapidly developing branches of contemporary mathematics. Its connections and applications in contemporary physics, differential equations, approximate computations, and its use of general methods developed in ¬Algebra, topology, the theory of functions of a real variable, etc., make functional analysis one of the focal points of contemporary mathematics.

In that sense many-dimensional manifolds, the so-called phase spaces of dynamical systems (which take into account not only the configurations that a given mechanical system can have, but also the velocities with which its various constituent points move), are treated by huminds with hidden topological methods, which prove the Galilean paradox or equivalence between space-forms and time motions that started modern physics.

So less directly, ∆nalysis is related to the extremely important question of the calculation of instantaneous velocity or other instantaneous rates of change, such as the cooling of a warm object in a cold room or the propagation of a disease organism through a human population; by constantly switching the study of the ∆st process between its S, t and ∆ components.

This post begins with a brief introduction to the historical background of analysis and to basic concepts such as numbers and infinities, functions, continuity, infinite series, and limits, all of which are necessary for an understanding of analysis.

Following this introduction is a full technical review, from calculus to nonstandard analysis.

Hilbert Space (Infinite-Dimensional Space)

Connection with n-dimensional space.

The introduction of the concept of n-dimensional space turned out to be useful in the study of a number of problems of mathematics and physics. In its turn this concept gave the impetus to a further development of the concept of space and to its application in various domains of mathematics. An important role in the development of linear ¬Algebra and of the geometry of n-dimensional spaces was played by problems of small oscillations of elastic systems. Let us consider the following classical example of such a problem (figure).

Let AB be a flexible string spanned between the points A and B. Let us assume that a weight is attached at a certain point C to the string. If it is moved from its position of equilibrium, it begins to oscillate with a certain frequency ω, which can be computed when we know the tension of the string, the mass m and the position of the weight. The state of the system at every instant is then given by a single number, namely the displacement y1 of the mass m from the position of equilibrium of the string.

Now let us place n weights on the string AB at the points C1, C2, ···, Cn. The string itself is taken to be weightless. This means that its mass is so small that compared with the masses of the weights it can be neglected. The state of such a system is given by n numbers y1, y2, ···, yn equal to the displacements of the weights from the position of equilibrium. The collection of numbers y1, y2, ···, yn can be regarded (and this turns out to be useful in many respects) as a vector (y1, y2, ···, yn) of an n-dimensional space.
The investigation of the small oscillations that take place under these circumstances turns out to be closely connected with fundamental facts of the geometry of n-dimensional spaces. We can show, for example, that the determination of the frequency of the oscillations of such a system can be reduced to the task of finding the axes of a certain ellipsoid in n-dimensional space.
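This reduction can be made concrete in a short numerical sketch (the parameter values are arbitrary): the small oscillations of n equal beads on a weightless string lead to the eigenvalue problem of a symmetric tridiagonal matrix, whose eigenvalues give the squared frequencies; for this chain a closed form is known and the two agree.

```python
import numpy as np

# n beads of mass m on a weightless string (tension T, spacing a):
# m·ÿ_i = (T/a)(y_{i-1} - 2y_i + y_{i+1}); the squared frequencies ω² are the
# eigenvalues of the symmetric tridiagonal matrix below ("axes of the ellipsoid").
n, T, m, a = 5, 1.0, 1.0, 1.0
K = (T / (m * a)) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
omega = np.sqrt(np.linalg.eigvalsh(K))

# Known closed form for this chain:
exact = 2 * np.sqrt(T / (m * a)) * np.sin(np.arange(1, n + 1) * np.pi / (2 * (n + 1)))
print(np.max(np.abs(omega - exact)))  # ≈ 0
```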
Now let us consider the problem of the small oscillations of a string spanned between the points A and B. Here we have in mind an idealized string, i.e., an elastic thread having a finite mass distributed continuously along the thread. In particular, by a homogeneous string we understand one whose density is constant.
Since the mass is distributed continuously along the string, the position of the string can no longer be given by a finite set of numbers y1, y2, ···, yn, and instead the displacement y(x) of every point x of the string has to be given. Thus, the state of the string at each instant is given by a certain function y(x).
The state of a thread with n weights attached at the points with the abscissas x1, x2, ···, xn, is represented graphically by a broken line with n members (figure 4), so that when the number of weights is increased, then the number of segments of the broken line increases correspondingly. When the number of weights grows without bound and the distance between adjacent weights tends to 0’, we obtain in the limit a continuous distribution of mass along the thread, i.e., an idealized string. The broken line that describes the position of the thread with weights then goes over into a curve describing the position of the string:

So we see that there exists a close connection between the oscillations of a thread with weights and the oscillations of a string. In the first problem the position of the system was given by a point or vector of an n-dimensional space. Therefore it is natural to regard the function f(x) that describes the position of the oscillating string in the second case as a vector or a point of a certain infinite-dimensional space. A whole series of similar problems leads to the same idea of considering a space whose points (vectors) are functions f(x) given on a certain interval.
This example of oscillation of a string, to which we shall return again in §4, suggests to us how we shall have to introduce the fundamental concepts in an infinite-dimensional space.
Hilbert space. Here we shall discuss one of the most widespread concepts of an infinite-dimensional space of the greatest importance for the applications, namely the concept of the Hilbert space.
A vector of an n-dimensional space is defined as a collection of n numbers fi, where i ranges from 1 to n. Similarly a vector of an infinite-dimensional space is defined as a function f(x), where x ranges from a to b.
Addition of vectors and multiplication of a vector by a number is defined as addition of the functions and multiplication of the function by a number.
The length of a vector f in an n-dimensional space is defined by the formula: |ƒ| = √(ƒ1² + ƒ2² + ··· + ƒn²). Since for functions the role of the sum is taken by the integral, the length of the vector f(x) of a Hilbert space is given by the formula: |ƒ| = √( ∫ab ƒ²(x) dx ).

The distance between the points f and g in an n-dimensional space is defined as the length of the vector f − g, i.e., as: √( (ƒ1 − g1)² + (ƒ2 − g2)² + ··· + (ƒn − gn)² ).

Similarly the “distance” between the elements f(t) and g(t) in a functional space is equal to: √( ∫ab (ƒ(t) − g(t))² dt ),

called the mean-square deviation of the functions f(t) and g(t). Thus, the mean-square deviation of two elements of Hilbert space is taken to be a measure of their distance.
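The mean-square deviation is easy to compute numerically; a sketch for two concrete functions on (0, 2π), chosen so the distance is also known in closed form (the grid size is an arbitrary choice):

```python
import numpy as np

# Mean-square distance d(f, g) = √(∫ (f - g)² dt) between sin t and cos t
# on (0, 2π); in closed form ∫ (sin t - cos t)² dt = 2π, so d = √(2π).
t = np.linspace(0, 2 * np.pi, 20001)
dt = t[1] - t[0]
f, g = np.sin(t), np.cos(t)

d = np.sqrt(np.sum((f - g)**2) * dt)
print(d)  # ≈ √(2π) ≈ 2.5066
```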
Let us now proceed to the definition of the angle between vectors. In an n-dimensional space the angle ϕ between the vectors f = {fi} and g = {gi} is defined by the formula: cos ϕ = ∑ƒigi / ( √(∑ƒi²) · √(∑gi²) ). In an infinite-dimensional space the sums are replaced by the corresponding integrals, and the angle ϕ between two vectors f and g of Hilbert space is defined by the analogous formula: cos ϕ = ∫ab ƒ(t)g(t) dt / ( √(∫ab ƒ²(t) dt) · √(∫ab g²(t) dt) ). This expression can be regarded as the cosine of a certain angle ϕ, provided the fraction on the right-hand side has absolute value at most one, i.e., if: ( ∫ab ƒ(t)g(t) dt )² ≤ ∫ab ƒ²(t) dt · ∫ab g²(t) dt.

This inequality in fact holds for two arbitrary functions f (t) and g (t). It plays an important role in analysis and is known as the Cauchy-Bunjakovskiĭ inequality. Let us prove it.
Let f(x) and g(x) be two functions, not identically equal to 0’, given on the interval (a, b). We choose arbitrary numbers λ and μ and form the expression: ∫ab (λƒ(x) − μg(x))² dx = λ²A − 2λμC + μ²B, where: A = ∫ab ƒ²(x) dx, B = ∫ab g²(x) dx, C = ∫ab ƒ(x)g(x) dx. (8)

Since the function [λf(x) − μg(x)]² under the integral sign is nonnegative, we have the following inequality: λ²A − 2λμC + μ²B ≥ 0. (9)

This inequality is valid for arbitrary values of λ and μ; in particular we may set: λ = 1/√A, μ = 1/√B.

Substituting these values of λ and μ in (9), we obtain: C/√(AB) ≤ 1.

When we replace A, B and C by their expressions in (8), we finally obtain the Cauchy-Bunjakovskiĭ inequality: ∫ab ƒ(x)g(x) dx ≤ √(∫ab ƒ²(x) dx) · √(∫ab g²(x) dx).
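The inequality can be checked numerically for any concrete pair of functions; a sketch with two arbitrarily chosen f and g on (0, 1):

```python
import numpy as np

# Check (∫ f·g dx)² ≤ (∫ f² dx)·(∫ g² dx) for an arbitrary pair on (0, 1).
x = np.linspace(0, 1, 20001)
dx = x[1] - x[0]
f = np.exp(x)
g = np.sin(3 * x) + x**2

A = np.sum(f * f) * dx
B = np.sum(g * g) * dx
C = np.sum(f * g) * dx
print(C**2 <= A * B, C / np.sqrt(A * B))  # True, and the ratio is a valid cos φ
```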
In geometry the scalar product of vectors is defined as the product of their lengths by the cosine of the angle between them. The lengths of the vectors f and g in our case are equal to: |ƒ| = √( ∫ab ƒ²(t) dt ), |g| = √( ∫ab g²(t) dt );

the cosine of the angle between them is defined by the formula given above. When we multiply out these expressions, we arrive at the following formula for the scalar product of two vectors of Hilbert space: (ƒ, g) = ∫ab ƒ(t)g(t) dt.

From this formula it is clear that the scalar product of the vector f with itself is the square of its length.
If the scalar product of the non 0’ vectors f and g is equal to 0’, it means that cos ϕ = 0, i.e., that the angle ϕ ascribed to them by our definition is 90°.

Therefore functions f and g for which: ∫ab ƒ(x)g(x) dx = 0 are called orthogonal.
Pythagoras’ theorem (see §1) holds in Hilbert space as in an n-dimensional space. Let f1(x), f2(x), ···, ƒN(x) be N pairwise orthogonal functions and: ƒ(x) = ƒ1(x) + ƒ2(x) + ··· + ƒN(x).

Then the square of the length of f is equal to the sum of the squares of the lengths of f1, f2, ···, fN.
Since the lengths of vectors in Hilbert space are given by means of integrals, Pythagoras’ theorem in this case is expressed by the formula: ∫ab ƒ²(x) dx = ∫ab ƒ1²(x) dx + ∫ab ƒ2²(x) dx + ··· + ∫ab ƒN²(x) dx.

The proof of this theorem does not differ in any respect from the one given previously (§1) for the same theorem in n-dimensional space.
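A quick numerical check of Pythagoras’ theorem in Hilbert space, using the pairwise orthogonal functions sin x, sin 2x, sin 3x on (−π, π) (an illustrative choice; each has squared length π there):

```python
import numpy as np

# f = sin x + sin 2x + sin 3x, with pairwise orthogonal parts on (-π, π):
# ∫ f² dx should equal Σ ∫ (sin kx)² dx = 3π.
x = np.linspace(-np.pi, np.pi, 40001)
dx = x[1] - x[0]
parts = [np.sin(k * x) for k in (1, 2, 3)]
f = parts[0] + parts[1] + parts[2]

lhs = np.sum(f**2) * dx
rhs = sum(np.sum(p**2) * dx for p in parts)
print(lhs, rhs)  # both ≈ 3π ≈ 9.4248
```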
So far we have not made precise what functions are to be regarded as vectors in Hilbert space.

For such functions we have to take all those for which: ∫ab ƒ²(x) dx has a meaning (i.e., is finite).

It might appear natural to confine ourselves to continuous functions, for which this integral always exists.

However, the theory of Hilbert space becomes more complete and natural if the integral ∫ab ƒ²(x) dx is interpreted in a generalized sense, namely as a Lebesgue integral.

This extension of the concept of integrals (and correspondingly of the class of functions to be discussed) is necessary for functional analysis in the same way as a strict theory of the real numbers is necessary for the foundation of the differential and integral calculus. Thus, the generalization of the ordinary concept of an integral that was created at the beginning of the 20th century in connection with the development of the theory of functions of a real variable turned out to be quite essential for functional analysis and the branches of mathematics connected with it.

Expansion by Orthogonal Systems of Functions

If in a plane two arbitrary mutually perpendicular vectors e1 and e2 of unit length are chosen (figure), then every vector of the same plane can be decomposed in the directions of these two vectors, i.e., can be represented in the form: ƒ= a1e1+a2e2

where a1 and a2 are the numbers equal to the projections of the vector f in the direction of the axis of e1 and e2. Since the projection of f on an axis is equal to the product of the length of f by the cosine of the angle between f and the axis, we can write, remembering the definition of the scalar product: a1= (ƒ,e1), a2 = (ƒ,e2).
Similarly, if in a three-dimensional space any three mutually perpendicular vectors e1, e2, e3 of unit length are chosen, then every vector f in this space can be written in the form: ƒ = a1e1 + a2e2 + a3e3, with ak = (ƒ, ek). In Hilbert space we can also consider systems of pairwise orthogonal vectors of the space, i.e., functions ϕ1(x), ϕ2(x), ···, ϕn(x), ···. Such systems of functions are called orthogonal and play an important role in analysis.

They occur in very diverse problems of mathematical physics, integral equations, approximate computations, the theory of functions of a real variable, etc. The ordering and unification of the concepts relating to such systems formed one of the motivations that led at the beginning of the 20th century to the creation of the general concept of a Hilbert space. Let us give a precise definition.

A system of functions ϕ1(x), ϕ2(x), ···, ϕn(x), ··· is called orthogonal if any two functions of the system are orthogonal, i.e., if: ∫ab ϕi(x)ϕk(x) dx = 0 for i ≠ k. (13)

In three-dimensional space we required that the vectors of the system should be of unit length. Recalling the definition of the length of a vector, we see that in the case of Hilbert space this requirement can be written as follows: ∫ab ϕn²(x) dx = 1. (14) A system of functions satisfying the conditions (13) and (14) is called orthonormal.
Let us give examples of such systems of functions.
1. On the interval (−π, π) we consider the sequence of functions: 1, cos x, sin x, cos 2x, sin 2x, ···, cos nx, sin nx, ···
Any two functions of this sequence are orthogonal to each other. This can be verified by the simple computation of the corresponding integrals. The square of the length of a vector in Hilbert space is the integral of the square of the function. Thus, the squares of the lengths of the vectors of the sequence: 1, cos x, sin x, cos 2x, sin 2x, ···, cos nx, sin nx, ···
are the integrals:

∫ 1² dx = 2π, ∫ cos² nx dx = π, ∫ sin² nx dx = π (all over the interval (−π, π));

i.e., the vectors of our sequence are orthogonal, but not normalized. The length of the first vector of the sequence is equal to √(2π), and all the others are of length √π. When we divide every vector by its length, we obtain the orthonormal system of trigonometric functions: 1/√(2π), cos x/√π, sin x/√π, cos 2x/√π, sin 2x/√π, ···

This system is historically one of the first and most important examples of orthogonal systems. It appeared in the works of Euler, D. Bernoulli, and d’Alembert in connection with problems on the oscillations of strings. The study of it plays an essential role in the development of the whole of analysis.
The appearance of the orthogonal system of trigonometric functions in connection with problems on oscillations of strings is not accidental. Every problem on small oscillations of a medium leads to a certain system of orthogonal functions that describe the so-called characteristic oscillations of the given system. For example, in connection with problems on the oscillations of a sphere there appear the so-called spherical functions; in connection with problems on the oscillations of a circular membrane or a cylinder there appear the so-called cylinder functions, etc.
2. We can give an example of an orthogonal system of functions in which every function is a polynomial. Such an example is the sequence of Legendre polynomials: Pn(x) = (1/(2ⁿn!)) dⁿ/dxⁿ (x² − 1)ⁿ; i.e., Pn(x) is (apart from a constant factor) the nth derivative of (x² − 1)ⁿ. Let us write down the first few polynomials of this sequence: P0(x) = 1, P1(x) = x, P2(x) = (3x² − 1)/2, P3(x) = (5x³ − 3x)/2, ··· Obviously Pn(x) is a polynomial of degree n. We leave it to the reader to convince himself that these polynomials are an orthogonal sequence on the interval (−1, 1).
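The orthogonality of the Legendre polynomials is easy to verify numerically; a sketch using numpy's Legendre basis, which matches Rodrigues' formula with the standard normalization (the grid size is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial import Legendre

# Verify orthogonality of Legendre polynomials on (-1, 1) numerically.
x = np.linspace(-1, 1, 20001)
dx = x[1] - x[0]
P2 = Legendre([0, 0, 1])(x)      # P2(x) = (3x² - 1)/2
P3 = Legendre([0, 0, 0, 1])(x)   # P3(x) = (5x³ - 3x)/2

cross = np.sum(P2 * P3) * dx     # ∫ P2·P3 dx ≈ 0 (orthogonal)
norm2 = np.sum(P2 * P2) * dx     # ∫ P2² dx ≈ 2/5 (in general 2/(2n+1))
print(cross, norm2)
```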

Expansion by orthogonal systems of functions. Just as in three-dimensional space every vector can be represented in the form of a linear combination of three pairwise orthogonal vectors e1, e2, e3 of unit length: ƒ = a1e1 + a2e2 + a3e3, so in a functional space there arises the problem of the decomposition of an arbitrary function f in a series with respect to an orthonormal system of functions, i.e., of the representation of f in the form: ƒ(x) = a1ϕ1(x) + a2ϕ2(x) + ··· + anϕn(x) + ··· (15)
Here the convergence of the series (15) to the function f has to be understood in the sense of the distance between elements in Hilbert space. This means that the mean-square deviation of the partial sum sn(x) = a1ϕ1(x) + ··· + anϕn(x) from the function f(x) tends to 0’ as n grows: ∫ab (ƒ(x) − sn(x))² dx → 0.

This convergence is usually called “convergence in the mean.”
Expansions in various systems of orthogonal functions often occur in analysis and are an important method for the solution of problems of mathematical physics.
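“Convergence in the mean” can be watched directly; a sketch expanding f(x) = x on (−π, π) in the trigonometric system, whose sine coefficients are known in closed form (the truncation orders are arbitrary choices):

```python
import numpy as np

# Partial sums of the Fourier (sine) series of f(x) = x on (-π, π):
# f ~ 2·Σ (-1)^(k+1)·sin(kx)/k. Mean-square error falls as terms are added.
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]
f = x

def partial_sum(n):
    s = np.zeros_like(x)
    for k in range(1, n + 1):
        s += 2 * (-1)**(k + 1) * np.sin(k * x) / k
    return s

errs = [np.sqrt(np.sum((f - partial_sum(n))**2) * dx) for n in (1, 5, 25)]
print(errs)  # strictly decreasing: convergence in the mean
```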

Linear Operators and Further Developments of Functional Analysis

In the preceding section we have seen that problems on the oscillations of an elastic system lead to the search for the eigenvalues and eigenfunctions of integral equations. Let us note that these problems can also be reduced to the investigation of the eigenvalues and eigenfunctions of linear differential equations.* Many other physical problems also lead to the task of computing the eigenvalues and eigenfunctions of linear differential or integral equations.
Let us give one more example. In modern radio technology the so-called wave guides are widely used for the transmission of electromagnetic oscillations of high frequencies, i.e., hollow metallic tubes in which electromagnetic waves are propagated. It is known that in a wave guide only electromagnetic oscillations of not too large a wave length can be propagated. The search for the critical wave length amounts to a problem on the eigenvalues of a certain differential equation.
Problems on eigenvalues occur, moreover, in linear ¬Algebra, in the theory of ordinary differential equations, in questions of stability, etc.
So it became necessary to discuss all these related problems from one single point of view. This common point of view is the general theory of linear operators. Many problems on eigenfunctions and eigenvalues in various concrete cases came to be fully understood only in the light of the general theory of operators. Thus, in this and a number of other directions the general theory of operators turned out to be a very fruitful research tool in those domains of mathematics in which it is applicable.
In the subsequent development of the theory of operators, quantum mechanics played a very important role, since it makes extensive use of the methods of the theory of operators. The fundamental mathematical apparatus of quantum mechanics is the theory of the so-called self-adjoint operators. The formulation of mathematical problems arising in quantum mechanics was and still is a powerful stimulus for the further development of functional analysis.
The operator point of view on differential and integral equations turned out to be extremely useful also for the development of practical methods for approximate solutions of such equations.

Fundamental concepts of the theory of operators.

Let us now proceed to an explanation of the fundamental definitions and facts in the theory of operators.
In analysis we have come across the concept of a function. In its simplest form this was a relation that associates with every number x (the value of the independent variable) a number y (the value of the function). In the further development of analysis it became necessary to consider relations of a more general type.
Such more general relations are discussed, for example, in the calculus of variations, where we associate with every function a number. If with every function a certain number is associated, then we say that we are given a functional. As an example of a functional we can take the association between an arbitrary function: y = ƒ(x) (a ≤ x ≤ b) and the arc length of the curve represented by it. We obtain another example of a functional if we associate with every function: y = ƒ(x) (a ≤ x ≤ b) its definite integral: ∫ab ƒ(x) dx. If we regard f(x) as a point of an infinite-dimensional space, then a functional is simply a function of the points of the infinite-dimensional space. From this point of view the problems of the calculus of variations concern the search for maxima and minima of functions of the points of an infinite-dimensional space.
In order to define what we mean by a continuous functional it is necessary to define first what we mean by proximity of two points of an infinite-dimensional space. Earlier we gave the distance between two functions f(x) and g(x) (points of an infinite-dimensional space) as: √( ∫ab (ƒ(x) − g(x))² dx ). This method of assigning a distance in infinite-dimensional space is often used, but of course it is not the only possible one. In other problems other methods of giving the distance between functions may turn out to be better. We may point, for example, to the problem of the theory of approximation of functions (see Chapter XII, §3), where the distance between functions, which characterizes the measure of proximity of the two functions f(x) and g(x), is given, for example, by the formula: max |ƒ(x) − g(x)|.

Other methods of giving a distance between functions are used in the investigation of functionals in the calculus of variations. Distinct methods of giving the distance between functions lead us to distinct infinite-dimensional spaces.
Thus, various infinite-dimensional (functional) spaces differ from each other by their set of functions and by the definition of distance between them. For example, if we take the set of all functions with integrable square and define distance as: √( ∫ab (ƒ(x) − g(x))² dx ), then we arrive at the Hilbert space that was introduced in §2; but if we take the set of all continuous functions and define distance as max |ƒ(x) − g(x)|, then we obtain the so-called space C. For a given kernel k(x, y), the formula: g(x) = ∫ab k(x, y)ƒ(y) dy indicates a rule by which every function f(x) is set in correspondence with another function g(x).
This kind of correspondence, which relates to one function f another function g, is called an operator.
We shall say that we are given a linear operator A in a Hilbert space if we have a rule by which we associate with every function f another function g. The correspondence need not be given for all the functions of the Hilbert space. In that case the set of those functions f for which there exists the function g = Af is called the domain of definition of the operator A (similar to the domain of definition of a function in ordinary analysis). The correspondence itself is usually denoted as follows: g = Aƒ

The linearity of the operator means that the sum of the functions f1 and f2 is associated with the sum of Af1 and Af2, and the product of f and a number λ with the function λAf; i.e.: A(ƒ1 + ƒ2) = Aƒ1 + Aƒ2, A(λƒ) = λAƒ. Occasionally continuity is also postulated for linear operators; i.e., it is required that the convergence of a sequence of functions fn to a function f should imply that the sequence Afn converges to Af.
Let us give examples of linear operators.
1. Let us associate with every function f(x) the function: F(x) = ∫ax ƒ(t) dt,

i.e., the indefinite integral of f. The linearity of this operator follows from the ordinary properties of the integral, i.e., from the fact that the integral of the sum is equal to the sum of the integrals and that a constant factor can be taken out of the integral sign.
2. Let us associate with every differentiable function f(x) its derivative f′(x). This operator is usually denoted by the letter D; i.e.:  ƒ'(x)= D ƒ(x)

Observe that this operator is not defined for all the functions of the Hilbert space but only for those that have a derivative belonging to the Hilbert space. These functions form, as we have said previously, the domain of definition of this operator.
3. The examples 1 and 2 were examples of linear operators in an infinite-dimensional space. But examples of linear operators in finite-dimensional spaces have occurred earlier in this text. Thus, affine transformations were investigated. If an affine transformation of a plane or of space leaves the origin of coordinates fixed, then it is an example of a linear operator in a two-dimensional or three-dimensional space. The linear transformations of an n-dimensional space now appear as linear operators in n-dimensional space.
4. In the integral equations, we have already met a very important and widely applicable class of linear operators in a functional space, namely the so-called integral operators. Let us choose a certain definite function k(x, y). Then the formula: g(x) = ∫ab k(x, y)ƒ(y) dy

associates with every function f a certain function g. Symbolically we can write this transformation as follows:

g=Aƒ

The operator A in this case is called an integral operator. We could mention many other important examples of integral operators.
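Discretized on a grid, an integral operator becomes a matrix, and its linearity can be checked directly; a sketch with an arbitrarily chosen symmetric kernel e^(−|x−y|) on (0, 1):

```python
import numpy as np

# g(x) = ∫ k(x, y)·f(y) dy over (0, 1), discretized: the operator is K·dy.
n = 400
y = np.linspace(0, 1, n)
dy = y[1] - y[0]
K = np.exp(-np.abs(y[:, None] - y[None, :]))   # symmetric kernel k(x, y) = e^(-|x-y|)

def A(f):
    return (K @ f) * dy

f1, f2 = np.sin(np.pi * y), y**2
lam = 3.0
# Linearity: A(f1 + λ·f2) = A(f1) + λ·A(f2)
diff = np.max(np.abs(A(f1 + lam * f2) - (A(f1) + lam * A(f2))))
print(diff)  # ≈ 0 (floating-point roundoff only)
```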
In §4 we spoke of the inhomogeneous integral equation: ƒ(x) = λ ∫ab k(x, y)ƒ(y) dy + h(x).

In the notation of the theory of operators this equation can be rewritten as follows:

ƒ = λAƒ + h, (33)
where λ is a given number, h a given function (a vector of an infinite-dimensional space), and f the required function. In the same notation the homogeneous equation can be written as follows: ƒ = λAƒ. (34)
The classical theorems on integral equations, such as, for example, the theorem formulated in §4 on the connection between the solvability of the inhomogeneous and the corresponding homogeneous integral equation, are not true for every operator equation. However, one can indicate certain general conditions to be imposed on the operator A under which these theorems are true.
These conditions are stated in topological terms and express that the operator A should carry the unit sphere (i.e., the set of vectors whose length does not exceed 1) into a compact set.

Eigenvalues and eigenvectors of operators.

The problem of eigenvalues and eigenfunctions of an integral equation to which we were led by problems on oscillations can be formulated as follows: to find the values λ for which there exists a non-0’ function f satisfying the equation: ƒ(x) = λ ∫ab k(x, y)ƒ(y) dy. As before, this equation can be written as follows: ƒ = λAƒ, or Aƒ = (1/λ)ƒ. (35)
Now we shall understand by A an arbitrary linear operator. Then a vector f satisfying the equation (35) is called an eigenvector of the operator A, and the number 1/λ the corresponding eigenvalue. Since the vector (1/λ)ƒ coincides in direction with the vector f (differs from f only by a numerical factor), the problem of finding eigenvectors can also be stated as the problem of finding non-0’ vectors f that do not change direction under the transformation A.
This way of looking at the eigenvalues enables us to unify the problem of eigenvalues of integral equations (if A is an integral operator), differential equations (if A is a differential operator), and the problem of eigenvalues in linear ¬Algebra (if A is a linear transformation in finite-dimensional space; see Chapter VI and Chapter XVI). In the case of three-dimensional space this problem arises in the search for the so-called principal axes of an ellipsoid.
In the case of integral equations a number of important properties of the eigenfunctions and eigenvalues (for example the reality of the eigenvalues, the orthogonality of the eigenfunctions, etc.) are consequences of the symmetry of the kernel, i.e., of the equation k(x, y) = k(y, x).
For an arbitrary linear operator A in a Hilbert space the analogue of this property is the so-called self-adjointness of the operator.
The condition for an operator A to be self-adjoint in the general case is that for any two elements f1 and f2 the equation:

(Aƒ1,ƒ2)=(ƒ1,Aƒ2) holds, where (Af1, f2) denotes the scalar product of the vector Af1 and the vector f2.
In problems of mechanics the condition of self-adjointness of an operator is usually a consequence of the law of conservation of energy. Therefore it is satisfied for operators connected with, say, oscillations for which there is no loss (dissipation) of energy.
The majority of operators that occur in quantum mechanics are also self-adjoint.
Let us verify that an integral operator with a symmetric kernel k(x, y) is self-adjoint. In fact, in this case Af1 is the function:

(Aƒ1)(x) = ∫ab k(x, y)ƒ1(y) dy.

Therefore the scalar product (Af1, f2), which is equal to the integral of the product of this function with f2, is given by the formula:

(Aƒ1, ƒ2) = ∫ab ∫ab k(x, y)ƒ1(y)ƒ2(x) dy dx.

The equation (Af1, f2) = (f1, Af2) is an immediate consequence of the symmetry of the kernel k(x, y).
Arbitrary self-adjoint operators have a number of important properties that are useful in the applications of these operators to the solution of a variety of problems. Indeed, the eigenvalues of a self-adjoint linear operator are always real and the eigenfunctions corresponding to distinct eigenvalues are orthogonal to each other.
Let us prove, for example, the last statement. Let λ1 and λ2 be two distinct eigenvalues of the operator A, and f1 and f2 eigenvectors corresponding to them. This means that: Aƒ1 = λ1ƒ1, Aƒ2 = λ2ƒ2. (36)

We form the scalar product of the first equation (36) by f2, and of the second by f1. Then we have: (Aƒ1, ƒ2) = λ1(ƒ1, ƒ2), (Aƒ2, ƒ1) = λ2(ƒ2, ƒ1). (37)

Since the operator A is self-adjoint, we have (Af1, f2) = (Af2, f1). When we subtract the second equation (37) from the first, we obtain:

0 = (λ1-λ2) ( ƒ1, ƒ2).

Since λ1 ≠ λ2, we have (f1, f2) = 0, i.e., the eigenvectors f1 and f2 are orthogonal.
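Both properties (real eigenvalues, orthogonal eigenvectors) can be checked numerically for a self-adjoint operator in finite dimensions, i.e., a symmetric matrix (the size and seed are arbitrary choices):

```python
import numpy as np

# Finite-dimensional self-adjoint operator = symmetric matrix:
# its eigenvalues are real and its eigenvectors pairwise orthogonal.
rng = np.random.default_rng(1)
B = rng.standard_normal((8, 8))
A = (B + B.T) / 2                      # self-adjoint: (Af, g) = (f, Ag)
lam, V = np.linalg.eigh(A)             # eigh returns real eigenvalues

G = V.T @ V                            # Gram matrix of the eigenvectors
ortho_defect = np.max(np.abs(G - np.eye(8)))
print(lam.dtype, ortho_defect)         # real dtype, defect ≈ 0
```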
The investigation of self-adjoint operators has brought clarity into many concrete problems and questions connected with the theory of eigenvalues. Let us dwell in more detail on one of them, namely on the problem of the expansion by eigenfunctions in the case of a continuous spectrum.
In order to explain what a continuous spectrum means, let us turn again to the classical example of the oscillation of a string. Earlier we have shown that for a string of length l the characteristic frequencies of oscillations can assume the sequence of values:

ωn = n(π/l)a  (n = 1, 2, 3, …),

where a is the velocity of propagation of waves along the string. Let us plot the points of this sequence on the numerical axis Oλ. When we increase the length of the string l, the distance between any two adjacent points of the sequence decreases, and they fill the numerical axis more densely. In the limit, when l → ∞, i.e., for an infinite string, the eigenfrequencies fill the whole numerical semiaxis λ ≥ 0. In this case we say that the system has a continuous spectrum.
We have already said that for a string of length l the expansion in a series by eigenfunctions is an expansion in a series by sines and cosines of n(π/l)x; i.e., in a trigonometric series:

f(x) = Σn [an cos n(π/l)x + bn sin n(π/l)x].

For the case of an infinite string we can again show that a more or less arbitrary function can be expanded by sines and cosines. However, since the eigenfrequencies are now distributed continuously along the numerical line, this is not an expansion in a series, but in a so-called Fourier integral:

f(x) = ∫0∞ [a(λ) cos λx + b(λ) sin λx] dλ.

The expansion in a Fourier integral was already well known and widely used in the 19th century in the solutions of various problems of mathematical physics.
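The discrete case can be illustrated in a few lines. A sketch (assuming NumPy; the test function x(l − x) and the length l = 2 are arbitrary illustrative choices) expanding a function of the fixed-end string in the sine eigenfunctions and rebuilding it from its first coefficients:

```python
import numpy as np

# Expand f(x) = x(l - x) on [0, l] in the string eigenfunctions sin(nπx/l)
# and rebuild it from the first few coefficients of its sine series.
l = 2.0
x = np.linspace(0.0, l, 1001)
dx = x[1] - x[0]
f = x * (l - x)

def b(n):
    # b_n = (2/l) ∫ f(x) sin(nπx/l) dx  (Riemann sum; the integrand
    # vanishes at both endpoints, so this equals the trapezoid rule)
    return (2.0 / l) * np.sum(f * np.sin(n * np.pi * x / l)) * dx

# Partial sum of the series: already accurate with ~20 terms,
# because the coefficients of this smooth f decay like 1/n³.
approx = sum(b(n) * np.sin(n * np.pi * x / l) for n in range(1, 20))
assert np.max(np.abs(f - approx)) < 1e-3
```

Letting l grow packs the frequencies n(π/l) ever more densely, which is exactly the passage from this discrete sum to the Fourier integral described above.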
However, in more general cases with a continuous spectrum, many problems referring to an expansion of functions by eigenfunctions were not properly clarified. Only the creation of the general theory of self-adjoint operators brought the necessary clarity to these problems.
Let us mention still another set of classical problems that have been solved on the basis of the general theory of operators. The discussion of oscillations involving dissipation (scattering) of energy belongs to such problems.
In this case we can again look for free oscillations of the system in the form u(x)ϕ(t). However, in contrast to the case of oscillations without dissipation of energy, the function ϕ(t) is not simply cos ωt, but has the form e^(−kt) cos ωt, where k > 0. Thus, the corresponding solution has the form u(x)e^(−kt) cos ωt. In this case every point x again performs oscillations (with frequency ω); however, the oscillations are damped, because for t → ∞ the amplitude of these oscillations, containing the factor e^(−kt), tends to 0.
It is convenient to write the characteristic oscillations of the system in the complex form u(x)e^(−iλt), where in the absence of friction the number λ is real, and in the presence of friction λ is complex.

The problem of the oscillations of a system with dissipation of energy again leads to a problem on eigenvalues, but this time not for self-adjoint operators. A characteristic feature here is the presence of complex eigenvalues indicative of the damping of the free oscillations.
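The appearance of complex eigenvalues under dissipation can be seen in the simplest finite-dimensional analogue: the damped oscillator u″ + 2ku′ + ω²u = 0 written in first-order form (the values k = 0.3 and ω = 2 below are arbitrary illustrative choices, assuming NumPy):

```python
import numpy as np

# Damped oscillator u'' + 2k u' + ω² u = 0 in first-order form:
# the state (u, u') evolves under a matrix that is NOT symmetric.
k, omega = 0.3, 2.0                    # illustrative damping and frequency
A = np.array([[0.0,        1.0],
              [-omega**2, -2.0 * k]])

eigvals = np.linalg.eigvals(A)

# The eigenvalues are λ = -k ± i√(ω² - k²): the negative real part -k is
# precisely the damping factor e^(-kt); with k = 0 they would be ±iω,
# i.e., purely imaginary, as for a self-adjoint (frictionless) system.
assert np.allclose(eigvals.real, -k)
assert np.allclose(np.abs(eigvals.imag), np.sqrt(omega**2 - k**2))
```

The loss of symmetry of the matrix is the finite-dimensional trace of the operator ceasing to be self-adjoint, and the nonzero real part of λ is the damping.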

Connection of functional analysis with other branches of mathematics and quantum mechanics.

Needless to say, with so much confusion, perturbation theory has not yet received a full mathematical foundation, which remains an interesting and important mathematical problem.
Independently of the approximate determination of eigenvalues, we can often say a good deal about a given problem by means of qualitative investigation. In problems of quantum mechanics this investigation proceeds on the basis of the symmetries existing in the given case; as examples of such symmetries we can take the symmetry properties of crystals, the spherical symmetry of an atom, symmetry with respect to rotation, and others. Since these symmetries form a group, group methods (the so-called representation theory of groups) enable us to answer a number of questions without computation; as examples we may mention the classification of atomic spectra, nuclear transformations, and other problems. Thus, quantum mechanics makes extensive use of the mathematical apparatus of the theory of self-adjoint operators. At the same time, the continued contemporary development of quantum mechanics leads to a further development of the theory of operators by placing new problems before it.
The influence of quantum mechanics, and also the internal mathematical development of functional analysis, has had the effect that in recent years ¬Algebraic problems and methods have played a significant role in functional analysis. This intensification of ¬Algebraic tendencies in contemporary analysis can well be compared with the growing importance of ¬Algebraic methods in contemporary theoretical physics as against the methods of the physics of the 19th century.