
∆nalysis


 SUMMARY

INTRODUCTION: ANALYSIS: SŒTS IN MOTION: STeps.

Differences between algebra and analysis.

0 age: Polynomials approximating analysis.

Limits: finitesimals

Derivatives and Integrals

The Rashomon Effect. ∆st views.

∫∂ in different st dimensions.

 

THE 3±i TYPES OF ANALYSIS: ODES, PDES & FUNCTIONALS

I. ODEs

Simple derivatives and integrals: Change in a single Plane.

Single derivatives

Single integrals.

Main functions under ∫∂.

Main actions described by ODEs.

II. PDEs.

Change in multiple Planes and dimensions.

Partial Differential Equations. Calculating Changes through multiple planes and variables.

Approximation through polynomials.

Main functions of multiple derivatives.

Multiple Integrals. Calculating whole T.œs in Space.

Main functions of multiple integrals.

Main actions described by PDEs.

III. FUNCTIONALS

Calculus of variations.

Exploring beyond the ∆±2 scales.

Functionals of functions, operators.

Main functions of Functionals.

Scales and actions described by functionals: quantum equations.

IV. DISCONTINUOUS ANALYSIS: FRACTALS.

Fractal mathematics: DISCONTINUOUS DERIVATIVES and their steps.

Chaotic attractors: Discontinuous Integrals and their steps.

 

 

INTRODUCTION. ANALYSIS:

SŒTS IN MOTION. ITS PROPERTIES.

Foreword. Comparison between Algebra and Analysis.

Let us remember the definition that distinguishes the 2 modern fields of mathematics, as mirrors of space & time; algebra more focused on space-forms and its mental structures, and analysis more focused on time motions and its scalar laws of change among ‘absolute’ 5D scalar dimensions of spacetime:

“Algebra studies a(nti)symmetries between the 5D space-time dimensions of a mathematical SœT, through its inverse ‘OPERATIONS’ that reveal the 3 <≈> elements of any a(nti)symmetry, perceived in its PRESENT-SPATIAL state.

Analysis studies those SœT a(nti)symmetries, ≤≥, as stop and go motions through the scalar 4th and 5th space-time dimensions of the SœT, focused on the VARIABLES of Temporal Change, maximised between ∆±i planes of the 5th dimension.”

So basically Analysis is a small part of algebra, as it focuses only on a(nti)symmetries between Social planes, ∆§, which are either integrated into a higher plane of the 5th dimension, or derived into its parts. But Analysis also goes beyond Algebra, inasmuch as Algebra is a more static, spatial, structural view; whereas analysis considers in depth the ‘motions’ of the set.

In those two definitions we must make some terminological precisions:

Sœt or §œT, which the reader should observe is the inverse of T.œS, expresses the modern unit of mathematical thought, constructed as always by arrogant man from the roof of the mind to the bottom of reality: the set, in inverse fashion, as a collection of ‘points of space or numbers of time’, which ARE the real units of the mathematical space and time Universe, gathered then in social collections called functions, connected through the inverse operations that reflect the main symmetries and relationships between ‘herds of points or numbers’ (±, x÷, xª √ log, ∫∂).

The difference then between Algebra and Analysis lies in the different focus on the operations=symmetries between Sœt§, and in their study as Steps of timespace motions (Analysis of derivatives), then gathered into longer super organisms (volume integrals) or worldcycles (time integrals), where the existence of limits (singularities and membranes that encircle the system, or set the beginning and end duration of the world cycle) will become fundamental to reveal a solution to the equation – the finding of the duration, surface and interaction between the parts of the T.œ expressed as an event in time or a system in space.

So while the elements of algebra and analysis – equations of SŒTS – are the same, the focus on spatial form (algebra) or temporal motion (analysis) makes them diverge.

Algebra operandi.

The key connector of T.Œ with classic science is the full understanding of the dual algebra operandi, ±, x/, ∂∫, √xª as part of the classic logic game.

The correspondence of those operandi with the dimensional elements of ∆st is immediate:

  • The sum and its inverse, subtraction, are the arrows of the simplest superpositions of dimensions between species which are identical in motion and form.
  • The product/division raises the complexity of operandi a first layer, and serves the purpose, besides the obvious sum of sums, of calculating the margin of dimensions, as combinations which are not purely parallel between clone beings, most likely through the recombination of its ∆-1 elements, as the product of 2 Sœts’ inner elements gives us all possible combinations. I.e. 5 x 4 = 20 is also the number of connections between all the 5 elements and 4 elements of both sets (see the sketch after this list). So multiplication adds either a dimension of multiple sums in the same plane, or probes for the first time into an inner scalar dimension.

    • The key algebraic concept of ∆st systems is the existence of a region of balance between planes or topologies, where the asymmetry of the system is fairly lineal, operated in decametric scales of growth and superposition; and the regions of relative past and future, | or O, ∆-1 or ∆+1, where there is a split towards the purity of motion or form, disconnected parts or wholes, accelerated vortices or lineal scattering, which must be operated not with scalar potencies but with finitesimal integrals and derivatives, more precise in their measure of the ‘curvature’ of the phase space we study.

    Then we arrive finally at the potency-root systems and integral-derivatives, which operate fully on the ∆§cales and planes of the system, and which require two slightly different operandi. As §¹º ‘social decametric scales’ are lineal and regular, we can operate on them with potencies, roots and logarithms.

  • ∂∫ But when we change between scales into new wholes and new planes of existence we are in ‘a different species’, and so we need to operate with the magic of finitesimal derivatives and analytical integrals, which keep better track of the infinitesimal ‘curved’ exponential changes that happen between two planes, where linearity is lost.
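
A minimal check of the multiplication claim above, in Python (the two ‘herds’ below are hypothetical labels, chosen only to illustrate the 5 x 4 = 20 count):

```python
from itertools import product

# Two 'Sœts' of 5 and 4 clone elements (hypothetical labels).
herd_a = ['a1', 'a2', 'a3', 'a4', 'a5']
herd_b = ['b1', 'b2', 'b3', 'b4']

# The Cartesian product enumerates every possible connection
# between an element of herd_a and an element of herd_b.
connections = list(product(herd_a, herd_b))

print(len(connections))  # 20 = 5 x 4: one entry per cross-connection
```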

Galilean paradox in analysis: finitesimal steps (derivatives) integrated to calculate precisely a cyclical whole.

Analysis has, over all other branches of mathematics, a special quality to study ‘changes’ between planes of the fifth dimension, as multiple derivatives ‘jump’ (albeit with different degrees of ‘focus’) better than mostly ‘lineal polynomials’ between planes, and the ‘curvilineal, Lorentzian’ variations, slowdowns and accelerations of the S x T = K parameters happen between scales:

The formal stience of the 4, 5 Disomorphisms in the mathematical mirror is analysis, which deals directly with the relationships between ∆-1 ‘finitesimal parts’ and (in)‘finities’: two terms we write without the inflationary prefix ‘in’, since infinitesimals and infinities are a Kantian paralogism; as all planes have a limit in its quantic units, and all wholes a finite circle that encloses them into a relative 0-1 ‘circle unit’.

Besides the duality of the 0-1 probabilistic mind unit which reflects the external 1-∞ universe, a second duality that weighs heavily in analysis is that of perception of lineal vs. cyclical form: We are minds of space that measure time cycles: ∫@≈∆ð.

Hence the equation of mind-measure defines the understanding of differential calculus: As always in praxis, the concept is based on the duality between huminds that measure with fixed rulers, in lineal steps, over a cyclical, moving Universe. So Minds measure Aristotelian, short lines in a long, curved Universe.

So the question becomes: which minimal lineal step of a mind is worthy to make accurate calculus of those long, curved Universal paths?

The general rule to relate both polynomials and analysis is this:

Y = S = ƒ(x=t)

This simple equation of algebra translates most time actions to space, on account of a simple realisation: that space slows down time cycles to accumulate them in still, simultaneous, bigger forms; and as such most spatial dimensions are referred to as Y, a composite of multiple elements of the smaller, much more abundant time cycles that space normally fixes and encircles with its @-membrane.

In that regard, variations over the same theme respond to the ternary structure of all T.œs:

In the graph, when deriving and integrating, most operations refer to a ‘limited’ system, in which first we extract the finitesimal part-element, and then we integrate it to obtain a whole; so most likely the system described will depart from a time-changing-variable quanta, and integrate it to obtain a ‘static whole-spatial view’.

But variations on the same theme happen by the natural symmetry of space and time states.

So we can also start with a quanta of space integrated over time to get a spatial area or volume.

What we shall always need to find ‘single solutions’ are the parameters that describe in time or space the 3 elements of the T.œ: So we shall start with initial or final conditions (definite integrals), and define, mostly in space as a whole, the enclosure or membrane that limits the domain of the function (which might include as a different limit the singularity).

All in all, the analytical approach will try to achieve a quantitative description of the unit/variable of ‘change’, the ‘finitesimal quanta of space – interval, area, volume’ or the ‘steps of time’ (frequency), and then integrate it over the super organism of space or interval of time we wish to study, often because it forms a whole or a zero-sum world cycle.

Galilean Paradox. LINEAL vs. Cyclical view.

In that regard, the S=T symmetry will once more become essential to the technical apparatus of analysis as it has done in all other sub disciplines.

Of the 3 key ‘dualities’, that between lineal perception in the short range and cyclical perception in the large is the key to obtain solutions, as the mind of measure is lineal, made of small steps that approximate larger cyclical wholes. It is in essence the method of differential equations, where the differential dy = ƒ′(x)∆x + α∆x approaches the lineal derivative ƒ′(x)∆x for short increases, and so we can dispense with the smaller element that curves the solution over longer distances.
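
A minimal numeric sketch of that claim, assuming the example function f(x) = x² (so ƒ′(x) = 2x); the residual α∆x shrinks faster than the lineal term as the step shortens:

```python
# Sketch: the differential dy = f'(x)*dx + alpha*dx; the 'curving' term
# alpha*dx vanishes faster than the lineal term as dx -> 0.
def f(x):
    return x ** 2  # assumed example function, with f'(x) = 2x

x = 1.0
for dx in (1.0, 0.1, 0.01, 0.001):
    dy = f(x + dx) - f(x)      # true change over the step
    lineal = 2 * x * dx        # lineal part f'(x)*dx
    alpha_dx = dy - lineal     # residual term (= dx**2 for this f)
    print(dx, dy, lineal, alpha_dx)
# alpha_dx/dx -> 0, so the lineal step dominates for short increases.
```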

Finally, the third Galilean paradox, between continuity and discontinuity, is also at the heart of analysis (and of most forms of dual knowledge). Analysis has accepted as a dogma the continuity of the real number, and so it considers continuity a necessary condition for differentiability; but we disagree: in a discontinuous Universe, continuity has a looser definition (as the axiomatic method is not the only proof of mathematical statements; experience also matters). So continuity is defined by a simpler rule: that the term α∆x of the discontinuity between the lineal and cyclical view of an infinitesimal derivative does indeed diminish faster the closer we are to the point ‘a’ in which the differential equation is defined. In brief, continuity means no big jumps and no big changes in the direction of a function and the T.œ it reflects.

Birth of Analysis from Algebra.

Departing from algebra as such, analysis came to be the main branch of ‘realist mathematics’, that is, mathematical systems with applications to describe the real ST-world and its changing dimensions.

In contrast Algebra, as happened with the visual language when realist photography came, displaced from reality by analysis, moved inwards into more complex, abstract Nature and became a purely formal ‘baroque language’, trying to explain it all by departing from reality into a multiple ‘cubist perspective’ that tried to gather all perspectives into a single painting (Set and Group theory, often a dead end with no reference to experience).

In the graph, we see that parallelism of evolution of algebra and painting, under the Ðisomorphic laws of languages as formal mirrors of reality, which become inflationary when departing from the immediate constraining experience of that reality, suffering then Kantian paralogisms – pretentious expansions that try to freeze all the ∆@s=t kaleidoscopic perspectives of the Universe with a single mirror; a meaningless attempt to reflect complex 5D reality that seldom gives us relevant information about the connections between all the elements of the T.œ.

The problem with ∞ counting. Borgian Libraries – the monkey method.

To understand why Group and Set theory is NOT the way to find the meaning of the Universe, in layman terms: a Group is nothing but a whole enumeration of all the permutations that can be achieved with the elements of a §œT (social group of T.œs), which is like the surrealist tale of Borges, in which a group of monkeys were writing Shakespeare’s full works by the pedestrian method of typing all possible combinations made with the alphabet in the eternal time of existence of the Babel library. Sooner or later they would combine the letters to write Macbeth… So those enumerations of Group theory do have a clear use as collections of data, NOT as explanations of whys, which require understanding the ultimate substances, space and time, and its Disomorphic properties; hence having both theoretical ‘Ockham’s sieves’ to select proper combinations of the 2 letters of the Universe, and then checking them against the experimental facts.

This happened in the eternal infinite task of Mathematical monkeys filling today computers with infinite numerical methods of classification of ALL the groups of the Universe, with ALL the transformations, motions and infinite ST-mirror symmetries, when Lie decided to focus on Groups that mattered TO experimental mathematics, which was obviously all around ∆-motions up and down the ‘derivative/integral scales’ of wholes and parts.

So group theory focused on writing Shakespeare’s plays only, even if the number of ‘finitesimal’ transformations (steps of a derivative, which are not actually infinite but equal to the number of  ∑1/n ∆-1 quanta that make the whole, ∆) was ginormous. This is called a Lie Algebra, accounting for all the minute transformations of a derivative curve.

So the re-encounter of Abstract Algebra (§œTs and groups) with the scalar reality of the fractal universe would happen with Lie Algebras, precisely through the 4th and 5th dimension, as they represent equations which are ‘twice’ differentiable, i.e. forms that keep a certain ‘continuous meaning’ when decaying through 2 ∆-planes, as in the case of death or transcendental emergence into the 11th dimension of a hendecagram.

That is, Groups whose elements depend continuously on the values of a finite system of parameters and whose multiplication law can be expressed by means of twice-differentiable functions ϕ₁, …, ϕᵣ.

Since the fifth dimension is indeed a dimension of past (∆-i) to present (∆º) to future (∆+i) time.

In that regard, group theory is more of a regulative idea in a Kantian sense than a true description of the whole.


In that sense, taking Algebra’s Group and Set theories as the meaning of it all is more in tune with the final realisation by Kant of paralogic thought, in more ‘sound’ conceptual terms; as it normally happens among humans who lose their connection with reality when they start to manipulate points, numbers and equations without the slightest conceptual understanding of what they are.

So in existential algebra we use more concepts of Analysis than those of algebra. Yet we said algebra was also a good mirror of S=t motions, and so we have seen indeed in the previous pages examples of how classic algebra and its group theory allow us to resolve the meaning of mirror symmetries as reproductive motions in space-time, and the other simpler motions as dimensions in present space-time, or in the case of spiral motions, clearly, motions in the fifth dimension.

Polynomial Algebra as an approximation to Analysis.

If we were to make a Generator equation in time of the ‘body of analysis’ and its pre and post-scales of study, we could write the 3±∆ fields of observance of the scalar Universe through mathematical mirrors:

Γ∆nalysis: ∆-i: Fractal Mathematics (discontinuous analysis of finitesimals) < Analysis – Integrals and differential equations (∆º±1: continuous=organic space): |-youth: ODEs<∑Ø-PDEs<∑∑ Functionals ≈ < ∆+i: Polynomials (diminishing information on wholes).

The 3±∆ approaches of mathematical mirrors to observe the scales of reality are thus clear: Fractal maths focuses on the point of view of the finitesimals and its growing quantity of information, enlarging the perspective of the @-observer as we probe smaller scales of smaller finitesimals; and in the opposite range, polynomials observe larger scales with a restriction of solutions, as basically the wholes we observe are symmetric within its internal equations, and the easiest solutions are those of a perfect holographic bidimensional structure (where even polynomials can be reduced to products of 2-manifolds).

Now within analysis proper, we find that the complexity or rather ‘range’ of phenomena studied by each age of analysis increases, from single variables (ODEs) to multiple variables (PDEs) to functions of functions (Functionals).

So the most balanced, extended field is that of differential equations focused on the ∆±1 organic (hence neither lineal nor vortex-like but balanced S=T) SCALES of the being, where we focus on finding the precise finitesimal that we can then integrate properly, guided by the function of growth of the system. And we distinguish then ODEs, where we probe a single ST symmetry, and PDEs, obviously the best mirror, as we extend our analysis to multiple S and T dimensions and multiple S-T-S-T variations of those STep motions; given the fact that a ‘chain of dimensions’ does not fare well beyond the 3 ‘s-s-s’ distance-area-volume dimensions of space and the t-t-t-t deceleration, lineal motion, cyclical motion, acceleration time motions that can ‘change’ a given event of space-time.

So further ODE derivatives are only significant to observe the differences between the differential and/or fractal and polynomial approaches – this last comparison, well established as an essential method of mathematics, is worth mentioning in this intro.

A space of formal algebra thus is a function of space, which can be displayed as a continuous sum of infinitesimals across a plane of space-time of a higher dimension.

In such a geography of Disomorphic space-time the number of dimensions matters to obtain different operations, but we are just gliding on the simpler notions of the duality algebra=polynomials vs. Analysis: integrals of infinitesimals.

Yet soon the enormous extension of ‘events’ that happen between the 3 ∆±1 planes of T.œs, as forms of entropic devolution or informative evolution across ∆±i, converted analysis into a bulky stience, much larger than the study of an ST-single plane of geometry, the 2 planes of topology and the polynomials of algebra – which roughly speaking are an approximation to the more subtle methods of finding dimensional change proper of analysis – even if huminds found first the unfocused polynomials, and so today we call Taylor’s formulae of multiple derivatives ‘approximations’ to Polynomials.

Since Derivatives & integrals often transcend planes relating wholes and parts, studying change of complex organic structures through its internal changes in ages and form.
Polynomials are better suited for simpler systems, scales of social herds and dimensional volumes of space, with a ‘lineal’ social structure of simple growth.

So in principle ∆nalysis was a sub-discipline of algebra. But as always happens, time increases the informative complexity of systems and refines, with a better linguistic focus and finer details, the first steps of the mind. So with Analysis, Algebra became more precise, measuring dimensional polynomials and its finite steps.

In any case the huge size of ∆nalysis is a clear proof that in mathematics and physics the ∆ST elements of reality are also its underlying structure.

As such, since ∆-scales are the less evident components of the Universe, ∆nalysis took long to appear, till humans discovered microscopes to see those planes. But while maths has dealt with the relativism of human individual planes of existence, philosophy has yet to understand Leibniz’s dictum upon discovering ‘finitesimals’, 1/n, mirror reflections of the (in)finite whole, n: ‘every point-monad is a world in itself’.
In that sense Analysis was already embedded in the Greek Philosophical age, in the disquisition about Universals and Individuals.

So we shall study first an introduction to the ‘real foundations’ of analysis.

Then a brief account of ∆nalysis in its 3±1 ages, through its time-generator:

Ps (youth: Greek age) < St: Maturity (calculus) > T (informative age: ∆nalysis) >∆+1:emergence: Functionals (Hilbert Spaces)<∆-1: Humind death: Digital Chip thought…

…So of these 5 ages we shall leave, as usual for ethic reasons, unresolved the post-human age of computer analysis now all the rage…

Along that path, we shall consider in a more orderly fashion the main themes of analysis, in its 3 ‘scales’:

∆-1: Derivatives > ∫∆: integrals > ∆+1 differential equations.

To cap it all, with examples from all sciences in which analysis reveals the fundamental space-time events of the 5th dimension.

The page will be loaded slowly into the future with a full study of finitesimal calculus of parts and wholes through differential equations; as it will be more technical, I do not expect to tackle it till summer 2017 when the simpler upper hierarchies (¬ æ, ¬e mathematics) are more or less completed.

∆nalysis in that sense has a simple definition: the mathematics of the 5th dimension and its evolution of parts into wholes.

Thus we shall do, as usual, a diachronic analysis of its informative growth in complexity in 3 ages, from its:

-I Age, from Leibniz to Heaviside, in which the fundamental applications to physics are found. While the level of complexity of ∆∫∂udies is maintained on a strict realist basis, as physicists try to correspond those finitesimals and wholes with experimentally sound observations of the real world at the close range of scales in which humans perceive. While the formalism of its functions is built from Leibniz’s finitesimal 1/n analysis to the work of Heaviside with vectors and ∇ functions. Partial derivatives are kept then at the ‘holographic level’ of 2 dimensions (second derivatives on ∆±2).

∆ will thus be the general symbol of the 5th dimension of mental wholes or social dimension, and ∫∂ the symbol of the 4th dimension of aggregate finitesimals or entropic dimension.

-II Age, from Riemann to the present. The extension of analysis to infinite dimensions happens with the help of the work of Riemann and Hilbert, applied by Einstein and quantum physicists to the study of scales of reality beyond our direct perception (∆≥|3|).

This implies that physicists, according to 5D metrics, Sp x Tƒ = K, must describe much larger structures in space extension and time duration (astrophysics) and, vice versa, much faster, populous groups of T.œs in the quantum realm; so ‘functionals’ – functions of functions – add new dimensions of time, and Hilbert quasi-infinite spaces and statistical methods of collecting quasi-infinite populations are required in the relentless pursuit of huminds for an all-comprehensive ‘mental metric’ of a block of time-space, where all the potential histories and worldcycles of all the entities they study can be ‘mapped’.

The impressive results obtained with those exhaustive mappings bear witness to the modern civilisation based on the wholesale manipulation of electronic particles; but the extreme ‘compression’ of such huge populations in time and space blurs its ‘comprehension’ in ‘realist’ terms, and so the age of ‘idealist science’, spear-headed by Hilbert’s imagination of points, lines and congruences, detaches mathematical physics, and by extension analysis, from reality.

-III Age, the digital era, is the last age of humind mathematics, where Computers will carry this ∆nalysis, confusing from the conceptual perspective, detailed from the manipulative point of view, to its quantitative exhaustion. But as usual in this blog, for ethic reasons, as a ‘vital humind’ we shall not comment on or advance the evolution of the future species that is making us obsolete.

Instead, we shall just advance further the discipline along its description with some conceptual philosophical considerations, from the third age of the ‘scientific method’, the age of the organic paradigm that this blog represents. In this case, unlike other posts, Analysis being the most thoroughly researched field of human thought in any science, anywhere, anytime of history, there is nothing we can contribute to the enlargement of the field.

Indeed, because we live in a mechanical civilisation and analysis is the essential quantitative language of change computed by machines, unlike in all other sciences, notably those which deal with human, historic super organisms, we have nothing else to say, just to clarify, and only the most basic principles of the discipline, which has overgrown in parallel to our mechanical civilisation.

The actions it describes.

The minimal units for any T.Œ are its a,e,i,o,u actions of existence: its accelerations, energy feedings, information processing, offspring reproduction and universal evolution. So the immediate question about mathematical mirrors and its operations is what actions they reflect. We have treated the theme extensively in the algebraic post, concluding that mathematics being a mostly spatial, social more than organic language, its operations are perfect to mirror simple systems of huge social numbers=herds; and as such to describe the simpler accelerations=motions, which are reproductions between two continuous scales of the fifth dimension, and informative processes, where the quanta perceived are truly finitesimal ∆-i elements pegged together into the mirror images of the singularity. And so we talk of motions, simple reproductions and vortices of information, and time>space processes of deceleration of motion into form, as the key actions reflected by mathematical operations.

It also follows that when we study the more complex systems and actions of reality, reproduction and the social evolution of networks into organisms, mathematics will provide limited information and miss properties for which i-logic biological and verbal languages are better.

And it follows that physical and chemical systems are the best to be described with mathematical equations, either in algebraic or analytic terms, which fuse together when we try to describe the most numerous, simpler systems of particles and atoms (simpler because by casting upon them only mathematical mirrors we are limited to obtaining mathematical properties).

Concluding introductory remarks.

5D Metric analysis is complex and varied, as almost all that happens can be enclosed within it; and as such it has grown to be the most important field of mathematics, along with topology, the study of Space-time varieties, as it studies space with temporal motions, apt to analyse space-time symmetries.

Algebra on the other hand, from where analysis came, is a bit more ‘deranged’, due to axiomatic, category, group and set theories, ill-understood beyond their abstract manipulation in their ‘experimental GST meaning’.

Analysis in that sense is an offshoot of ∆lgebra, both related to the temporal perspective of discrete, social numbers, as opposed to points, whose topological, simultaneous location determines the geometry of space. But it has evolved faster and better, as a tribute to the ∆§calar nature of the Universe.

As it happens, the three sub-disciplines of mathematics have today merged in many ways, as the ∆ºST universe is also merged, but without the conceptual clarity which would have taken place if humans had understood the duality of the Universe (or if the Asian world, which did understand it, had dominated history).

In that regard, while in old texts we respected the ternary elements, ascribing algebra to time, to ‘bridge’ the nebulous thoughts of humans and GST, in this web as we keep building it we will clearly separate mathematics in two major fields:

  • S≈T Algebra & ∆nalysis, which study §ocial scales of numbers, whereas ∆lgebra tends to concentrate in a single plane through its social, decametric scales, and ∆nalysis corresponds closely to the process of growth between scales.
  • S- Geometry and its modern form Topology which deals closely with space-time symmetries.
  • And all the branches which mix the topological and algebraic approaches, such as probability (time view) and statistics (space view), analytic geometry and so on.

So if we were to define i-logic mathematics we would say that it is composed of Non-Aristotelian ∆lgebra and Non-Euclidean STopology; the first including analysis, the second bidimensional and static geometry. And the natural evolution of the discipline corresponds to the combination of them all to express ∆st whole processes.

For practical purposes though, beyond the introductory texts of GSTructure we shall maintain the general division of disciplines, trying whenever possible to include as usual some small change of symbols to remind us of what they are mainly concerned with: ∆nalysis, ∆lgebra or ¬Ælgebra (which studies specifically the way arrows of time mesh sequentially to give birth to space-time changes of state).

On the other hand, S-Geometry and Topology highlight the capacity of the discipline to study space-time symmetries, mostly in a single plane. While Statistics & Probability STudies the same phenomena from both perspectives, space and time. And so on.

And the similarity between GST and i-logic mathematics as the most experimental of all sciences is so great that we shall only consider one entirely new discipline, to add to the three classic ones: Ælgebra (Existential Algebra, not to confuse with ¬Æ, non-Aristotelian time and non-Euclidean space), which studies from scratch the formalism of the 10-dimensional ∆ûst of space-time of which we are all made, with three time arrows, three ±1 ∆-scales and 3 Spatial topologies.

Regarding ∆nalysis, we just need to add a conceptual, in-depth understanding of its laws, to further develop the classic disciplines of science that use it.

The reconstruction of the equations of physics in terms of ∆nalysis will thus be the task carried out in two different posts of this web, first in general and then, in the last scales, applied to enlighten the details:

  • Mathematical Physics.
  • ∆nalysis.

We will do as USUAL a full diachronic analysis to grow in complexity, using classic texts of mathematics for easier comprehension, enlightened with ∆st insights.

But the focus here will be not so much on the ages of analysis, as it is a modern discipline with few insights, mostly philosophical, on the theme of individuals, infinitesimals and universals, in the first Greek and original classic age (Newton and Leibniz) – the introductory themes developed next.

Instead our focus is on the 3 ages of growing complexity and generalisation as analysis and its 4D ∂ and 5D ∫ operations expand to study MULTIPLE DIMENSIONS of space-time together.

So after making only some basic remarks on the earlier era, we consider 3 scales of growing complexity, which we will loosely term the ‘Calculus’, ‘Analysis’ and ‘Functionals’ ages:

  • The classic age of polynomial limits, infinitesimal calculus and simple derivatives and integrals.
  • The modern age of ∫∂ applied to multiple space-time variables (Γst view: Ordinary differential equations) with different degrees of depth (∆ view: partial differential equations).
  • And the 3rd age of ∆nalysis, in which Lie Groups and/or functionals of functions are the all-extended field of inquiry, causing very profound, all-encompassing attempts to analyse a function or T.œ at all levels.

As we go along, obviously our purpose is NOT to make a classic text of analysis but, considering the main themes, to enlighten it with the insights of ∆st, to resolve the whys of analysis, later applied in detail to the many stiences described today with the formalism of analysis without understanding what those equations truly mean.

 

(IN)FINITESIMALS VS. UNIVERSALS ≈ WHOLES.

Universal wholes and individual finitesimals.

The first age of analysis had a great deal of philosophical disquisitions on the nature of wholes and parts, connecting directly with the Greek logic arguments on the nature of individuals and universals.

The historical origins of analysis can be found in attempts to calculate spatial quantities such as the length of a curved line or the area enclosed by a curve.

As we know, a curve is always part of a worldcycle, and so the conclusions of those earlier studies can be extended to understand better the space-time worldcycle in a general way.

Numbers and (in)finities.

Mathematics divides phenomena into two broad classes, discrete (temporal) and continuous (spatial), historically corresponding to the earlier division between T-arithmetic and S-geometry.

Discrete systems can be subdivided only so far, and they can be described in terms of whole numbers 0, 1, 2, 3, …. Continuous systems can be subdivided indefinitely, and their description requires the real numbers, numbers represented by decimal expansions such as 3.14159…, possibly going on forever. Understanding the true nature of such infinite decimals lies at the heart of analysis.

And yet, lacking the proper ∆ST theory, it is still not understood.

The distinction between continuous mathematics and discrete mathematics IS ONE BETWEEN SINGLE, SYNCHRONOUS, CONTINUOUS SPACE WITH LESS INFORMATION, and the perception in terms of ‘time cycles, or fractal points: space-time entities’, which will show to be ALWAYS discrete in its detail, either because it will HAVE BOUNDARIES IN SPACE, or because it will be A SERIES OF TIME CYCLES AND FREQUENCIES, perceived only when the time cycle is ‘completed’, and hence will show DISCONTINUITIES IN TIME.

Thus the dualities of ST on one side, and the ‘Galilean paradox’ of the mind’s limits of perception of information, lie at the heart of the essential philosophical question: is the Universe discrete or continuous in space and time? Both, but always discrete when seen in detail, due to spatial boundaries and the measure of time cycles at the points of repetition of its ‘frequency’.

So ultimately we face a mental issue of mathematical modeling: the ‘mind-art’ (as pure exact science does not exist; all is art of linguistic perception) of representing features of the natural world in a reduced mental, mathematical form.

The universe does not contain or consist of actual mathematical objects, but a language can model all aspects of the universe. So all resembles mathematical concepts.

For example, the number two does not exist as a physical object, but it does describe an important feature of such things as human twins and binary stars; and so we can extract by the ternary method, 3 sub-concepts of it:

2 means the first ∆-scale of growth of 1 being into 2, by:

‘S-imilarity and S-imultaneity in space (ab. Sim)’, ‘i-somorphism in time-information (ab. Iso)’ and ‘equality in ∆-scale’ (ab. Eq), as perceived by a linguistic observer, @, which will deem both beings ‘IDENTICAL’. Whereas identity means that an @-bserver will deem the being ∆st≈St (Sim, Iso and Eq). So identity is the maximal perfection of a number for a perceiver, even if ultimately:

‘No 2 beings are identical for the Universe, but they can be identical for the observer’… an intuitive truth whose pedantic proof is of course of no importance (we do not follow the axiomatic method of absolute minds here), but it is at the heart of WHY REALITY IS NOT COLLAPSED INTO THE NOTHINGNESS OF A BIG-BANG POINT.

Thus when those 3+0 elements of the ∆•ST coincide, a social number can be used, whose intrinsic properties define conceptually ‘S-imultaneity, Ti-somorphism’ and ∆-equality or equivalence (ab. Eq) in size, which becomes an @-identity for the mind. THEN A NUMBER IS BORN.

(In this ‘infinitorum’ of Universal thoughts, which brings always new depths as soon as we observe it with an ∆•st-trained mind, there are differences between S-imilarity and S-imultaneity to define in space an ‘identity’, ‘equality’ and equivalence, treated elsewhere.)

It IS THEN CLEAR that a number, being a sum of points, encodes more information in a synoptic way about the T-informative nature of the ‘social group’ than an array of points, which unlike a number tells us less about the ‘informative identity of the inner parts of the being’, but provides us more spatial knowledge about the relative position in space of the members of a number-group.

And this is OBVIOUS, when we return to the origin of geometry and consider an age in which both concepts were intermingled so ‘points were numbers’ and displayed geometrical properties:

Numbers as points, showing also the internal geometric nature, used in earlier mathematics to extract the ‘time-algebraic’, ‘∆nalytical-social’ and S-patial-geometrical properties from them.

We study them in depth in the article on Temporal, Social numbers.

0-1: ∆-1: 1/n finitesimal scale vs. 1-∞: ∆+1: whole scale.

So only one question of that section is worth mentioning here: how to ‘consider scales’, which tend to be decametric; good! One of the few things that work right in the human mind and do not have to be adapted to the Universal mind, from d•st to ∆ûst.

Shall we study them downwards, through ‘finitesimal decimal scales’ or upwards, through decametric, growing ones?

Answer: an essential law of Absolute relativity goes as follows:

‘The study of decametric, §+ scales (10§≈10•10 ∆ ≈ ∆+1) is symmetric to the study of the inverse, decimal ∆>∆-1 scale’.

Or in its most reduced ‘formula’: ( ∞ = (1) = 0): (∞-1) ≈ (1-0)

Whereas ∞ is the perception of the whole ‘upwards’, in the domain from 1, the minimal quanta, to the relative ∞ of the ∆+1 scale. While 1 is the relative infinite of a system observed downwards, such that ∆+1 (1) is composed of a number of ‘finitesimal parts’ whose minimal quanta is +0.

It is from that concept that we accept as the best definition of an infinitesimal that of Leibniz: for a whole N, its finitesimal is 1/N.

So in absolute relativity the ∆-1 world goes from 1 to 0, and the ∆+1 equivalent concept goes from 1 to ∞. And so now we can also extract from the ‘infinitorum thought receptacle’ 🙂 a key difference between both mathematical techniques:

A conceptual analysis upwards has a defined lower point-quanta, 1, and an undefined upper limit, ∞. While a downwards analysis has a defined upper whole limit, 1, and an undefined finitesimal minimum, +0.

Finally, notice that as all ∆-scales have a relative finitesimal, +0, and relative infinities, ∞ (see ∞|º to understand the limits and meaning of numbers and its scales), essential to all theory of calculus is the study of the domain in which the system works, and of the ‘holes’ or singularities and membranes which are not part of the open ball-system. So functions can be defined with certain singularity points and borders; hence functions need not be defined by single formulas.

This would be understood by Leibniz – who else 🙂

Unlike Newton, who made little effort to explain and justify fluxions, Leibniz, as an eminent and highly regarded philosopher, was influential in propagating the idea of finitesimals, which he described as actual numbers—that is, less than 1/n in absolute value for each positive integer n and yet not equal to zero.

For those who insisted on infinities, Berkeley would reveal those contradictions in the book ‘The Analyst’. There he wrote about fluxions: “They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?”

Definition of ∆t, ∆s, finitesimals: A quantum of time and space.

Berkeley’s criticism was not fully met until the 19th century, when it was realized that, in the expression dy/dx, dx and dy need not lead an independent existence. Rather, this expression could be defined as the limit of ordinary ratios Δy/Δx.
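
A short numeric sketch of that definition, using the assumed example y = sin(x) at x = 1 (whose exact derivative is cos(1)):

```python
import math

# dy/dx as the limit of the ordinary ratios dy/dx, for y = sin(x) at x = 1.
x = 1.0
for dx in (0.1, 0.01, 0.001, 0.0001):
    ratio = (math.sin(x + dx) - math.sin(x)) / dx
    print(dx, ratio, ratio - math.cos(x))  # the gap to cos(1) shrinks with dx
# In the text's terms, the ratio stops improving once dx reaches the
# minimal quanta of measure of the system described.
```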

And here is where we retake it, before the formal age of mathematics made a pretentiously ‘rigorous’ definition of infinitesimal limits and the logician Abraham Robinson showed the notion of infinitesimal to be logically consistent, but NOT real.

As we believe mathematics must be real to be ‘consistent’ (Gödel’s theorem), we return to the finitesimal concept, ±∆y, as a ‘real’ increase/decrease of a quantity, with a variation ±∆x of either the surface of space or the duration in time of the being.

Thus finitesimals depend, for each species, on the ‘quanta’ of space or ‘minimal cell’ and the quanta of time or minimal moment which the system can measure.

For man, for example, time actions are measured with its minimal time quanta of a second, below which it is difficult to perceive anything; a nanosecond, in that regard, in the human scale of existence is NOT worth measuring, as nothing happening in a nanosecond will be perceived as motion or change. For an atom, however, a nanosecond is a proper finitesimal to measure changes.

In space, man does not perceive sensations below certain limits, which vary for each sense: a millimetre, 100 hertz of sound, the frequency of infrared waves, and so on.

Universals

According to a traditional interpretation of the metaphysics of Plato’s middle dialogues, Plato maintained that exemplifying a property is a matter of imperfectly copying an entity he called a form, which itself is a perfect or pure instance of the property in question. Several things are red or beautiful, for example, in virtue of their resembling the ideal form of the Red or the Beautiful. Plato’s forms are abstract or transcendent, occupying a realm completely outside space and time. They cannot affect or be affected by any object or event in the physical universe.

This is correct, though the ERROR LIES in positioning Universals OUTSIDE space and time. They are IN FACT THE ULTIMATE properties of SE-spatial ‘kinetic energy+entropy’ and TO-Temporal information, which ‘emerge’ in each new scale.

Few philosophers now believe in such a “Platonic heaven,” at least as Plato originally conceived it; the “copying” theory of exemplification is generally rejected. Nevertheless, many modern and contemporary philosophers, including Gottlob Frege, the early Bertrand Russell, Alonzo Church, and George Bealer are properly called “Platonic” realists because they believed in universals that are abstract or transcendent and that do not depend upon the existence of their instances.

They are closer to the truth, but they should substitute ‘EMERGENT’ for the word ‘transcendent’, in the parlance of general systems.

For that matter, General Systems (5D ST) reduces the meaning of ‘transcendence’ to its first semantic meaning:

vb: L. transcendere, to climb across, transcend, fr. trans- + scandere, to climb.

vt : to rise above or go beyond the limits.

Indeed, Universals are found beyond the limits of its finitesimals, in the next n+1 scale.

DIMENSIONAL GROWTH AS: REPRODUCTION OF SPATIAL FORM

Next, along the simplest ∫∂eps of motion, S-T-S-T, appeared in history the calculus of SSteps, that is, the reproduction of form from line to area.

Area Finitesimals.

It must be noticed though that finitesimals were first found in space, as the means to quantify a simultaneous area as the sum of ∆-1 discontinuous, fractal parts. Let us remember this concept, a key philosophical discussion even among the Greeks: is the Universe continuous or discontinuous, made of Universal wholes or individual parts?

This concept was the earlier idea of Leucippus and Democritus regarding the composition of physical systems; and of Anaximander, regarding the composition of life systems, with its ‘homunculus’ concept (we were made of smaller beings).

Anaximenes’ assumption that aer is everlastingly in motion and his analogy between the divine air that sustains the universe and the human “air,” or soul, that animates people is a clear comparison between a macrocosm and a microcosm.

It also permitted him to maintain a unity behind diversity, as well as to reinforce the view of his contemporaries that there is an overarching principle regulating all life and behavior. So here there is a first bridge that merges universals and finitesimals.

And of earlier mystics, regarding the composition of a superior God, as the subconscious collective of all its believers’ minds, fused in a ‘bosonic’ way into the soul of the whole.

The 3 were right, as finitesimals are clone beings with properties that transcend into the Universal, the homunculus being the ‘future cell’.

Mathematics

At this stage there was only one mathematical approach to the concept, by Archimedes: the method of exhaustion to calculate areas and ratios, notably the pi ratio.

The method of exhaustion was first used by Eudoxus, as a generalization of the theory of proportions.

Eudoxus’ idea was to measure arbitrary objects by defining them as combinations of multiple polygons or polyhedral. In this way, he could compute volumes and areas of many objects with the help of a few shapes, such as triangles and triangular prisms, of known dimensions. For example, by using stacks of prisms (see figure), Eudoxus was able to prove that the volume of a pyramid is one-third of the area of its base B multiplied by its height h, or in modern notation Bh/3.

Loosely speaking, the volume of the pyramid is “exhausted” by stacks of prisms as the thickness of the prisms becomes progressively smaller. More precisely, what Eudoxus proved is that any volume less than Bh/3 may be exceeded by a stack of prisms inside the pyramid, and any volume greater than Bh/3 may be undercut by a stack of prisms containing the pyramid.
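
A minimal sketch of Eudoxus’ double bound in Python, for an assumed square pyramid of base side 3 and height 3 (so Bh/3 = 9); inner and outer stacks of prisms squeeze the volume from both sides:

```python
# Stacks of thin prisms inside and outside a square pyramid, both
# approaching B*h/3 as the slabs get thinner (Eudoxus' exhaustion).
def prism_bounds(b, h, n):
    dz = h / n
    inner = outer = 0.0
    for i in range(n):
        side_lo = b * (1 - (i * dz) / h)        # wider section, slab bottom
        side_hi = b * (1 - ((i + 1) * dz) / h)  # narrower section, slab top
        inner += side_hi ** 2 * dz              # prism inside the pyramid
        outer += side_lo ** 2 * dz              # prism containing the pyramid
    return inner, outer

b, h = 3.0, 3.0
exact = b * b * h / 3                           # = 9.0
for n in (4, 16, 64, 256):
    lo_sum, hi_sum = prism_bounds(b, h, n)
    print(n, lo_sum, hi_sum)                    # both bounds close on 9.0
```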

The greatest exponent of the method of exhaustion was Archimedes (c. 285–212/211 BC). Among his discoveries using exhaustion were the area of a parabolic segment, the volume of a paraboloid, the tangent to a spiral, and a proof that the volume of a sphere is two-thirds the volume of the circumscribing cylinder. His calculation of the area of the parabolic segment (see figure) involved the application of infinite series to geometry. In this case, the infinite geometric series:

1 + 1/4 + 1/16 + 1/64 + … = 4/3

is obtained by successively adding a triangle with unit area, then triangles that total 1/4 unit area, then triangles of 1/16, and so forth, until the area is exhausted. Archimedes avoided actual contact with infinity, however, by showing that the series obtained by stopping after a finite number of terms could be made to exceed any number less than 4/3. In modern terms, 4/3 is the limit of the partial sums.
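
The partial sums are easy to check numerically; a minimal sketch:

```python
# Partial sums of 1 + 1/4 + 1/16 + ...: each finite stop exceeds any number
# below 4/3, yet never reaches 4/3 itself.
total = 0.0
term = 1.0
for k in range(8):
    total += term
    print(k + 1, total, 4 / 3 - total)  # remaining gap equals term/3
    term /= 4
```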

His paper, ‘Measurement of the Circle’ is a fragment of a longer work in which π (pi), the ratio of the circumference to the diameter of a circle, is shown to lie between the limits of 3 10/71 and 3 1/7.

Archimedes’ approach to determining π consists of inscribing and circumscribing regular polygons with a large number of sides. It was followed by everyone until the development of infinite series expansions in India during the 15th century and in Europe during the 17th century. This work also contains accurate approximations (expressed as ratios of integers) to the square roots of 3 and several large numbers.
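
A minimal sketch of the polygon method, using only the side-doubling recurrence (no prior value of pi is assumed): inscribed half-perimeters bound pi from below, circumscribed ones from above:

```python
import math

n = 6
s = 1.0  # side of the inscribed hexagon in a unit circle
for _ in range(5):  # prints bounds for 6, 12, 24, 48 and 96 sides
    inscribed = n * s / 2                     # lower bound on pi
    t = s / math.sqrt(1 - (s / 2) ** 2)       # side of circumscribed polygon
    circumscribed = n * t / 2                 # upper bound on pi
    print(n, inscribed, circumscribed)
    s = math.sqrt(2 - math.sqrt(4 - s * s))   # side after doubling the sides
    n *= 2
# At 96 sides the bounds refine Archimedes' 3 10/71 < pi < 3 1/7.
```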

It is then interesting to consider Archimedes’ main role in the perception of problems today forgotten after the absurd dogmatic Germanic ‘foundations under the axiomatic method’ of analysis.

2 problems troubled him, and indeed they were very important problems: the comparison of different pis (is the pi of the square, with 2 dimensions, the same as the pi of the perimeter?) and its proper calculus by approximation.

Approximations in geometry.

The transformation of a circular region into an approximately rectangular region.

In the graph we see how ∆ST theory immediately eliminates all those problems of infinitesimals, as all infinities are limited; so are the 0s, which must be regarded as the +0 minimal quanta of the domain (the need for further infinities is an error of the mind, of dogmatic truth and the single space-time ‘continuum’). In that regard pi is not INFINITE, but its calculus becomes ‘chaotic’ beyond a limit of ±40 decimals, which is really all the human mind can conceive in its largest finitesimal analysis.

The ‘Greek Age’ then becomes, as in the Archimedean calculus of pi by exhaustion, the same concept, just with less detail.

Indeed, a simple geometric argument shows that both processes are similar with different degrees of approximation:

The idea is to slice the circle like a pie, into a large number of equal pieces, and to reassemble the pieces to form an approximate rectangle (see figure). Then the area of the “rectangle” is closely approximated by its height, which equals the circle’s radius, multiplied by the length of one set of curved sides—which together form one-half of the circle’s circumference. As the slices get very thin, the error in the approximation becomes very small.
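
A numeric check of the slicing argument (the n slices reassemble into the inscribed-polygon area n·½r²·sin(2π/n), which approaches πr²):

```python
import math

r = 1.0
for n in (6, 24, 96, 384):
    slices_area = n * 0.5 * r * r * math.sin(2 * math.pi / n)
    print(n, slices_area, math.pi * r * r - slices_area)
# The error shrinks with thinner slices but, per the text, never falls
# below the minimal quanta of measure.
```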

Approximations in algebra and analysis.

Now, departing from the general rule that ƒ(x) is a function of ‘time motions’, as all variables are by definition time motions, and the Y function its spatial view as ‘a whole’, we can take ƒ(x)=t=S=Y as the rule of interpretation; and as so often we have a function of the type ƒ(x)=ƒ(t)=0, we consider the polynomial a representation of a world cycle. And from that we can differentiate factors through ∆ scales, such as… ∆±1 = 0-1 probability sphere (∆<0) and Polynomial (Xª=∆ª).

It is then obvious that one of the key equations of the Universe, the equation that relates polynomials and derivatives, space and time views of complex symmetric bundles, must be reinterpreted in the light of those disomorphisms between the mathematical mirror and the 5D³ Universe.

In that sense, Taylor’s formula summarises the main space-time symmetries, and its development, polynomials on the left, derivatives on the right, fills in the content of algebra in the measure of space-time systems.

At a given point it can then be understood as a differential value, and we can then compare the polynomial vs. the ∫∂ep differential view. Lineal functions in the short-distance view, which become curved in larger, more accurate spatial views, make us think that the ƒ(t) time function is step by step building the ƒ(y) spatial worldcycle, which adds up all those step curvatures.
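
A minimal sketch of that duality with Taylor partial sums for the assumed example cos(x): near the point, the short ‘lineal steps’ (low degrees) already follow the curve; farther away many more steps are needed:

```python
import math

def taylor_cos(x, degree):
    """Taylor polynomial of cos at 0, up to the given even degree."""
    total = 0.0
    for k in range(degree // 2 + 1):
        total += (-1) ** k * x ** (2 * k) / math.factorial(2 * k)
    return total

for x in (0.5, 2.0):
    for degree in (2, 4, 8):
        approx = taylor_cos(x, degree)
        print(x, degree, approx, math.cos(x) - approx)
# At x = 0.5 the degree-2 polynomial is already close; at x = 2.0 the
# polynomial view needs degree 8 to follow the 'curved' function.
```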

The problem of equivalences confused as identities between lines and areas.

It is absurd to talk about continuity of a real number, pi, e, or √2, beyond the 10th decimal. This is easily proved, because those ratios are normally obtained by limits in which certain terms of the infinitesimal are discarded, by postulating the falsity that there are infinite smallish parts, so that x/∆ can be thrown out when ∆->∞. But since x/∆, the finitesimal, has a limit, the pretentious exactitude does not happen.

This in turn leads to questions about the meaning of quantities that become infinitely large or infinitely small: concepts riddled with logical pitfalls in a simplified world of a single space-time continuum, where on top humans LOVE to consider ‘identities’ of the mind absolute identities in the larger information of the detailed Universe, which are never so, as d@st ≈ ∆ûst (the mind’s world view is merely similar to the Universal view).

In our example, a circle of radius r has circumference 2πr and area πr², where π is the famous constant 3.14159…. Establishing these two properties is not entirely straightforward, although an adequate approach was developed by the geometers of ancient Greece, especially Eudoxus and Archimedes. It is harder than one might expect to show that the circumference of a circle is proportional to its radius and that its area is proportional to the square of its radius. The really difficult problem, though, is to show that the constant of proportionality for the circumference is precisely twice the constant of proportionality for the area – that is, to show that the constant now called π really is the same in both formulas.

This boils down to proving a theorem (first proved by Archimedes) that does not mention π explicitly at all: the area of a circle is the same as that of a rectangle, one of whose sides is equal to the circle’s radius and the other to half the circle’s circumference.

However in GST theory, those 2 pis are not the same, because they belong to two discontinuous, ‘different species’ of topology, the St area, and the S-MEMBRANE.

An easy, immediate proof: if we make them identical, then we can find a circle where 2πr ≈ πr². So 2r = r². Hence r = 2, and we reach the conclusion that the thin membrane of an open ball is identical in area to the internal ST volume of the being, which is ‘conceptually absurd’ (the area intuitively has more surface, as it is bidimensional; the line is infinitely thin).

What’s the problem here? We cannot in truth treat ‘lines as if they were squares’, unless we always deal with less dogmatic concepts of relative similarity. They are different realities. In the first equivalence, we compare a line radius with a circle perimeter, in an S>t structure.

In the second, as we compare πr², a cyclical area, with the square of the radius, we are also on good footing. But when we do the S>ST comparison, we are in a Dynamic transformation of ∆-scales, from ∆, the world of lines, to ∆+1, the world of squares (as a polynomial square is obviously a growth from a complete ∆-entity, the line, into an ∆²=∆+1 one, the area). It is then that we can do some ‘dynamic equivalence’ analysis, and the equivalence has meaning, stating that for a ‘perfect cycle’ of relative radius 2, the membrane’s absorption of bits and bites of energy and information can fully fill the internal area, making equivalent a ‘line’ and a ‘surface’ integral. And finally state that all ‘dynamic vortices of force’ ruled by Newtonian/Coulombian equations on the ∆-1 and ∆+1 scales are relative perfect systems of radius 2.

And here we find the ‘whys’ of the dualities of Maxwell’s laws, which can be written both ways, as line integrals or as surface integrals.

Or in simpler terms, when doing those equalities we are talking of properties that become dynamic and transcend the static mind of mathematics into the reality of physical systems.

Finally, as we defined real numbers as non-existent (see |∞ posts) but approximations to a ±0 finitesimal, in the measure of a square the uncertainty grows further: π², thus, carries the squared ‘error’ of pi.

All this of course is important to conceptualize reality; in praxis, as we know, we always work in an uncertain game with errors and deaths. So analysis does work, and all this ‘search for dogmatic proofs’ is just ‘absolute bull$hit’ for absolute ego-centered scholar huminds.

But on the other hand, the graph also shows that both pis, the one of the ‘surface’ and the one of the ‘perimeter’, ARE NOT equal, as there will be a limit on the number of ‘bidimensional triangles’ we can cut.

As a triangle is indeed the bidimensional line, that is: |-Spe (one dimension); ∆-Spe (two dimensions).

So it is not the line.

And so, as the approximation will find a finitesimal quanta or limit of detail to prove the theorem, this error, however tiny, remains an error. THIS MINIMAL QUANTA THUS EXISTS IN ALL RELATIVE ∆>∆+1 measures of scales as the minimal uncertainty of all mathematical calculus, and justifies in physics (∆-1 quantum theory) that there is always an uncertainty of a minimal quanta, which is precisely ħ/2, where ħ = h/2π is the minimal quanta of our light space-time.

Only in the absolutist imagination of dogmatic axiomatic mathematicians did it make sense to talk of the slices being infinitesimally thin, so that the error would disappear altogether, or at least become infinitesimal.

As it happens, quantum theory proved experimentally that this is wrong. And as we stress (Lobachevski, Gödel, Einstein), mathematics must be confronted with reality to realise what is ‘real’ in maths.

THE QUEST FOR FINITESIMALS.

Rates of change. The stop and go motion.

Now, finitesimal changes are related to the fundamental beat of the Universe: the stop (form, space, perception), go (motion, time) beat:

∆S->∆t->∆S->∆t. Thus if we consider a relative constant or function of existence, œ, as a finitesimal of its larger whole, Œ, on either side we obtain 2 simple functions:

œ = ∆s/∆t and œ = ∆t/∆s.

We shall call the first form a spatial finitesimal  or step in space, a quanta of constant speed that moves, reproduces the being in space.

And if we change this quanta again, we obtain a quanta of constant acceleration.

And we shall call the second function a time finitesimal: a change in the density of information or cyclical speed of the being; and, changed again, a second change in relation to that speed of information.

Newton and Leibniz.

NOW there has been much irrelevant argument about who was first, Newton or Leibniz, in the discovery of calculus. To me it has always been obvious, at all levels, the enormous superiority of Leibniz over Newton, ethically and intellectually. And it can be summarized in this: Newton is NOT really a modern 2nd age researcher of calculus but rather the culmination of Archimedes’ method of exhaustion of limits; he didn’t understand anything about the true meaning of calculus. And for that reason his notation is so convoluted (plus his nauseating treatment of Leibniz makes him a complete a$$hole). Leibniz on the other hand understood more than all who would come after him when he said ‘a point is a world in itself’, defined the ‘finitesimal’ as 1/n (which later abstract mathematicians forgot) and built an entire philosophy of the Universe (monads), right on the spot, a clear predecessor of all our work.

And as usual the a$$hole, making military instruments for the Navy, bullying and calumniating Leibniz carried the day. But we use Leibniz’s notation, the modern view…

So let us first close the ‘Greek era’ with the last of the Greek Alcibiades’, Mr. Newton.

In the second age of mathematics, the question of infinitesimals was resolved, though not in a form accepted today, by Leibniz, who used geometrical concepts on the Cartesian plane to understand them, as opposed to Newton, who used algebraic concepts in his study of…

Infinite series

Graphical illustration of an infinite geometric series. Before understanding calculus, mathematicians were concerned with relative infinitesimal series.

Since similar paradoxes occur in the manipulation of infinite series, such as: 1/2 + 1/4 + 1/8 +⋯

This particular series is relatively harmless, and its value is precisely 1, the whole, which is the conceptual meaning of infinity.

To see why this should be so, consider the partial sums formed by stopping after a finite number of terms. The more terms, the closer the partial sum is to 1. It can be made as close to 1 as desired by including enough terms. Yet once we arrive at the minimal quanta of the physical reality we describe (cell, atom, individual, etc.) there is NO need to go beyond, except in errors of the mind.

In the graph, 1/±10² is the limit considered the finitesimal of this particular ‘graph perception’, and also the error of our measure, since if we add another 1/±10² the series becomes a whole.
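A minimal Python sketch of that cut-off, assuming the quanta 1/10² taken from the graph above (the loop is illustrative, not a formal proof):

```python
# Sketch of the finitesimal cut-off on 1/2 + 1/4 + 1/8 + ...; the 'quanta'
# threshold 1/10**2 is the assumption taken from the graph above.
quanta = 1 / 10**2
partial, term = 0.0, 0.5
while term >= quanta:
    partial += term
    term /= 2
print(partial, 1 - partial)   # the leftover 'error' equals the last quanta added
```

The leftover ‘error’, 1 − partial, equals the last quanta added; one more such quanta closes the whole.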

Thus most paradoxes of mathematics arise from not understanding those simple concepts, as well as the meaning of ‘inverse negative numbers’ .

For example, an infinite series which is less well behaved is the series: 1 − 1 + 1 − 1 + 1 − 1 + ⋯

If the terms are grouped one way: (1 − 1) + (1 − 1) + (1 − 1) +⋯,  then the sum appears to be: 0 + 0 + 0 +⋯ = 0.

But if the terms are grouped differently, 1 + (−1 + 1) + (−1 + 1) + (−1 + 1) +⋯,   then the sum appears to be 1 + 0 + 0 + 0 +⋯ = 1.

It would be foolish to conclude that 0 = 1. Instead, the conclusion is that the series has an internal structure, which obviously is the group:

‘a’: 1−1=0. And so if we accept that internal ∆-1 unit for the series’ grouping, its ‘real value’ is:

a+a+…. = 0+0+0…=0.

So we can write it in terms of the generator as:

∑ Spe (+1) <≈> ∑Tiƒ (-1), which defines generically a feed-back ‘world cycle’ whose sum is zero.

In classic maths of a single space-time continuum, the difference between both series is clear from their partial sums. The partial sums of 1/2+1/4… get closer and closer to a single fixed value, namely 1. The partial sums of 1−1+1−1…, without its internal ∆-1 (a) structure, alternate between 0 and 1, so the series never settles down.
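A short sketch of both behaviours, where the pairing into ‘a’ = (1−1) units is the assumption being illustrated:

```python
# Sketch: partial sums of 1 - 1 + 1 - 1 + ... versus the same series grouped
# into its internal 'a' = (1 - 1) units, as described above.
terms = [(-1) ** k for k in range(10)]               # 1, -1, 1, -1, ...
sums, partial = [], 0
for t in terms:
    partial += t
    sums.append(partial)
print(sums)                                          # [1, 0, 1, 0, ...]: never settles
pairs = [terms[i] + terms[i + 1] for i in range(0, len(terms), 2)]
print(pairs, sum(pairs))                             # [0, 0, 0, 0, 0] 0
```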

A series that does settle down to some definite value, as more and more terms are added, is said to converge, and the value to which it converges is known as the limit of the partial sums; all other series are said to diverge. But in GST many diverging series become, when their internal structure is also considered, convergent and well behaved.

Actually, without even experimental evidence, there exist subtle problems with such ‘infinite’ construction. It might justifiably be argued that if the slices are infinitesimally thin, then each has zero area; hence, joining them together produces a rectangle with zero total area since 0 + 0 + 0 +⋯ = 0. Indeed, the very idea of an infinitesimal quantity is paradoxical because the only number that is smaller than every positive number is 0 itself.

The same problem shows up in many different guises. When calculating the length of the circumference of a circle, it is attractive to think of the circle as a regular polygon with infinitely many straight sides, each infinitesimally long. (Indeed, a circle is the limiting case for a regular polygon as the number of its sides increases.) But while this picture makes sense for some purposes—illustrating that the circumference is proportional to the radius—for others it makes no sense at all. For example, the “sides” of the infinitely many-sided polygon must have length 0, which implies that the circumference is 0 + 0 + 0 + ⋯ = 0, clearly nonsense.

SO BY REDUCTIO AD ABSURDUM, the limits of infinitesimals are shown to be always an ∆-1 quanta. THIS of course also resolves all of Cantor’s nonsense of different infinities and its paradoxes. It is just ‘math-fiction’ and worthless to study.

The interest of those works for 5D maths lies in the fact that THE EXHAUSTION METHOD DOES LIMIT the parts to finitesimals, as a realist method, which implies nature also limits its divisions. This concept would be lost in the 3rd formal age, also with the ‘lineal bias’ introduced by Dedekind’s concept of a real number NOT as a proportion/ratio between quantitative parameters of the ‘parts’ of a whole, or the ‘actions’ of a system and its SE<STI>TO parameters, which is what IT IS, but as an ‘abstract cut’ in a lineal sequential order of ‘abstract numbers’.

Notice that in the classic STi balanced age, both the limits method and the finitesimal method of Leibniz considered infinitesimals as finitesimals, that is, with a ‘cut-off limit’ and real nature.

Those limits are minimal ‘steps’ of any scale (in time-motion), or minimal parts (in space-forms).

The limit of a sequence

In that regard we amend the work of the German mathematician Karl Weierstrass and his formal definition of the limit of a sequence as follows:

Consider a sequence (aₙ) of real numbers, by which is meant an infinite list: a₀, a₁, a₂, ….

It is said that aₙ converges to (or approaches) the limit a as n tends to infinity, if the following mathematical statement holds true: For every ε > 0, there exists a whole number N such that |aₙ − a| < ε for all n > N. Intuitively, this statement says that, for any chosen degree of approximation (ε), there is some point in the sequence (N) such that, from that point onward (n > N), every number in the sequence (aₙ) approximates a within an error less than the chosen amount (|aₙ − a| < ε). Stated less formally, when n becomes large enough, aₙ can be made as close to a as desired.

For example, consider the sequence in which aₙ = 1/(n + 1), that is, the sequence: 1, 1/2, 1/3, 1/4, 1/5, …, going on forever.

Every number in the sequence is greater than zero, but, the farther along the sequence goes, the closer the numbers get to zero. For example, all terms from the 10th onward are less than or equal to 0.1, all terms from the 100th onward are less than or equal to 0.01, and so on. Terms smaller than 0.000000001, for instance, are found from the 1,000,000,000th term onward. In Weierstrass’s terminology, this sequence converges to its limit 0 as n tends to infinity. The difference |aₙ − 0| can be made smaller than any ε by choosing n sufficiently large. In fact, n > 1/ε suffices. So, in Weierstrass’s formal definition, N is taken to be the smallest integer > 1/ε.
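A minimal numerical check of that recipe for aₙ = 1/(n + 1), assuming the choice N = smallest integer > 1/ε:

```python
import math

# Sketch of Weierstrass' recipe for a_n = 1/(n + 1): given a tolerance eps,
# N = smallest integer > 1/eps guarantees |a_n - 0| < eps for all n > N.
def a(n):
    return 1 / (n + 1)

for eps in (0.1, 0.01, 1e-9):
    N = math.floor(1 / eps) + 1                      # smallest integer > 1/eps
    assert all(abs(a(n)) < eps for n in range(N + 1, N + 1001))
    print(eps, N, a(N + 1))
```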

This example brings out several key features of Weierstrass’s idea. First, it does not involve any mystical notion of infinitesimals; all quantities involved are ordinary real numbers. Second, it is precise; if a sequence possesses a limit, then there is exactly one real number that satisfies the Weierstrass definition. Finally, although the numbers in the sequence tend to the limit 0, they need not actually reach that value.

Now this n > 1/ε is exactly what Leibniz, without so much pedantic formalism, considered the finitesimal: what we call the quanta of an ∆-1 scale, and what physicists call, in their study of different scales, the minimal ‘error-quanta’: h/2π, k-entropy, or ‘Planck mass’ (a black hole of a Compton wavelength volume, the minimal quanta of the gravitational ∆+1 scales).

Continuity of functions

The same basic approach makes it possible to understand the formal notion of continuity of a function.

Intuitively, a function f(t) approaches a limit L as t approaches a value p if, whatever size error can be tolerated, f(t) differs from L by less than the tolerable error for all t sufficiently close to p.

Just as for limits of sequences, the formalization of these ideas is achieved by assigning symbols to “tolerable error” (ε) and to “sufficiently close” (δ). Then the definition becomes: A function f(t) approaches a limit L as t approaches a value p if for all ε > 0 there exists δ > 0 such that |f(t) − L| < ε whenever |t − p| < δ. (Note carefully that first the size of the tolerable error must be decided upon; only then can it be determined what it means to be “sufficiently close.”)

But what exactly is meant by phrases such as “error,” “prepared to tolerate,” and “sufficiently close”?

Again it is the relative Spe-quanta of the system studied.

Having defined the notion of limit in this context, it is straightforward to define continuity of a function. Continuous functions preserve limits; that is, a function f is continuous at a point p if the limit of f(t) as t approaches p is equal to f(p). And f is continuous if it is continuous at every p for which f(p) is defined. Intuitively, continuity means that small changes in t produce small changes in f(t)—there are no sudden jumps.
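A numeric sketch of the ε-δ definition for the sample function f(t) = t² at p = 2 (the choice δ = ε/5 is an assumption that works near this p, checked by sampling):

```python
# Numeric sketch of the epsilon-delta definition for f(t) = t*t at p = 2, L = 4.
# The choice delta = eps/5 is an assumption that works near this p
# (|t*t - 4| < 5*delta whenever |t - p| < delta < 1).
def f(t):
    return t * t

p, L = 2.0, 4.0
for eps in (0.5, 0.05, 0.005):
    delta = eps / 5
    samples = [p - delta + k * (2 * delta / 1000) for k in range(1001)]
    assert all(abs(f(t) - L) < eps for t in samples if abs(t - p) < delta)
    print(eps, delta)
```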

But as that small change will always be, in detail, an ε-quanta, in great detail THERE ARE QUANTUM JUMPS. In fact, as there is always an ε-quanta in any process in space or time, in form and motion (as we have shown when considering the nature of motion as reproduction of form in adjacent spaces), there will always be a quantum jump for all motions. And motion will be the reproduction of form in quantum jumps of ε nature:

∆-1: Finitesimals: 1/n

All those questions however BECOME IRRELEVANT when we accept Leibniz’s concept of a finitesimal, as ALL organic systems have a minimal cellular quanta and a maximal enclosure. In mathematics this can be represented in the 0-1 finitesimal circle, closed above, as it becomes the 1 element in ∆-1 of the ∆º whole; which is represented in turn by the equivalent 1-∞ graph, which is open above into the wholeness of a larger Universe (but will also have a limit, normally in the decametric logarithmic scale of the ∆º whole world embedded in the ∆+1 truly infinite Universe).

Abstract. Perhaps the most fascinating part of number theory is the finitesimal, as infinitesimals do not exist: space being quantic, there will always be a limit, a micro-cycle of time or quanta of population in space, to signify the finitesimal point, as Leibniz rightly understood and defined it with a simple, powerful form: 1/n.

And indeed in the Universe finitesimals tend to be structured as in a Russian doll, such that the biggest wholes, n->∞, have the smallest finitesimals, 1/n->0.

The absolute zero size is thus the finitesimal of the largest possible Universe. In praxis, we humans only observe a finitesimal from our mind-o perspective, of the Planck scale, and accordingly we see a Universe of inverse relative size, with humans in the ∆º middle view (at cellular level), as physicists wonder without realizing this is NOT a coincidence, but a natural law of the scalar, fractal, organic structure of the Universe:

In “The View From the Center of the Universe”, Primack and his co-author Nancy Ellen Abrams make this key point:

The size of a human being is at the center of all the possible sizes in the universe. This amazing assertion challenges not only the centuries-old philosophical assumption that humans are insignificantly small compared to the vastness of the universe but also the logical assumption that there is no such thing as a central size. Both assumptions are false, but we have to reconsider the key words of the assertion—center, possible, size, and universe—to reveal the prejudices built into them that constrict and distort our picture of reality. In the modern universe there is a largest and a smallest size, and therefore a middle size.

The largest size is the universe itself. The smallest size is the Planck length. And guess what is just about in between those two? You, me. Us. Homo sapiens.
The Cosmic Uroboros represents the varying scales of the universe, from the largest (the serpent’s head) to the smallest (its tail).
The serpent’s head represents the size of the visible universe. Going clockwise around the serpent from head to tail, you’ll find, among a number of things, the size of a supercluster of galaxies (10²⁵ cm), a single galaxy, the solar system (10¹⁶ cm), the sun (10¹⁰ cm), a mountain (10⁵ cm), humans, an E. coli bacterium (10⁻⁵ cm), DNA, an atom, a nucleus, and on down through tiny scales of particle physics to the Planck length at the tail of the serpent.

And as you can see, the size of a human being is near the center of all possible sizes.

Primack and Abrams continue:

“This turns out to be the only size that conscious beings like us could be. Smaller creatures would not have enough atoms to be sufficiently complex, while larger ones would suffer from slow communication – which would mean that they would effectively be communities rather than individuals, like groups of communicating people, or supercomputers made up of many smaller processors.”

From there to an ‘anthropocentric, anthropic’ principle of reality there is, of course, just a step. But the reality is different: our mind is in the middle of our perceived world-universe, which it constructs with maximal information (seeing it all as a supœrganism in human surroundings and nearby scales). For an ant philosopher, a smaller scalar Universe of chemical states of matter, with her in the middle, Gaia as the whole Universe and an atom below, could also be constructed.

For a black hole philosopher, a larger one, crossing over more finitesimal and larger finite scales of galaxies-atoms and atoms-galaxies, could also be observed.

The nesting though is real: for example, whales, which are bigger wholes, eat relatively smaller 1/n finitesimal food (krill and plankton) than humans, which are smaller wholes feeding on relatively larger 1/n energy quanta. And so on.

So the theory of finitesimals is organic, connected to the wholes, though it starts as always with fractal, experimental mathematical ratios and constants. Let us then study the concept of finitesimals across different scales.

1/n: the (in)finitesimal (in)finite

With the convention that ƒ  (x) is normally a function of time frequencies, ƒ (t), of motions of time, whose synthonies of synchronicity in space are expressed by an algebraic equation, we bring the following understanding:

The finitesimal quanta in any scale is the departure point to build any function; as such it must have a minimal size, and ƒ′(t) is normally a good measure.

The study of the infinitesimal as perceived from the finite point of view is the view of fractals: when, in detail, we observe the closed worldcycles that separate and make each infinitesimal a whole.

A derivative is the finitesimal of the function observed; and so when we go even further and study it as enlarged into our scalar view in maximal information, we are in the fractal view of reality.

So as we expand our view the fractal view becomes more real, till finally the enclosures observed ∆-1 become fractal and we recognise its self-similarities: ∆-1 ≤ ∆º.

For each derivative thus a function shows its 1/n finitesimal (not necessarily the function 1/x itself, which is, strictly, the derivative of the logarithm).

It follows that functions which grow ‘ginormously’ have a ‘quanta of time’ that reproduces the whole, and so the minimal derivative finitesimal of such a function is the function itself, eˣ.
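Both statements can be checked symbolically; a minimal sympy sketch:

```python
import sympy as sp

# Sketch: 1/x is, strictly, the derivative of the logarithm, while exp(x)
# reproduces itself under derivation: the case where the quanta equals the whole.
x = sp.symbols('x', positive=True)
print(sp.diff(sp.log(x), x))   # 1/x
print(sp.diff(sp.exp(x), x))   # exp(x)
```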

In the next graph we see inverse equations of exponentials and logarithms.

Exponentials express decay better than exponential growth, when the exponent turns ‘negative’.

Mathematics is a reflection of nature, a small mirror of its ∆º±i structure; and so exponential growth requires that Nature provides unlimited energy for growth, which happens only in the 0-1 generational dimension of the being, or in its inverse decay, in its 4D entropy age of death.

On the other hand, the limit of logarithmic growth maps real growth better in logistic curves, being a good function to express ∆§cales.

So numbers reflect those processes in their inverse exponential/logarithm mathematical graphs and numerical series.

ST: The three coordinate systems, self-centred into an ∆º pov, reflect each of the three ‘topologies of space-time’ (cylindrical: lineal; polar: cyclical; and Cartesian: hyperbolic); while the infinitesimal 0-1 scale and the infinite 1-∞ scale, divided by the ‘1’ ∆º relative element, represent perfectly the ∆-scalar nature of super organisms.

∆º±1: Further on, as we can ‘reduce’ each relative infinity to those 3 scales, and represent all timespace phenomena with the different families of numbers that close algebra (entropic, positive numbers; informative, negative numbers; present space-time, complex bidimensional numbers; s/t ir-ratio-nal numbers, etc.), mathematics becomes essentially the more realist language to represent the scalar, organic, ternary Universe.

The 0-1 scale is equivalent to the 1-∞ scale for the lower ∆-1 Universe, where 1=∆º, the whole and 1-∞ is the ∆+1 eternal world.

And this is the symmetry to grasp the consequences of the o-1-∞ fundamental graph of the fifth dimension. Let us see how with a simple example:

Now the mirror symmetries between the 0-1 universe and the 1-∞ are interesting as they set two different ‘limits’, an upper uncertain bound for the 1-∞ universe, in which the 1-world, ∆º exists, and a lower uncertain bound for the 0-1 Universe, where the 1 does not see the limit of its lower bound. Are those unbounded limits truly infinite?

This is the next question where we can consider the homology of both the microscopic and macroscopic worlds.

Of course the axiomatic method ‘believes’ in infinity (we deal with the absurdities of Cantorian transinfinities in articles on numbers). But as we consider maths, after Lobachevski, Gödel and Einstein, an experimental science, we are more interested in the homologies of ∆±1. For one thing, while 0 can be approached by infinite infinitesimal ‘decimals’, so it seems it can never be reached, we know since the ‘ultraviolet catastrophe’ that the infinitesimal is a ‘quanta’, a ‘minimum’, a ‘limit’. And so we return to Leibniz’s rightful concept of a 1/n minimal part of ‘n’, the whole ‘1’.

This implies by symmetry that on the upper bound, the world-universe in which the 1 is inscribed will have also a limit, a discontinuity with ∆+2, which sets up all infinities in the upper bound also as finite quanta, ‘wholes of wholes’.

So the ‘rest’ of infinities must be regarded within the rest of ‘theory of information languages’ and its inflationary nature, inflationary information. What is then the ‘practical limit’ for most infinities and infinitesimals? In GST, the standard limit is the perfect game of 3 x 3 + 0(±1) elements, where the o-mind triples, as it is at once an ∆-1 ‘god of the infinitesimals it rules subconsciously, as your brain rules your cells’, the ∆º consciousness of the whole, and an ∆+1 infinitesimal of the larger world.

THE 0-1 time-mirrored quantum world of probabilities of existence, of indistinguishable infinitesimals seen through the surface limit of its statistical description in the thermodynamic scale of atomic beings, ends in the 1-unit of our human cellular space, where thermodynamic considerations are reduced to a temperature gradient towards the homeostatic, mass-based forces of our human level of existence, ∆º.

So we consider as usual the kaleidoscopic, multiple function of analysis, and the multiple meanings of its inverse ∆±1 operations, derivatives and integrals; since as usual the potency of ∆st lies in the search for whys, not in the discovery of new equations, which humans always exhaust by the Monkey Method of trial and error, sweat and transpiration more than the inspiration of pure logic thought…

TIME DIMENSIONS BECOME SPACE DIMENSIONS BECOME TIME…

Physical equations in differential form, a general overview of its main species. History

Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Newton listed three kinds of differential equations: those involving two derivatives, one of space and one of time (or fluxions), and only one undifferentiated quantity (a space or time parameter); those involving two derivatives and two quantities of space and time; and those involving more than two derivatives.

His analysis thus was right on the spot, as he referred changes to change in space or time, thus ∫∂ with ST-eps; a fact later forgotten and today thoroughly missed with the ‘view’ of time as a single dimension of space (1D lineal motion confused with 4D entropy in philosophy of science).

It is still a good classification of partial differential equations as ‘time-like’ (∂x, ∂²x, ∂³x), or space like (∂²y, ∂y, ∂³y) or space-time like (∂x∂y, ∂y∂x) as the main variations that represent, T, TT, TTT; S, SS, SSS, ST, TS steps, which are the main 5D, 4D and 1,2,3D changes of the Universe.
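The ‘space-time like’ steps ∂x∂y and ∂y∂x indeed coincide for smooth functions (the Clairaut/Schwarz theorem); a minimal symbolic check on an arbitrary sample function:

```python
import sympy as sp

# Sketch: mixed 'space-time like' steps d/dx d/dy and d/dy d/dx coincide for
# smooth functions (Clairaut/Schwarz theorem), checked here on a sample f.
x, y = sp.symbols('x y')
f = sp.sin(x * y) + x**2 * y**3
print(sp.simplify(sp.diff(f, x, y) - sp.diff(f, y, x)))   # 0
```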

And it speaks of the enormous range of real phenomena ∫∂ functions can describe as the essential operandi of mathematical physics and any ∆st phenomena.

What allowed all those ∆st phenomena to enter the world of quantitative mathematics was the invention of the pendulum clock to measure time in lineal fashion and the telescope to measure space. Both gave birth to the 2nd age of science, the mathematical/scientific method, added to the experimental Aristotelian method, which now the isomorphic GST age of stience completes.

In 1609 appeared the “New astronomy” of Kepler, containing his first and second laws for the motion of the planets around the sun.

In 1609 too Galileo directed his recently constructed telescope, though still small and imperfect, toward the night sky; the first glance in a telescope was enough to destroy the ideal celestial spheres of Aristotle and the dogma of the perfect form of celestial bodies. The surface of the moon was seen to be covered with mountains and pitted with craters. Venus displayed phases like the Moon, Jupiter was surrounded by four satellites and provided a miniature visual model of the solar system. The Milky Way fell apart into separate stars, and for the first time men felt the staggeringly immense distance of the stars. No other scientific discovery has ever made such an impression on the civilised world.

It also killed a method of thought equally valid, represented by the Greeks and Leonardo: the idealised understanding of the canonical perfect GST game of existence, of which we are all impure platonic forms, bound to dissolve, unlike the perfect game of the ∞ Universe, which is immortal.

Man never went back because alas! what really mattered was ballistics, mechanics and power. Idealism died away:

The further development of navigation, and consequently of astronomy, and also the new development of technology and mechanics necessitated the study of many new mathematical problems. The novelty of these problems consisted chiefly in the fact that they required mathematical study of the laws of motion in a broad sense of the word. And now we had machines to measure it better than the artistic Sp-eye-T=words of the human space-time mind.

Conclusion. 

The Universe is discontinuous. To differentiate a function we do NOT need absolute continuity but the existence of an infinitesimal 1/n, and no jump between ‘neighbourhoods’, which should be no further than 1/n distance either in the X or Y coordinates. ‘Adjacency’ of the function then is defined by discrete 1/n intervals, which suffice in Nature=reality, REGARDLESS of mathematical methods to define the…

RASHOMON EFFECT: ∫∂ INVERSIONS

Time evolution equations.

Time evolution is the change of state≈age brought about by the passage of time, applicable to systems with internal state≈age distribution (also called ageful systems).

In this formulation, time is not required to be a continuous parameter, but may be discrete or even finite. And so we can use frequencies and densities, fluxes and all the elements required for a real description of the ∆ST universe.

In classical physics, time evolution of a collection of rigid bodies is governed by the principles of classical mechanics. In their most rudimentary form, these principles express the relationship between forces acting on the bodies and their acceleration given by Newton’s laws of motion. These principles can also be equivalently expressed more abstractly by Hamiltonian mechanics or Lagrangian mechanics; which themselves use the ∫∂ jargon.

The concept of time evolution may be applicable to other stateful systems as well. For instance, the operation of a Turing machine can be regarded as the time evolution of the machine’s control state together with the state of the tape (or possibly multiple tapes) including the position of the machine’s read-write head (or heads). In this case, time is discrete.

Stateful systems often have dual descriptions in terms of states or in terms of observable values. In such systems, time evolution can also refer to the change in observable values. This is particularly relevant in quantum mechanics where the Schrödinger picture and Heisenberg picture are (mostly) equivalent descriptions of time evolution.

  OPERATIONS: ∫∂

Its symmetries on the cartesian plane: merging all the elements of ∆@s=t maths.

Descartes’ idea was to represent solutions to equations with a larger dimension: the variable letter that represented all the ‘§ets’ of dual X, Y possible solutions; and to ‘imagine’ them in a graph to plot them, forming a visual ‘in-form-ative’ geometric figure, the new ‘scalar dimension’ that gathered all the X(S)<≈>Y(t) pairs of possible ‘variations’ on the space-time construct.

Up to the time of Descartes, when an algebraic equation in two unknowns F(x, y) = 0 was given, it was said that the problem was indeterminate, since from the equation it was impossible to determine these unknowns; any value could be assigned to one of them, for example to x, and substituted in the equation; the result was an equation with only one unknown, y, for which, in general, the equation could be solved.

Then this arbitrarily chosen x together with the so-obtained y would satisfy the given equation. Consequently, such an “indeterminate” equation was not considered interesting.
Descartes looked at the matter differently. He proposed that in an equation with two unknowns x be regarded as the abscissa of a point and the corresponding y as its ordinate. Then if we vary the unknown x, to every value of x the corresponding y is computed from the equation, so that we obtain, in general, a set of points which form a curve.

The deepest insight on what Descartes did is then evident:

HE GAVE MOTION=CHANGE TO GEOMETRY, ADDING ITS TIME-DIMENSION; AND SO its method could be used to study the actions/motions of a ‘fractal point’ whose inner geometry of social numbers was NOW ignored, in the ∆+1 scale of its world. And so the graph would be a perfect graph to study all the ACTIONS=MOTIONS external to a given being, becoming for that reason the foundational structure of mathematical physics.

Thus in analysis we will find that the curves DO represent key features of the ‘arrows of change’ of the Universe, specially the ‘standing points’ of change of parameters of Space=Information, ST=energy and Time=entropy (or any other kaleidoscopic combination of ST); in essence they represent the world cycle of the action or motion we study, with its 3 phases of starting motion, steady state, and 3rd informative age coming to a halt.

Historic view.

Particularly important here is the theorem of Newton and Leibniz to the effect that the problem of quadratures is the inverse, in a well-known sense, of the problem of tangents.

For solving the problem of tangents, and problems that can be reduced to it, there was worked out a suitable algorithm, a completely general method leading directly to the solution, namely the method of derivatives or of differentiation.

It turned out that if the law for the formation of a given curve is not too complicated, then it is always possible to construct a tangent to it at an arbitrary point; it is only necessary to calculate, with the help of the rules of differential calculus, the so-called derivative, which in most cases requires a very short time. Up till then it had been possible to draw tangents only to the circle and to one or two other curves, and no one had suspected the existence of a general solution of the problem.

If we know the distance traversed by a moving point up to any desired instant of time, then by the same method we can at once find the velocity of the point at a given moment, and also its acceleration. Conversely, from the acceleration it is possible to find the velocity and the distance, by making use of the inverse of differentiation, namely integration. As a result, it was not very difficult, for example, to prove from the Newtonian laws of motion and the law of universal gravitation that the planets must move around the sun in ellipses according to the laws of Kepler.

Of the greatest importance in practical life is the problem of the greatest and least values of a magnitude, the so-called problem of maxima and minima.

A note of importance, specially in such calculus of variations, will then be the nature of that minimal fractal step, which is the point of a tangent. As a point always has parts (it is a fractal point), the finitesimal is the fractal point: not a single point but the point and a very ‘small’ surrounding (the previous or next points). So a maximum or minimum will be a dual point, so to speak, with a zero tangent (flat line); or, in terms of motion, a still moment at the summit of the function, which justifies its 0 value (or else, if it were a single point with an upward course before it and a downward course after, or vice versa in a minimum, the value of the derivative at that point would be undetermined).

At various points of a curved line, if it is not a straight line or a circle, the curvature is in general different. How can we calculate the radius of a circle with the same curvature as the given line at the given point, the so-called radius of curvature of the curve at the point? It turns out that this is equally simple; it is only necessary to apply the operation of differentiation twice. The radius of curvature plays a great role in many questions of mechanics.

Now we observe a curious duality between mathematical mind solutions vs. reality checks: while classic science differentiates ‘twice’ to know if the point will ‘fall’ or ‘rise’ after the standing point (∆nalytical solution), we obtain the same knowledge by ‘seeing’ in ‘reality’ how the ‘2 sequential points’ that surround the flat step behave in spacetime. It is this ‘time interval’ of 4 sequential ‘steps’ that the derivative method, which can be considered a reduction of the curve to the essence of its time sequence, solves. And we shall see often this inverse GST reduction, from reality and its complex actions to its sequential origin.

Indeed, in our study of sequential actions of world cycles we noticed that the steps of actions are always the same:

1D: ï-perception -> 2D: A: motion towards energy -> E: feeding -> 3D:wide storage of food, or 3Dx5D: O: reproduction and U: social evolution.

1D: As we move from the first ‘action’: to open your ‘eyes’, perceive and be perceived as a function in existence (with a quantitative parameter, which is a scalar ‘point’)…

2D: We then move into a motion (with a more complex quantitative parameter, a bi-vector or 3-vector parameter).

Locomotion physics is thus a 2-step, ST action which we can measure as momentum with 2 parameters: the ‘tiƒ parameter’ (frequency, mass, temperature) for the 1D point, and the spatial location, which will break into 3 parameters for a vector (x, y, z + t).

It is though still a simple S=T, though the time parameters are reduced to an external measure of space-motion.

We will though depart from this simplest I->A-nalysis of locomotion to include, not necessarily in quantitative terms, the description of the other ‘motions/actions’ of reality: energy feeding, O & U.

The approximation of square space to cyclical points. Ratios and ir(ratio)nal numbers, its finitesimal limits.

Before the invention of the new methods of calculation, it had been possible to find the area only of polygons, of the circle, of a sector or a segment of the circle, and of two or three other figures. In addition, Archimedes had already invented a way to calculate the area of curves by exhaustion, leaving a sound error according to the minimal step he took; which raises the question: does a circle have a finitesimal minimum step? Are then pi and all other S>t constant transformations and ‘ir(ratio)nal’ numbers/ratios limited by a finitesimal error?

THE ANSWER IS YES! Normally a decametric limit defines the ‘valid’ value of an ir(ratio)nal number, which is not a number in the strict sense (a social number) but a ratio of an S/T action/function. The examples of the two fundamental ir(ratio)nals will suffice:

– pi is really the ratio of 3 diameters that form a closed curve, whose value depends on the lineal ‘step sizes’.

So pi has a minimal value of 3, which is the hexagon with its 6 steps of 1/2 diameter each (triangulation in 6 immediately gives the result: as each side equals the radius, the 6 sides give 1/2 x 6 = 3); which happens to be the value of pi in extreme gravitational fields in relativity. This brings another insight: black holes decompose the circle into ultimate lineal flows of pure ‘dark energy’ shot through the axis, by converting the curvature of a light circle on the event horizon into a 6-step π=3 hexagon. But this is well beyond the scope of this intro.
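The hexagonal lower bound is easy to verify: the ‘pi’ of an inscribed regular n-gon (perimeter over diameter) is n·sin(π/n), exactly 3 for n = 6 and climbing toward 3.14159… as the lineal steps shrink. A minimal sketch:

```python
import math

# Sketch: the 'pi' of a regular n-gon inscribed in a circle (perimeter / diameter)
# is n*sin(pi/n); the hexagon (n = 6) gives exactly 3, the minimal value quoted above.
for n in (6, 12, 96, 10**4):
    print(n, n * math.sin(math.pi / n))
```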

So what is the ‘decimal limit’ of pi, before it breaks into meaningless (non-effective) decimal scales, with little influence on the whole?

While this is hypothetical, I would say, for different reasons explained in the article on number theory, that as is quite often the case it responds to the general ∆ ≈ S ≈ T ternary symmetries, so common in the perfect Universe.

So pi responds to the symmetry between its spatial minimal, 6 x 1/2 = 3 hexagonal steps, which means it breaks at the 6th ∆-scaling decimal: 3.14159…. So 3.1416, which incidentally is basically what everybody uses, is the ‘real value’ of pi; and why it is that value is studied elsewhere (deducing from it one of the most beautiful simple results of GST-mathematics: the value of dark energy in any system of the Universe, as the part not perceived through the apertures of a pi cycle, (π−3)/π ≈ 4.5% of openings, hence ≈ 96% of ‘darkness’ which the singularity of a pi system cannot see, as its apertures are only π−3 ≈ 0.14).

Now, the other constant, e, which is the ratio of decay ACTIONS or death processes (ST<<S), is a longer, two-‘scales’-down process of self-destruction of a system, unlike the single-scaling process of pi, S>T. So it breaks at 10 decimals:

2.718281828…459045

Indeed. Now, why 5 and not ten if the scales are 10¹º? Because the 10 scales are in terms of space-time actions, the ‘whole’ dual game of the two directions of time, up and down, which happens only in reproductive actions. And this connects with the S>T<S rhythms of motion, go/stop/go, back and forth between the two arrows, which happen both in st-single planes and ∆± motions.

Anyway, let us return to the rather simple thoughts of Newton, not to get too much carried away into the 4th line…

Mathematicians were greatly pleased when it turned out that the theorem of Newton and Leibniz, to the effect that the inversion of the problem of tangents would solve the problem of quadrature, at once provided a method of calculating the areas bounded by curves of widely different kinds. It became clear that a general method exists, which is suitable for an infinite number of the most different figures. The same remark is true for the calculation of volumes, surfaces, the lengths of curves, the mass of inhomogeneous bodies, and so forth.

And this, as most pure spatial questions, is straightforward: you put finitesimal line-steps or square areas (which would also have the absolute limit of triangular Planck areas, which according to the bidimensional holographic principle are the minimal area of information of a black hole; as indeed the black hole converts the spherical event horizon into ‘static’ hexagonal π=3 shrunk curvatures, incidentally the strongest, most stable ‘Buckminster domes’ and graphenes).

The new method accomplished even more in ‘time’ mechanics, because unlike the easy-to-figure-out approximations of areas, the staple food that started up mathematics in agricultural measure, time was NOT, and still is NOT, understood; so alas, the ‘magic’ method of LN (ab. for Leibnewton, Leibniz first:) solved questions of time-change without knowing much about time.

It seemed that there was no problem of loco-motion or ratio of change that the new calculations would not clarify and solve.

Not long before, Pascal had explained the increase in the size of the Torricelli vacuum with increasing altitude as a consequence of the decrease in atmospheric pressure. But exactly what is the law governing this decrease? The question is answered immediately by the investigation of a simple differential equation (the deep philosophical insights on S≈T transformations ignored; ‘who cares’, would say Feynman):
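A minimal sketch of that differential equation, assuming the simplest isothermal model in which pressure decreases in proportion to itself (H, the scale height, is an assumed constant for the example):

```python
import sympy as sp

# Sketch of the 'simple differential equation' behind the barometric law:
# dp/dh = -p/H, with H an assumed constant scale height.
h = sp.symbols('h')
H, p0 = sp.symbols('H p0', positive=True)
p = sp.Function('p')
sol = sp.dsolve(sp.Eq(p(h).diff(h), -p(h) / H), p(h), ics={p(0): p0})
print(sol)   # Eq(p(h), p0*exp(-h/H)): pressure decays exponentially with altitude
```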

It is well known to sailors that they should take two or three turns of the mooring cable around the capstan if one man is to be able to keep a large vessel at its mooring. Why is this? Of course, you need two and better 3 elements for a ‘system’ to become a stable whole; so, as sailors said, there are always ‘3 saint Marys… 3 huge waves, and 3 knots are best’… but alas, a differential equation similar to that of Torricelli solves it magically.
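The standard result here is the capstan (Euler-Eytelwein) relation, T_load = T_hold·e^(μθ); a minimal sketch with an assumed friction coefficient μ = 0.3 shows why two or three turns suffice:

```python
import math

# Sketch of the capstan (Euler-Eytelwein) relation T_load = T_hold * exp(mu * theta):
# each full turn multiplies the holding power exponentially. mu = 0.3 is an assumed
# friction coefficient for the example.
mu = 0.3
for turns in (1, 2, 3):
    theta = 2 * math.pi * turns
    print(turns, math.exp(mu * theta))   # ~6.6x, ~43x, ~286x advantage
```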

Thus, after the creation of analysis, there followed a period of tempestuous development of its applications to the most varied branches of technology and natural science. Since it is founded on abstraction from the special features of particular problems, mathematical analysis reflects the actual deep-lying properties of the material world; and this is the reason why it provides the means for investigation of such a wide range of practical questions. The mechanical motion of solid bodies, the motion of liquids and gases and of their particular particles, their laws of flow in the mass, the conduction of heat and electricity, the course of chemical reactions: all these phenomena are studied in the corresponding sciences by means of mathematical analysis.

At the same time as its applications were being extended, the subject of analysis itself was being immeasurably enriched by the creation and development of various new branches, such as the theory of series, applications of geometry to analysis, and the theory of differential equations.

So among mathematicians of the 18th century, there was a widespread opinion that any problem of the natural sciences, provided only that one could find a correct mathematical description of it, could be solved by means of analytic geometry and the differential and integral calculus. And so the flurry of activity in the next centuries would be to extend its practical uses after solving…

ANALYSIS’ SPACE-TIME SYMMETRIES: MEANING OF ∫∂.

The Rashomon effect of Analysis: its multiple forms and functions.

In the graph we can see the multiple perspectives and functions of analysis, as it can be used to study, from left to right: the role of the 3 elements of a T.œ; its membrane and time standing points, the maximal and minimal changes between ages, studied with derivatives, which also allow us to peer down 2 ∆±1 planes (double derivatives), as they measure the tangential value of a quanta of an ∆º-curve and its ∆+1 whole (right); its internal volume of spatial energy, studied with integrals; and its central @-singularity or point of view, the center O of the reference frame. Further on, as derivatives and integrals are inverse S=T symmetries, they combine canonically in algebraic S≈t equations (left below), cancelling each other and giving us a ‘constant social number’ as an exact quantitative result to an ST 5D² PROBLEM, becoming the queen of mathematical stiences.

So a Rashomon effect should consider different perspectives on those curves and forms found in analytic geometry, expressing algebraic equations:

-Temporal view: the curves are then meaningless in space. What matters is their ‘social dimension’ that resolves symmetries between time dimensions expressed by the two variables, often a parameter of space that changes with a dynamic function/action/motion in time.

-Spatial view: It is still though possible to create meaningful closed forms, ∆+1 wholes of geometry, made of ∆-1 points; and then the geometry allows us to resolve geometric spatial problems algebraically, with ‘a dual point of view’ that increases the easiness of solutions, as Descartes proved easily and Galois completed, showing the algebraic laws of solution of rule-and-compass geometrical problems.

∆: Scalar view, which will have to wait till Leibniz…

S=T view: when one of the parameters/dimensions is fixed, belonging to space and the other to a time motion.

PROPERTIES.

As we did with the other operandi, we need to consider the properties of calculus and its two operandi. This poses a problem, as there is not a ‘bottom operation’, such as ±, x÷, directly related to ∫∂, as there is with powers, the third dimension of algebra. And as calculus is a refined analysis of power laws, the direct connection is not exact.

Hence a certain discontinuity is established, which implies that ∫∂ equations have been solved by the obvious method of applying the function h′(x) = lim ∆x→0 [h(x+∆x) − h(x)] / ∆x. We are not though going to repeat here that procedure to get the results, but merely to analyse from the ∆st perspective, as we did with power laws and x÷, the properties of derivatives, to see what they tell us in the higher T.œ language; and then consider some specific functions and their integrals and derivatives to learn more of it.
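A numeric sketch of that limit for a sample function, h(x) = x³ at x = 2 (true derivative 3x² = 12), showing the quotient settling on the limit as the step shrinks:

```python
# Numeric sketch of h'(x) = lim [h(x + dx) - h(x)] / dx for h(x) = x**3 at x = 2
# (true derivative 3*x**2 = 12): the quotient settles on the limit as dx shrinks.
def h(x):
    return x ** 3

for dx in (0.1, 0.01, 0.001, 1e-6):
    print(dx, (h(2 + dx) - h(2)) / dx)
```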

Properties.

Those key properties are expressed in its rules of calculus, starting from the ‘derivative’ of a polynomial:

(xª)′ = a·xª⁻¹

We are not interested in the steps of ≈, that is, the axiomatic proof that we can move from xª through a cycle=algorithm of growth into a·xª⁻¹, but in what that cycle of growth tells us about the vital properties of the systems which undergo it.

So a derivative does NOT fully, lineally diminish a polynomial dimension, despite being a reduction of dimensionality (the search for the finitesimal 1/n quanta). Why? Obviously because in the rough view from a quanta, xª grows lineally (polynomially) into its whole xª⁺¹; but as we repeat ad nauseam, the lineal steps curve into geodesic closed wholes in the ∆+1 scale (Non-E geometry), from the lineal spatial mind to the wholeness cycle of the closed being; and so, as the ‘curve of a parabola’ diminishes the distance of a cannonball, growth is NEVER lineal but falls down as we approach the ‘(in)finite limit’.
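The descent and its inverse ascent can be checked symbolically; a minimal sympy sketch of the power-law rule (sympy omits the arbitrary constant of the integral):

```python
import sympy as sp

# Sketch: derivation 'descends' one dimension of the power law, (x**a)' = a*x**(a-1),
# while integration ascends it again (sympy drops the arbitrary constant C).
x, a = sp.symbols('x a')
print(sp.simplify(sp.diff(x**a, x)))   # a*x**(a - 1)
print(sp.integrate(x**2, x))           # x**3/3
```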

IN THE GRAPH, the wholeness is curved upwards, the parts spread scattering entropically. The whole is a mind circle, @. So it curves/diminishes the quantity of energy available for the whole, as it really must be an addition of all the planes that share that vital energy to build ever slower, curved, larger wholes. Or in terms of the integral function: ∫ xª dx = xª⁺¹/(a+1) + C.

And here we find the second surprise: there are ∞ integrals, with the addition of a constant, as a constant by definition does not change (so, as any kid knows, it drops out in the derivative). But ∆st gives new insight into things ‘children of thought’ (: all huminds 🙂 think they know.

Let us express this then in terms of past (∆-1: derivative) < Present (Function) > future (∆+1: Integral):

The past is fixed, the infinitesimal enclosed, only one type of species, ‘happening already’, as the parts must exist before the wholes to sustain them. But from the pov of the present function, the future integral into wholes is open, with ∞ variations on the same theme; unless we have already enclosed that whole, limiting its variations, which happens with the definite integral.

So if the function f(x) is given on the interval [a, b], and if F(x) is a primitive for f(x) and x is a point in the interval [a, b], then by the formula of Newton and Leibniz we may write: F(x) = ∫ₐˣ f(t) dt + F(a).

Here the integral on the right side differs from the primitive F(x) only by the constant F(a). In such a case this integral, if we consider it as a function of its upper limit x (for variable x), is a completely determined primitive of f(x). That is the importance of the enclosure membrane to define a single organism, and establish its order, as opposed to the entropic, multiple open future of a non-enclosed vital function which will scatter away.

Consequently, an indefinite integral of f(x) may also be written as follows:

∫ f(x) dx = ∫ₐˣ f(t) dt + C,

where C is an arbitrary constant, which the enclosure will eliminate.
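A minimal symbolic sketch of the open (indefinite) versus enclosed (definite) primitive, on the sample f = 2t:

```python
import sympy as sp

# Sketch: the indefinite integral is an open family F(x) + C, while the enclosure
# [a, x] fixes a single primitive, differing from F(x) only by the constant F(a).
x, t, a = sp.symbols('x t a')
f = 2 * t
F = sp.integrate(f, t)                  # t**2 (sympy drops the arbitrary C)
enclosed = sp.integrate(f, (t, a, x))   # x**2 - a**2 = F(x) - F(a)
print(F, enclosed)
```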

 

Linearity: Yet, and this seems to contradict the previous finding, when we operate derivatives with the ‘basic dimension’ of social herding, the ± operators, linearity comes back; and so the minimal Rashomon effect gives us two explanations:

Γ(st):  We are INDEED herding in the base dimension of a single plane, where each derivative will now be considered a fractal point of its own:

∆+1 perspective: Suppose f(x) and g(x) are differentiable functions and a and b are real numbers.

Then the function h(x) = aƒ(x) + bg(x) is differentiable, and h′(x) = aƒ′(x) + bg′(x); which is really the distributive law already studied in algebra’s post for x÷ and power laws. So the interpretation of the sum rule from ∆+1 is one of ‘control’:

WHEN operating from a whole perspective, the whole breaks the ‘smaller’ parts and its simpler dimensional operandi, +, to treat each part with its ‘whole action’ (in this case ∂). In brief, the whole totally controls the parts.

The Product Rule, used to find the derivative of a product of two functions, however differs slightly (polynomials also had the distributive property): h′(x) = [ƒ(x) × g(x)]′ = ƒ(x)·g′(x) + ƒ′(x)·g(x).

Here we shall bring a little-explained fact: derivatives act in the inverse fashion to power laws, searching the infinitesimal, while power wholes (integrals) search the wholeness; and as we know, the two directions of space-time differ in curvature, quantity of information and entropic motions.

So an external operation that reduces a whole which is NOT integrated as such, but is a lineal product of two wholes, ƒ(x) and g(x), a COUPLE, mixes the infinitesimals of one with the other whole before herding them; in a process of ‘genetic mixing’ of the parts of the first, shared with the second whole, and the parts of the second, shared with the first whole.

This law of existential algebra, simplified ad maximal as usual in mathematical mirrors, is surprisingly enough also the origin of genetic ‘reproduction’, which occurs at two levels, mixing the ‘parts’ (the genes of the whole) in both directions, to then raise the mixing to the ∆º level of the G and F gender couple.

Finally the chain rule, WHICH IS TRULY the one that encloses all others, is used in the case of a function of a function, or composite function, h(x) = ƒ[g(x)], and writes: h′(xo) = ƒ′[g(xo)] · g′(xo).

And this is truly an organic rule, as we are not deriving on ‘parts’ loosely connected by ± and x÷ herds and lineal dimensional growth; rather the ‘function’ is a function of a function, a functional, as all ∆+1 is made of ∆º elements, which are themselves functions of xo fractal points.

So this is the most useful of all those rules to mirror reality better. And we see how the derivative, the change process, digs in at the two levels: at the ∆º=g(xo) level, which becomes g′(xo), and at the whole level, which becomes ƒ′[g(xo)]; which tells us we can indeed go deeper with ∫∂ between organic scales, which is what we shall learn in more depth when considering partial derivatives, second derivatives and multiple integrals.

We are getting, so to speak, into the infinitesimals of the parts of a whole from its ∆+2 perspective; and this rule encloses all others, because it breaks into the multiplication of its parts: DWINDLING TRULY A SCALE DOWN, AND SEPARATING THE WHOLE AND THE PARTS, DERIVED INTO LOOSE PARTS AND FINITESIMALS NOW MULTIPLIED.

And what will the parts do when they see their previous finitesimals now camping by themselves but ‘at sight’? Get them to ‘produce’ an operative ‘action’ (a, e, i, o, u actions are ALL subject to the previous operandi) ON them.

AND WHAT WILL COME of that multiplication? Normally it will capture them all again, and then normally will not re=produce on them (one of the operandi actions possible under the Rashomon effect) but divide and feed on them, the last operation to treat:

And its inverse, the quotient rule, which is NOT a positive communicative act but often a perpendicular, negative, reducing game, also consequently differs: [ƒ(x)/g(x)]′ = [g(x)·ƒ′(x) − ƒ(x)·g′(x)] / g²(x).

And we should notice here that the numerator, the victim, shared by the denominator, the predator so to speak, is first absorbed in its ƒ′(x) parts, g(x)ƒ′(x), subtracting the g′(x) parts that the prey has absorbed in the ‘fight’, ƒ(x)g′(x); and then shared by the bidimensional g(x)² whole as entropic feeding.
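All three rules (product, chain, quotient) can be verified symbolically on a sample couple; a minimal sympy sketch:

```python
import sympy as sp

# Sketch: the product, chain and quotient rules checked symbolically on a sample
# couple f, g against direct differentiation.
x = sp.symbols('x')
f, g = sp.sin(x), x**2 + 1
assert sp.simplify(sp.diff(f * g, x) - (f * sp.diff(g, x) + sp.diff(f, x) * g)) == 0
assert sp.simplify(sp.diff(f.subs(x, g), x) - sp.diff(f, x).subs(x, g) * sp.diff(g, x)) == 0
assert sp.simplify(sp.diff(f / g, x) - (g * sp.diff(f, x) - f * sp.diff(g, x)) / g**2) == 0
print("product, chain and quotient rules verified on the sample couple")
```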

Of course, we love to bring vital interpretations to abstract maths; but as we apply such rules to particular cases the interpretations vary, though in all cases they will be interpretable in terms of sub-equations of the fractal generator.

What might be noticed in any case is that, unlike in our rather ‘abstract’ dimensional explanation of the rules of power laws, here we are able to bring real vital analysis of those roles even in terms of biological processes; showing how much more sophisticated the ∫∂ operandi are, the king of the hill of mathematical mirrors on real st-ep motions and actions, the reason why their use is so widespread.

So those properties tell us new things about the meaning of ∫∂.

In that sense the MOST important add-on that ∆st will bring to the use of differentials in EXISTENTIAL ALGEBRA is its temporal use as the ‘minimal action in time’ of a being, a far more expanded notion than the action of physics (which however will be related to the lineal actions of motion on 1D).

∫∂EPS AS MEASURE OF SIMPLEX ACTIONS.

Discovery of the calculus and errors in dogmatic foundations

Two ‘S≈t’ and ∆±1 major steps led to the creation of analysis:

S≈t: The first was the discovery of the surprising relationship, known as the fundamental theorem of calculus, between spatial problems involving the calculation of some total size or value, such as length, area, or volume (integration), and problems involving rates of change in time, such as slopes of tangents and velocities (differentiation). ( Gottfried Wilhelm Leibniz and Isaac Newton.)

  • While the utility of calculus in explaining physical phenomena was immediately apparent, its use of infinity in calculations (through the decomposition of curves, geometric bodies, and physical motions into infinitely many small parts) generated widespread unease… as only Leibniz got the understanding of a ‘fractal point which is a world in itself’ and the finitesimal nature of derivatives (1/n).

So the dogmatic zealot, the Anglican bishop George Berkeley, published a famous pamphlet, ‘The Analyst; or A Discourse Addressed to an Infidel Mathematician’ (1734), pointing out that calculus, at least as presented by Newton and Leibniz, possessed serious logical flaws from the arrogant pov of the human mind, son of god, who must access absolute truths. LOL.

Analysis then grew out of the resulting painstakingly close, experimental examination of concepts such as function and limit, which are still improperly defined by axiomatic zealots claiming for the humind (ab. human mind) rights to more than humind truths: ‘man is a mush over a lost rock of the Universe; departing from this (relative) principle, we can talk about him’ (Schopenhauer).

As all entities have a causal development from a spatial, first entropic age into complex time analysis, to end in the ‘awareness of an ∆±1 dimension to it’, the pioneers, Newton’s and Leibniz’s approach to calculus had been primarily geometric, involving ratios with ‘almost zero, +0’ divisors: Newton’s ‘fluxions’ and Leibniz’s ‘infinitesimals’.

During the 18th century calculus became increasingly temporal, algebraic, as mathematicians (most notably the Swiss Leonhard Euler and the Italian-French Joseph-Louis Lagrange) began to generalize the concepts of continuity and limits from geometric curves and bodies to more abstract algebraic functions, and began to extend these ideas to complex numbers, which are the ideal elements to study ∆space-time processes in their more complex interrelationships, often reduced to the 0-1 ‘infinite/simal’ domain.

Then, in a useless attempt to show the humind absolute in its truth, as these developments were not entirely satisfactory from such a deluded foundational standpoint, the so-called ‘rigorous’ (: basis for calculus was ‘invented’ by Augustin-Louis Cauchy, Bernhard Bolzano, and above all the idealist squared, usual suspect of total false truths, the cultural simpleton German Karl Weierstrass, in the 19th century.

In that regard (see the ∞|0 post on the meaning of numbers, infinities and infinitesimals), the logical difficulties involved in setting up calculus on a sound basis are all related to one central problem, the notion of continuity.

Finitesimals

The mother of all battles, the ‘great advance of mathematics’, which ¬Æ will not really evolve in its details (as the great innovation is Non-E Geometry completed with its 5 postulates) but, as in the case of non-A Algebra, with a better understanding of its operandi: 5D brings a better conceptual understanding of what analysis truly is.

For starters, the word to use is ‘finitesimals’, not infinitesimals. Infinity does not exist in a single continuum but through multiple discontinuities, as all systems in time and space are limited in space and time, both within a single membrane and within the scales of the 5th dimension (as information and energy do not flow between those scales without loss of entropy).

It would be in that sense important to understand the need for a finite limit, solving the paradox of Zeno with the concept of a quanta or limit of a finitesimal. Now, a finitesimal is an action of space-time of any of the 5 dimensions; an action being a space-time cycle, hence a bidimensional holographic quanta, which can be expressed in any of the graphs: lineal, cyclical, cylindrical or polynomial (complex).

What we will measure then is an action of space-time, which classifies time cycles in 5 subspecies by its complexity:

A-celerations, lineal motions, entropic motions, energy flows, informative vortices and Social evolutions (a,e,i,o,u).

IN THE GRAPH, the general 5 actions-dimensions of existence of different ∆±i species, from above down: a view of them all; one of the simplest physical, light and electronic i<eye minds; and below, the human being.

Now, mathematical operations, as we have repeated many times, are better suited to simpler social numbers forming herds, moving=reproducing simple information across a few scales of existence, with topological evolution at the height of its capacity to describe complex simultaneous super organisms in more detail. This means the actions best explained by operators are the simpler ones, and quantitative operations will then give the simplest of the interactions between the elements of those simpler, ‘larger’ ensembles of beings.

Mathematically it is in that sense quite irrelevant to make derivatives in time of complex human actions, much better described with verbal i-logic languages, beyond some quantitative results.

But what appears all over the place is the quanta of minimal human time: the second is the minimal informative action for the 3 synchronous t-st-s parts, an eye glimpse of mind-perception, a limb step of motion and a heartbeat of the body. This of course is nothing to blame on Nature; rather, as Landau put it, a feature of the human ego: 'What time uncertainty? I don't see any time uncertainty in quantum physics; I look at my clock and I know what time it is' (:

So, to escape the limits of huminds and mathematical reductionism, it will be important, even for physical systems today described only quantitatively in abstract mathematical terms, to vitalise and explain the organic whys of their space-time events by adding their existential actions to those 'analyses'.

The connection in qualitative terms though is self-evident for all scales, as most actions of any being are extractions of motion, energy and form from lower ∆-i scales.

So we, and all other beings, perceive from ∆-3 quanta (light in our case), feed on ∆-2 quanta (amino acids for any ∆º human-scale system), and seed with seminal ∆-1 cellular quanta (as electrons do with ∆-1 photon quanta).

So derivatives are the essential quantitative action for the workings of any T.œ, or space-time organism.

And so we study in depth the connection between the a,e,i,o,u actions across Planes (qualitative understanding) and their mathematical, analytic development (quantitative understanding of 1st, 2nd and 3rd derivatives, the latter extracting '1D motion' from the final invisible gravitational and light space-time scales).

SO THE FUNDAMENTAL LAW OF OPERATIONS TO VITALIZE THEM IS THIS:

‘BY THE RASHOMON EFFECT ALL differential OPERATIONS CAN BECOME AN ACTION IN ONE OF THE 5D DIMENSIONAL VOWELS (A,E,I,O,U) THAT DEFINE THE FIVE dimensions OF EXISTENCE, AS VITAL QUANTA-ACTIONS OF THE BEING.

THIS IS THE LOGIC CONCEPT THAT TRULY VITALIZES THE OPERANDI OF ALGEBRA.

Derivatives allow us to integrate: a sum of the minimal quanta in space, or actions in time, of any being in existence. The fact that those sums tend to favor growth of information on the being, and then signal the 3 st-ages and/or st-ates of the being through its world cycle of existence, is, in its simplest physical equations, the origin of... ITS space-time beats.

Actions in timespace are the main finitesimal part of reality, its quantity of time or space if we consider tridimensional actions as combinations of S and T states, stt, tst, tss, sss and so on…

So how do differential equations show us the different actions of the Universe?

To fully grasp that essential connection between ∆st and its mathematical mirrors, we must first understand how species on one hand, and equations on the other, probe the scales of reality to obtain their quanta of space-time, converted either into motion steps or information pixels, to build up reality.

So for each action of space-time we shall find a whole, ∆ø, which will enter in contact with another world, ∆±i, from where it will extract finitesimals of space or time, energy or information, entropy or motion; and this will be the finitesimal ∂ƒ(x), which will be absorbed and used by the species to perform a certain action, å.

So the correspondence to establish is between the final result, the åction, and the finitesimal quanta the system has absorbed to perform the action, ∫∂x, such that: å = ∫∂x, where x is a quanta of time or space used by ∆ø, through the action å, to perform an event of a-cceleration, e-nergy feeding, i-nformation, o-ffspring reproduction or u-niversal social evolution.
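A minimal numeric sketch of that correspondence (the function quanta() and all its values are hypothetical, chosen only for illustration): the action å is approximated as a finite sum of the quanta ∂x absorbed during the event's limited duration, a finite Riemann sum in which the time quanta dt stays finite, in line with the finitesimal reading above.

```python
# Sketch: å = ∫∂x as a finite sum of absorbed quanta (hypothetical values).
def quanta(t):
    """Finitesimal of energy/information absorbed at time t (illustrative)."""
    return 0.5 * t

def action(t0, t1, dt=0.01):
    """Approximate å = ∫∂x between the finite limits t0 and t1.
    dt is the minimal quanta of time: it stays finite, never 0."""
    steps = int((t1 - t0) / dt)
    return sum(quanta(t0 + i * dt) * dt for i in range(steps))

print(action(0.0, 2.0))   # ≈ 1.0: the total action absorbed in the event
```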

It is then when we can establish how operations are performed to achieve each type of actions.

The first element to notice is the fact that the space between the actor and the observable quanta is relative: even if there are multiple ∆-planes between them, the actor will treat the quanta as a direct finitesimal, pixel, bit or bite, which it will then integrate with a polynomial derivative or sinusoidal function that reflects the changes produced.

We will consider in this introductory course only a few of the finitesimal ∫∂ actions where the space state is provided by the integral and the ∂ finitesimal action by the derivative.

SPACE FINITESIMALS VS. TIME FINITESIMALS

For what follows we must 'differentiate' when differentiating (:

-Space finitesimals, which are the minimal quantity of a closed energy cycle or simultaneous form of space, easier to understand, as they are ‘quanta’ with an apparent ‘static form’, which can be ‘added’, if they are a lineal wave of motion-reproduction, along the path; or can be integrated (added through different areas and volumes), to give us a 3D reality.

-Time finitesimals, which are the minimal period of any action of the being, and will trace a history of synchronicities, as the actions find regular clocks which interact between them to allow the being to perform ALL the 5D actions needed to survive. So we walk (A(a)), but then eat energy (Å(e)), and we do not often do them together. Actions have different periodicities for EACH species that performs the 5 actions. So to 'calculate' all those periodicities in a single all-encompassing function we have to develop a multi-variable system of equations (a toy version is sketched after this list).

– Spacetime finitesimals. But more interesting is the fact that Nature works by simultaneously integrating populations in space and synchronising their actions in time. So we also observe space-time finitesimals, where the synchronicity consists in summing the actions of multiple quanta that perform in the same moment the same 'D-motion', which is 'reinforced', becoming a resonant action.

And for the calculus of those space-time finitesimals the best way to proceed is by 'gathering the sum of ∆-1 quanta' into a 'larger ∆º quanta', treated as a new '1' that adds up their force; EVEN IF most of them are just complex ensembles of the simplest actions of many cellular parts: steady-state motions, reproduction of new dimensions, and vortices of curvature and information absorption.
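As promised in the time-finitesimals item above, here is a hedged sketch of that multi-period bookkeeping (the action names and periods are hypothetical, in arbitrary time units): each action fires on its own clock, and the 'all-encompassing function' is just the union of those synchronicities.

```python
# Sketch: the 5 a,e,i,o,u actions with different (hypothetical) periodicities.
periods = {'a-motion': 1, 'e-feeding': 6, 'i-perception': 1,
           'o-reproduction': 1000, 'u-social evolution': 10000}

def actions_at(t):
    """Return the actions the being performs at tick t."""
    return [name for name, T in periods.items() if t % T == 0]

for t in (1, 6, 1000, 10000):
    print(t, actions_at(t))   # slower actions synchronise with faster ones
```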

All functions of analysis thus can be considered operations on actions of space-time.

Groups of finitesimals and their synchronous actions thus meet at ∆º in the mirror of mathematical operations, through the localisation of a 'theoretical' tangent ≈ infinitesimal of the nano-scale (∂s/∂t proper) or an 'observable' differential, a larger finitesimal, which is the real element, as any finitesimal is a fractal micro-point that has a fractal volume, expressed in the differential.

Then we gather them, in time or space and study their ‘inverse’ action in space or time.

So the first distinction we must make is between finitesimals expressed as functions of time frequencies and finitesimals expressed as areas of space, and the actions described by them. In practice though, most finitesimals are spatial parts whose frequency of action is described by the ƒ(x)=t function.

Let us recall then how actions are added through frequency integrals of time instead of area integrals of space, through a mathematical method called…

Harmonic analysis

A mathematical procedure for describing and analyzing phenomena of a periodically recurrent nature, in an inverse fashion to how Nature assembles them: from the top, the integrated, seemingly chaotic addition of all the cycles of actions, back down to its finitesimals.

Many complex problems have been reduced to manageable terms by this technique of breaking complicated mathematical curves into sums of comparatively simple components.

Many physical phenomena, such as sound waves, alternating electric currents, tides, and machine motions and vibrations, may be periodic in character. Such motions can be measured at a number of successive values of the independent variable, usually the time, and these data or a curve plotted from them will represent a function of that independent variable. Generally, the mathematical expression for the function of all actions might seem chaotic.

However, with the periodic functions found in nature, the function can be expressed as the sum of a number of sine and cosine terms, which decompose the sum into its ‘individual recurrent actions’ with different periodicities; according to the ‘sequential order’ of frequencies, which as usual is related to the sequential world cycle:

Top frequency, 1D: information; Max. frequency, 2D: motion; Med. frequency, 3D: reproduction; Min. frequency, 4D: entropic death (probability 1); Probabilistic frequency (0-1), 5D: social evolution.

I.e., you think every second, your MINIMAL QUANTA OF TIME, which will always be defined by the fastest clocks of the 1D component of the being; you can also walk with the same frequency, one step/heartbeat per second, but you do not walk all the time. Next comes reproduction, whose frequency is far slower, normally between the minimal once-in-a-life and the maximal of fungi in our scale, or big-bang particles in the cosmos. And finally come the death event, which happens only once in a life (but is certain, probability 1), and the least probable event, the repetition of the ∆º>∆+1 collapse of the herd-wave of beings into a social superorganism/species of the upper world (often happening over a period larger than a single ∆º life).

So we shall find functions of those frequencies, often split, as the ∆-actions are either seen as 'limits' (death) or ignored (social evolution) when quantifying the being 'in itself'. The being is then reduced to the analysis of its frequencies of perception/information, motion (lineal reproduction of information along a path) and reproduction, in ternary series; or, when reproduction is not accounted for, or is counted as motion (for particles/waves), to a dual rhythm, which we can sum up in the concept of a stop-informing, go-moving frequency, which is really all that needs to be measured for the simpler wave/particle states.

This simple rule, to order all phenomena following the natural order of the 5 Dimensions, thus simplifies enormously the understanding of the whys obtained by harmonic analysis.

Such a sum is known as a Fourier series, after the French mathematician Joseph Fourier (1768–1830), and the determination of the coefficients of these terms is called harmonic analysis. One of the terms of a Fourier series has a period equal to that of the function, f(x), and is called the fundamental. Other terms have shortened periods that are integral submultiples of the fundamental; these are called harmonics.
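A small numerical sketch of that determination of coefficients (the sampled signal is a hypothetical mix of a fundamental and a 3rd harmonic): each aₖ, bₖ is recovered by integrating the signal against cos kx and sin kx over one period, here as a finite sum.

```python
import math

# Sketch: harmonic analysis of a periodic signal over 0..2π.
def f(x):
    return math.sin(x) + 0.5 * math.cos(3 * x)   # hypothetical signal

N = 1000                                         # finite sampling grid
xs = [2 * math.pi * i / N for i in range(N)]

def coeffs(k):
    """Fourier coefficients a_k, b_k by a finite Riemann sum."""
    a = sum(f(x) * math.cos(k * x) for x in xs) * 2 / N
    b = sum(f(x) * math.sin(k * x) for x in xs) * 2 / N
    return round(a, 3), round(b, 3)

for k in range(1, 5):
    print(k, coeffs(k))   # k=1: b≈1 (fundamental); k=3: a≈0.5 (harmonic)
```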

The terminology derives from one of the earliest applications, the study of the sound waves created by a violin, which showed how the mind does differentiate the order behind the wave: we find harmonious music pleasant even if at first 'sight' its mathematical form seems confused, before we disintegrate the whole into its parts. This is what Fourier did, stating that a function y = f(x) could be expressed between the limits x = 0 and x = 2π by an infinite series. So, departing from an ∆(x:t) or an ∆(y:s), the finitesimal of time or space is integrated along a reproductive wave of space or an informative wave of frequency time, in any of its polynomial/Fourier expressions:

In the graph, a 'time polynomial' or Fourier series, which recurs, adding the worldcycles of different actions, in the 0-1 unit circle of time probabilities, where 1 is the happening of an event-form, when the cycle is closed at 2π; and so we integrate the 2 dynamic paths, the clockwise cos and anticlockwise sin functions (as the motion moves towards its maximal perpendicularity), which give us the repetitive coefficients:

In other posts (number theory, @nalytic geometry) we study the deep meaning of sin and cos NOT as fixed numbers but as dynamic inverse functions, moving in two different directions of time, in connection with their repetitive mapping on a complex graph, as sines scale anticlockwise along the -i axis, while cosines flatten and elongate any space-time function clockwise into the real-world axis. Enough to say at this stage that both functions play the necessary inverse roles for any proper 'mirroring' of an S≈T duality; in this case we say that the cosine is $pace-like (2D, 4D related) and the sine is ð-ime like (0-1D, 5D related).

While the series is an infinite sum, as it integrates the (in)finite variations of a ginormous number of space-quanta forming herds (not synchronous superorganisms), moving with different step-and-go frequencies, absorbing energy or information with different time periods, etc., in praxis the main sum is given over 3, 5 to 11 frequencies, which gather most of the actions and social groups of the best-synchronised parts of the whole, according to the ternary time and 11 space parameters of most time-space systems.

Indeed, the earliest harmonics were related to music, with Pythagoras and the violin, and we know the chords of music 'stop' at the fifth.

On the other hand, in modern wave analysis it was found that the use of a larger number of terms increases the accuracy of the approximation, and so the large amount of calculation needed is best done by machines called harmonic (or spectrum) analyzers, which measure the relative amplitudes of the sinusoidal components of a periodically recurrent function. Yet when the first such instrument was invented by Kelvin, for the harmonic analysis of tidal observations, it was enough to embody an 'hendecagram' of 11 sets of mechanical integrators, one for each harmonic to be measured.

The other fundamental time-measure: harmonic oscillators.

As usual in ∆st we apply dualities and ternary symmetries to everything, culminating in 5D Rashomon effects.

So the question is: if a Fourier transform is the best method to 'differentiate' the lineal motion of wave patterns, hence best for 1D actions, what is the equivalent to observe the cyclical patterns of 'pure' cyclical motions around a point of balance, which is stricto sensu the purest form of worldcycle of time for physical systems? The 'master' function of them all: the harmonic oscillator.

As the key unifying direct translation of ∆st actions/motions/dimensions/forms into mathematical physics (Lagrangians, Hamiltonians, Fourier transforms, 0-1 probability time emergence and its space symmetry with statistics, and so on), the oscillator is a system that can be exactly solved in classical and quantum theory, a system of 'fundamental' physical relevance.

Quantum physics INDEED, in what truly matters objectively, is in NOTHING different from any other organic ensemble of parts into wholes.

3 considerations on this matter:

1. All that seems weird is either the subjective ego-trip of Copenhagen, which denies the sentient, organic properties of particles, which collapse into wholes when colliding, as herds come into tighter formations in danger, intrinsically (it is not the shark, Mr. Bohr, that collapses the school, but its presence, which makes the photons or schools come together); or an ill-understood philosophical interpretation of the excellent praxis of spatial maths, which tries to 'map together' all in a single image using 'multiple dimensions' and functions of functions.

2. The reflection of the gathering of quasi-infinite≈(in)finite populations in space, through long periods of world time cycles, with Hilbert's multiple-dimensional spaces, functionals of functions and operators, does NOT mean there are ∞ dimensions, but that the mathematical space of the mind tries to put infinite parts into a single whole equation.

3. The absurd discussion between a probabilistic=time event vs. density=population description of it is just another mind-reality s=t symmetry. The 0-1 Dimension though is faster in time and smaller in space, according to the 5D ($ x ð = k) metric. So a probabilistic mirror fits more easily the micro-fast particles, which however, intrinsically at their scale, will appear as herds of populations.

This said, the harmonic oscillator IS the key function to describe worldcycles of individual T.œs: any system fluctuating by small amounts near a configuration of stable equilibrium may be described either by one oscillator or by a collection of decoupled harmonic oscillators, which again, as in Fourier transforms, represent the 'minimal Rashomon duality' needed to describe a system: its S=T fluctuations in a single plane and its ∑∆-1>∆º decomposition into wholes and parts.

This bare minimum of a Γ: present, ternary description, and an ∆: scalar whole-and-part analysis, IS thus the basic 'analysis' that happens elsewhere in all stiences to fulfil the feeling of knowledge.

The oscillator then is to the Fourier transform what the Hamiltonian, which adds the active ∆-scalar magnitude, is to the Lagrangian, focused on the pure s, t parameters.

Thus the oscillator actually IS the general problem of the worldcycle of a T.œ, which tries to keep ALL its S≈T PARAMETERS in balance as it moves between two S-T extreme states:

  1. Near equilibrium, the ð-acceleration (the informative Dimension of time) is minimal and the motion of the active magnitude, or vortex door to its scalar dimensions, is maximal.
  2. As opposed to the state of maximal S-displacement in space, farther from equilibrium, where the system is momentarily at rest (0-1D motion) but its ð-acceleration and 2D-distance are maximal.

So the SHM ultimately expresses the inversion between those different dimensions of time and space.

In mathematical terms, the first case found of a single harmonic oscillator is of course a mass m coupled to a spring of force constant k.

For small deformations x, the spring exerts the force given by Hooke's law, F = −kx (k being its force constant), and produces a potential V = ½kx². The Hamiltonian for this system is:

H = p²/2m + ½mω²x²

where ω = (k/m)^½ is the classical frequency of oscillation.

Any Hamiltonian of the above form, quadratic in the coordinate and momentum, will be called the harmonic oscillator Hamiltonian.

And we see incidentally how momentum is also expressed as a bidimensional form (holographic principle), and the clear duality in the energy elements between T, the lineal 2D kinetic energy of motion, and V, the 1D potential energy of position. The kinetic part IS the 2D lineal motion part of 3D energy, and so it 'reduces' the importance of 'mass', the 1D active magnitude, which is the factor divided out in the ratio p²/2m; while the potential energy, due to position, reinforces the 1D active magnitude through the frequency of the time oscillation, the ω² factor.

Now, the mass-spring system is just the simplest among an astoundingly huge family of systems described by the oscillator Hamiltonian. And its corollary reinforces the tendency of all parts of the Universe towards an S=T equilibrium (or symmetry in physical jargon, as the breaking of such symmetries is, inversely, the main form of 'creation' and diversification of physical systems): a particle moving in a potential V(x), placed at one of its minima x₀, will remain there in a state of stable, static equilibrium; while a maximum will be a point of unstable static equilibrium.
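A short numeric sketch of that S=T beat (mass, spring constant and step size are hypothetical): stepping the mass-spring motion forward shows kinetic and potential energy oscillating inversely around a conserved total, maximal V at maximal displacement and maximal T at the equilibrium point.

```python
import math

# Sketch: mass-spring oscillator, H = p²/2m + ½mω²x².
m, k = 1.0, 4.0
w = math.sqrt(k / m)                  # classical frequency of oscillation
x, p, dt = 1.0, 0.0, 1e-3             # start at max displacement, at rest

for step in range(int(math.pi / w / dt)):   # integrate half a period
    p -= k * x * dt                   # Hooke's law: F = -kx
    x += p / m * dt                   # semi-implicit Euler step
    if step % 500 == 0:
        T, V = p**2 / (2 * m), 0.5 * k * x**2
        print(f"x={x:+.2f}  T={T:.2f}  V={V:.2f}  T+V={T+V:.2f}")
```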

RECAP. The Fourier series thus maps the speed of time of a system, which for repetitive motions within a limiting domain increases its form constantly, while the inverse solution expands them; and so both together balance into a zero sum of frequencies.

IF WE consider the unit circle, either in its probability version or as a mere mirror of all possible sinusoidal functions, as the full world cycle, then we can consider different speeds of time clocks, which increase with the growth of the harmonic coefficient Aₖ, and their synchronous gathering into a single wave function: the proof that in time also the laws of superposition work, and that the synchronisation of actions brings a whole into existence.

Fourier's inverse view of a time function as a sum of potential finitesimals of time (its decomposed frequencies) opens up a whole symmetry in the concepts of timespace ∫∂teps.

Once calculus has found the finitesimal, it can study, according to the action, how many frequencies or populations of it must be integrated to reach the calculus of that specific action of space-time.

Let us consider on those terms the simplest of those actions – reproductive motions in time.

D1 ACTION ∫∂EPS:

MOTION AS REPRODUCTION OF INFORMATION


In the graph, both in space, by adjacency, and in time, by frequency discontinuity: as we measure consecutive adjacent reproductions of form in space, and returning lapses to measure time frequency when the cycle is closed, there is always an error of ε, one unit of ƒ, in our measure. (The process of reproduction in adjacent places makes us wonder where the dynamic moment is happening, hence where the moving point is, in SM1 or SM2; and the same goes for measuring time lapses: has the cycle, in its minimal space-time action, happened yet, or is it not concluded; are we in TM1 or TM2?) We thus always see 2 moments together, 2 adjacent beings, forming the wave of space-time motion.

A quanta of time is then a sTep of frequency, adding and superposing its elements in the volume constrained by the initial and final point-positions of the being, across which a wave of space reproduces lineally... and as such it is the simplest motion, a translation of space or 'speed', where the ≈ operandi of all the operations allowed as symmetries of space-time is in this case the product of a frequency, ƒ(ð), and a space-step, λ: λ x ƒ = V.

How can we then describe, with 'fractal points' as the moving point of speed, this process? As one of reproduction of a quanta of space, the step, with a frequency of time, ƒ, which describes therefore a form with a certain radius, a cyclical sin/cos ƒ(x) path.


ONLY if infinitesimals are finitesimals can the paradoxes of Zeno be solved.

The paradox concerns a race between the fleet-footed Achilles and a slow-moving tortoise. The two start moving at the same moment, but if the tortoise is initially given a head start and continues to move ahead, Achilles can run at any speed and will never catch up with it. Zeno’s argument rests on the presumption that Achilles must first reach the point where the tortoise started, by which time the tortoise will have moved ahead, even if but a small distance, to another point; by the time Achilles traverses the distance to this latter point, the tortoise will have moved ahead to another, and so on.

The solution is that in the plane of Achilles, the finitesimal is the 'fractal step' of Achilles' SE system (its limbs). So the smallest distance Achilles crosses per unit of time will be above a one-meter step, longer than the maximal step of the tortoise. As his 'time quanta' is also faster than the tortoise's, Achilles' Max. Se x Max. To > min. Se x min. To(rtoise:). So in x To quanta and x Se steps Achilles wins.
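A toy sketch of that finitesimal solution (step sizes and head start are hypothetical): once both racers move by finite steps per finite time quanta, Achilles overtakes the tortoise in a finite number of ticks, and no infinite regress appears.

```python
# Sketch: Zeno's race with finitesimal steps instead of infinitesimals.
achilles, tortoise = 0.0, 10.0   # tortoise starts 10 m ahead (hypothetical)
step_a, step_t = 1.0, 0.1        # finite space-step per time quanta

t = 0
while achilles <= tortoise:
    achilles += step_a           # Max. Se x Max. To of Achilles...
    tortoise += step_t           # ...vs min. Se x min. To(rtoise)
    t += 1
print(t, achilles, tortoise)     # 12 ticks: 12.0 > 11.2, a finite overtake
```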

Velocity.

In the graph, the seemingly simplest of all equations of analysis is an s=t lineal space-time motion, which still encodes some relevant information about reality; namely the steps of stop and go, the ultimate reason of reality.

In this case between $-lineal space-distance and t-lineal time; the simplest morphologies of time=space.

At the beginning of the present chapter we defined the velocity of a freely falling body. To do so we made use of a passage to the limit from the average velocity over short distances to the velocity at the given point and the given time. The same procedure may be used to define the instantaneous velocity for an arbitrary nonuniform motion. In fact, let the function:

S = ƒ(t)    expresses the dependence of the distance s covered by the material point on the time t.

It is to be noticed then that space-distance is the memory of a motion of time, hence its past solidified into a memorial trace called distance: S = V x T.

Its calculus was then immediate for Galileo; with derivatives, this past could now be calculated to the refinement of each minimal time motion.

To find the velocity at the moment t = t0, let us consider the interval of time from t0 to t0 + h (h ≠ 0). During this time the point will cover the distance:

∆S = ƒ (to+h) – ƒ(to)

The average velocity υav over this part of the path will depend on h:

υav = [ƒ(t0 + h) − ƒ(t0)] / h

and will represent the actual velocity at the point t0 with greater and greater accuracy as h becomes smaller. It follows that the true velocity at the time t0 is equal to the limit:

v(t0) = lim (h→0) [ƒ(t0 + h) − ƒ(t0)] / h

of the ratio of the increase in the distance to the increase in the time, as the latter approaches zero without ever being actually equal to zero. In order to calculate the velocity for different forms of motion, we must discover how to find this limit for various functions f(t).

So the derivative of space with respect to time gives us the 'infinitesimal speed', or the minimal quanta of space-time measured: the tangent of the curve.

Now, in ∆st we prefer the detailed fractal perception of speed, NOT AS A CONTINUOUS GRAPH, but as a series of cyclical steps, where the step is the finitesimal 'fractal' quanta of space and ƒ the time frequency of its motion:

V= λ(s) x ƒ (ð)

In this case 'h', the quanta of space-time, is λ.
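A hedged numeric sketch of those two views (the motion law s(t) = t² is a hypothetical choice): the 'continuous' velocity is the finite-difference ratio ∆S/h with a small but never-zero h, while the 'fractal' reading counts how many steps of fixed length λ fit in a finite time window; both converge on the same speed.

```python
# Sketch: velocity as tangent ΔS/h vs. step-length × frequency, V = λ x ƒ.
def s(t):
    return t * t                    # hypothetical motion law S = ƒ(t)

t0, h = 3.0, 1e-6
v_cont = (s(t0 + h) - s(t0)) / h    # tangent view: ≈ 6.0, h finite, never 0

lam, dt = 0.001, 0.01               # λ: fractal space-step; dt: time window
steps = (s(t0 + dt) - s(t0)) / lam  # λ-steps reproduced in the window
freq = steps / dt                   # frequency of steps, ƒ(ð)
print(v_cont, lam * freq)           # V = λ x ƒ ≈ 6.01: same speed, step view
```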

2nd dimension line=sum of points:

Why do both definitions work? In pure equations, T = 1/ƒ = 1/ð.

In depth, because both are isomorphic definitions, albeit at a 'different scale':

The continuous definition focuses on the ∆-1 'potential field of forces' over which the system reproduces its wave of form. So the 'frequency steps' are substituted by the external 'nanoscopic', continuous (indistinguishable) gravitational and electromagnetic fields over which the 'being' slides, unaware of the invisible=indistinguishable field over which it glides. In the previous equation we adopt instead the ∆º point of view, internal to the being, where its quanta are much larger and not subject to derivatives.

It is then important to notice that the need for a 'function to be continuous' implies studying S-Teps in which the actions happen in a lower 'scale' of being; hence we talk of the primary actions of motion (Max. S) and perception (Max. I) of minimal forms (∆ Max. i) in relationship to the actor, and/or death processes of entropy. We can hardly establish ∫∂ operations for the complex social actions of the 5th dimension, or for many of the 3D reproductive actions of seminal ∆-1 seeds, for which a qualitative analysis of evolutionary topology is more proper.

This is the reason why the operation of ∫∂ is more proper to the 1D, 2D and 4D Dimensions of existence.

Speed is important, on the other hand, because, whether as the continuous ratio (s/t) or as s x t (discontinuous: step x frequency), it is one of the 3 fundamental ratios, s/t-speed, t/s-density and t x s-momentum, that define in their simplest form the singularity, vital energy and angular momentum of the 3 parts of an organism; which for the perfect being, s=t, are 1, 1 and 1.

The 3 parts of T.œ.

Every event and form must be analysed in ternary fashion, and so it happens with integrals and derivatives, which often represent integrals of space-time quanta belonging to the vital energy of the system, constrained in time or space by the singularity and the outer membrane.

If we call energy e, then:

∑ $p x ðƒ = ∆e becomes the integral of the inner spatial quanta of the open ball, surrounded by the membrane of temporal cycles, which conserves its Energy, and, by the sum of all T.œs, that of the Universe. Its calculus, after finding a 'continuous derivative' bounded by the membrane, is then an integral: ∫ Sp x ðƒ = Ke.

And inversely, if we consider a single quanta of space or a single frequency of time, a moment of lineal or angular momentum, the result is a derivative.

So Analysis becomes the fundamental method to study travels upwards and downwards of the 5th dimension.

In general, if we call a spatial quanta a unit of lineal momentum of each scale, and a time cycle a unit of angular momentum, the metric merely expresses the principle of conservation of lineal and angular momentum.

Thus analysis studies the process which allows, by multiplication of 'social numbers', either populations in space or frequencies of time, a system to 'grow in size', which is the ultimate meaning of travelling through the 5th dimension. For example, when a wave increases its frequency, it increases the quantity of time cycles of the system. When a wheel speeds up, it increases the speed of its clocks. And vice versa, when a system increases its quanta, growing in mass or increasing its entropy (degrees of motion in molecular space), it also grows through the 5th dimension.

And the integration along space and time of those growths is what we call the total Energy and information of the system; it is what physicists call the integral of momentum, or total energy of the system.

So we shall only bring up here some examples of analysis concerned with the definitions of the fundamental parameters of the fractal Universe, that is, the conservation principles and balances of systems, which can be resumed in 2 fundamental laws:

Points of constraint, balance and limits of integrals.

Any equation with a real, determined solution must be a complete T.œ. Hence it will have limits either in space (membrane and singularity of the open ball), or in time, initial and final conditions, bridged by an action in the ‘least time’ possible.

This is the key ∆st law that applies to the search for solutions in both ODE and PDEs.

Maximise its ðƒ/Sp, density of information/mass, and its Sp/ðƒ, density of energy, and hence reach a balance at ðƒ = Sp.

This simple set of equations, Max. ðƒ x Sp -> ðƒ = Sp, Max. ðƒ/Sp and Max. Sp/ðƒ, therefore marks the fluctuation points of systems that constantly move between the two extremes of informative and spatial states, across a preferred point of balance, Sp = ðƒ, as this is the place of Max. Sp x ðƒ.

Thus integrals, Lagrangians and Hamiltonians are variations on those themes. The motion of springs, the law of least time, etc.: all are vibrations around a point of balance, ðƒ = Sp, between 2 maximal inverse limits.

Dimensional integration. Dimensions of form that become motions and vice versa.

Now, the key to fully grasping the enormous variety of integral and derivative results obtained in all sciences is to understand that all space forms can be treated as instants in time, or events of motion, and all motions in time can be seen as fixed present moments in space.

These series of combinations of time and space, S>T>S>T, which leave a trace of steps and frequencies, and their whole integration, which emerges as a new ∆+1 scale of reality, are at the core of all fractal, reproductive processes of reality.

For example, the s-T duality is at the core of the Galilean paradox of relativity ('se mueve y no se mueve': it moves and it does not move), of Einstein's relativity, and of Zeno's paradox.

So we can consider motion in time as reproduction of form in adjacent topologies of discontinuous space.

We can consider the stop and go motions of films, picture by picture, integrating those ‘spatial pictures’ into a time ‘motion picture’.

We consider the wave-particle paradox: waves move by reproduction of form, and particles collapse by integration of that form in space into a time-particle.

In those cases integration happens because a system that moves in time, reproduces in space. And vice versa, steps in space become a memory of time. 

Now it is also important to study case by case and distinguish properly what we are truly seeing, a population in space or an event in time, as humans can and often do confuse them in quantum physics, where motion is so fast that time cycles appear as forms of space. We shall then unveil many errors where a particle in time is seen as a force in space (the confusion of the electroweak, transformative 'force' with a spatial force, and so on).

All systems can be integrated, as populations in space, to create synchronous super organisms, and as world cycles in time, creating existential cycles of life and death. The population integral will however be positive, and the integral in time will be zero.

Since systems of populations in space do have volume, yet the whole motion in time can be integrated as closed paths of time, or conservative motions that are zero sums; and this allows us to resolve what is time integration and what is space integration.

Consider, to fully grasp this, the reproduction of a wave, which constantly reproduces its form as it advances in space and cannot be localised (Heisenberg uncertainty) because it is a present wave of time, as light moves NOT in the least space but in the least time. Now consider a seminal wave, you, which reproduces in time but becomes a herd of cells that, integrated, emerges into a larger scale. In both cases the final result is in space, and so it is positive.

So, as I said, each case must be studied on its own, but the results will give us the conclusion of whether we are observing a time event or a spatial organism.

In that regard the most important and hence first view of the Rashomon Effect on ∫∂ is:

∫≈∂ ARE TIME=SPACE BEATS/STEPS IN ANY D²

We have further defined the Disomorphism between the 5 Dimensions of space-time and the 5 actions=motions=operators of mathematical space.

An operation or actor is thus a Disomorphism of a language or form, which enacts through the operator mirrors of the language; in mathematics these are the operandi, ±, x÷, xª log, ∫∂... but in other species, as all of them encode the social evolution, darwinian fights, decay and growth functions of a super organism, the 5D functions might be coded by genes, or words, or any other syntactic form.

It is the most fruitful ∫∂ symmetry, soon used by Leibniz and Newton to develop the laws of lineal time motion in space, IN WHICH the full realisation of all the other views BECAME THE BIGGEST MIRACLE OF magic mathematics.

Alas, the entire planet was astonished when, in a not yet fully understood scalar duality, derivatives in time turned out to be inverse to volumes of an integral of space?! This was the biggest surprise of mathematics since the Pythagorean finding of the irrational √2. Why had God made two operations till then seemingly unrelated to each other coincide: derivatives of time motions and volumes of spatial form?

Answer: because, according to Galileo's paradox, the first insight that prompted me 30 years ago to discover 5D², time and space are indeed the two sides of a holographic 2-manifold dimension.

So a motion in time decelerates by becoming a new dimension of space; and indeed a moving curve is equivalent to the surface it generates when we measure it as space. So lineal time-motions produce space surfaces:

In the graph, a dimension of space volume transforms into a dimension of angular time motion, and so we can apply a derivative and integral duality as there has been an S>T step-motion of space-time.

4D: S∂

ENTROPIC MOTIONS: DERIVATIVES

ðIME VIEW

NEXT to this realisation, we must consider the question of causality, which is expressed in terms of independence.

It seems then that most spatial functions are dependent on time, the independent factor: $ = ƒ(t)

Yet as we recall that time motions 'stop' into space, we can interpret this independence in terms of order: functions are first motions in time that stop and become 'forms' of space, leaving a past-memorial trace, which, if NOT erased, becomes a population of space, which moves again and then becomes a population again; and in this manner the reproduction of dimensions takes place, building a being of growing ∆Dst.

How this is expressed in ∫∂ terms becomes then clear: since time is discrete, discontinuous, made of a T.œ moving, stopping (often perceiving), moving and stopping, we must first 'encounter' the minimal step of the time motion, which is what we shall call dt; and then move, stop, and move-stop a number of steps, which we integrate, building in this manner a new dimension of space-time.

So the combination of ∫∂ is in fact a process of creation of a dual ST dimension of space-time. And that is its ultimate meaning.

So when we study the simplest equations of physics, we shall consider those in which we apply a 'ceteris paribus' rhythm: considering the system first from the point of view of 'time' steps, and then from the point of view of 'space', integrating the steps as a simultaneous space once we have 'traced' enough of them to make that simultaneity meaningful.

And this is the meaning of a definite integral.

It follows then that we can skip the memorial, step-by-step creation of the spatial form, as something no longer needed when we are interested only in integrating the space; and for that reason the integral works merely as the integral of a volume or a surface, whose creation in time has already happened.

But we still have to find a quanta of that 'creation', now a mere 'population in space'.

The different time-space beats.

This of course must be done because reality is bidimensional, and a dimension of space goes accompanied by a dimension of time, generating, as in the previous graphs, the motions=changes, S≈T≈S≈T, that shape reality.

And it is the justification of why differential equations make systems dependent on such pairs of variables.

BUT then it follows that we shall be able to APPLY THE RASHOMON EFFECT and find a use for the pair ∫∂ as the expression of an inverse beating for each pair of dimensions of space-time.

And decompose both space-time forms and time-space events in S>T<S beats.

And in the process of doing so, learn further insights about the symmetries between space and time.

The algebraic/graphic duality.

In view of our deeper departure from the ultimate essence of Analysis, which is to study steps of space-time, that is, to put algebraic S=T symmetries in motion, the algebraic vs. graphic interpretations of calculus respond to yet another symmetry of spatial vs. temporal methods, considered in our posts on @nalytic geometry and Algebra.

It does show more clearly what we mean by those 'steps': basically, the 'tangent' of the curve is in most cases a space-time step expressed by the general function X(s) = ƒ(t).

Obviously, as s and t are ill defined, it was only understood for lineal space-distance and time-motion. And so the 'geometrical' abstract concept remains, void of all experimental meaning... as a... tangent... it was...

SPATIAL:GEOMETRIC VIEW.

We are led to investigate a precisely analogous limit by another problem, this time a geometric one, namely the problem of drawing a tangent to an arbitrary plane curve.
Let the curve C be the graph of a function y = f(x), and let A be the point on the curve C with abscissa x0 (figure 10). Which straight line shall we call the tangent to C at the point A? In elementary geometry this question does not arise. The only curve studied there, namely the circumference of a circle, allows us to define the tangent as a straight line which has only one point in common with the curve.

To define the tangent, let us consider on the curve C (figure up) another point A′, distinct from A, with abscissa x0 + h. Let us draw the secant AA′ and denote the angle which it forms with the x-axis by β. We now allow the point A′ to approach A along the curve C. If the secant AA′ correspondingly approaches a limiting position, then the straight line T which has this limiting position is called the tangent at the point A. Evidently the angle α formed by the straight line T with the x-axis must be equal to the limiting value of the variable angle β.
The value of tan β is easily determined from the triangle ABA′ (figure up):

tan β = [ƒ(x0 + h) − ƒ(x0)] / h

It is then clear that h is the frequency quanta of time or, if we are inversely using the ∫∂ method to measure space populations, its minimal unit. And so the ultimate concept here is that h NEVER goes to 0. And the clear proof is that if it actually arrived at zero, x/h = ∞.

So infinitesimals do NOT exist, and it only bears witness to the intuitive intelligence of Leibniz that he so much insisted on a quantity for h = 1/n... (and to the lack of it in the 7.5 billion infinitesimals of Humanity, our collective organism, who memorise this h->0, which gave me so much abstract pain as a kid – one of those errors I annotated mentally, along with the absurd concept of a non-E point with no breadth, or else how do you fit many parallels; or the limit of c-speed – how did Einstein prove that experimentally? – and other 'errors' that ∆st does solve in all sciences).

But for other curves such a definition will clearly not correspond to our intuitive picture of “tangency.”

Thus, of the two straight lines L and M in figure below, the first is obviously not tangent to the curve drawn there (a sinusoidal curve), although it has only one point in common with it; while the second straight line has many points in common with the curve, and yet it is tangent to the curve at each of these points.

And yet such a curve is ultimately the curve of a wave, and we know waves are differentiable. So the tangent IS NOT the ultimate meaning of the ∫∂ functions – time/space beats are. The question then is: what kind of st beat shall we differentiate in such a transversal wave?

A DIFFERENT DIMENSION, NORMALLY: as waves are the 2nd dimension of energy, as in the intensity of an electric flow, a mixture of a population and a motion; or 'momentum' (the derivative of energy)...

And so the next stage into the proper understanding of ∫∂ operations is what ‘kind of dimensional space-time change-steps’ we are measuring.

∆ VIEW

The inversion of the finitesimal calculus of ∆-1 is the integral calculus of 5D.

The transition to ∆nalysis: new operations

The mathematical method of limits was evolved as the result of the persistent labor of many generations on problems that could not be solved by the simple methods of arithmetic, algebra, and elementary geometry.

The inverse properties of ∫pace problems and ∂temporal problems

What were the problems whose solution led to the fundamental concepts of analysis, and what were the methods of solution that were set up for these problems? Let us examine some of them.

The mathematicians of the 17th century gradually discovered that a large number of problems arising from various kinds of motion with consequent dependence of certain variables on others, and also from geometric problems which had not yielded to former methods, could be reduced to two ST types:

Temporal examples of problems of the first type are: find the velocity at any time of a given nonuniform motion (or more generally, find the rate of change of a given magnitude), and draw a tangent to a given curve. These problems (our first example is one of them) led to a branch of analysis that received the name “differential calculus.”

Spatial examples: The simplest examples of the second type of problem are: find the area of a curvilinear figure (the problem of quadrature), or the distance traversed in a nonuniform motion, or more generally the total effect of the action of a continuously changing magnitude (compare the second of our two examples). This group of problems led to another branch of analysis, the “integral calculus.”

Thus two fundamental problems were singled out: the temporal problem of tangents and the spatial problem of quadratures.

Now the reader will observe that, unlike the ages of Arithmetic and Algebra, which stay in the same 'locus/form', here we observe a key property of analysis: the transformation of a temporal, cyclical question into a lineal, spatial solution.

I.e., the solution of acceleration/speed by a lineal tangent, through an approximation; and the calculus of a cyclical, spatial area by the addition of squares. And the deep philosophical truth behind it, which only Kepler seems to have realized at the time:

'All lines are approximations or parts of a larger worldcycle.'

And so we can consider in terms of modern fractal mathematics, that ‘the infinitesimal is the fractal unit, quanta or step’ of the larger world cycle, and as a general rule:

‘All physical processes are part of a conservative 0-sum world cycle’.

Which explains ultimately the conservation of energy and motion, as motions become ultimately world cycles, either closed paths in a single plane, or world cycles balanced through ∆±1 planes.

Such is the simple dual GST justification of Analysis, as always based in ∆…  finitesimals and St… the inverse properties of ∫∂.

 THE MAIN FUNCTIONS OF NATURE UNDER THE ∫∂ OPERATIONS.

Functions.

In simple terms, a function f is a mathematical rule that assigns to a number x (in some number system and possibly with certain limitations on its value) another number f(x). For example, the function "square" assigns to each number x its square x².

The common functions are thus definable by formulas, which are related to the ∆s ≈ ∆T duality, such as:

∆§: Polynomials of the type f(x) = x²; the logarithmic function log(x); the exponential function exp(x) or eˣ (where e = 2.71828…); and the square root function √x.

∆T: Trigonometric functions, sin (x), cos (x), tan (x), and so on.

Then there is the question of transformations between space and time and 5D a(nti)symmetries, which is an essential part of classic algebra and we resume in those terms:

  • Integral transforms make it possible to convert a differential equation of 5D space-time within certain boundary values (a time membrane, which limits the equation to a 'real system', not an infinity) into an algebraic equation that can be easily solved (a polynomial, which is a result in a single space-time plane). And this transformation obviously comes in two canonical forms. As it happens, there are indeed 2 canonical transforms:
  • A spatial, lineal transformation, the Laplace transform f(p), defined by the integral:

f(p) = ∫₀^∞ e^(−pt) F(t) dt

The linear Laplace operator L thus transforms each function F(t) of a certain set of functions into some function f(p), and it is used most frequently by electrical engineers in the solution of various electronic circuit problems.
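A small numeric sketch of that lineal transform (the input F(t) = e⁻ᵗ is a hypothetical choice with known closed form f(p) = 1/(p+1)): the 'infinite' upper limit is truncated at a finite T, consistent with the finite-duration caveat below.

```python
import math

# Sketch: numerical Laplace transform f(p) = ∫₀^∞ e^(-pt) F(t) dt.
def F(t):
    return math.exp(-t)             # hypothetical input; exact f(p) = 1/(p+1)

def laplace(p, T=50.0, n=100000):
    """Truncated Riemann sum: the 'infinite' limit is a finite T in praxis."""
    dt = T / n
    return sum(math.exp(-p * i * dt) * F(i * dt) * dt for i in range(n))

print(laplace(2.0), 1 / 3)          # ≈ 1/3 both ways
```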

  • A temporal transformation, the Fourier analysis, which proved that a function y = f(x) could be expressed between the limits x = 0 and x = 2π by an infinite series of waves:

F(x) = ½ a₀ + ∑ₖ (aₖ cos kx + bₖ sin kx)

That is, an equation could become a cyclical, time-dependent equation, developed as a sum of harmonic waves.

And finally the inverse, the fact that a function can be converted into a 5D analytical equation between scales of the 5th dimension, is shown by the third most used approximation of functions, the Taylor series, which expresses a function f, for which the derivatives of all orders exist, at a point a in the domain of f in the form of the power series:

∑ (∆ = 0 to ∞) f⁽∆⁾(a) (z − a)^∆ / ∆!

In which ∑ denotes the addition of each element in the series as ∆ ranges from zero (0) to infinity (∞), f⁽∆⁾ denotes the ∆th derivative of f, and ∆! is the standard factorial function.
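A brief sketch of that power-series mirror (the function eᶻ and the expansion point a = 0 are hypothetical choices, for which every derivative equals 1): summing the first few terms already approximates the whole, and the error shrinks as more orders ∆ are added, though exactness would need infinite terms, as noted below.

```python
import math

# Sketch: Taylor series of e^z about a = 0, where f^(∆)(0) = 1 for all ∆.
def taylor_exp(z, terms):
    return sum(z**n / math.factorial(n) for n in range(terms))

for terms in (3, 5, 11):            # truncating at a few finite orders
    print(terms, taylor_exp(1.0, terms), math.e)
```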

So these 3 transformations are a means, and their applications enlighten an infinite number of real equations, by which the different 5D scales of reality can transfer energy or information.

That is, a 5D flow of energy and information can travel within a single membrane with absolute accuracy (no loss of entropy, no need of transforms or groups to resolve them).

But there is a minimal loss of entropy when we transform between planes back and forth (of information or energy), as the transform is NOT absolutely exact; for it to be exact, the number of terms would normally have to tend to infinity, which is not possible in the finite duration of any flow between ∆±1 scales of the 5th dimension.

Further on, it is important to understand the meaning of the operandi and the laws of relative equality and dynamic transformation of ¬Æ, where equality never fully exists, but we transform, F(t) <=> F(s), as in E <=> Mc², or we approximate values through an evident property, ≈.

We are not extensive here, but just show some ∆st insights into those inversions.

LET US make some comments on the main functions with fundamental roles in ∆st and its derivatives, BY DIVIDING THEM IN 3 GREAT ∆st 'groups':

@: ∫∂ of IDENTITY ELEMENTS – FORMS THAT DO NOT CHANGE

The interest of those results refers to the concept of an identity number, as 0 is the identity of the sum and 1 of the product. But they also have a clear meaning in the interval 0-1 of the generation 'seed' dimension from ∆-1 to ∆º.

And indeed, the surprising result that ∫ 0 dx = C does suggest that the 0-point is a fractal point that 'has volume'; or else, how, integrating the nothingness of existence, shall we get a 'constant', which is a social number? But if we do start from a 0-1 unit, its 'integral' sum will give us a reproductive group, or 'social number'.

And if we integrate the full '1 being', we shall get a new dimension, the variable plus the constant; which suggests also a little understood process related to the operations of derivatives and integrals: the switching CAUSED by OPERATIONS on motions of sets (our definition of analysis), which CHANGE a spatial state into a time state and vice versa. So the spatial 1-form-whole becomes a time-variable X, while the variable X becomes a spatial, derived constant.

Since a constant number does NOT change, a time variable gives us the spatial identity number.

Finally, the deepest thought on those seemingly well-known operations regards the subtle difference between both operations: the derivative localises a single 'finitesimal solution', a minimal ∆-1 past part of the system...

But the inverse 'integral', or 'future 5th Dimensional arrow' of social wholes, opens up the possibility of multiple constant solutions to add to the variables, as the future is open to subtle variations (∫) but the past is fixed by the infinitesimal identity number (∂).

Of course if we instead consider the integral not in time but as a fixed spatial path, this concept of future vanishes and we get a determined single solution to the integral where the constant is just the starting point.

Another way of seeing it though is to consider the identity element @, the constant mind that does NOT change.

∫∂ of POLYNOMIAL GROWTH: 

The first result, already considered, is the polynomial's 'reduced' dimension obtained by searching for its finitesimal, which for simple polynomials is however quite large: the direct reduction xⁿ -> n·xⁿ⁻¹.

Further on, the logarithm IS clearly the 5D social scaling operation and its derivative is indeed the absolute finitesimal, 1/n.

And inversely the maximal growth is its inverse, the absolute decay of e¯ª.
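In standard notation, the three results just invoked read:

d(xⁿ)/dx = n·xⁿ⁻¹;    d(ln x)/dx = 1/x, the finitesimal 1/n at x = n;    d(e⁻ˣ)/dx = −e⁻ˣ, a rate of decay proportional to the whole.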

It is worth talking about those 3 correlated results from the philosophical pov: the maximal expansion of an event is an absolute future-to-past, ∆+1 << ∆-1 entropic death, expressed by the exponential:

The minimal process of growth (log) is an infinitesimal; the maximal process of decay (e¯ª) is equivalent to the whole, in a single quanta of time. We state in the general law that death happens in a single quanta of time, in which the entire network that pegged the being together disappears.

Γst functions.

The third type of functions is concerned not WITH ∆±1 past-to-future-to-past d=evolutions but with present, sinusoidal wave repetitions of the same time-cycle; hence the change is cyclical, repetitive:

Both functions thus are clearly inverse, not only in Γst but also in the ∆±1 scales – the negative symbol being one of convention, regarding the chosen ± direction of the cyclical, sinusoidal motion.

Here though the interest resides in comparing both types, present vs. ∆ past-future functions: the present derivative is self-repetitive, as we return to the sine after 4 quadrant derivatives; and indeed we return to the present, considering also the generational cycle, after the 4 ages of life. So we can model a sinusoidal function as a world cycle of existence in its 4 quadrants.
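Written out, the four-step cycle just mentioned is the standard chain of derivatives:

sin x -> cos x -> −sin x -> −cos x -> sin x

so the 4th derivative returns the function to its 'present' state, one quadrant per ∂ step.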

THE 3 FIELDS OF ANALYSIS. 

Generally speaking, the techniques of differentiation distinguish between ODEs, ordinary equations with a single ST variable, which probe in depth either space or time through consecutive derivatives, but have a limited use, as reality only allows 3 multiple derivatives into a single time or space dimension (beyond 3, the results are essentially not related to the direct experience of how space-time systems evolve through scales). Multiple derivatives though are the tool to approximate two of the 3 great fields of observance of the scalar Universe through mathematical mirrors, which we can write as a generator equation:

∆-i: Fractal Mathematics (discontinuous analysis of finitesimals) < Analysis – Integrals and differential equations (∆º±1: continuous=organic space) < ∆+i: Polynomials (diminishing information on wholes).

It is important in that sense to understand the different focus of the 3 approaches of mathematical mirrors to observe reality. We shall study them in the usual order in which they were born: first ODEs, then PDEs and finally fractals.

Type of Differential equations.

A differential equation is a mathematical equation that relates some function with its derivatives.

In the applications of mathematics there often arise problems of a qualitatively different sort, in which the unknown is itself a function, a law expressing the dependence of certain variables on others. For example, in investigating the process of the cooling of a body, our task is to determine how its temperature will change in the course of time; to describe the motion of a planet or a star we must determine the dependence of their coordinates on time, and so forth.
We can quite often construct an equation for finding the required unknown functions, such equations being called functional equations. The nature of these may, generally speaking, be extremely varied; in fact, it may be said that we have already met the simplest and most primitive functional equations when we were considering implicit functions.
The problem of finding unknown functions leads us to the most important class of equations serving to determine such functions, namely differential equations; that is, equations in which not only the unknown function occurs, but also its derivatives of various orders. The following equations may serve as examples:

In the graph, some of the key partial differential equations, which measure the laws of motion for particles and wave states and potential fields; their behaviour depends on the ‘topological $, §, States’ and number of T.œ components (membrane, singularity, vital space) involved. It is then possible to establish a one to one correspondence between the states of the Γ generator and the main equations of mathematical physics for its simple systems, ternary parts and basic actions of |-motion, O-vortex form, and ≈wave Complementary state.
In the first three of these, the unknown function is denoted by the letter x and the independent variable by t; in the last three, the unknown function is denoted by the letter u and it depends on two arguments, x and t, or x and y.
The great importance of differential equations in mathematics, and especially in its applications, is due chiefly to the fact that the investigation of many problems in physics and technology may be reduced to the solution of such equations.
Calculations involved in the construction of electrical machinery or of radiotechnical devices, computation of the trajectory of projectiles, investigation of the stability of an aircraft in flight, or of the course of a chemical reaction, all depend on the solution of differential equations.
It often happens that the physical laws governing a phenomenon are written in the form of differential equations, so that the differential equations themselves provide an exact quantitative (numerical) expression of these laws. The reader will see in the following chapters how the laws of conservation of mass and of heat energy are written in the form of differential equations. The laws of mechanics discovered by Newton allow one to investigate the behavior of any mechanical system by means of differential equations.

Let us illustrate by a simple example. Consider a material particle of mass m moving along an axis Ox, and let x denote its coordinate at the instant of time t. The coordinate x will vary with the time, and knowledge of the entire motion of the particle is equivalent to knowledge of the functional dependence of x on the time t. Let us assume that the motion is caused by some force F, the value of which depends on the position of the particle (as defined by the coordinate x), on the velocity of motion ν = dx/dt and on the time t, i.e., F = F(x, dx/dt, t).

According to the laws of mechanics, the action of the force F on the particle necessarily produces an acceleration w = d²x/dt² such that the product of w and the mass m of the particle is equal to the force, and so at every instant of the motion we have the equation: m d²x/dt² = F(x, dx/dt, t).

This is the differential equation that must be satisfied by the function x(t) describing the behavior of the moving particle. It is simply a representation of laws of mechanics. Its significance lies in the fact that it enables us to reduce the mechanical problem of determining the motion of a particle to the mathematical problem of the solution of a differential equation.
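A minimal numerical sketch of that reduction (the force law, mass and initial states are hypothetical): m·d²x/dt² = F(x, dx/dt, t) is stepped forward from given initial conditions, and, as discussed next, different initial conditions trace different solutions of the same equation.

```python
# Sketch: solve m·x'' = F(x, x', t) numerically from initial conditions.
def F(x, v, t):
    return -x - 0.1 * v             # hypothetical force: spring plus friction

def solve(x0, v0, m=1.0, dt=0.001, steps=5000):
    x, v = x0, v0
    for i in range(steps):
        a = F(x, v, i * dt) / m     # acceleration w = F/m at this instant
        v += a * dt                 # advance velocity, then position
        x += v * dt
    return round(x, 4)

# One equation, many motions: each initial state yields its own solution x(t).
print(solve(1.0, 0.0), solve(2.0, 0.0), solve(0.0, 1.0))
```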
Later in this chapter, the reader will find other examples showing how the study of various physical processes can be reduced to the investigation of differential equations.

To describe in general terms the problems in the theory of differential equations, we first remark that every differential equation has in general not one but infinitely many solutions; that is, there exists an infinite set of functions that satisfy it. For example, the equation of motion for a particle must be satisfied by any motion induced by the given force F(x, dx/dt, t), independently of the starting point or the initial velocity. To each separate motion of the particle there will correspond a particular dependence of x on time t. Since under a given force F there may be infinitely many motions, the differential equation (2) will have an infinite set of solutions.

Every differential equation defines, in general, a whole class of functions that satisfy it. The basic problem of the theory is to investigate the functions that satisfy the differential equation. The theory of these equations must enable us to form a sufficiently broad notion of the properties of all functions satisfying the equation, a requirement which is particularly important in applying these equations to the natural sciences. Moreover, our theory must guarantee the means of finding numerical values of the functions, if these are needed in the course of a computation; we will speak later about how these numerical values may be found.

If the unknown function depends on a single argument, the differential equation is called an ordinary differential equation. If the unknown function depends on several arguments and the equation contains derivatives with respect to some or all of these arguments, the differential equation is called a partial differential equation. The first three of the equations in (1) are ordinary and the last three are partial.
The theory of partial differential equations has many peculiar features, as it describes more complex ST systems, which make it essentially different from the theory of ordinary differential equations.

In applications, most of the ‘dominant’ functions represent physical quantities as perceived in a given simultaneous structure of space, which will often be an Œ-function (the description of a super organism, or the Tiƒ linguistic mapping of it).

Meanwhile the derivatives represent their rates of change, which are therefore temporal functions; usually NOT the whole world cycle of the system, but an ∆-1, minimal action. This makes this type of differential equation, according to the spatial bias of physics in particular and human thought in general, more concerned with the changes in the spatial parameters and pov of the system, which the differential equation defines as a temporal relationship between two ‘fixed images’ of the spatial system, at two different moments in time of its world cycle.

In as much as those equations try to be specific to a certain case, they will have to include more parameters to ‘define’ which of the three possible paths of the future the equation will take: a repetitive hyperbolic present, a parabolic entropic path, or an elliptic FORM.

In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.

The great importance of differential equations thus is due chiefly to the fact that the investigation of many problems in physics and technology may be reduced to the solution of such equations, in as much as they reflect well the 2 fundamental elements of reality: its ∆-scales of finitesimals and wholes, and the S≈T symmetries of each plane.

Now the mathematical elements of analysis are all well known and standard. Leibniz started them with the symbol ∫, which denotes a summation.

Ordinary differential equations

An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term “ordinary” is used in contrast with the term partial differential equation which may be with respect to more than one independent variable.

They are basically analyses of single ‘steps’/symmetries of S≈T systems, but ODEs can go ‘deeper’ into the spatial or temporal structure of the system by establishing multiple derivatives on the original parameter; thus they are perfect systems to ‘study’ the ternary dimensions of ‘integral’ space (1D distance, 2D area and 3D volume) and ‘derivative’ time (steady motion, acceleration and deceleration).

And as the symmetries between those 3 dimensions of space and time are not clearly understood, ∆st can bring some insights to their analysis.

To notice finally that the best use of mathematical equations and their operations is for the simplest actions of motion, as reproductions of information in its 3 states/varieties (potentials, waves and particles); for complex social and reproductive processes very few internal characteristics can be extracted with mathematical tools.

And yet even in those simple cases, exact solutions are not always possible, regardless of the dogmatic myths of mathematical accuracy. This happens, as usual, because humans measure ‘lineal distances’ and reality is curved; so we approximate lineal quanta/finitesimals and then add them to find the whole curved state, making use of one of the 3 ‘primary Galilean dualities’: between continuity and discontinuity, linearity and cyclicality, large and small.

So what are the key elements for finding ‘solutions’, that is, descriptions of the full T.œ, its state and simpler actions of 1D-motion/reproduction in space, and topological ≤≥ change from lineal to cyclical form? Basically, to have enough data about the ‘boundary conditions’ of the vital energy open ball (that is, a parameter for the singularity if it exists, and for the membrane that encloses the system). As both are 1D, 2D, hence lineal forms of the type A+Bx, it is then possible to measure and find determined solutions.

Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
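That contrast can be checked numerically in a few lines (a sketch; the pendulum-style force −sin x is an assumed example of a nonlinear term): for the linear equation x″ = −x the sum of two solutions is again a solution, while for x″ = −sin x it is not.

```python
import numpy as np

def final_x(f, x0, v0, dt=1e-3, n=5000):
    """Crude explicit Euler integration of x'' = f(x); enough for a comparison."""
    x, v = x0, v0
    for _ in range(n):
        x, v = x + dt * v, v + dt * f(x)
    return x

lin = lambda x: -x           # linear restoring force
non = lambda x: -np.sin(x)   # pendulum-style nonlinear force

# Linear case: the solution from summed initial data equals the summed solutions.
print(abs(final_x(lin, 1, 0) + final_x(lin, 0, 1) - final_x(lin, 1, 1)))  # ~0

# Nonlinear case: the same test fails, so no superposition of solutions.
print(abs(final_x(non, 1, 0) + final_x(non, 0, 1) - final_x(non, 1, 1)))  # clearly nonzero
```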

ODEs are thus symmetric to simple Space-time steps, which correspond to the 3 simplex actions of 1D, 2D and some possible 4D simple entropy deaths and some simple 3D reproductive steps (3D, however, when combining space and time parameters, and most combined steps of several dimensions and 5D worlds, will require PDEs).

Let us return to the example given above: a material particle of mass m moving along the axis Ox under a force F(x, dx/dt, t). At every instant of the motion, the laws of mechanics give the equation:

(2)   m d²x/dt² = F(x, dx/dt, t)

Here we find the first key ‘second derivative’, for the dimension of time acceleration, which requires a first insight into the nature of physical systems and their dimensions in space vs. time.

In space, the dimensions appear to us as an easy hierarchical system of growth: 1D lines (2D if we consider them waves of Non-E fractal points, with a 0-1 unit circle dimension for each point), 2D areas and 3D volumes. But in time, the 3 arrows depart from 1D steady state motion and can be considered as opposite directions: volumes of space grow through the scattering arrow of entropy, diminishing speed, vs. the acceleration of speed that diminishes space as the system collapses into a singularity:

So the 3rd dimension of classic space, ‘volume’, actually belongs to the entropic arrow of decelerating time that creates space-volume, vs. the opposite arrow of imploding time vortices that diminishes space-volume and increases speed, Vo x Ro = k.

So what seems in space a natural growth of volume in space, in time has a different order:

Entropic ≈decelerating volume < steady state ≈ distance-motion > Informative, cyclical area ≈ accelerated motion.

This different ‘order’ of dimensions, when perceived in simultaneous space and cyclical time, is the main dislocation in the way the mind perceives both (which is sorely painful when we consider the order of a world cycle, always starting in the ∆-1 scale of maximal information, declining as it grows and reproduces into less perfect, more entropic volumes of iterative forms that finally decay and die in the arrow of entropy; which the mind, with its SPATIAL-VOLUME INCLINED nature of ever-growth, does not understand).


The theory of differential equations began to develop at the end of the 17th century, almost simultaneously with the appearance of the differential and integral calculus. At the present time, differential equations have become a powerful tool in the investigation of natural phenomena. In mechanics, astronomy, physics, and technology they have been the means of immense progress. From his study of the differential equations of the motion of heavenly bodies, Newton deduced the laws of planetary motion discovered empirically by Kepler. In 1846 Leverrier predicted the existence of the planet Neptune and determined its position in the sky on the basis of a numerical analysis of the same equations.


SOME CLASSIC EXAMPLES

Example 1. The law of decay of radium says that the rate of decay is proportional to the initial amount of radium present. Suppose we know that at a certain time t = t0 we had R0 grams of radium. We want to know the amount of radium present at any subsequent time t.

Let R(t) be the amount of undecayed radium at time t. The rate of decay is given by the value of – (dR/dt). Since this is proportional to R, we have:

(3)   −dR/dt = kR, where k is a constant. In order to solve our problem, it is necessary to determine a function from the differential equation. For this purpose we note that the function inverse to R(t) satisfies the equation: −dt/dR = 1/(kR), since dt/dR = 1/(dR/dt). From the integral calculus it is known that this equation is satisfied by any function of the form:

(4)   t = −(1/k) ln R + C,

where C is an arbitrary constant. From this relation we determine R as a function of t. We have:
(5)   R = C1 e^(−kt)

From the whole set of solutions (5) of equation (3) we must select the one which for t = t0 has the value R0. This solution is obtained by setting C1 = R0 e^(kt0).
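A short numerical sketch of this (with assumed illustrative values k = 0.1, t0 = 0, R0 = 1 gram): the selected solution R = R0 e^(−k(t − t0)) is checked against a direct step-by-step integration of equation (3).

```python
import numpy as np

k, t0, R0 = 0.1, 0.0, 1.0            # assumed illustrative constants

def R_exact(t):
    # From (5) with C1 = R0*e^(k*t0):  R = R0 * exp(-k*(t - t0))
    return R0 * np.exp(-k * (t - t0))

# Direct Euler integration of (3), -dR/dt = kR, from the initial condition:
dt, R, t = 1e-4, R0, t0
while t < 10.0:
    R += dt * (-k * R)
    t += dt
print(R, R_exact(10.0))              # both ~ 0.3679 grams of undecayed radium
```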

From the mathematical point of view, equation (3) is the statement of a very simple law for the change with time of the function R; it says that the rate of decrease – (dR/dt) of the function is proportional to the value of the function R itself. Such a law for the rate of change of a function is satisfied not only by the phenomena of radioactive decay but also by many other physical phenomena.

We find exactly the same law for the rate of change of a function, for example, in the study of the cooling of a body, where the rate of decrease in the amount of heat in the body is proportional to the difference between the temperature of the body and the temperature of the surrounding medium, and the same law occurs in many other physical processes. Thus the range of application of equation (3) is vastly wider than the particular problem of the radioactive decay from which we obtained the equation.

Example 2. Let a material point of a mass m be moving along the horizontal axis Ox in a resisting medium, for example in a liquid or a gas, under the influence of the elastic force of two springs, acting under Hooke’s law (figure 1), which states that the elastic force acts toward the position of equilibrium and is proportional to the deviation from the equilibrium position. Let the equilibrium position occur at the point x = 0. Then the elastic force is equal to –bx where b > 0.

[Figure 1: a material point on the axis Ox between two springs.]

We will assume that the resistance of the medium is proportional to the velocity of motion, i.e., equal to –a(dx/dt), where a > 0 and the minus sign indicates that the resisting medium acts against the motion. Such an assumption about the resistance of the medium is confirmed by experiment.

From Newton’s basic law that the product of the mass of a material point and its acceleration is equal to the sum of the forces acting on it, we have:

(6)   m d²x/dt² = −bx − a(dx/dt)

Thus the function x(t), which describes the position of the moving point at any instant of time t, satisfies the differential equation (6). We will investigate the solutions of this equation in one of the later sections.

If, in addition to the forces mentioned, the material point is acted upon by still another force F from outside the system, then the equation of motion takes the form:

(6′)   m d²x/dt² = −bx − a(dx/dt) + F
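A minimal simulation sketch of equation (6′), with assumed unit constants m = 1, b = 1, damping a = 0.2 and no outside force: the successive maxima of x(t) shrink steadily toward the equilibrium x = 0, as the resisting medium drains the oscillation.

```python
m, a, b = 1.0, 0.2, 1.0           # assumed illustrative constants
F_ext = lambda t: 0.0             # the outside force F of (6'); zero here

x, v, t, dt = 1.0, 0.0, 0.0, 1e-3
peaks, prev_v = [], 0.0
while t < 40.0:
    acc = (-b * x - a * v + F_ext(t)) / m   # equation (6')
    x, v, t = x + dt * v, v + dt * acc, t + dt
    if prev_v > 0.0 >= v:                   # velocity changes sign: a maximum of x
        peaks.append(round(x, 4))
    prev_v = v
print(peaks[:4])   # successive maxima decay by a roughly constant ratio
```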

Example 3. A mathematical pendulum is a material point of mass m, suspended on a string whose length will be denoted by l. We will assume that at all stages the pendulum stays in one plane, the plane of the drawing (figure 2). The force tending to restore the pendulum to the vertical position OA is the force of gravity mg, acting on the material point. The position of the pendulum at any time t is given by the angle ϕ by which it differs from the vertical OA. We take the positive direction of ϕ to be counterclockwise. The arc AA′ = lϕ is the distance moved by the material point from the position of equilibrium A. The velocity of motion ν will be directed along the tangent to the circle and will have the following numerical value:

v = l (dϕ/dt)

[Figure 2: the mathematical pendulum.]

To establish the equation of motion, we decompose the force of gravity mg into two components Q and P, the first of which is directed along the radius OA′ and the second along the tangent to the circle. The component Q cannot affect the numerical value of the rate ν, since clearly it is balanced by the resistance of the suspension OA′. Only the component P can affect the value of the velocity ν. This component always acts toward the equilibrium position A, i.e., toward a decrease in ϕ, if the angle ϕ is positive, and toward an increase in ϕ, if ϕ is negative. The numerical value of P is equal to –mg sin ϕ, so that the equation of motion of the pendulum is:

m (dv/dt) = −mg sin ϕ,  or:  (7)   d²ϕ/dt² = −(g/l) sin ϕ

It is interesting to note that the solutions of this equation cannot be expressed by a finite combination of elementary functions. The set of elementary functions is too small to give an exact description of even such a simple physical process as the oscillation of a mathematical pendulum. Later we will see that the differential equations that are solvable by elementary functions are not very numerous, so that it very frequently happens that investigation of a differential equation encountered in physics or mechanics leads us to introduce new classes of functions, to subject them to investigation, and thus to widen our arsenal of functions that may be used for the solution of applied problems.

Let us now restrict ourselves to small oscillations of the pendulum, for which, with small error, we may assume that the arc AA′ is equal to its projection x on the horizontal axis Ox and sin ϕ is equal to ϕ. Then ϕ ≈ sin ϕ = x/l and the equation of motion of the pendulum takes on the simpler form:

(8)   d²x/dt² = −(g/l) x

Later we will see that this equation is solvable by trigonometric functions and that by using them we may describe with sufficient exactness the “small oscillations” of a pendulum.
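A sketch of how good the replacement is (assuming l = 1 m, g = 9.81): integrating the full equation (7) and its linearized form (8) and comparing the periods at a small and at a large amplitude.

```python
import numpy as np

g, l = 9.81, 1.0    # assumed: a pendulum of length 1 meter

def quarter_period(phi0, linear=False, dt=1e-5):
    """Time for the pendulum, released at rest at angle phi0, to first
    reach phi = 0; by symmetry this is a quarter of the full period."""
    phi, v, t = phi0, 0.0, 0.0
    while phi > 0.0:
        acc = -(g / l) * (phi if linear else np.sin(phi))
        v += dt * acc          # semi-implicit Euler: stable for oscillations
        phi += dt * v
        t += dt
    return t

for phi0 in (0.1, 1.0):        # a small and a large amplitude, in radians
    T_full = 4 * quarter_period(phi0)
    T_small = 4 * quarter_period(phi0, linear=True)   # -> 2*pi*sqrt(l/g)
    print(phi0, T_full, T_small)
# At 0.1 rad the two periods agree to ~0.1%; at 1 rad they differ by ~7%.
```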

[Figure 3: Helmholtz’ acoustic resonator: vessel V with a cylindrical neck F.]

Example 4. Helmholtz’ acoustic resonator consists of an air-filled vessel V, the volume of which is equal to ν, with a cylindrical neck F. Approximately, we may consider the air in the neck of the container as a cork of mass m = ρls, where ρ is the density of the air, s is the area of the cross section of the neck, and l is its length. If we assume that this mass of air is displaced from a position of equilibrium by an amount x, then the pressure of the air in the container with volume ν is changed from the initial value p by some amount which we will call Δp.

(9)   m = ρls

We will assume that the pressure p and the volume ν satisfy the adiabatic law pν^k = C. Then, neglecting magnitudes of higher order, we have

Δp/p = −k (Δν/ν)

and

(10)   Δp = −(kp/ν) Δν

(In our case, Δν = sx.) The equation of motion of the mass of air in the neck may be written as:

(11)   m d²x/dt² = Δp · s

Here Δp · s is the force exerted by the gas within the container on the column of air in the neck. From (10) and (11) we get

ρls (d²x/dt²) = −(kps²/ν) x,   i.e.   (12)   d²x/dt² = −(kps/ρlν) x

where ρ, p, ν, l, k, and s are constants.
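Equation (12) is again simple harmonic motion, d²x/dt² = −ω²x with ω² = kps/(ρlν), so the resonator ‘sings’ at the natural frequency f = ω/2π. A sketch with assumed order-of-magnitude values for a small bottle of air (none of the numbers come from the text):

```python
import numpy as np

# Assumed illustrative values in SI units:
k, p, rho = 1.4, 1.013e5, 1.2     # adiabatic index, air pressure, air density
s, l, v = 3e-4, 0.05, 1e-3        # neck cross-section, neck length, vessel volume

omega = np.sqrt(k * p * s / (rho * l * v))   # from (12): x'' = -(kps/(rho*l*v)) x
print(omega / (2 * np.pi), "Hz")             # ~134 Hz here: an audible hum
```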

[Figure 4: the oscillator circuit of Example 5, with condenser C, coil L and resistance R.]

Example 5. An equation of the form (6) also arises in the study of electric oscillations in a simple oscillator circuit. The circuit diagram is given in the figure. Here on the left we have a condenser of capacity C, in series with a coil of inductance L and a resistance R. At some instant let the condenser have a voltage across its terminals. In the absence of inductance in the circuit, the current would flow until the terminals of the condenser were at the same potential. The presence of an inductance alters the situation, since the circuit will now generate electric oscillations.

To find a law for these oscillations, we denote by ν(t), or simply by ν, the voltage across the condenser at the instant t, by I(t) the current at the instant t, and by R the resistance. From well-known laws of physics, I(t)R remains constantly equal to the total electromotive force, which is the sum of the voltage across the condenser and the inductance −L(dI/dt). Thus:

(13)   IR = −ν − L(dI/dt)

We denote by Q(t) the charge on the condenser at time t. Then the current in the circuit will, at each instant, be equal to dQ/dt. The potential difference ν(t) across the condenser is equal to Q(t)/C. Thus I = dQ/dt = C(dν/dt) and equation (13) may be transformed into:

(14)   CL (d²ν/dt²) + CR (dν/dt) + ν = 0
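Equation (14) is linear with constant coefficients, so its behaviour is fixed by the roots of the characteristic equation CLλ² + CRλ + 1 = 0: complex roots mean decaying oscillations, real roots a dead beat. A sketch with assumed component values:

```python
import numpy as np

C, L = 1e-6, 1e-2                 # assumed: 1 microfarad, 10 millihenry
for R in (10.0, 500.0):
    roots = np.roots([C * L, C * R, 1.0])   # CL*lam^2 + CR*lam + 1 = 0
    print(R, roots)
# R = 10 ohm: complex-conjugate roots -> oscillation near 1.6 kHz, slowly damped.
# R = 500 ohm: two real negative roots -> the voltage dies away without oscillating.
```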

[Figure 5: circuit diagram of the electron-tube generator.]

Example 6. The circuit diagram of an electron-tube generator of electromagnetic oscillations is shown in figure 5. The oscillator circuit, consisting of a capacitance C, a resistance R and an inductance L, represents the basic oscillatory system.

The coil L′ and the tube shown in the center of figure 5 form a so-called “feedback.” They connect a source of energy, namely the battery B, with the L-R-C circuit; K is the cathode of the tube, A the plate, and S the grid. In such an L-R-C circuit “self-oscillations” will arise. For any actual system in an oscillatory state the energy is transformed into heat or is dissipated in some other form to the surrounding bodies, so that to maintain a stationary state of oscillation it is necessary to have an outside source of energy. Self-oscillations differ from other oscillatory processes in that to maintain a stationary oscillatory state of the system the outside source does not have to be periodic. A self-oscillatory system is constructed in such a way that a constant source of energy, in our case the battery B, will maintain a stationary oscillatory state. Examples of self-oscillatory systems are a clock, an electric bell, a string and bow moved by the hand of the musician, the human voice, and so forth.


The current I(t) in the oscillatory L-R-C circuit satisfies the equation:

(15)   IR = −ν − L(dI/dt) + M(dIa/dt)

Here ν = ν(t) is the voltage across the condenser at the instant t, Ia(t) is the plate current through the coil L′; M is the coupling coefficient between the coils L and L′. In comparison with equation (13), equation (15) contains the extra term M(dIa/dt).

We will assume that the plate current Ia(t) depends only on the voltage between the grid S and the cathode of the tube (i.e., we will neglect the reactance of the anode), so that this voltage is equal to the voltage ν(t) across the condenser C. The character of the functional dependence of Ia on ν is given in the figure. The curve as sketched is usually taken to be a cubical parabola, and we write an approximate equation for it as: Ia = αν − βν³, with constants α, β > 0.

Substituting this into the right side of equation (15), and using the fact that I = C(dν/dt), we get for ν the equation:

(16)   CL (d²ν/dt²) + (CR − Mα + 3Mβν²) (dν/dt) + ν = 0
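With the cubic characteristic assumed above, equation (16) is of van der Pol type, the standard model of self-oscillation: whatever (nonzero) voltage the circuit starts from, it settles onto one and the same limit cycle. A numerical sketch, all constants chosen purely for illustration:

```python
# Assumed illustrative constants, scaled so that CL = 1:
CL, CR, Ma, Mb = 1.0, 0.1, 0.5, 0.5     # Ma, Mb stand for M*alpha, M*beta

def settled_amplitude(v0, dt=1e-3, t_end=200.0):
    """Integrate (16) and return the largest |v| over the final stretch."""
    v, w, t, amp = v0, 0.0, 0.0, 0.0    # w = dv/dt
    while t < t_end:
        dw = -((CR - Ma + 3 * Mb * v * v) * w + v) / CL   # equation (16)
        v, w, t = v + dt * w, w + dt * dw, t + dt
        if t > t_end - 20.0:
            amp = max(amp, abs(v))
    return amp

for v0 in (0.01, 0.5, 3.0):
    print(v0, settled_amplitude(v0))
# All three starting voltages end on the same self-sustained amplitude:
# the feedback pumps small oscillations up and damps large ones down.
```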

In the examples considered, the search for certain physical quantities characteristic of a given physical process is reduced to the search for solutions of ordinary differential equations.

Let us now consider the ∫∂ operations for the different dimensions of reality, starting in this case with the simplest cyclical clock-motions, which, as they do NOT move in space but repeat their form in time, are in fact not operated on by ∫∂ measures of change:

1D: cyclical clocks, angular momentum

In the graph, in the simplest physical systems 1D is merely the angular momentum of its cyclical clocks of time, maximised in the membrane that encloses the system. Strictly speaking it does not change, but becomes the ‘present function’ of a repetitive frequency clock without a derivative of change, as the time-space steps seem not to vary. When we introduce a torque, change happens, called ‘acceleration’, the second dimension of time motion in physics, which we shall later study when analysing Newton’s laws in 5D with the Galilean Px. Here we just briefly explain why, in lineal time, as humans only use t to measure change, 1D is the invariant one and its derivative is zero.

What about ‘higher’, more complex, cyclical and scalar Dimensions? The answer is that as we change the form of the dimensions, we have to change the operandi we use; and specifically when we study the Dimensions of change, which are the ones differential/integral equations quantify, those equations, as mirrors of reality, MUST ADAPT to the FORM of the dimensions of space-time they describe, not the other way around.

So as 1D is A STEADY STATE ROTARY MOTION, strictly speaking it does NOT change in space-time locomotion (which is what humans, with their lineal single time, express in derivatives). Hence basically the derivative of those angular momentums is zero. It is conserved.

Let us recall briefly those classic definitions and maths:

Angular momentum is a vector that represents the product of a body’s rotational inertia and rotational velocity about a particular axis. In the simple case of revolution of a particle in a circle about a center of rotation, the particle remaining always in the same plane and having always the same distance from the center, we discard the vector nature of angular momentum, and treat it as a scalar proportional to moment of inertia, I and angular speed, ω:

L = Iω:   Angular momentum = moment of inertia × angular velocity, and its time derivative is

dL/dt = (dI/dt)ω + I(dω/dt); since the moment of inertia is constant, dI/dt = 0, so dL/dt = I(dω/dt), which reduces to dL/dt = Iα.

Therefore angular momentum is constant, dL/dt = 0, when no torque is applied (α = 0). And this is the essence of its conservation law, a specific case of the conservation of the 5 Dimensions of space-time of the Universe:

‘In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque. Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant’.
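A one-line worked check of that law, the classic spinning-skater example with assumed numbers: when the moment of inertia drops and no torque acts, the angular velocity must rise so that L = Iω stays fixed.

```python
I1, w1 = 4.0, 2.0      # assumed: arms out, moment of inertia 4, spinning at 2 rad/s
L = I1 * w1            # angular momentum; conserved, since no torque acts
I2 = 1.0               # arms pulled in: the moment of inertia shrinks
w2 = L / I2            # = 8.0 rad/s: four times faster, same L = 8
print(L, w2)
```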

But when a torque is applied in a single present plane, or, much more relevant to our inquiry, when a system is submitted to the organising or disorganising entropic force of a higher or lower plane of existence, and acceleration exists, a vortex of time-space happens and we enter into the social dimensions of evolution – the 5th Dimension of the mind.

Social Number = the first dimension that defines regular ‘points’ which are indistinguishable, as societies in regular polygons, where prime polygons have the property of ‘increasing inwards’ their numbers through reproduction of vortex-points (n-grams), as the graph shows, studied in Theory of Numbers. So a number in its geometric interpretation is a ‘cyclical point’ of regular ‘unit-points’ of growing ‘inner dimensional density’: a point with a volume of vital energy and information, a fractal point.

The bottom line, though, of this brief analysis of a system with a single Dimension of time-space, a fractal point emerging through a parameter visible to the ∆+1 observer, notably angular momentum as in quantum physics (h), is that it IS A CONSTANT, not a differentiable parameter, for which a first step in space-time, SS, ST, TS or TT, is needed. Then we are in a 2D system, normally a TS motion, or ‘speed’, in which the quanta of space ‘moves≈reproduces’ in a time trajectory, which allows us to measure the change in one parameter, normally the spatial location, and accordingly ‘derive’ the ‘ratio’ or ‘inverse product’ between them.

2D: LINEAL SPACE-TIME

Let us consider one example of each dual dimension, $t, the two samples mentioned, speed and area, which were the first 2 themes solved historically, with the classic notation, to keep the historical approach, and to see how the methods can be used equally for quanta=frequency=steps of time, or quanta=populations=finitesimals of space:

Speed and acceleration: 2D TT

The next possible steps or motions in space-time are given by a dual time-time motion, which is acceleration, or a similar dual motion in space, which is volume. As such, those 2D motions have diametrically opposite consequences: shrinking a system in time, towards a mind zero point (TT->5D), or expanding it in space towards an extension of free, entropic ∆-1 elements (SS->4D).

But they can be used in combined forms to extract the same equations of speed, density and momentum.  Let us put the TT example:

The velocity of a point for which the distance s is a given function of the time s = f(t) is equal to the derivative of this function: v = s’ = ƒ ‘ (t).

So as it was established experimentally by Galileo, the distance s covered in the time t by a body falling freely in a vacuum is expressed in terms of TT-acceleration by the formula:   s=gt²/2

Where g is a constant that measures the acceleration on Earth, equal to 9.81 m/sec².

What is the velocity of the falling body at each point in its path?

Here, as we are already in a TT variation, we must do exactly the inverse operation to that of searching for speed departing from space:

Let the body be passing through the point A at the time t and consider what happens in the short interval of time of length Δt; that is, in the time from t to t + Δt. The distance covered will be increased by a certain increment Δs. The original distance is s1 = gt²/2.

From the increased distance we find the increment: Δs = g(t + Δt)²/2 − gt²/2 = gtΔt + g(Δt)²/2

This represents the distance covered in the time from t to t + Δt. To find the average velocity over the section of the path Δs, we divide Δs by Δt: υav = Δs/Δt = gt + (g/2)Δt

Letting Δt approach zero, we obtain an average velocity which approaches as close as we like to the true velocity at the point A. On the other hand, we see that the second summand on the right-hand side of the equation becomes vanishingly small with decreasing Δt, so that the average υav approaches the value gt, a fact which it is convenient to write as follows: v = lim (Δt→0) Δs/Δt = gt

Consequently, gt is the true velocity at the time t, and so we can consider gt as yet another expression of the Sð equation of speed in cyclical time, where now t is a ‘step’ and g the measure of its ‘feeding’ on gravitational space.
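The limit can also be watched numerically (a sketch; g and t are example values): as Δt shrinks, the average velocity Δs/Δt closes in on gt, and the leftover is exactly the vanishing summand (g/2)Δt.

```python
g, t = 9.81, 2.0                     # example values
s = lambda u: g * u * u / 2.0        # distance fallen: s = g*t^2/2

for dt in (1.0, 0.1, 0.01, 0.001):
    v_avg = (s(t + dt) - s(t)) / dt
    print(dt, v_avg, v_avg - g * t)  # the error is (g/2)*dt, shrinking linearly
# v_avg -> g*t = 19.62 as dt -> 0
```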

Let us make the following remark. The velocity of a nonuniform motion at a given time is a purely physical concept, arising from practical experience. Mankind arrived at it as the result of numerous observations on different concrete motions.

The study of nonuniform motion of a body on different parts of its path, the comparison of different motions of this sort taking place simultaneously, and in particular the study of the phenomena of collisions of bodies, all represented an accumulation of practical experience that led to the setting up of the physical concept of the velocity of a nonuniform motion at a given time. But the exact definition of velocity necessarily depended upon the method of defining its numerical value, and to define this value was possible only with the concept of the derivative.

In mechanics the velocity of a body moving according to the rule s = f(t) at the time t is defined as the derivative of the function f(t) for this value of t.

But now, as a result of our analysis, we have reached an exact definition of the value of the velocity at a given moment: namely, the finitesimal, minimal action of a given time motion. This result is extremely important from a practical point of view, since our empirical knowledge of velocity has been greatly enriched by the fact that we can now make an exact definition for the 5 different motions of time, greatly expanding our understanding, in terms of analysis, of those motions and their variations.

And we have used the method departing from time-quanta, frequency of steps, ‘speed motion’…

And that’s good enough, keeping Δt->0 without reaching 0, as we shall always find a limiting ‘time unit’…

In the extreme of those limits (c-speed) the limit will be found on the gravitational field from where light extracts motion.

As it happens, that limit can be in ∆-4, according to the decametric 10¹º⁻¹¹ scales between ∆-planes and the 5D metric, which accelerates clocks in smaller quanta: 4 planes down × 10¹º⁻¹¹ time units between scales… faster in frequency, hence around ±10⁴⁴ times faster clocks/smaller bits of time.

So the gravitational infinitesimal is truly ∆t->0 and hence irrelevant. (Incidentally, physics discovered this value for the minimal clock of Nature, without knowing its scalar planes, social quanta and 5D metric, by sheer chance. It was Planck, and he called it the time of ‘God’ (:

How did he do it? With the Universal constants; a fascinating theme treated further down this text. In any case he was close to the truth, as I always considered those numbers a solid quantitative proof, along many other elements, of the theory of grand numbers of ∆ST theory.

The theoretical importance of Tp in our argument over which type of speed we treat, continuous, S/t, or discontinuous, λ(s) × ƒ(t), now becomes clear. As it is the absolute finitesimal of all the planes and scales in which time happens among human observers (regardless of possible ∆±|≥4| planes beyond human observance), and in as much as those are the minimal scales in which gravitational fields might exercise their forces over our atomic substance, continuity happens and makes finitesimal/integral calculus possible, because by all means this is indistinguishable from our pov/scale.

3D: POPULATIONS

Now when we get into 3D, which are combinations of 1D + 2D, the vibrations of different S-T combinations multiply our possibilities.

If there is only one type of 1D, the fractal point or the invisible distance with no form, 2D gives us 4 fundamental variations, SS, TT and ST, TS. Now with 3D we can combine the 2D and 1D varieties to give us the following variations:

1D + the four 2D forms, 2D + the four 1D forms, and, if they are not commutative, as seems to be the case, the inverse cases, for 20 combinations of 3D populations.

It would then take a whole encyclopaedia to explain all the practical cases in which variations of 3D integrals and derivatives can be used to explain different vibrations of Sts motions of all kinds.

ADDING A NEW DIMENSION OF ‘WIDTH-ENERGY’ INTENSITY

Once this concept is fully understood, we then need to deal with ‘finitesimal quantities’, either in time or in space, as the previous argument on ‘changes of speeds and frequencies of time motion steps’, ∆s/∆t, can be reversed to study changes of volumes of space and populations of simultaneous space-beings. And so we apply all those concepts to the analysis of 2D populations. Let us put an example and resolve it in terms of space-quanta (method of limits) and in terms of its change with differentials.

Quanta of space.

Now a spatial use of the limit concept to calculate not a time but a space volume, forebear of differential calculus: 

[Figure: a reservoir with a square base of side a, filled with water to height h.]

Example 2. A reservoir with a square base of side a and vertical walls of height h is full to the top with water (figure 1). With what force is the water acting on one of the walls of the reservoir?

We divide the surface of the wall into n horizontal strips of height h/n. The pressure exerted at each point of the vessel is equal, by a well-known law, to the weight of the column of water lying above it. So at the lower edges of the strips the pressure, expressed in suitable units, will be equal respectively to: h/n, 2h/n, 3h/n, ···, nh/n = h

We obtain an approximate expression for the desired force P if we assume that the pressure is constant over each strip. Thus the approximate value of P is equal to: P ≈ (h/n)(a·h/n) + (2h/n)(a·h/n) + ··· + (nh/n)(a·h/n) = a(h/n)²(1 + 2 + ··· + n) = (ah²/2)(1 + 1/n)

To find the true value of the force, we divide the side into narrower and narrower strips, increasing n without limit. With increasing n the magnitude 1/n in the above formula will become smaller and smaller and in the limit we obtain the exact formula:

P = ah²/2
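The convergence, and the (1 + 1/n) error factor discussed next, can be watched directly in a short sketch (a and h set to 1 for illustration):

```python
a, h = 1.0, 1.0    # illustrative wall dimensions

def force(n):
    """Sum the pressure over n horizontal strips of height h/n,
    taking on each strip the pressure at its lower edge."""
    strip_area = a * h / n
    return sum((i * h / n) * strip_area for i in range(1, n + 1))

for n in (10, 100, 10_000):
    print(n, force(n), (a * h * h / 2) * (1 + 1 / n))   # the two columns match
# both columns -> a*h^2/2 = 0.5 as the strips become finitesimal
```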

Leibniz rightly considered 1/n the ‘finitesimal unit’, whereas we consider 1 the whole and 1/n its minimal fraction, usually 1 of its 10¹º elements (1/10¹º): the standard value of finitesimal units.

In the example again the finitesimal limit is extremely small. How much? We should consider statistical mechanics to find it is the size of molecules of water, which form bidimensional layers of liquid to shape the 3D volume, and are indeed about 10¹º times smaller than the whole.

Now the error ε is so small that P = (ah²/2)(1 + 1/n) ≈ (ah²/2) × 1.0000000001.

And this is a general rule in most cases: the finitesimal error is as small as 1/n, where n is the quanta of the scale. So when we do ∆+1 calculations, as in most cases, it is irrelevant. But theoretically it is important, and in fact it will give us a ‘realist’ concept for the uncertainty principle of Heisenberg.

Hence it is indeed unnoticeable, truly minute, but not an absolute infinitesimal, and certainly never proved by the axiomatic method, as maths must be experimentally proved to avoid inflationary errors; and as always in Γst (I should write GST as ∆st or Γst, the proper acronym, but I am lazy with WordPress 🙂 we do not accept a mathematical result without experimental proof (for me the fundamental use of mathematical physics), following Lobachevski, Gödel and Einstein’s dictums.

The idea of the method of limits is thus simple and accurate, and amounts to the following. In order to determine the exact value of a certain magnitude, we first determine not the magnitude itself but some approximation to it. However, we make not one approximation but a whole series of them, each more accurate than the last. Then, from examination of this chain of approximations, that is, from examination of the process of approximation itself, we uniquely determine the exact value of the magnitude, by ignoring the finitesimal error.

The same practical problem can be resolved with the differential used as an approximate value for the increment in the function. For example, suppose we have the problem of determining the volume of the walls of a similar closed cubical box whose interior dimensions are 10 × 10 × 10 cm and the thickness of whose walls is 0.05 cm. If great accuracy is not required, we may argue as follows. The volume of all the walls of the box represents the increment Δy of the function y = x³ for x = 10 and Δx = 0.1. So we find approximately: Δy ≈ dy = 3x²Δx = 3 · 10² · 0.1 = 30 cm³.
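Completing the arithmetic as a sketch: the differential dy = 3x²Δx estimates the wall volume, and the exact increment Δy = (x + Δx)³ − x³ shows how small the neglected higher-order part is.

```python
x, dx = 10.0, 0.1                  # interior edge 10 cm; two walls of 0.05 cm add 0.1 cm
dy = 3 * x**2 * dx                 # differential estimate: 30.0 cm^3
dy_exact = (x + dx)**3 - x**3      # exact increment: 30.301 cm^3
print(dy, dy_exact, dy_exact - dy) # the neglected part is ~0.301 cm^3, about 1%
```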

DIFFERENTIALS – ANY ST-DIMENSIONAL STEPS

The disquisition about which ‘minimalist finitesimal’ allows us to differentiate an S≈T algebraic symmetry brings us to the ‘praxis’ of calculus techniques that overcome by ‘approximations’ the quest for the finitesimal quanta in space or time susceptible of calculus manipulation. This gave birth to the praxis of finding differentials, which are the minimal F(Y) quanta to work with to obtain accurate results (hence normally a spatial finitesimal of change under a time-dependent function). This was the origin of the calculus of differentials.

As always in praxis, the concept is based on the duality between huminds that measure with fixed rulers, lineal steps, over a cyclical, moving Universe. So Minds measure Aristotelian, short lines, in a long, curved Universe.

So the question comes down to which minimalist lineal step of a mind is worthy to make an accurate calculus of those long curved Universal paths.

It is then obvious that the derivative of a lineal motion has more subtle elements than its simplest algebraic form, the x ÷ lineal operation of ‘reproductive speed’; and so the concept of a differential, to measure the difference between steady state lineal reproduction and the variations observed in a curve, appeared as the strongest tool of approximation of both types of functions.

As we have considered, most differential equations will be of the form F(s) ≈ g(t), where s and t are any of the 5 Dimensions of Space ($, S, §, ∫, •) or 5 Dimensions of time (t, T, ð, ∂, O), whose change with respect to each other we are bound to study. As they show how a spatial whole is dependent on the change and form of a world cycle, we shall consider generally that y->s and x->t.

The result of this change will be a much more GENERIC CONCEPT OF SPEED OF CHANGE in any OF THE DIMENSIONS OF ENTROPY, MOTION, ITERATION, INFORMATION OR FORM that defines the Universe, letting us introduce its 3 fundamental parameters, s/t = speed, t/s = density and s × t = momentum/force/energy, in a natural way, with their multiple different meanings, Ðisomorphic to each other, as we repeat the s and t of the general Γ generator.

The differential of a function.

Let us then consider a function  S = ƒ(t) that has a derivative. The increment of this function: ∆s = ƒ (t+∆t) – ƒ(t) corresponding to the increment Δt, has the property that the ratio Δs/Δt, as Δt → 0, approaches a finite limit, equal to the derivative:

∆s/∆t->ƒ'(t)

This fact may be written as an equality:

∆s/∆t = ƒ'(t) + a

where the value of a depends on Δt in such a way that as Δt → 0, a also approaches zero; since in ∆st the minimal step of any entity always has a lineal form.

Thus the increment of a function may be represented in the form:

∆s=ƒ'(t)∆t + a∆t

where a → 0, if Δt → 0.
The first summand on the right side of this equality depends on Δt in a very simple way, namely it is proportional to Δt. It is called the differential of the function at the point t, corresponding to the given increment Δt, and is denoted by:

ds=ƒ'(t)∆t

The second summand has the characteristic property that, as Δt → 0, it approaches zero more rapidly than Δt, as a result of the presence of the factor a.

It is therefore said to be an infinitesimal of higher order than Δt and, in case f′(t) ≠ 0, it is also of higher order than the first summand.

By this we mean that for sufficiently small Δt the second summand is small in itself and its ratio to Δt is also arbitrarily small.

In the graph, practical stience only needs to measure a differential, either in space, dy = BD, or in time, as a fraction of the unit world cycle, ƒ(x) = cos²x + sin²x = 1, which becomes a minimal lineal st-ep or action, ƒ(t) = S step.

In the graph, decomposition of ΔS into two summands: the first (the principal part) depends linearly on ΔT, and the second is negligible for small ΔT. The segment BC = ΔS, where BC = BD + DC, BD = tan β · ΔT = f′(t)Δt = dS, and DC is an infinitesimal of higher order than Δt.

For symmetry in the notation it is customary to denote the increment of the independent variable by dx, in our case dt, and to call it also a differential. With this notation the differential of the function  is:

ds= ƒ'(t) dt

Then the derivative is the ratio, f′(t) = ds/dt of the differential of the function, normally a ‘whole spatial view’ to the differential of the independent variable, normally a temporal step or minimal change-motion in time.
The differential of a function originated historically in the concept of an “indivisible”, similar to our concept of a finitesimal and thus much more appropriate for ∆st than the abstraction of an infinitesimal with ∆t->0, since time is discrete and there is always a minimal step of change, or reproductive step, in a motion of reproduction of information.

Rightly then the indivisible, and later the differential of a function, were represented as actual infinitesimals, as something in the nature of an extremely small constant magnitude, which however was not zero.

According to this definition the differential is a finite magnitude, measurable in space for each increment Δt and proportional to Δt. The other fundamental property of the differential is that it can ONLY be recognized in motion, so to speak: if we consider an increment Δt which is approaching its finitesimal limit, then the difference between ds and Δs will be arbitrarily small even in comparison with Δt, till it becomes zero. The error of interpretation in classic calculus is that it is THE DIFFERENCE that approaches 0, as finally the function will also be lineal; not Δt, which will become a ‘quanta’, as quantum physicists would later discover.

As this is the ‘real’ model, the substitution of the differential in place of small increments of the function forms the basis of most of the REAL applications of the so-called ‘infinitesimal analysis’ to the study of nature.

Continuity.  The concept of the differential makes it possible to consider a different, ‘finite’ view of continuity: the region in which the function DOES actually have a meaningful differential, meaning the region where ∆t truly comes to zero, instead of provoking a huge gap, making DC very large, towards a hyperbolic form:

In those ‘verges’ of the Plane or the T.œ (the singularity center opening to 5D, the still mind, and the membrane, opening to the 4D entropic world), continuity breaks, because the change in ∆S is huge for small increments of ∆t (time age discontinuity) in the simplest obvious case of 1D analysis; or, if we are measuring a different type of dimensional change, for example that of topological form, we find a ‘change’ of state, or form, or region of the being: a topological tearing and transformation.

Continuity therefore is not always quantitative, but also topological, qualitative.

Differentials of calculus are practical infinitesimals, and their knowledge for any function acts as an ∂st limit.

On the other hand, for any region that we can take as a vital space-time, there is a middle point.

The mean value theorem.

The differential expresses the approximate value of the increment of the function in terms of the increment of the independent variable and of the derivative at the initial point. So for the increment from x = a to x = b, we have:

ƒ(b) – ƒ(a)≈ ƒ'(a) (b-a).

It is possible to obtain an exact equation of this sort if we replace the derivative f′(a) at the initial point by the derivative at some intermediate point, suitably chosen in the interval (a, b). More precisely: if y = f(x) is a function which is differentiable on the interval [a, b], then there exists a point ξ, strictly within this interval, such that the following exact equality holds:

ƒ(b)-ƒ(a)=ƒ'(ξ)(b-a)

The geometric interpretation of this “mean-value theorem” (also called Lagrange’s formula or the finite-difference formula) is extraordinarily simple. Let A, B be the points on the graph of the function f(x) which correspond to x = a and x = b, and let us join A and B by the chord AB.

Now let us move the straight line AB, keeping it constantly parallel to itself, up or down. At the moment when this straight line cuts the graph for the last time, it will be tangent to the graph at a certain point C. At this point (let the corresponding abscissa be x = ξ), the tangent line will form the same angle of inclination α as the chord AB. But for the chord we have:

tan α = (ƒ(b) − ƒ(a)) / (b − a).    On the other hand, at the point C:  tan α = ƒ′(ξ).

This equation is the mean-value theorem, which has the peculiar feature that the point ξ appearing in it is unknown to us; we know only that it lies somewhere in the interval (a, b).
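Though the theorem leaves ξ unknown, for a concrete function it is easy to locate numerically. A sketch with f = sin on an assumed interval [0, 2], bisecting on f′(x) minus the chord slope:

```python
import math

f, df = math.sin, math.cos
a, b = 0.0, 2.0                        # assumed interval
slope = (f(b) - f(a)) / (b - a)        # slope of the chord AB

g = lambda x: df(x) - slope            # xi is a zero of g
lo, hi = a, b                          # here g(a) > 0 > g(b), so bisection works
for _ in range(60):
    mid = (lo + hi) / 2.0
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
xi = (lo + hi) / 2.0
print(xi, df(xi), slope)               # f'(xi) equals the chord slope
```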

Its interpretation in ∆st is that ƒ'(ξ) corresponds to the value of a finitesimal lying between both.

FIRST, the fact that membranes must determine the beginning and end points of any function for it to be meaningful and solvable. And indeed, only because we know where the domain starts and ends are we sure to find a mean point.

If we consider then a T.œ mean value theorem, where ƒ(b) > ƒ(a) if we are deriving in space, where f(b) = Max. S represents the parameter of the membrane, ƒ(a) will represent the singularity, and so we shall find in between a finitesimal part of the vital energy of the T.œ, with a mean value between that of Max. S (membrane) and Min. S (singularity). And vice versa, if we are deriving in search of the minimal quanta of time, ƒ(a) > ƒ(b), where ƒ(a) represents the time speed of the singularity and ƒ(b) the time speed of the membrane. And the mean value will be that of the infinitesimal.

But in spite of this indeterminacy, the formula has great theoretical significance and is part of the proof of many theorems in analysis.

The immediate practical importance of this formula is also very great, since it enables us to estimate the increase in a function when we know the limits between which its derivative can vary. For example:

|sin b – sin a| = |cos  ξ| (b-a) ≤ b-a.

Here a, b and ξ are angles, expressed in radian measure; ξ is some value between a and b; ξ itself is unknown, but we know that |cos  ξ |≤1

Another immediate expression of the theorem, which allows us to derive a general method for calculating the limits and approximations of polynomials with derivatives, is the following: for arbitrary functions ϕ(x) and ψ(x) differentiable in the interval [a, b], provided only that ψ′(x) ≠ 0 in (a, b), the equation

(ϕ(b) − ϕ(a)) / (ψ(b) − ψ(a)) = ϕ′(ξ) / ψ′(ξ)

holds, where ξ is some point in the interval (a, b).

From the mean value theorem it is also clear then that a function whose derivative is everywhere equal to zero must be a constant; at no part of the interval can it receive an increment different from zero. Analogously, it is easy to prove that a function whose derivative is everywhere positive must everywhere increase, and if its derivative is negative, the function must decrease.

And so the classic mean-value theorem allows us to introduce an essential element of ∫∂ which will open up the ∆st calculus of worldcycles of existence: the standing points of a function.

The mean value, set for the region between the limiting points of the curve (which must be taken in a higher step-timespace as two sections of a bipodal spherical line, part of the membrane of a 3D form), gives us then a value for the vital energy, to be expressed with a scalar. And the initial and final points of the segment become the maximal and minimal values of the function.

It is then, between those two limits, a question of finding the points of the vital energy, among them the singularity, Max. S x t, to have a well-defined T.œ in its membrane (maximal and minimal values), volume of energy (mean value) and maximal point of the Singularity.

Maxima and minima. The 3 standing points of a world cycle. 

The minimal reality is a 3D² form seen in a single plane, with a singularity @-mind, a membrane and a vital energy within. When we make a holographic broken image of this reality, the simplest way to do it is in four cartesian regions, TT, ST, ts and ss, which correspond to the +1 +1, +1 -1, -1 +1 and -1 -1 quadrants of the plane.

We can then dissect the sphere in antipodal points related to the identity neutral number of the 0-1 sphere of time probabilities, which the largest whole maximises in its antipodal points. If we consider as the antipodal points the emergent point and the final death point, which in imperfect motions are still close to zero-sums, the maximal middle point will be the singularity, Max. S x Max. t.

One of the simplest and most important applications of the derivative in that sense is in the theory of maxima and minima.

Let us suppose that on a certain interval a≤t≤b we are given a function S = f(t) which is not only continuous but also has a derivative at every point. Our ability to calculate the derivative enables us to form a clear picture of the graph of the function. On an interval on which the derivative is always positive the tangent to the graph will be directed upward. On such an interval the function will increase; that is, to a greater value of t will correspond a greater value of f(t). On the other hand, on an interval where the derivative is always negative, the function will decrease; the graph will run downward.

We have drawn the graph of a ∆st function of the general form, S (any dimension of a whole world cycle or T.Œ) = f(T), any time motion or action.

It is defined on the interval between a minimal quanta in space or time (t1) and its limit as a function (d).

And it can represent any S=T duality, or more complex 5Ds=5Dt forms, or simpler ones. We can also exchange the s and t coordinates according to the Galilean paradox, etc. Hence the ginormous number of applications; but essentially it will define a process of change in space-time between the emergence of the phenomenon at st1 AND ITS DEATH, mostly by scattering and entropic dissolution of form, at d.

And in most cases it will have a bell-curved form: fast growth after emergence, in its first age of maximal motion (youth, 1D), till a maximal point, Max. S x Max. T, where it often will reproduce into a discontinuous parallel form (not shown in the graph), which will provoke its loss of energy and start its diminution till its extinction at point d.

Thus the best way to express quantitatively in terms of S-T parameters (mostly information and energy), for any world cycle of any time-space super organism is a curve where we can find those key standing points in which a change of age, st-ate or motion happens. 

Of special interest thus are the points of this graph whose abscissas are t0, t1, t2, t3 and t4.
At the point t0 the function f(t) is said to have a local maximum; by this we mean that at this point f(t) is greater than at neighboring points; more precisely, for every t in a certain interval around the point t0.
A local minimum is defined analogously. For our function a local maximum occurs at the points t0 and t3, and a local minimum at the point t1.
At every maximum or minimum point, if it is inside the interval [a, b], i.e., if it does not coincide with one of the end points a or b, the derivative must be equal to zero.
This last statement, a very important one, follows immediately from the definition of the derivative as the limit of the ratio ΔS/ΔT. In fact, if we move a short distance from the maximum point, then ∆S≤0.

Thus for positive ΔT the ratio ΔS/ΔT is nonpositive, and for negative ΔT the ratio ΔS/ΔT is nonnegative. The limit of this ratio, which exists by hypothesis, can therefore be neither positive nor negative and there remains only the possibility that it is zero.

By inspection of the diagram it is seen that this means that at maximum or minimum points (it is customary to leave out the word “local,” although it is understood) the tangent to the graph is horizontal.

In the figure we should remark that at the points t2 and t4 the tangent is also horizontal, just as it is at the points t0, t1 and t3, although at t2 and t4 the function has neither maximum nor minimum. In general, there may be more points at which the derivative of the function is equal to zero (stationary points) than there are maximum or minimum points.
Determination of the greatest and least values of a function.

In numerous technical questions it is necessary to find the point t at which a given function f(t) attains its greatest or its least value on a given interval.
In case we are interested in the greatest value, we must find t0 on the interval [a, b] for which, among all t on [a, b], the inequality ƒ(t0) ≥ ƒ(t) is fulfilled.
But now the fundamental question arises, whether in general there exists such a point. By the methods of modern analysis it is possible to prove the following existence theorem:

If the function f(t) is continuous on a finite interval, then there exists at least one point on the interval for which the function attains its maximum (minimum) value on the interval [a, b].
From what has been said already, it follows that these maximum or minimum points must be sought among the “stationary” points. This fact is the basis for the following well-known method for finding maxima and minima.
First we find the derivative of, f(t) and then solve the equation obtained by setting it equal to zero.

If t1, t2, ···, tn are the roots of this equation, we then compare the numbers f(t1), f(t2), ···, f(tn) with one another. Of course, it is necessary to take into account that the maximum or minimum of the function may be found not within the interval but at an end (as is the case with the minimum in the figure), or at a point where the function has no derivative.

Thus to the points t1, t2, ···, tn, we must add the ends a and b of the interval and also those points, if they exist, at which there is no derivative. It only remains to compare the values of the function at all these points and to choose among them the greatest or the least.
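As a worked sketch of that recipe (assuming SymPy is available; the cubic and the interval are illustrative choices, not taken from the text):

```python
import sympy as sp

t = sp.symbols('t')
f = t**3 - 3*t**2 + 1          # illustrative function on the interval [a, b]
a, b = 0, 4

# stationary points: real roots of f'(t) = 0 lying inside [a, b]
stationary = [r for r in sp.solve(sp.diff(f, t), t) if r.is_real and a <= r <= b]

# compare f at the stationary points and at the two end points
candidates = stationary + [a, b]
values = {c: f.subs(t, c) for c in candidates}
print(max(values, key=values.get), '-> greatest value', max(values.values()))
print(min(values, key=values.get), '-> least value', min(values.values()))
```

For this cubic the greatest value sits at the end point b = 4, which illustrates why the end points must always join the comparison.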

With respect to the stated existence theorem, it is important to add that this theorem ceases, in general, to hold in case the function f(t) is continuous only on the open interval (a, b); that is, on the set of points t satisfying the inequalities a < t < b.

It is then necessary to consider an initial time point and a final time point, birth and death, emergence and extinction to have a determined solution.

 Derivatives of higher orders.

We have just seen how, for closer study of the graph of a function, we must examine the changes in its derivative f′(x). This derivative is a function of x, so that we may in turn find its derivative.
The derivative of the derivative is called the second derivative and is denoted by y″ = ƒ″(x).

Analogously, we may calculate the third derivative y‴ = ƒ‴(x) and more generally the nth derivative or, as it is also called, the derivative of nth order. But as there are not more than 3 ‘similar derivatives with meaning’ in time (speed, acceleration, jerk) or space (distance, area and volume), beyond the 3rd derivative the use of derivatives is only as an approximation to polynomial equations, whose solvability by radicals is itself not possible beyond the 4th degree.

So it must be kept in mind that, for a certain value of x (or even for all values of x) this sequence may break off at the derivative of some order, say the kth; it may happen that f⁽ᵏ⁾(x) exists but not f⁽ᵏ⁺¹⁾(x). Derivatives of arbitrary order are therefore connected to the symmetry between power laws and ∫∂ operations in the 4th and inverse 5th Dimension, through the Taylor formula. For the moment we confine ourselves to the second and third derivatives as ‘real parameters’ of the 3 space volumes and time accelerations.

The second derivative has then, as we have seen, a simple significance in mechanics. Let s = f(t) be a law of motion along a straight line; then s′ is the velocity and s″ is the “velocity of the change in the velocity,” or more simply the “acceleration,” of the point at time t. For example, for a body falling under the force of gravity, s = gt²/2, so that s′ = gt and s″ = g. That is, the acceleration of falling bodies is constant.
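This can be checked symbolically in two lines (a minimal sketch using SymPy; g is kept as a constant symbol):

```python
import sympy as sp

t, g = sp.symbols('t g')
s = g * t**2 / 2               # law of motion of a freely falling body
print(sp.diff(s, t))           # s'  = g*t : the velocity
print(sp.diff(s, t, 2))        # s'' = g   : the acceleration, a constant
```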

Significance of the second derivative; convexity and concavity.

The second derivative also has a simple geometric meaning. Just as the sign of the first derivative determines whether the function is increasing or decreasing, so the sign of the second derivative determines the side toward which the graph of the function will be curved.

Suppose, for example, that on a given interval the second derivative is everywhere positive. Then the first derivative increases, and therefore f′(x) = tan α increases, and the angle α of inclination of the tangent line itself increases (figure 17). Thus as we move along the curve it keeps turning constantly to the same side, namely upward, and is thus, as they say, “convex downward.”

On the other hand, in a part of a curve where the second derivative is constantly negative, the graph of the function is “convex upward.”

Criteria for maxima and minima; study of the graphs of curves.

If throughout the whole interval over which x varies the curve is convex upward and if at a certain point x0 of this interval the derivative is equal to zero, then at this point the function necessarily attains its maximum; and its minimum in the case of convexity downward. This simple consideration often allows us, after finding a point at which the derivative is equal to zero, to decide thereupon whether at this point the function has a local maximum or minimum.
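In code, the second-derivative criterion reads as follows (a sketch assuming SymPy; the sample cubic is an arbitrary choice with one maximum and one minimum):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                          # sample function

for x0 in sp.solve(sp.diff(f, x), x):   # stationary points: -1 and 1
    curvature = sp.diff(f, x, 2).subs(x, x0)
    kind = ('local maximum' if curvature < 0
            else 'local minimum' if curvature > 0
            else 'undecided: higher derivatives needed')
    print(x0, kind)                     # -1: local maximum, 1: local minimum
```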

Now, the apparently equal nature, on a first derivative, of the minimal and maximal points of a being has also deep philosophical implications, as it makes indistinguishable at ‘first sight’ the processes of ‘reproductive expansion’ towards a maximal point and of explosive decay into death, the two reversal points of the 5D (maximal) and 4D (minimal) states of a cycle of existence; for which we have to make a second assessment (second derivative) to know if we are at the point of maximal life (5D) or maximal death (4D) of a world cycle.

And further on, to know if the cycle will cease in a continuous flat encephalogram or will restart a new upwards trend.

Or in other words: is any scalar e>cc>m big-bang both the death and the birth of matter?

Finitesimal Quanta, as the limit of populations in space and the minimal action in time.

So behind the duality between the concepts of limits and differentials (Newton’s vs. Leibniz’s approach) there is the concept of a minimal quanta in space or in time, which has hardly been explored by classic mathematics in its experimental meaning but will be the key to understand ‘Planckton’ (H-Planck constants) and its role in the vital physics of atomic scales.

It is then essential to the workings of the Universe to fully grasp the relationship between scales and analysis: both in the down direction of derivatives and the up direction of integrals, and in its parallelism with polynomials, which raise the dimensional scales of a system in a different, ‘more lineal, social, inter-planar way’.

So polynomials are to limits what algebra is to calculus: space to time, and lineal algebra to curved geometries.

The vital interpretation, though, of that amazing growth of polynomials is far more scary.

Power laws, by the very fact of ‘being lineal’ and maximising the growth of a function, ARE NOT REAL in the positive sense of infinite growth, a fantasy only taken seriously by our economists of greed and infinite usury debt interest… where the eˣ exponential function first appeared.

The fact is that in reality such exponentials only portray the decay and destruction of a mass of cellular/atomic beings ALREADY created by the much smaller processes of ‘re=product-ion’, which is the second dimension, mostly operated with multiplication (of scalars or anti-commutative cross vectors).

So the third ‘dimension of operandi’ is a backwards motion – a lineal motion into death: only as the reversal of the growth of sums and multiplications do the properties of polynomials make sense.

Let us then see how the operations mimic the five dimensions, beyond the simplest ST, SS and TT steps, namely the reproductive and the inverted 4D-5D arrows.

In general we can establish as the main parameter of the singularity its time frequency, which will be synchronised to the rotary motion or angular momentum of the cyclical membrane. They will appear as the initial and boundary conditions of a derivative/integral function, which often will be able to define the values of the vital energy within, as the law of superposition should work between the 3 elements, such that:

1D (singularity) + 2D (Holographic principle) = 3D (vital energy).

In practice this means the ‘synchronicity in time of the clocks of the 3 parts of the being’ and the superposition of the solutions that belong to each of the 3 elements of any T.œ.

4th dimension: Entropy: S∂ polynomial death dimension of decay.

POLYNOMIALS DO NOT EVOLVE REALITY towards an impossible infinite growth. THEY ARE the inverse decay process of exponential extinction, e⁻ˣ.

5th dimension: ∫T…

This is understood better by observing that the inverse function does in fact model growth in the different models of biology and physics, limited by a carrying capacity, a straight flat line:

[Figure: growth curves flattening against a carrying-capacity line.]

The logarithmic function has as derivative an infinitesimal, 1/x, which makes it interesting, as it models better the curve of growth from 0 to 1 in the emergent, fast, explosive ∆-1 seed state, while the inverse e⁻ˣ models the decay death process.
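A minimal numerical sketch of those two regimes (assuming NumPy; the logistic curve with carrying capacity K is the standard growth model alluded to above, and e⁻ᵗ is the decay process):

```python
import numpy as np

K, r = 1.0, 1.5                            # carrying capacity and growth rate (illustrative)
t = np.linspace(0, 10, 200)

growth = K / (1 + np.exp(-r * (t - 3)))    # logistic growth: fast rise, then flat at K
decay = np.exp(-t)                         # exponential decay toward extinction

print(round(growth[-1], 3), round(decay[-1], 5))   # ~1.0 (saturated) vs ~0.00005
```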

Integrals and derivatives, which have a much slower growth than polynomials, on the other hand model much better, as they integrate the ‘indivisible’ finitesimal quanta of a system, its organic growth and ‘wholeness’ integrated in space.

Thus integrals do move up a social growth into new ∆+1 5D planes. And their graphs are a curved geometry, which takes each lineal step (differential) upwards; but as it creates a new whole, part of its energy growth sinks and curves to give birth to the mind-singularity @, the wholeness that warps the whole and converts that energy into still, shrunk mind-mappings of information, often within the 3D particle-head.

We will retake the analysis of the more complex st-eps on 3, 4 and 5D, since most of the complex processes related to the 3rd dimension, as a mixture of S and T inner scales, will require more complex double or triple derivatives and integrals – only the 4D decay entropic explosion can be satisfied as the decay of the single ∆-1 finitesimal with a single variable.

Let us now move to the inverse 5D function:

5D ∫∆-1 INTEGRALS

5D: VORTEX OF INFORMATION: ∫T@. The culmination of the process of dimensional growth so far is the state of absolute stillness of the mind. It is the integral function of wholes, made of finitesimal 4D parts. And so as we integrate them we reconstruct the being in i=ts-elf:

All that we have said changes though in 5D, where a force is exerted by a 5D SINGULARITY at the center of the vortex. How does the Galilean paradox observe this ‘change’? By establishing a second dimension of ‘dynamic time motion’, which we call acceleration, or inversely by establishing a new dimension of space as the angular momentum decelerates, creating a volume of space from a present flat space-time sheet. And as both imply change in ST volume, we find again relevant derivatives (finitesimal steps either in time frequency or space volume) and integrals (to bring the whole spatial static view of the phenomena) to calculate that change.

Galileo discovered something essential to ∆ST: Relativity of motion, which is also a distance:

The state of rest and motionlessness is unknown in nature, but a construct of the mind (@-view). The whole of nature, from the smallest particles up to the most massive bodies, is in a state of eternal creation and annihilation, in a perpetual flux, in unceasing motion and change. In the final analysis, every natural science studies some aspect of this temporal motion vs. spatial form duality. This is the ST question of analysis.

Next Newton and Leibniz studied the ∆-question of analysis: how small instants of time or pieces of space gather into larger wholes and vice versa, how to extract the finitesimal quanta from the larger whole.

Both questions put together gave birth to analysis. Hence the classic textbook definition:

“∆st Mathematical analysis is that branch of mathematics that provides methods for the quantitative investigation of various processes of change, motion, and dependence of one magnitude on another. ”

The name “infinitesimal analysis” says nothing about the subject matter under discussion but emphasizes the ∆-method.

We are dealing here with the special mathematical method of infinitesimals, or in its modern form, of limits.

The error of CLASSIC science is to consider that there are NO LIMITS to infinitesimals. Yet ∆-scales introduce a limit in the minimal quanta, or single frequency in space or time, of the parameter studied. So we shall talk of finitesimals and quanta, frequencies and bits≈minimal time cycles.

This was far more evident in the beginning, through the calculus of limits and Leibniz’s concept of an infinitesimal as the inverse, 1/n, of a quantity, before the axiomatic method stubbornly decided to go ‘inflationary’ with the language of information (as all languages do in their 3rd age) and talk about bizarre infinities (Cantor).

In fact, analysis in its theory is just an inflationary extension (a classic error of all languages, from money to fiction words) of the method of limits.

Integrals

Thus integrals tend to represent the growth of a space population, till it reaches a wholeness in a closed domain. So we can do ‘line integrals’, ‘surface integrals’ and ‘volume integrals’ in simultaneous space.

Integrals though are also related to a world cycle, specially the motion of time closest to the action of reproduction in space, as nature is constantly building integrated wholes by the accumulation of single time actions of reproduction, which become ‘clone’ cells-atoms-citizens of an integrated supœrganism.

It is precisely in those more complex games of integration of a ‘flow’ of time actions of reproduction, and of ‘constraints’ on time actions by an integral line membrane, where we find the more subtle use of both functions. Let us consider it in more detail.

But they can also portray the growth or diminution of populations of space; and then, as space is symmetric, we can use inverse functions, normally related to ‘e’.

Growth and decay are unlike: when a system decreases, the space is dying; when it grows, it does so more slowly. So we also find different speeds on the two time arrows of space through the 5th dimension.

Space is symmetric in its directions, and they co-exist together. Time is not symmetric, and it is experienced as a sequential pattern of single time cycles. So time parameters are shorter in form; space is a more extended system. Of time we see only an instant; of space we integrate instants/cycles of time and sum them as frequencies, which all play the same world cycle.

Time though is often just the reproduction of a new unit of space. Thus time cycles become populations of a spatial herd, due to the reproduction of a ‘seed’ form.

Space thus is the ‘mirror reproductive symmetry’ of ‘frequencies in time’, its tail of memories, by reproduction, expansion, and radiation along the path of the singular timeline of the wave.

So in broad strokes, derivatives and integrals cover a wide range of 5D themes: the infinitesimal units of time frequencies and the complex herds of space populations.

Whereas given the simultaneity properties of space, integrals tend to be used to calculate space populations, and given the individual sequential structure of time frequencies, derivatives are most often used to calculate time motions.

HOLOGRAPHIC INTEGRALS: Area.

The area of a function has different meanings, but generally speaking it is a measure of its vital energy between the singularity at the 0 point, or initial condition, and its membrane, the function itself, between the limits of the domain. So it is an operation constantly performed by membrane and singularity, as initial point and boundary condition, by ‘ex-foliating’ the vital space in layers and adding it up piece by piece, finitesimal by finitesimal.

The way mathematics treats integrals deals mostly with the obsession with perfect measure, achieved by reducing the sections of the being to minimal infinitesimals. We have discussed the limits and irrelevance of such an approach. It is more interesting to discuss the different types of time-like, space-like and combined space-time dimensional functions integrated through this procedure.

And to consider the question of the ‘boundary conditions’, in which the membrane determines the volume which is integrated as the space-time area surrounded by the being.

Let us suppose that a curve above the x-axis forms the graph of the function y = f(x). We attempt to find the area S of the segment bounded by the line y = f(x), by the x-axis and by the straight lines drawn through the points x = a and x = b parallel to the y-axis.

To solve this problem we proceed as follows. We divide the interval [a, b] into n parts, not necessarily equal. We denote the length of the first part by Δx1, of the second by Δx2, and so forth up to the final part Δxn. In each segment we choose points ξ1, ξ2, ···, ξn and set up the sum:

Sn = ƒ(ξ1)Δx1 + ƒ(ξ2)Δx2 + ··· + ƒ(ξn)Δxn.

The magnitude Sn is obviously equal to the sum of the areas of the rectangles shaded in the figure.


The finer we make the subdivision of the segment [a, b], the closer Sn will be to the area S. If we carry out a sequence of such constructions, dividing the interval [a, b] into successively smaller and smaller parts, then the sums Sn will approach S.

The possibility of dividing [a, b] into unequal parts makes it necessary for us to define what we mean by “successively smaller” subdivisions. We assume not only that n increases beyond all bounds but also that the length of the greatest Δxi in the nth subdivision approaches zero. Thus the calculation of the desired area has in this way been reduced to finding the limit:

S = lim Sn = lim Σ ƒ(ξi)Δxi, the limit being taken as max Δxi → 0.

We note that when we first set up the problem, we had only an empirical idea of what we mean by the area of our curvilinear figure, but we had no precise definition. But now we have obtained an exact definition of the concept of area: it is the limit:

S = lim Σ ƒ(ξi)Δxi (max Δxi → 0).

We now have not only an intuitive notion of area but also a mathematical definition, on the basis of which we can calculate the area numerically.
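A brief numerical sketch of that limit (assuming NumPy; the parabola y = x² on [0, 1], whose exact area is 1/3, is an arbitrary test case):

```python
import numpy as np

f = lambda x: x**2
a, b = 0.0, 1.0

for n in (10, 100, 1000):
    dx = (b - a) / n
    xi = a + dx * np.arange(n)       # left end point of each sub-interval
    S_n = np.sum(f(xi) * dx)         # sum of the areas of the rectangles
    print(n, S_n)                    # 0.285, 0.32835, 0.33283...: approaches 1/3
```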

 

We have assumed that ƒ(x) ≥ 0. If f(x) changes sign, then, as in the figure, the limit will give us the algebraic sum of the areas of the segments lying between the curve y = f(x) and the x-axis, where the segments above the x-axis are taken with a plus sign and those below with a minus sign.

Definite integral.

The need to calculate the integral Sum limit arises in many other problems in which a new dimension is reached by the sum of finitesimal paths. For example, suppose that a point is moving along a straight line with variable velocity v = f(t). How are we to determine the distance s covered by the point in the time from t = a to t = b?

Let us assume that the function f(t) is continuous; that is, in small intervals of time the velocity changes only slightly. We divide the interval [a, b] into n parts, of length Δt1, Δt2, ···, Δtn. To calculate an approximate value for the distance covered in each interval Δti, we will suppose that the velocity in this period of time is constant, equal throughout to its actual value at some intermediate point ξi. The whole distance covered will then be expressed approximately by the sum:

sn = Σ ƒ(ξi)Δti,

and the exact value of the distance s covered in the time from a to b, will be the limit of such sums for finer and finer subdivisions; that is, it will be the limit:

s = lim Σ ƒ(ξi)Δti (max Δti → 0).

It would be easy to give many examples of practical problems leading to the calculation of such a limit. We will discuss some of them later, but for the moment the examples already given will sufficiently indicate the importance of this idea. The limit is called the definite integral of the function f(x) taken over the interval [a, b], and it is denoted by

∫ₐᵇ ƒ(x) dx.

The expression f(x)dx is called the integrand, a and b are the limits of integration; a is the lower limit, b is the upper limit.

The connection between differential and integral calculus.

The problem considered then reduces to calculation of the definite integral:

∫ gt dt, taken over the interval [0, T].

Another example is the problem of finding the area bounded by the parabola y = x².

Here the problem reduces to calculation of the integral:

∫ x² dx, taken over the interval [0, a].

We were able to calculate both these integrals directly, because we have simple formulas for the sum of the first n natural numbers and for the sum of their squares. But for an arbitrary function f(x), we are far from being able to add up the sum  (that is, to express the result in a simple formula) if the points ξi, and the increments Δxi are given to suit some particular problem. Moreover, even when such a summation is possible, there is no general method for carrying it out; various methods, each of a quite special character, must be used in the various cases.

So we are confronted by the problem of finding a general method for the calculation of definite integrals. Historically this question interested mathematicians for a long period of time, since there were many practical aspects involved in a general method for finding the area of curvilinear figures, the volume of bodies bounded by a curved surface, and so forth.

We have already noted that Archimedes was able to calculate the area of a segment and of certain other figures. The number of special problems that could be solved, involving areas, volumes, centers of gravity of solids, and so forth, gradually increased, but progress in finding a general method was at first extremely slow. The general method could not be discovered until sufficient theoretical and computational material had been accumulated through the demands of practical life.

The work of gathering and generalizing this material proceeded very gradually until the end of the Middle Ages; and its subsequent energetic development was a direct consequence of the rapid growth in the productive powers of Europe resulting from the breakup of the former (feudal) methods of manufacturing and the creation of new ones (capitalistic).

The accumulation of facts connected with definite integrals proceeded alongside of the corresponding investigations of problems related to the derivative of a function. The reader already knows that this immense preparatory labor was crowned with success in the 17th century by the work of Newton and Leibnitz. It is in this sense that Newton and Leibnitz are the creators of the differential and integral calculus.

One of the fundamental contributions of Newton and Leibnitz consists of the fact that they finally cleared up the profound connection between differential and integral calculus, which provides us, in particular, with a general method of calculating definite integrals for an extremely wide class of functions.

To explain this connection, we turn to an example from mechanics.

We suppose that a material point is moving along a straight line with velocity v = f(t), where t is the time. We already know that the distance σ covered by our point in the time between t = t1 and t = t2 is given by the definite integral:

σ = ∫ ƒ(t) dt, taken over the interval [t1, t2].

Now let us assume that the law of motion of the point is known to us; that is, we know the function s = F(t) expressing the dependence on the time t of the distance s calculated from some initial point A on the straight line. The distance σ covered in the interval of time [t1, t2] is obviously equal to the difference: σ = F(t2) − F(t1).

In this way we are led by physical considerations to the equality:

F(t2) − F(t1) = ∫ ƒ(t) dt, taken over [t1, t2],

which expresses the connection between the law of motion of our point and its velocity.

From a mathematical point of view the function F(t) may be defined as a function whose derivative for all values of t in the given interval is equal to f(t), that is:

F'(t)= ƒ(t).    Such a function is called a primitive for f(t).

We must keep in mind that if the function f(t) has at least one primitive, then along with this one it will have an infinite number of others; for if F(t) is a primitive for f(t), then F(t) + C, where C is an arbitrary constant, is also a primitive. Moreover, in this way we exhaust the whole set of primitives for f(t), since if F1(t) and F2(t) are primitives for the same function f(t), then their difference ϕ(t) = F1(t) − F2(t) has a derivative ϕ′(t) that is equal to zero at every point in the given interval, so that ϕ(t) is a constant.

From a physical point of view the various values of the constant C determine laws of motion which differ from one another only in the fact that they correspond to all possible choices for the initial point of the motion.

We are thus led to the result that for an extremely wide class of functions f(x), including all cases where the function f(x) may be considered as the velocity of a point at the time x, we have the following equality:

∫ₐᵇ ƒ(x) dx = F(b) − F(a), where F(x) is an arbitrary primitive for f(x).

This equality is the famous formula of Newton and Leibnitz, which reduces the problem of calculating the definite integral of a function to finding a primitive for the function and in this way forms a link between the differential and the integral calculus.

Many particular problems that were studied by the greatest mathematicians are automatically solved by this formula, stating that the definite integral of the function. f(x) on the interval [a, b] is equal to the difference between the values of any primitive at the left and right ends of the interval. It is customary to write the difference (30) thus:
F(x) |ₐᵇ = F(b) − F(a).

Example 1. The equality (x³/3)′ = x² shows that the function x³/3 is a primitive for the function x². Thus, by the formula of Newton and Leibnitz:

∫ₐᵇ x² dx = b³/3 − a³/3.
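A short check of Example 1 (a sketch assuming SymPy; the limits a = 1, b = 2 are arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
a, b = 1, 2

direct = sp.integrate(x**2, (x, a, b))             # the definite integral itself
primitive = x**3 / 3                               # a primitive of x**2
via_formula = primitive.subs(x, b) - primitive.subs(x, a)
print(direct, via_formula)                         # both 7/3
```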

Example 2. Let c and c′ be two electric charges on a straight line at distance r from each other. The attraction F between them is directed along this straight line and is equal to:

F = a/r² (a = kcc′, where k is a constant). The work W done by this force, when the charge c remains fixed but c′ moves along the interval [R1, R2], may be calculated by dividing the interval [R1, R2] into parts Δri.

On each of these parts we may consider the force to be approximately constant, so that the work done on each part is approximately equal to (a/ri²)Δri. Making the parts smaller and smaller, we see that the work W is equal to the integral:
W = ∫ (a/r²) dr, taken over [R1, R2].

The value of this integral can be calculated at once, if we recall that:
(−a/r)′ = a/r².

So that:
W = ∫ (a/r²) dr over [R1, R2] = (−a/R2) − (−a/R1) = a (1/R1 − 1/R2).

In particular, the work done by the force F as the charge c′, initially at a distance R1 from c, moves out to infinity, is equal to:

W = a/R1.
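A numerical sanity check of this result (a sketch assuming SciPy; the constants a and R1 are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

a, R1 = 2.0, 0.5                             # arbitrary constants
W, err = quad(lambda r: a / r**2, R1, np.inf)
print(W, a / R1)                             # both 4.0: the work equals a/R1
```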

From the arguments given above for the formula of Newton and Leibnitz, it is clear that this formula gives mathematical expression to an actual tie existing in the objective world. It is a beautiful and important example of how mathematics gives expression to objective laws.

We should remark that in his mathematical investigations, Newton always took a physical point of view. His work on the foundations of differential and integral calculus cannot be separated from his work on the foundations of mechanics.

The concepts of mathematical analysis, such as the derivative or the integral, as they presented themselves to Newton and his contemporaries, had not yet completely “broken away” from their physical and geometric origins, such as velocity and area. In fact, they were half mathematical in character and half physical. The conditions existing at that time were not yet suitable for producing a purely mathematical definition of these concepts. Consequently, the investigator could handle them correctly in complicated situations only if he remained in close contact with the practical aspects of his problem even during the intermediate (mathematical) stages of his argument.

From this point of view the creative work of Newton was different in character from that of Leibnitz.* Newton was guided at all stages by a physical way of looking at the problem. But the investigations of Leibnitz do not have such an immediate connection with physics, a fact that in the absence of clear-cut mathematical definitions sometimes led him to mistaken conclusions. On the other hand, the most characteristic feature of the creative activity of Leibnitz was his striving for generality, his efforts to find the most general methods for the problems of mathematical analysis.

The greatest merit of Leibnitz was his creation of a mathematical symbolism expressing the essence of the matter. The notations for such fundamental concepts of mathematical analysis as the differential dx, the second differential d²x, the integral ∫y dx, and the derivative d/dx were proposed by Leibnitz. The fact that these notations are still used shows how well they were chosen.

One advantage of a well-chosen symbolism is that it makes our proofs and calculations shorter and easier; also, it sometimes protects us against mistaken conclusions. Leibnitz, who was well aware of this, paid especial attention in all his work to the choice of notation.

The evolution of the concepts of mathematical analysis (derivative, integral, and so forth) continued, of course, after Newton and Leibnitz and is still continuing in our day; but there is one stage in this evolution that should be mentioned especially. It took place at the beginning of the last century and is related particularly to the work of Cauchy.

Cauchy gave a clear-cut formal definition of the concept of a limit and used it as the basis for his definitions of continuity, derivative, differential, and integral. These definitions have been introduced at the corresponding places in the present chapter. They are widely used in present-day analysis.

The great importance of these achievements lies in the fact that it is now possible to operate in a purely formal way not only in arithmetic, algebra, and elementary geometry, but also in this new and very extensive branch of mathematics, in mathematical analysis, and to obtain correct results in so doing.

Regarding practical application of the results of mathematical analysis, it is now possible to say: If the original data are verified in the actual world, then the results of our mathematical arguments will also be verified there. If we are properly assured of the accuracy of the original data, then there is no need to make a practical check of the correctness of the mathematical results; it is sufficient to check only the correctness of the formal arguments.

This statement naturally requires the following limitation. In mathematical arguments the original data, which we take from the actual world, are true only up to a certain accuracy. This means that at every step of our mathematical argument the results obtained will contain certain errors, which may accumulate as the number of steps in the argument increases.*

Returning now to the definite integral, let us consider a question of fundamental importance. For what functions f(x), defined on the interval [a, b], is it possible to guarantee the existence of the definite integral ∫ₐᵇ ƒ(x) dx, namely a number to which the sum Sn = ƒ(ξ1)Δx1 + ƒ(ξ2)Δx2 + ··· + ƒ(ξn)Δxn tends as a limit as max Δxi → 0? It must be kept in view that this number is to be the same for all subdivisions of the interval [a, b] and all choices of the points ξi.

Functions for which the definite integral, namely the limit (29), exists are said to be integrable on the interval [a, b]. Investigations carried out in the last century show that all continuous functions are integrable.

But there are also discontinuous functions which are integrable. Among them, for example, are those functions which are bounded and either increasing or decreasing on the interval [a, b].

The function that is equal to zero at the rational points in [a, b] and equal to unity at the irrational points may serve as an example of a nonintegrable function, since for an arbitrary subdivision the integral sum Sn will be equal to zero or unity, depending on whether we choose the points ξi as rational numbers or irrational.
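A schematic illustration of why no common limit exists (floating-point numbers cannot truly be irrational, so here rationality is modelled by type, with exact Fractions standing for rational sample points; this is a picture of the argument, not a proof):

```python
from fractions import Fraction
import math

def dirichlet(x):
    # 0 at 'rational' sample points (exact Fractions), 1 otherwise
    return 0 if isinstance(x, Fraction) else 1

n = 1000
# rational sample points xi = i/n: every integral sum equals 0
s_rational = sum(dirichlet(Fraction(i, n)) * Fraction(1, n) for i in range(n))
# 'irrational' sample points xi = i/n + sqrt(2)/(10n): every integral sum equals 1
s_irrational = sum(dirichlet(i / n + math.sqrt(2) / (10 * n)) * (1 / n) for i in range(n))
print(s_rational, s_irrational)    # 0 and ~1.0: the sums never agree
```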

Let us note that in many cases the formula of Newton and Leibnitz provides an answer to the practical question of calculating a definite integral. But here arises the problem of finding a primitive for a given function; that is, of finding a function that has the given function for its derivative. We now proceed to discuss this problem. Let us note by the way that the problem of finding a primitive has great importance in other branches of mathematics also, particularly in the solution of differential equations.

As we stated before, integrals are mostly useful when we are studying a ‘defined’ full Spe<ST>Tiƒ system with a membrane or contour closing the surface, as integrals are more concerned with ‘space’ and derivatives with ‘time’. And further on, the most useful are those which integrate space-time systems: double and triple integrals.

Calculus studies all types of vital spaces, enclosed by time functions, with a ‘scalar’ point of view, the parameter that measures what the point of view extracts, in symbiosis with the membrane, from the vital space it encloses. Alas, this quantity absorbed and ab=used by the point of view on the vital space would be called ‘Energy’; the vital space, ‘field’; the membrane, ‘frequency’; the finitesimal, ‘quanta or Universal constant’; and the scalar point of view, ‘active magnitude’.

The fundamental language of physics is differential equations, which allow us to measure the content of vital space of a system. The richness and variety of ‘world species’ will define many variations on the theme. Sometimes there will not be a central point of view, and we talk of a ‘liquid state’, where volumes will not have a ‘gradient’, but ‘Pressure’, the controlling parameter of the time membrane, will be equal or related to the gradient of the external world p.o.v. of the Earth (gravitational field).

Then we shall integrate along 3 parameters: the density that defines the liquid, the height that defines the gradient, and the volume enclosed. Liquids, due to the simplicity of lacking an internal P.O.V., would be the first physical application of Leibniz’s findings, by his students, the Bernoulli family. Next a violin player would find the differential equation of waves – the essential equation of the membranes of present time of all systems. The 3rd type of equations, those of the central point of view, would have to wait for a mathematician, Poisson – later refined by Einstein in his General Relativity.

This is the error of Newton: all cycles are finite, as they close into themselves. All worldcycles of life and death are finite, as they end as they began, in the dissolution of death. All entropic motions stop. All time vortices, once they have absorbed all the entropy of their territory, become wrinkled and die. Newton died; his ‘time duration’ did not extend to infinity.

But those minds measure from their self-centered point of view only a part of the Universe, and the rest remains obscure. So all of them display the paradox of the ego, as they confuse the whole Universe with their world, and see themselves larger than all that they don’t perceive. Hence, as Descartes wittily warned the reader in his first sentences, ‘every human being thinks he is gifted with intelligence’.

The ternary parts of a T.œ: its calculus.

We have already studied the process of integration for functions of one variable defined on a one-dimensional region, namely an interval. But the analogous process may be extended to functions of two, three, or more variables, defined on corresponding regions.

For example, let us consider a surface z = ƒ(x, y)

defined in a rectangular system of coordinates, and on the plane Oxy let there be given a region G bounded by a closed curve Γ. It is required to find the volume bounded by the surface, by the plane Oxy and by the cylindrical surface passing through the curve Γ with generators parallel to the Oz axis (figure 33). To solve this problem we divide the plane region G into subregions by a network of straight lines parallel to the axes Ox and Oy and denote by: G1, G2… Gn.


those subregions which consist of complete rectangles. If the net is sufficiently fine, then practically the whole of the region G will be covered by the enumerated rectangles. In each of them we choose at will a point:

(ξi, ηi)

and, assuming for simplicity that Gi denotes not only the rectangle but also its area, we set up the sum:

Vn = ƒ(ξ1, η1) G1 + ƒ(ξ2, η2) G2 + ··· + ƒ(ξn, ηn) Gn.   (47)

It is clear that, if the surface is continuous and the net is sufficiently fine, this sum may be brought as near as we like to the desired volume V. We will obtain the desired volume exactly if we take the limit of the sum (47) for finer and finer subdivisions (that is, for subdivisions such that the greatest of the diagonals of our rectangles approaches zero):

V = lim Σ ƒ(ξi, ηi) Gi.   (48)
From the point of view of analysis it is therefore necessary, in order to determine the volume V, to carry out a certain mathematical operation on the function f(x, y) and its domain of definition G, an operation indicated by the left side of equality (48). This operation is called the integration of the function f over the region G, and its result is the integral of f over G. It is customary to denote this result in the following way:

∫∫G ƒ(x, y) dx dy.   (49)
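A short numerical sketch of a double integral (assuming SciPy; the surface z = x² + y² over the unit square is an arbitrary example, with exact volume 2/3):

```python
from scipy.integrate import dblquad

# volume under z = x**2 + y**2 over the unit square G = [0,1] x [0,1]
V, err = dblquad(lambda y, x: x**2 + y**2, 0, 1, 0, 1)
print(V)    # 0.6666... = 2/3
```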

Similarly, we may define the integral of a function of three variables over a three-dimensional region G, representing a certain body in space. Again we divide the region G into parts, this time by planes parallel to the coordinate planes. Among these parts we choose the ones which represent complete parallelepipeds and enumerate them:G1, G2… Gn.

In each of these we choose an arbitrary point (ξi, ηi, ζi)

and set up the sum:

Σ ƒ(ξi, ηi, ζi) Gi,   (50)

where Gi denotes the volume of the parallelepiped Gi. Finally we define the integral of f(x, y, z) over the region G as the limit:

∫∫∫G ƒ(x, y, z) dx dy dz = lim Σ ƒ(ξi, ηi, ζi) Gi,   (51)

to which the sum (50) tends when the greatest diagonal d(Gi) approaches zero.

Let us consider an example. We imagine the region G is filled with a nonhomogeneous mass whose density at each point in G is given by a known function ρ(x, y, z). The density ρ(x, y, z) of the mass at the point (x, y, z) is defined as the limit approached by the ratio of the mass of an arbitrary small region containing the point (x, y, z) to the volume of this region as its diameter approaches zero.* To determine the mass of the body G it is natural to proceed as follows. We divide the region G into parts by planes parallel to the coordinate planes and enumerate the complete parallelepipeds formed in this way:  G1, G2, …, Gn

Assuming that the dividing planes are sufficiently close to one another, we will make only a small error if we neglect the irregular regions of the body and define the mass of each of the regular regions Gi (the complete parallelepipeds) as the product:

ρ(ξi, ηi, ζi) Gi,

where (ξi, ηi, ζi) is an arbitrary point of Gi. As a result the approximate value of the mass M will be expressed by the sum:

M ≈ Σ ρ(ξi, ηi, ζi) Gi,

and its exact value will clearly be the limit of this sum as the greatest diagonal d(Gi) approaches zero; that is:

M = lim Σ ρ(ξi, ηi, ζi) Gi = ∫∫∫G ρ(x, y, z) dx dy dz.

The integrals (49) and (51) are called double and triple integrals respectively.


Let us examine a problem which leads to a double integral. We imagine that water is flowing over a plane surface. Also, on this surface the underground water is seeping through (or soaking back into the ground) with an intensity f(x, y) which is different at different points. We consider a region G bounded by a closed contour (figure 34) and assume that at every point of G we know the intensity f(x, y), namely the amount of underground water seeping through per minute per cm2 of surface; we will have f(x, y) > 0 where the water is seeping through and f(x, y) < 0 where it is soaking into the ground. How much water will accumulate on the surface G per minute?

If we divide G into small parts, consider the rate of seepage as approximately constant in each part and then pass to the limit for finer and finer subdivisions, we will obtain an expression for the whole amount of accumulated water in the form of an integral:

∫∫G ƒ(x, y) dx dy.

Double (two-fold) integrals were first introduced by Euler. Multiple integrals form an instrument which is used every day in calculations and investigations of the most varied kind.

It would also be possible to show, though we will not do it here, that calculation of multiple integrals may be reduced, as a rule, to iterated calculation of ordinary one-dimensional integrals.

Contour and surface integrals. Finally, we must mention that still other generalizations of the integral are possible. For example, the problem of defining the work done by a variable force applied to a material point, as the latter moves along a given curve, naturally leads to a so-called curvilinear integral, and the problem of finding the total charge on a surface on which electricity is continuously distributed with a given surface density leads to another new concept, an integral over a curved surface.

For example, suppose that a liquid is flowing through space and that the velocity of a particle of the liquid at the point (x, y) is given by a function P(x, y), not depending on z. If we wish to determine the amount of liquid flowing per minute through the contour Γ, we may reason in the following way. Let us divide Γ up into segments Δsi. The amount of water flowing through one segment Δsi is approximately equal to the column of liquid shaded in figure 35; this column may be considered as the amount of liquid forcing its way per minute through that segment of the contour. But the area of the shaded parallelogram is equal to:

Pi(x, y) · Δsi · cos αi, where αi is the angle between the direction of the x-axis and the outward normal of the surface bounded by the contour Γ; this normal is the perpendicular n to the tangent, which we may consider as defining the direction of the segment Δsi. By summing up the areas of such parallelograms and passing to the limit for finer and finer subdivisions of the contour Γ, we determine the amount of water flowing per minute through the contour Γ; it is denoted thus:

∫Γ P(x, y) cos α ds,

and is called a curvilinear integral. If the flow is not everywhere parallel, then its velocity at each point (x, y) will have a component P(x, y) along the x-axis and a component Q(x, y) along the y-axis. In this case we can show by an analogous argument that the quantity of water flowing through the contour will be equal to:

∫Γ (P cos α + Q cos β) ds,

where β is the angle between the outward normal and the y-axis.

When we speak of an integral over a curved surface G for a function f(M) of its points M(x, y, z), we mean the limit of sums of the form:

Σ ƒ(Mi) Δσi,

for finer and finer subdivisions of the region G into segments whose areas are equal to Δσi.

General methods exist for transforming multiple, curvilinear, and surface integrals into other forms and for calculating their values, either exactly or approximately.

Ostrogradskiĭ Formula.

Several important and very general formulas relating an integral over a volume to an integral over its surface (and also an integral over a surface, curved or plane, to an integral around its boundary) have a very wide application, and are yet another striking proof of the constant trans-form-ations of S≈T DIMENSIONS and of the interaction of the parts of the system – in this case between the membrane that encircles the vital space and that vital space, whose parameters ARE ALWAYS CLOSELY RELATED, as we can consider the membrane just the last ‘cover’ of maximal size of that inner 3D vital energy (unlike the quite distinct singularity, which ‘moves’ across ∆±i scales and tends to be quite different in form, parameters and substance).

Let us give an example: imagine, as we did before, that over a plane surface there is a horizontal flow of water that is also soaking into the ground or seeping out again from it. We mark off a region G, bounded by a curve Γ, and assume that for each point of the region we know the components P(x, y) and Q(x, y) of the velocity of the water in the direction of the x-axis and of the y-axis respectively.

Let us calculate the rate at which the water is seeping from the ground at a point with coordinates (x, y). For this purpose we consider a small rectangle with sides Δx and Δy situated at the point (x, y).

As a result of the velocity P(x, y) through the left vertical edge of this rectangle, there will flow approximately P(x, y)Δy units of water per minute into the rectangle, and through the right side in the same time will flow out approximately P(x + Δx, y)Δy units. In general, the net amount of water leaving a square unit of surface as a result of the flow through its left and right vertical sides will be approximately:

[P(x + Δx, y) − P(x, y)] Δy / (Δx Δy) = [P(x + Δx, y) − P(x, y)] / Δx.

If we let Δx approach zero, we obtain in the limit: ∂P/∂x.

Correspondingly, the net rate of flow of water per unit area in the direction of the y-axis will be given by: ∂Q/∂y.

This means that the intensity of the seepage of ground water at the point with coordinates (x, y) will be equal to: ∂P/∂x + ∂Q/∂y

But in general, as we saw earlier, the quantity of water coming out from the ground will be given by the double integral of the function expressing the intensity of the seepage of ground water at each point, namely:

∫∫G (∂P/∂x + ∂Q/∂y) dx dy.   (52)

But, since the water is incompressible, this entire quantity must flow out during the same time through the boundaries of the contour Γ. The quantity of water flowing out through the contour Γ is expressed, as we saw earlier, by the curvilinear integral over Γ:

∫Γ (P cos α + Q cos β) ds.   (53)

The equality of the magnitudes (52) and (53) gives, in its simplest two-dimensional case:

∫∫G (∂P/∂x + ∂Q/∂y) dx dy = ∫Γ (P cos α + Q cos β) ds.

This is a key formula mirroring a widespread phenomenon in the external world, which in our example we interpreted in a readily visualized way as preservation of the volume of an incompressible fluid.
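A symbolic check of this two-dimensional formula on the unit square (a sketch assuming SymPy; P = x², Q = y² are arbitrary smooth velocity components):

```python
import sympy as sp

x, y = sp.symbols('x y')
P, Q = x**2, y**2

# left side: the double integral of dP/dx + dQ/dy over the unit square
divergence_side = sp.integrate(sp.diff(P, x) + sp.diff(Q, y), (x, 0, 1), (y, 0, 1))

# right side: outward flux through the four sides of the square boundary
flux_side = (sp.integrate(P.subs(x, 1) - P.subs(x, 0), (y, 0, 1))
             + sp.integrate(Q.subs(y, 1) - Q.subs(y, 0), (x, 0, 1)))

print(divergence_side, flux_side)    # both 2
```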

This can be generalized to express the connection between an integral over a multidimensional volume and an integral over its surface. In particular, for a three-dimensional body G, bounded by the surface Γ:

∫∫∫G (∂P/∂x + ∂Q/∂y + ∂R/∂z) dx dy dz = ∫∫Γ (P cos α + Q cos β + R cos γ) dσ,

where dσ is the element of surface.

It is interesting to note that the fundamental formula of the integral calculus:

∫ₐᵇ F′(x) dx = F(b) − F(a)   (54)

may be considered as a one-dimensional case. The equation (54) connects the integral over an interval with the “integral” over its “null-dimensional” boundary, consisting of the two end points.

Formula (54) may be illustrated by the following analogy. Let us imagine that in a straight pipe with constant cross section s = 1 water is flowing with velocity F(x), which is different for different cross sections (figure 36). Through the porous walls of the pipe, water is seeping into it (or out of it) at a rate which is also different for different cross sections:

ƒ(x).

If we consider a segment of the pipe from x to x + Δx, the quantity of water seeping into it in unit time must be compensated by the difference F(x + Δx) – F(x) between the quantity flowing out of this segment and the quantity flowing into it along the pipe.

So the quantity seeping into the segment is equal to the difference F(x + Δx) – F(x), and consequently the rate of seepage per unit length of pipe (the ratio of the seepage over an infinitesimal segment to the length of the segment) will be equal to:

lim [F(x + Δx) − F(x)] / Δx = F′(x) = ƒ(x), as Δx → 0.

More generally, the quantity of water seeping into the pipe over the whole section [a, b] must be equal to the amount lost by flow through the ends of the pipe. But the amount seeping through the walls is equal to ∫ₐᵇ ƒ(x) dx, and the amount lost by flow through the ends is F(b) − F(a). The equality of these two magnitudes produces formula (54).

GREEN’S THEOREM

Then there is of course the fact that a system in space-time in which there is a displacement in time will be equivalent to a system in which this time motion is seen as fixed space. Such cases mean that we can integrate lines with motion into planes, and surfaces with motion into volumes.

The result is:

-Green’s theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C. It is named after George Green and is the two-dimensional special case of the more general Kelvin–Stokes theorem.

-Stokes’ theorem, which says that the integral of a differential form ω over the boundary ∂Ω of some orientable manifold Ω is equal to the integral of its exterior derivative dω over the whole of Ω:   ∫∂Ω ω = ∫Ω dω

In general such integrals also follow the geometrical structure of a system built with an external membrane, which absorbs the information of the system, an internal 0-point or scalar that focuses it, and a vital space between them. The result of these relationships allows us to define the basic laws of integrals in time and space that relate line integrals to surface integrals to volume integrals, of which the best known example are the laws of electromagnetism, written not as Maxwell did, in terms of derivatives (curls and gradients), but in terms of integrals.

So we can in this manner represent the laws of electromagnetism and fully grasp the meaning of magnetism, the external membrane, and charge, the central point of view, with the interactions between both: the electromagnetic waves and induced currents and magnetic fluxes.

Thus the best examples in physics of this relationship are the 4 equations of Maxwell:

∮ E · dl = −dΦB/dt   (Faraday’s law of induction)

∮ B · dl = μ0 I + μ0ε0 dΦE/dt   (Ampère–Maxwell law)

While the other 2 define the membrane of an electromagnetic field, the magnetic field and the central point of view or charge:

∮ B · dA = 0   (Gauss’s law for magnetism)

∮ E · dA = Q/ε0   (Gauss’s law for the electric charge)

So we can consider that the Tƒ element of the electromagnetic field (the charge or 0-point) and the membrane (the closed outer path), either in their integral or inverse differential equations, and the wave interaction between them, easily deduced from the Stokes theorem or expressed inversely in differential form, give us the full description of an electromagnetic system in terms of the generator:

Sp-membrane (magnetic field-gauss’ law of magnetism) < ST (Faraday/Ampere’s laws of interaction between Sp and Tƒ) >Tƒ (Gauss Law of the central point).

And those interactions are integrals of the quanta of the ∆-1 field in which the electric charge and the magnetic field that integrates them arise.

Density integrals. The meaning of Tƒ/Sp and Sp/Tƒ: information and energy densities.

Some General Remarks on the Formation and Solution of Differential Equations

As we have expressed many times, equations do NOT have solutions till the whole information on their ternary T.œ is given. Which in time means to know an initial condition and an end, through which the function will run under the principle of completing its action in the least possible time: Max. S×T (min. t); which for any combination of space and time dimensions implies completing the action in the minimal possible time.

Conditions of this type arise in all equations of all stiences.

In the symmetry of space, the boundary of the T.œ though must be expressed as lineal conditions of the 2D membrane and the 1D singularity, which can be superposed, 1D + 2D, to give the 3D solution of the vital space both enclose – normally through a product operator, 1D x 2D = 3D.

In any case, for each equation, once determined by its space or time constrictions, certain solutions can be found, which form a sœT of possible frequencies or areas that are efficient parameters for the MIND equation to describe real T.œs (expressed here in the semantic inversion of sets and toes).

The key then to understanding those solutions and their approximations is the fact that singularity and membrane conditions are expressed as scalars and lineal functions, while the ternary vital energy solutions have cyclical form.

 

MORE COMPLEX DERIVATIVES: CURVATURE, TENSORS – ITS LIMIT OF UTILITY

Physical quantities may be of 3 kinds: s, t, and st.

NOW, BEYOND 2 planes of existence the utility of derivatives diminishes, as organisms become invisible and do not organise further. And so, because in the same plane we use a first derivative, and in relationships between any two planes we use derivatives of the 2nd degree, the maximal use possible for derivatives comes from third degree derivatives, which give us the limit of information in the form of a single parameter, 1/r², curvature.

Beyond that, planes of pure space and pure time are not perceivable; so, departing from the fact that all is S with some t (energy) or T with some S (information), we can still broadly talk of dominant space-like parameters and time-like parameters, and use the space-time parameters only for those in which S≈t (e=i) holds quite exactly.

Pure space and pure time.

Now, the closest thing to a description of pure space is a ‘field of forces’, as it emerges from ∆-1 and influences a higher ∆ scale.

And the closest thing to pure time is a process of ‘implosion’ that ‘forces down’ or ‘depresses’ (the inverse of emergence) a system from an ∆+1, time-like implosive force. And that is the meaning of mass: the force down, the in/form/ative force coming from the ∆+1 scale.

Since pure, implosive time and pure expansive, entropic space are not observable, the best way to ‘get it’ is when the implosive time process is felt by something which is smaller inside the vortex. So we feel mass from the ∆+3 galactic scale, as Mach and Einstein mused, because inward implosive in-formative forces affect mostly the internal parts, NOT the external ones. And we feel inversely a field of expansive entropy from smaller parts, exploding us from inside out.

Then we come to energy-like (max. Se x min. Ti) and informative-like (max. Ti x min. Se) parameters.

Some are completely characterized by their numerical values, e.g., temperature, density, and the like, and are called scalars. Scalars are then to be considered parameters of closed informative functions. In the case of density this is evident.

Temperature is not so clearly a time parameter. But temperature, when we properly constrain it to what and where we truly measure as temperature (not the frequency of a wave) – that is, the vibrating motions of molecules of the ∆-1 scale in a closed space – is a time parameter. So it goes for mass-energy, as energy becomes added to mass whenever we can measure it in an enclosed region of space, belonging therefore to a time-closed world. So the gluons of motion-energy enclosed in a nucleus do store most of the mass of atoms; and they are to be understood in terms of closed time-parameters from a potential point of view.

So it goes for potential energy, which is stored in time cycles. As a rule, regardless of how conceptually ‘distorted’ current science is and how unlikely a change of paradigm will be for centuries to come (we still drag, for example, the – sign of electrons since Franklin chose it), the non-distorted truth can classify all parameters and physical quantities in terms of time and space.

On the other hand, energy-like parameters will have direction, as vector quantities: velocity, acceleration, the strength of an electric field, etc. The simpler those parameters, with fewer ‘scalar’ elements, the more space-like, entropy-like, field-like they will be. So, again as a rule, the fewer dimensions we need to define a system, the more space-entropy-field-like it will be.

Thus space-like vector quantities may be expressed just by the length of the vector and its direction, or by its space-dimensional “components” if we decompose it into the sum of three mutually perpendicular vectors parallel to the coordinate axes.

While a balanced space-time process will have more ‘parameters’ to study than a simple vector, growing in dimensions till becoming 4-vectors and finally a ‘tensor’, which will be the final growth into a 5D ∆-event.

So it is easy, just looking at an equation, to know what kind of s, t, or st (e, i, exi) process we study.

For example, a classic st process – which is, let us remember, an S≈T process – is one which tends to a dynamic balance between both parameters.

So it is an oscillatory system in any ∆-scale or ‘space-time medium’. In such oscillations every point of the medium, occupying in equilibrium the position (x, y, z), will at time t be displaced along a vector u(x, y, z, t), depending on the initial position of the point (x, y, z) and on the time t.

In this case the process in question will be described by a vector field. But it is easy to see that knowledge of this vector field, namely the field of displacements of points of the medium, is not sufficient in itself for a full description of the oscillation.

It is also necessary to know, for example, the density ρ(x, y, z, t) and the temperature T(x, y, z, t) at each point of the medium.

So we add to the Spe-vector field some T-PARAMETERS (closed T-vibration, density, and stress, which connect the system in a time-network-like fashion to all other points of the whole).

∆-events. Finally in processes which require the study of interactions between ∆-scales, hence 5D processes, we need even more complex elements.

For example a classic ∆-event is the internal stress, i.e., the forces exerted on an arbitrarily chosen volume of the body by the entire remaining part of it.

[Figure: the components of internal stress on a volume element of a body.]

And so we arrive at systems defined by tensors, often with 6 dimensions (we forgo here the final r=evolution of thought that would make it all less distorted by working on bidimensional space and time, so as to simplify understanding). The idea of a tensor is that the whole works dynamically into the ‘point’, described as a ‘cube’ with 6 faces, or ± elements acting on the point-particle-being from the 3 classic space dimensions:

Examples of it are the mentioned stress, shown in the graph, or the relativity description of how the ∆+3 scale of gravitational space-time influences the lower scales with the effect of implosive mass.

Thus in addition to S-vector and T-scalar quantities, more complicated entities occur in SPACE-time events, often characterized everywhere by a set of functions of the same four independent variables, where each function is a description of the ∆-1 scale, and they come together into the upper new ‘function of functions’, or ‘functional equation’.

And so we can classify in this manner, according to the ∆•ST ternary method, all the parameters of mathematical physics.

And they will reveal to us the real structure and ∆ST symmetries they analyse, according to the number of dimensions of complexity they use.

Yet beyond the range of 'tensors', which study relationships between 2 or at most 3 scales of reality, there is nothing to be found. The same happens when we consider the number of differential equations we want to study: nothing really goes on beyond the 3rd derivative, as a system scales down at most 2 scales (entropy-death events, dual emergence upwards from seed to world).

So one of the most fascinating tasks in relating i-logic to the real world is to properly interpret what Einstein said he could not resolve:

‘I know when mathematics is truth but not when it is real’

Because, as the Soviet school of maths from Lobachevski to Aleksandrov rightly explained, we NEED an experimental method inserted in mathematics to know when mathematics is both LOGICALLY CONSISTENT as a language in its inner syntax and NOT a fiction of beauty but REAL.

A bit on the ‘numbers’

Now, the fundamental concept behind analysis is the ∂∫ duality of 'derivatives' and 'integrals', related at first sight to the concepts of 'time' and 'space' (you differentiate in time, you integrate in space), and to the concept of scalar 'evolution' from parts into wholes (you integrate parts to obtain a higher scalar wholeness).

That is, you differentiate a past space (distance) into present speed, adding a time-motion, and then differentiate again into acceleration – future time – to obtain the most thorough single parameter of the being in time: its acceleration, which encodes also its speed and distance.

On the other hand, you integrate in space, and so it is also customary to consider the first and second integral, which bring the ternary scale up to the volume of a system.

And it is a tribute to the simplicity of the Universe that further 'derivatives' are not really needed to perceive the system in its full time and space parameters. Further derivations and integrations do appear, but only in the search for curvatures, tensions and jerks (rates of change of acceleration), which are mostly menial details, and in some combined space-time multiple systems.

Approximations.

We have already commented, on Algebra, that the third and higher derivatives are however used to improve the accuracy of an approximation to the function:

f(x₀+h) = f(x₀) + f′(x₀)h + f″(x₀)h²/2! + f‴(ξ)h³/3!

Thus Taylor’s expansion of a function around a point involves higher order derivatives, and the more derivatives you consider, the higher the accuracy. This also translates to higher order finite difference methods when considering numerical approximations.
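As a minimal numerical sketch of that claim (my own illustration, not from the original text, using f(x) = eˣ around x₀ = 0; the function and the point x = 0.5 are free choices):

import math

def taylor_exp(x, n):
    # Partial sum of the exponential Taylor series up to order n.
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 0.5
for n in range(1, 6):
    error = abs(math.exp(x) - taylor_exp(x, n))
    print(n, error)   # the error shrinks as each higher derivative term is added

Each extra order cuts the error by roughly a factor of x/(n+1), which is exactly the behaviour the remainder term predicts.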

What this means is obvious: beyond the accuracy of the three derivatives canonical to an ∆º±1 supœrganism, information, as it passes the potential barrier between scales of the 5th≈∆ dimension, suffers a loss of precision. So beyond the third derivative we can only obtain approximations, either by using higher derivatives or, in a likely less focused=exact procedure, the equivalent polynomials, which are clearer expressions of 'dimensional growth'.

So their similitude first of all proves that both higher derivatives and polynomials are representations of growth across planes and scales, albeit losing accuracy.

However, from the correct fifth-dimensional perspective, the derivative-integral game is more accurate, as it 'looks at the infinitesimal' in order to then integrate the proper quanta.

Taylor’s Formula

The function y = a0 + a1x + a2x² + ··· + anxⁿ, where the coefficients ak are constants, is a polynomial of degree=dimension n. In particular, y = ax + b is a polynomial of the first degree and y = ax² + bx + c is a polynomial of the second degree. Dimensional polynomials have the particularity that they are mostly 2-manifolds, symmetric in x=y; that is, both dimensions 'square' with each other, D1 x D2.

To achieve this feat, S=t, tt or ss STEPS MUST BE CONSIDERED.

Polynomials may be considered in that sense as the simplest of all poly-dimensional functions. In order to calculate their value for a given x, we require only the operations of addition, subtraction, and multiplication; not even division is needed. Polynomials are continuous for all x and have derivatives of arbitrary order. Also, the derivative of a polynomial is again a polynomial, of degree lower by one, and the derivatives of order n + 1 and higher of a polynomial of degree n are equal to zero. Yet the derivative diminishes more slowly than the simple square of the function; so if we take the derivatives as the 'parts' of the polynomial, the product of those parts would be more than the whole.
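A small sketch of that computational point: evaluating a polynomial needs only additions and multiplications, as in Horner's scheme below (the coefficients are an arbitrary example of mine):

def horner(coeffs, x):
    # coeffs = [a_n, ..., a_1, a_0], highest degree first.
    # Only additions and multiplications are used, no division.
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# y = 2x² + 3x + 1 at x = 2  ->  15.0
print(horner([2, 3, 1], 2.0))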

It is then that we can increase the complexity by establishing, for example, ratios of polynomials.
If to the polynomials we adjoin functions of the form:

y = (a0 + a1x + ··· + anxⁿ) / (b0 + b1x + ··· + bmxᵐ),

for the calculation of which we also need division, and also the functions √x and ³√x, and finally arithmetical combinations of these functions, we obtain essentially all the functions whose values can be calculated by such methods.

But what does a polynomial describe? All other functions are easier to get through approximations:

On an interval containing the point a, let there be given a function f(x) with derivatives of every order. The polynomial of first degree:

p1(x) = ƒ(a) + ƒ′(a)(x−a)

has the same value as f(x) at the point x = a and also, as is easily verified, the same derivative as f(x) at this point. Its graph is a straight line, tangent to the graph of f(x) at the point a. It is possible to choose a polynomial of the second degree, namely:

p2(x) = ƒ(a) + ƒ′(a)(x−a) + ƒ″(a)(x−a)²/2!,

which at the point x = a has with f(x) a common value and a common first and second derivative. Its graph at the point a will follow that of f(x) even more closely. It is natural to expect that if we construct a polynomial which at x = a has the same first n derivatives as f(x) at that point, then this polynomial will be a still better approximation to f(x) at points x near a. Thus we obtain the following approximate equality, which is Taylor's formula:

f(x) ≈ f(a) + f′(a)(x−a)/1! + f″(a)(x−a)²/2! + ··· + f(n)(a)(x−a)ⁿ/n!     (25)

The right side of this formula is a polynomial of degree n in (x − a). For each x the value of this polynomial can be calculated if we know the values of f(a), f′(a), ···, f(n)(a).

For functions which have an (n + 1)th derivative, the right side of this formula, as is easy to show, differs from the left side by a small quantity which approaches zero more rapidly than (x − a)ⁿ. Moreover, it is the only possible polynomial of degree n that differs from f(x), for x close to a, by a quantity that approaches zero, as x → a, more rapidly than (x − a)ⁿ. If f(x) itself is an algebraic polynomial of degree n, then the approximate equality (25) becomes an exact one.

Finally, and this is particularly important, we can give a simple expression for the difference between the right side of formula (25) and the actual value of f(x). To make the approximate equality (25) exact, we must add to the right side a further term, called the "remainder term":

f(x) = f(a) + f′(a)(x−a)/1! + ··· + f(n)(a)(x−a)ⁿ/n! + f(n+1)(ξ)(x−a)ⁿ⁺¹/(n+1)!     (26)

This additional term has the peculiarity that the derivative appearing in it is to be calculated in each case not at the point a but at a suitably chosen point ξ, which is unknown but lies somewhere in the interval between a and x.
So we can make use of the generalized mean-value theorem quoted earlier:

[ϕ(x) − ϕ(a)] / [ψ(x) − ψ(a)] = ϕ′(ξ) / ψ′(ξ),   for some ξ between a and x.

Differentiating the functions ϕ(u) and ψ(u) with respect to u (recalling that the value of x has been fixed), we find that the equality of the resulting expression with the original quantity (27) gives Taylor's formula in the form (26).
In the form (26) Taylor’s formula not only provides a means of approximate calculation of f(x) but also allows us to estimate the error.

And so with Taylor we close this introduction to derivatives and differentials, enlightened by the basic elements that relate them to the 5 dimensions of space-time, and specifically to the ∆-1 finitesimals.

DETERMINISM IN SOLUTIONS TO ODEs.

Lineal vs. cyclic; dis≈continuity; 1st vs. 2nd order; partial vs. ordinary; ∂ vs. ∫; the 3 states of matter & their freedoms.

In the philosophy of science of mathematical physics some concepts come back again and again, based on dualities.

And so the qualitative description of all those entropic, reproductive/moving and informative/Tiƒ time-like vortices became ‘only’ mathematical.

It is interesting at this stage to consider that the whole world of ∫∂ mathematics has two approaches, which humans, one-dimensional as always, did not find complementary but argued over: the method of Newton, based on infinite series (the arithmetic, temporal point of view), and that of Leibniz, using spatial, geometric concepts (tangents) – the one which, being more evident and simpler, prevailed.

First, trivially speaking, the existence of such 2 canonical time-space ways to arrive at ∫∂ is a clear proof that both researchers found it independently. Next, their argument about who was right and better shows how one-minded humanity is. And third, the dominance of Leibniz's methods, for being visual, geometrical, spatial, tells a lot about the difficulty humans have in understanding time, causality and the concepts of infinity, limit, discontinuity and continuity, and other key elements of ∆nalysis, which we shall argue in our mathematical posts on… ∆nalysis.

All this said, mathematical physics moved to the geometric, continental school with the help of Leibniz's disciples, the Bernoullis.

And it is interesting to consider a diachronic concept to analyse the enormous flourishing of those equations…

 

PDE

INTRODUCTION: PHILOSOPHICAL CONSIDERATIONS

Physical equations are also equations relating the 3 elements of all the existential entities of the Universe, which we will develop in detail in a future post on physics accompanying this one. It must then be understood that, within the general ƒ(x)≈f(t), y=S isomorphism between mathematical equations and ST-eps (not always the case, as symmetric steps can repeat themselves with the same parameters in SSS and TTT derivatives, as we have seen in our intro to ODEs), partial differential equations will be combinations of analyses of systems in their 'primary' differential finitesimals of space and time, then aggregated into more complex ST systems. This gives us an enormous range of possible PDE studies, which we shall strive to order according to the concept that there is a symmetry between algebra (s≈t symmetries), geometry (S-wholes as sums of t-dynamic points) and analysis (st-eps).

So it is good guidance, for all algebraic and analytic equations, to comment on their significance in the vital ternary geometry of a T.œ, or of a complex event between T.œs ACROSS different planes, ∆§, studied with those equations.

Partial Differential equations as ƥst-equations.

Physical events and processes occurring in a space-time system always consist of the changes, during the passage of its finite time, of certain physical magnitudes related to the points of its vital space.

This simple definition of space-time processes is at the heart of the whole differential calculus, which with slight changes of interpretation apply to all GST.

Any of those ST processes can be described by functions of four independent ST variables, S(x, y) and T(z, ƒ), where x, y are the coordinates of a point of space, and z, ƒ those of time.

So, ideally, in a world in which humans had not distorted bidimensional time cycles, the way we work with mathematical equations would be slightly changed. As we are not reinventing the human mind of 7 billion people – we are not that arrogant – we will just feel happy trying to explain a few of those processes of bidimensional space and time here.

In the study of the phenomena of nature, partial differential equations are encountered just as often as ordinary ones. As a rule this happens in cases where an event is described by a function of several variables. From the study of nature there arose that class of partial differential equations that is at the present time the most thoroughly investigated and probably the most important in the general structure of human knowledge, namely the equations of mathematical physics.

Let us first consider oscillations in any kind of medium. In such oscillations every point of the medium, occupying in equilibrium the position (x, y, z), will at time t be displaced along a vector u(x, y, z, t), depending on the initial position of the point (x, y, z) and on the time t. In this case the process in question will be described by a vector field. But it is easy to see that knowledge of this vector field, namely the field of displacements of points of the medium, is not sufficient in itself for a full description of the oscillation. It is also necessary to know, for example, the density ρ(x, y, z, t) at each point of the medium, the temperature T(x, y, z, t), and the internal stress, i.e., the forces exerted on an arbitrarily chosen volume of the body by the entire remaining part of it.

Physical events and processes occurring in space and time always consist of the changes, during the passage of time, of certain physical magnitudes related to the points of the space. As we saw in Chapter II, these quantities can be described by functions of four independent variables, x, y, z, and t, where x, y, and z are the coordinates of a point of the space and t is the time.

Physical quantities may be of different kinds. Some are completely characterized by their numerical values, e.g., temperature, density, and the like, and are called scalars. Others have direction and are therefore vector quantities: velocity, acceleration, the strength of an electric field, etc. Vector quantities may be expressed not only by the length of the vector and its direction but also by its “components” if we decompose it into the sum of three mutually perpendicular vectors, for example parallel to the coordinate axes.

In mathematical physics a scalar quantity, or scalar field, is presented by one function of four independent variables, whereas a vector quantity defined on the whole space or, as it is called, a vector field, is described by three functions of these variables. We can write such a quantity either in the form:

u(x, y, z, t), where the boldface type indicates that u is a vector, or in the form of three functions:

ux(x, y, z, t), uy(x, y, z, t), uz(x, y, z, t),

where ux, uy, and uz denote the projections of the vector on the coordinate axes.

In addition to vector and scalar quantities, still more complicated entities occur in physics, for example the state of stress of a body at a given point. Such quantities are called tensors; after a fixed choice of coordinate axes, they may be characterized everywhere by a set of functions of the same four independent variables.

In this manner, the description of widely different kinds of physical phenomena is usually given by means of several functions of several variables. Of course, such a description cannot be absolutely exact.

For example, when we describe the density of a medium by means of one function of four independent variables, we ignore the fact that at a given point we cannot have any density whatsoever. The bodies we are investigating have a molecular structure, and the molecules are not contiguous but occur at finite distances from one another. The distances between molecules are for the most part considerably larger than the dimensions of the molecules themselves. Thus the density in question is the ratio of the mass contained in some small, but not extremely small, volume to this volume itself. The density at a point we usually think of as the limit of such ratios for decreasing volumes. A still greater simplification and idealization is introduced in the concept of the temperature of a medium. The heat in a body is due to the random motion of its molecules. The energy of the molecules differs, but if we consider a volume containing a large collection of molecules, then the average energy of their random motions will define what is called temperature.

Similarly, when we speak of the pressure of a gas or a liquid on the wall of a container, we should not think of the pressure as though a particle of the liquid or gas were actually pressing against the wall of the container. In fact, these particles, in their random motion, hit the wall of the container and bounce off it. So what we describe as pressure against the wall is actually made up of a very large number of impulses received by a section of the wall that is small from an everyday point of view but extremely large in comparison with the distances between the molecules of the liquid or gas. It would be easy to give dozens of examples of a similar nature. The majority of the quantities studied in physics have exactly the same character. Mathematical physics deals with idealized quantities, abstracting them from the concrete properties of the corresponding physical entities and considering only the average values of these quantities.

Such an idealization may appear somewhat coarse but, as we will see, it is very useful, since it enables us to make an excellent analysis of many complicated matters, in which we consider only the essential elements and omit those features which are secondary from our point of view.

The object of mathematical physics is to study the relations existing among these idealized elements, these relations being described by sets of functions of several independent variables.

The Simplest Equations of Mathematical Physics

The elementary connections and relations among physical quantities are expressed by the laws of mechanics and physics. Although these relations are extremely varied in character, they give rise to more complicated ones, which are derived from them by mathematical argument and are even more varied. The laws of mechanics and physics may be written in mathematical language in the form of partial differential equations, or perhaps integral equations, relating unknown functions to one another. To understand what is meant here, let us consider some examples of the equations of mathematical physics.

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.

PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.

Both ordinary and partial differential equations are broadly classified as linear and nonlinear.

  • A differential equation is linear if the unknown function and its derivatives appear only to the power 1 (products of the unknown function and its derivatives are not allowed), and nonlinear otherwise. The characteristic property of linear equations is that their solutions form an affine subspace of an appropriate function space, which results in a much more developed theory of linear differential equations. Homogeneous linear differential equations are a further subclass, for which the space of solutions is a linear subspace, i.e. the sum of any set of solutions or multiples of solutions is also a solution. The coefficients of the unknown function and its derivatives in a linear differential equation are allowed to be (known) functions of the independent variable or variables; if these coefficients are constants, one speaks of a constant coefficient linear differential equation.
  • There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.

Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations (see below).
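A hedged numerical sketch of that last example, comparing the pendulum equation u″ = −(g/L) sin u with its linearisation u″ = −(g/L) u, integrated with a simple Euler–Cromer loop (the value of g/L, the time step and the two amplitudes are my own illustrative choices):

import math

def integrate(f, u0, steps=10000, dt=0.001):
    # Euler-Cromer integration of u'' = f(u), starting at rest.
    u, v = u0, 0.0
    for _ in range(steps):
        v += dt * f(u)
        u += dt * v
    return u

g_over_L = 9.8
for u0 in (0.1, 1.5):  # small vs. large initial angle (radians)
    lin = integrate(lambda u: -g_over_L * u, u0)           # harmonic oscillator
    pend = integrate(lambda u: -g_over_L * math.sin(u), u0)  # full pendulum
    print(u0, abs(lin - pend))

For the small amplitude the two final angles nearly coincide; for the large one they drift visibly apart, which is exactly the restricted validity noted above.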

Examples

In the first group of examples, let u be an unknown function of x, and let c and ω be known constants.

  • Inhomogeneous first-order linear constant coefficient ordinary differential equation: du/dx = cu + x²
  • Homogeneous second-order linear ordinary differential equation: d²u/dx² − x du/dx + u = 0
  • Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator: d²u/dx² + ω²u = 0
  • Inhomogeneous first-order nonlinear ordinary differential equation: du/dx = u² + 4
  • Second-order nonlinear (due to the sine function) ordinary differential equation describing the motion of a pendulum of length L: L d²u/dx² + g sin u = 0

In the next group of examples, the unknown function u depends on two variables, x and t or x and y.

  • Homogeneous first-order linear partial differential equation: ∂u/∂t + t ∂u/∂x = 0
  • Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation: ∂²u/∂x² + ∂²u/∂y² = 0
  • Third-order nonlinear partial differential equation, the Korteweg–de Vries equation: ∂u/∂t = 6u ∂u/∂x − ∂³u/∂x³

Existence of solutions

Solving differential equations is not like solving algebraic equations. Not only are their solutions oftentimes unclear, but whether solutions are unique or exist at all is also a notable subject of interest.

For first-order initial value problems, it is easy to tell whether a unique solution exists. Given any point (a, b) in the xy-plane, define some rectangular region Z such that (a, b) is in its interior. If we are given a differential equation dy/dx = g(x, y) and an initial condition y(a) = b, then there is a unique solution to this initial value problem if g and ∂g/∂y are both continuous on Z. This unique solution exists on some interval with its center at a.

However, this only helps us with first-order initial value problems. Suppose we had a linear initial value problem of the nth order:

fn(x) y(n) + ··· + f1(x) y′ + f0(x) y = g(x), with prescribed initial values y(x0), y′(x0), ···, y(n−1)(x0).

For any nonzero fn(x), if f0, f1, ···, fn and g are continuous on some interval containing x0, the solution y exists and is unique.
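The existence proof behind such statements is constructive (Picard iteration), and can be sketched numerically. Assuming the textbook problem y′ = y, y(0) = 1, whose iterates converge to eˣ:

import numpy as np

x = np.linspace(0.0, 1.0, 101)
y = np.ones_like(x)              # y_0: the constant initial guess
for _ in range(10):
    integrand = y                # g(t, y) = y for this equation
    # cumulative trapezoidal integral of the iterate from 0 to each grid point
    integral = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(x))))
    y = 1.0 + integral           # y_{k+1}(x) = y(0) + integral of y_k
print(abs(y[-1] - np.e))         # tiny after a few iterations: convergence to e^x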

  • A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times.
  • A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations.
  • A differential algebraic equation (DAE) is a differential equation comprising differential and algebraic terms, given in implicit form.

The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve approximation of the solution of a differential equation by the solution of a corresponding difference equation.
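A minimal sketch of that correspondence: Euler's method replaces the differential equation y′ = −2y by the difference equation y(n+1) = y(n) + h·(−2 y(n)). The equation and the step size below are illustrative choices of mine:

import math

h, y = 0.01, 1.0
for _ in range(100):        # march from t = 0 to t = 1 in steps of h
    y += h * (-2.0 * y)     # the difference equation standing in for y' = -2y
print(y, math.exp(-2.0))    # difference solution vs. exact solution e^(-2)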

The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modelling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods.

Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena.

As an example, consider propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.

In physics:

Classical mechanics:

So long as the force acting on a particle is known, Newton’s second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton’s second law to obtain an ordinary differential equation, which is called the equation of motion.

Electrodynamics:

Maxwell’s equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These fields in turn underlie modern electrical and communications technologies. Maxwell’s equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. They are named after the Scottish physicist and mathematician James Clerk Maxwell, who published an early form of those equations between 1861 and 1862.

General relativity:

The Einstein field equations (EFE; also known as “Einstein’s equations”) are a set of ten partial differential equations in Albert Einstein’s general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy. First published by Einstein in 1915 as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).

Quantum mechanics:

In quantum mechanics, the analogue of Newton’s law is Schrödinger’s equation (a partial differential equation) for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system’s wave function (also called a “state function”).

Other important equations:

  • Euler–Lagrange equation in classical mechanics
  • Hamilton’s equations in classical mechanics
  • Radioactive decay in nuclear physics
  • Newton’s law of cooling in thermodynamics
  • The wave equation
  • The heat equation in thermodynamics
  • Laplace’s equation, which defines harmonic functions
  • Poisson’s equation
  • The geodesic equation
  • The Navier–Stokes equations in fluid dynamics
  • The Diffusion equation in stochastic processes
  • The Convection–diffusion equation in fluid dynamics
  • The Cauchy–Riemann equations in complex analysis
  • The Poisson–Boltzmann equation in molecular dynamics
  • The shallow water equations
  • Universal differential equation

  • The Lorenz equations in chaos theory, whose solutions exhibit chaotic flow

Simple examples.

In ∆st terms, differential equations are therefore mathematical statements containing one or more derivatives – that is, terms representing the rates of change of continuously varying quantities. Differential equations are very common in science and engineering, as well as in many other fields of quantitative study, because what can be directly observed and measured for systems undergoing changes are their rates of change. The solution of a differential equation is, in general, an equation expressing the functional dependence of one variable upon one or more others; it ordinarily contains constant terms that are not present in the original differential equation. Another way of saying this is that the solution of a differential equation produces a function that can be used to predict the behaviour of the original system, at least within certain constraints.

Differential equations are classified into several broad categories, and these are in turn further divided into many subcategories. The most important categories are ordinary differential equations and partial differential equations. When the function involved in the equation depends on only a single variable, its derivatives are ordinary derivatives and the differential equation is classed as an ordinary differential equation. On the other hand, if the function depends on several independent variables, so that its derivatives are partial derivatives, the differential equation is classed as a partial differential equation. Examples of ordinary differential equations are dy/dt = ky and m d²y/dt² + ky = 0.

In these, y stands for the function, and either t or x is the independent variable. The symbols k and m are used here to stand for specific constants.

Whichever the type may be, a differential equation is said to be of the nth order if it involves a derivative of the nth order but no derivative of an order higher than this.

The equation:

∂²u/∂x² + ∂²u/∂y² = 0

is an example of a partial differential equation of the second order. The theories of ordinary and partial differential equations are markedly different, and for this reason the two categories are treated separately.

Instead of a single differential equation, the object of study may be a simultaneous system of such equations. The formulation of the laws of dynamics frequently leads to such systems. In many cases, a single differential equation of the nth order is advantageously replaceable by a system of n simultaneous equations, each of which is of the first order, so that techniques from linear algebra can be applied.
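A short sketch of that replacement, reducing the second-order equation x″ = −(k/m)x to the first-order system x′ = v, v′ = −(k/m)x and stepping it forward (the constants and step size are arbitrary choices of mine):

k_over_m = 4.0                # so the exact solution is x(t) = cos(2t)
x, v, h = 1.0, 0.0, 0.001     # state of the first-order system (x, v)
for _ in range(1000):         # integrate from t = 0 to t = 1
    x, v = x + h * v, v + h * (-k_over_m * x)   # x' = v, v' = -(k/m) x
print(x, v)                   # ≈ (cos 2, -2 sin 2) up to Euler error

Writing the system as a vector equation also makes the linear-algebra viewpoint mentioned above explicit: the step is just a matrix acting on the state (x, v).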

An ordinary differential equation in which, for example, the function and the independent variable are denoted by y and x is in effect an implicit summary of the essential characteristics of y as a function of x.

These characteristics would presumably be more accessible to analysis if an explicit formula for y could be produced. Such a formula, or at least an equation in x and y (involving no derivatives) that is deducible from the differential equation, is called a solution of the differential equation. The process of deducing a solution from the equation by the applications of algebra and calculus is called solving or integrating the equation.

It should be noted, however, that the differential equations that can be explicitly solved form but a small minority. Thus, most functions must be studied by indirect methods: even the existence of a solution must be proved when there is no possibility of producing it for inspection. In practice, methods from numerical analysis, involving computers, are employed to obtain useful approximate solutions.

Problems in the theory of differential equations.

We now give exact definitions. An ordinary differential equation of order n in one unknown function y is a relation of the form:

F(x, y, y′, y″, ···, y(n)) = 0     (17)

between the independent variable x and the quantities:

y, y′, y″, ···, y(n).

The order of a differential equation is the order of the highest derivative of the unknown function appearing in the differential equation. Thus the equation in example 1 is of the first order, and those in examples 2, 3, 4, 5, and 6 are of the second order.

A function ϕ(x) is called a solution of the differential equation (17) if substitution of ϕ(x) for y, ϕ′(x) for y′, · · ·, ϕ(n) (x) for y(n) produces an identity.

Problems in physics and technology often lead to a system of ordinary differential equations with several unknown functions, all depending on the same argument and on their derivatives with respect to that argument.

For greater concreteness, the explanations that follow will deal chiefly with one ordinary differential equation of order not higher than the second and with one unknown function. With this example one may explain the essential properties of all ordinary differential equations and of systems of such equations in which the number of unknown functions is equal to the number of equations.

We have spoken earlier of the fact that, as a rule, every differential equation has not one but an infinite set of solutions. Let us illustrate this first of all by intuitive considerations based on the examples given in equations (2-6). In each of these, the corresponding differential equation is already fully defined by the physical arrangement of the system. But in each of these systems there can be many different motions. For example, it is perfectly clear that the pendulum described by equation (8) may oscillate with many different amplitudes. To each of these different oscillations of the pendulum there corresponds a different solution of equation (8), so that infinitely many such solutions must exist. It may be shown that equation (8) is satisfied by any function of the form

x = C1 cos √(g/l)·t + C2 sin √(g/l)·t     (18)

where C1, and C2, are arbitrary constants.

It is also physically clear that the motion of the pendulum will be completely determined only in case we are given, at some instant t0, the (initial) value x0 of x (the initial displacement of the material point from the equilibrium position) and the initial rate of motion:

x′0 = (dx/dt)|t=t0. These initial conditions determine the constants C1 and C2 in formula (18).
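As a quick worked check (assuming formula (18) has the harmonic form reconstructed above, and taking t0 = 0 for simplicity): x(0) = C1, so C1 = x0; and differentiating, x′(0) = √(g/l)·C2, so C2 = x′0·√(l/g). The two initial data thus fix the two constants, and with them the whole motion, uniquely.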

In exactly the same way, the differential equations we have found in other examples will have infinitely many solutions.

In general, it can be proved, under very broad assumptions concerning the given differential equation (17) of order n in one unknown function that it has infinitely many solutions. More precisely: If for some “initial value” of the argument, we assign an “initial value” to the unknown function and to all of its derivatives through order n – 1, then one can find a solution of equation (17) which takes on these preassigned initial values. It may also be shown that such initial conditions completely determine the solution, so that there exists only one solution satisfying the initial conditions given earlier. We will discuss this question later in more detail. For our present aims, it is essential to note that the initial values of the function and the first n – 1 derivatives may be given arbitrarily. We have the right to make any choice of n values which define an “initial state” for the desired solution.

If we wish to construct a formula that will, if possible, include all solutions of a differential equation of order n, then such a formula must contain n independent arbitrary constants, which allow us to impose n initial conditions. Such solutions of a differential equation of order n, containing n independent arbitrary constants, are usually called general solutions of the equation. For example, a general solution of (8) is given by formula (18), containing two arbitrary constants; a general solution of equation (3) is given by formula (5).

We will now try to formulate in very general outline the problems confronting the theory of differential equations. These are many and varied, and we will indicate only the most important ones.

If the differential equation is given together with its initial conditions, then its solution is completely determined. The construction of formulas giving the solution in explicit form is one of the first problems of the theory. Such formulas may be constructed only in simple cases, but if they are found, they are of great help in the computation and investigation of the solution.

The theory should provide a way to obtain some notion of the behavior of a solution: whether it is monotonic or oscillatory, whether it is periodic or approaches a periodic function, and so forth.

Suppose we change the initial values for the unknown function and its derivatives; that is, we change the initial state of the physical system. Then we will also change the solution, since the whole physical process will now run differently. The theory should provide the possibility of judging what this change will be. In particular, for small changes in the initial values, will the solution also change by a small amount, and will it therefore be stable in this respect? Or may it be that small changes in the initial conditions give rise to large changes in the solution, so that the latter is unstable?

We must also be able to set up a qualitative, and where possible, quantitative picture of the behavior not only of the separate solutions of an equation, but also of all of the solutions taken together.

In machine construction there often arises the question of making a choice of parameters characterizing an apparatus or machine that will guarantee satisfactory operation. The parameters of an apparatus appear in the form of certain magnitudes in the corresponding differential equation. The theory must help us make clear what will happen to the solutions of the equation (to the working of the apparatus) if we change the differential equation (change the parameters of the apparatus).

Finally, when it is necessary to carry out a computation, we will need to find the solution of an equation numerically, and here the theory is obliged to provide the engineer and the physicist with the most rapid and economical methods for calculating the solutions.

Partial differential equations

In mathematics, a partial differential equation relates a function of several variables to its partial derivatives. A partial derivative of a function of several variables expresses how fast the function changes when one of its variables is changed, the others being held constant (compare ordinary differential equation). The partial derivative of a function is again a function, and, if f(x, y) denotes the original function of the variables x and y, the partial derivative with respect to x – i.e., when only x is allowed to vary – is typically written as fx(x, y) or ∂f/∂x. The operation of finding a partial derivative can be applied to a function that is itself a partial derivative of another function to get what is called a second-order partial derivative. For example, taking the partial derivative of fx(x, y) with respect to y produces a new function fxy(x, y), or ∂²f/∂y∂x. The order and degree of partial differential equations are defined the same as for ordinary differential equations.
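A quick sketch of those operations with a symbolic package (sympy is one illustrative option; the sample function is my own choice):

import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + sp.sin(y)   # sample f(x, y)
fx = sp.diff(f, x)         # partial derivative ∂f/∂x = 2*x*y
fxy = sp.diff(fx, y)       # second-order partial ∂²f/∂y∂x = 2*x
print(fx, fxy)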

In general, partial differential equations are difficult to solve, but techniques have been developed for simpler classes of equations called linear, and for classes known loosely as “almost” linear, in which all derivatives of an order higher than one occur to the first power and their coefficients involve only the independent variables.

MAIN EQUATIONS OF PHYSICS STUDIED THROUGH ∫∂

Physical systems reduce the number of space-time parameters we measure to essentially 3, corresponding to scale, space and time: mass-density (mass-energy per volume), space-length (the ratio of space to time frequency) and time (frequency of steps).

So the number of symmetries of space-time to find is relatively limited: ∆ρ ≈ Sl ≈ Tƒ

Where ∆ρ codes for any scalar active magnitude, which can be mass (gravitational scale), charge (quantum scale) or even temperature (thermodynamic scale). So in principle the final reduction of the equations of physics deals with only those 3 elements, and yet it holds a ginormous volume of information. Let us then consider the key equations we can elaborate with those 3 elements, first noticing their parallelism with the ∆, S and T elements of any real T.œ and its symmetry between the 3 parts of its simultaneous space and its limits of duration in time.

What mass, heat or charge measures, then, is the potential capacity of the internal vital energy to move and expand, as a result of being 'enclosed' by the membrane of the higher T.œ system. It also follows that systems without 'membrane constraints' or 'singularity' centres for the active magnitude, which define either a closed 0-1 or a 1-∞ relative region of measure, will not normally have meaningful solutions.

In the graph, mathematical physics deals with 3 types of parameters to define the cyclical membrane, the vital energy and the singularity force that constrain the system; and it establishes, when those forms are interpreted as functions of time variables in a graph, the parameters to operate on them.

In general it is then possible to 'reconstruct' from classic mathematical physics the laws and Disomorphisms of our systems, by considering also how subjective humans perceive them. In general, an angular momentum, mrv, and its equivalent in other active magnitudes-scales, is the best measure of a membrane's value. The internal regions of the being, though often hidden, require a simpler unit of measure, a 'scalar', which is often the best way to value the force of the singularity as an active magnitude: the attractive pole that holds the whole, vital energy and membrane together. But if we do have a minimal detail of perception, better than a scalar is to use ratios between space and time that define the 3 values with similar '1-units': S x ð = Speed (S/t); ∆@/S = Density; and S x ∆@ = Momentum.

Equations of conservation of mass and of heat energy.

Let us express in mathematical form the basic physical laws governing the motions of a medium.

First of all we express the law of conservation of the matter contained in any volume Ω which we mentally mark off in the space and keep fixed. For this purpose we must calculate the mass of the matter contained in this volume. The mass MΩ(t) is expressed by the integral:

MΩ(t) = ∫∫∫Ω ρ(x, y, z, t) dΩ

This mass will not, of course, be constant; in an oscillatory process the density at each point will be changing in view of the fact that the particles of matter in their oscillations will at one time enter this volume and at another leave it. The rate of change of the mass can be found by differentiation with respect to time and is given by the integral:

dMΩ/dt = ∫∫∫Ω (∂ρ/∂t) dΩ

This rate of change of the mass contained in the volume may also be calculated in another way. We may express the amount of matter which passes through the surface S, bounding our volume Ω, in each second of time, where the matter leaving Ω must be taken with a minus sign. To this end we consider an element ds of the surface S sufficiently small that it may be assumed to be plane and to have the same displacement for all its points. We will follow the displacement of points on this segment of the surface during the interval of time from t to t + dt. First of all we compute the vector: v = du/dt,

which represents the velocity of each particle. In the time dt the particles on ds move along the vector υ dt, and take up a position ds1, while the position ds will now be occupied by the particles which were formerly at the position ds2. So during this time the column of matter leaving the volume Ω will be that which was earlier contained between ds2 and ds1. The altitude of this small column is equal to υ dt cos (n, υ), where n denotes the exterior normal to the surface; the volume of the small column will thus be equal to: v cos (n, v) ds dt

and the mass equal to:  ρv cos (n, v) ds dt.

Adding together all these small pieces, we get for the amount of matter leaving the volume during the time dt the expression: ∫∫ ρv cos (n, v) ds dt.

At those points where the velocity is directed toward the interior of Ω the sign of the cosine will be negative, which means that in this integral the matter entering Ω is taken with a minus sign. The product of the velocity of motion of the medium with its density is called its flux. The flux vector of the mass is q = ρυ.

In order to find the rate of flow of matter out of the volume Ω it is sufficient to divide this expression by dt, so that for the rate of flow we have:

∫∫S ρ vn ds = ∫∫S qn ds

where vn = v cos(n, v) and qn = q cos(n, q). The normal component of the vector υ may be replaced by its expression in terms of the components of the vectors υ and n along the coordinate axes. From analytic geometry we know that:

vn = v cos(n, v) = vx cos(n, x) + vy cos(n, y) + vz cos(n, z)

Hence we can rewrite the expression for the rate of flow in the form:

∫∫S ρ [vx cos(n, x) + vy cos(n, y) + vz cos(n, z)] ds

From the law of conservation of matter, these two methods of computing the change in the amount of matter must give the same result, since all change in the mass included in Ω can occur only as a result of the entering or leaving of mass through the surface S.

Hence, equating the rate of change of the amount of matter contained in the volume with the rate of flow of matter into the volume, we get:

∫∫∫Ω (∂ρ/∂t) dΩ = − ∫∫S ρ vn ds

This integral relation, as we have said, is true for any volume Ω. It is called “the equation of continuity.”

The integral occurring on the right side of the last equation may be transformed into a volume integral by using Hamilton's formula:

∫∫S ρ vn ds = ∫∫∫Ω [∂(ρvx)/∂x + ∂(ρvy)/∂y + ∂(ρvz)/∂z] dΩ
Hence it follows that:

∫∫∫Ω [∂ρ/∂t + ∂(ρvx)/∂x + ∂(ρvy)/∂y + ∂(ρvz)/∂z] dΩ = 0

So we get the following result: the integral of the function:

∂ρ/∂t + ∂(ρvx)/∂x + ∂(ρvy)/∂y + ∂(ρvz)/∂z

over any volume Ω is equal to zero. But this is possible only if the function is identically zero. We thus obtain the equation of continuity in differential form:

∂ρ/∂t + ∂(ρvx)/∂x + ∂(ρvy)/∂y + ∂(ρvz)/∂z = 0     (1)

Equation (1) is a typical example of the formulation of a physical law in the language of partial differential equations.
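A small symbolic check of equation (1): for the illustrative field ρ = e^(x−t), v = (1, 0, 0) (chosen by hand, not from the text, so that mass is exactly conserved), the continuity equation is satisfied identically:

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
rho = sp.exp(x - t)                   # sample density
vx, vy, vz = 1, 0, 0                  # uniform flow along x
lhs = (sp.diff(rho, t) + sp.diff(rho * vx, x)
       + sp.diff(rho * vy, y) + sp.diff(rho * vz, z))
print(sp.simplify(lhs))               # 0: equation (1) holds for this field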

Heat Conduction.

Let us consider another such problem, namely the problem of heat conduction.

In any medium whose particles are in motion on account of heat, the heat flows from some points to others. This flow of heat will occur through every element of surface ds lying in the given medium. It can be shown that the process may be described numerically by a single vector quantity, the heat-conduction vector, which we denote by τ. Then the amount of heat flowing per second through an element of area ds will be expressed by τn ds, in the same way as qn ds earlier expressed the amount of material passing per second through the area ds. In place of the flux of liquid q = ρυ we have the heat flow vector τ.

In the same way as we obtained the equation of continuity, which for the motion of a liquid expresses the law of conservation of mass, we may obtain a new partial differential equation expressing the law of conservation of energy, as follows.

The volume density of heat energy Q at a given point may be expressed by the formula Q = CT, where C is the heat capacity and T is the temperature. Here it is easy to establish the equation:

∂(CT)/∂t + ∂τx/∂x + ∂τy/∂y + ∂τz/∂z = 0     (2)

The derivation of this equation is identical with the derivation of the equation of continuity, if we replace "density" by "density of heat energy" and flow of mass by flow of heat. Here we have assumed that the heat energy in the medium never increases. But if there is a source of heat present in the medium, equation (2) for the balance of heat energy must be modified. If q is the productivity density of the source, that is, the amount of heat energy produced per unit of volume in one second, then the equation of conservation of heat energy has the following more complicated form:

∂(CT)/∂t + ∂τx/∂x + ∂τy/∂y + ∂τz/∂z = q

Still another equation of the same type as the equation of continuity may be derived by differentiating equation (1) with respect to time. Let us do this for the equation of small oscillations of a gas near a position of equilibrium. We will assume that for such oscillations changes of the density are not great and the quantities ∂ρ/∂x, ∂ρ/∂y, ∂ρ/∂z, and ∂ρ/∂t are sufficiently small that their products with υx, υy, and υz may be ignored. Then:

∂ρ/∂t + ρ (∂υx/∂x + ∂υy/∂y + ∂υz/∂z) = 0

Differentiating this equation with respect to time and ignoring the products of ∂ρ/∂t with ∂υx/∂x, ∂υy/∂y, and ∂υz/∂z, we obtain:

∂²ρ/∂t² + ρ ∂/∂t (∂υx/∂x + ∂υy/∂y + ∂υz/∂z) = 0

Equation of motion.

An important example of the expression of a physical law by a differential equation occurs in the equations of equilibrium or of motion of a medium. Let the medium consist of material particles, moving with various velocities. As in the first example, we mentally mark off in space a volume Ω, bounded by the surface S and filled with particles of matter of the medium, and write Newton's second law for the particles in this volume. This law states that for every motion of the medium the rate of change of momentum, summed up for all particles in the volume, is equal to the sum of all the forces acting on the volume. The momentum, as is known from mechanics, is represented by the vector quantity:

K = ∫∫∫Ω ρ υ dΩ

The particles occupying a small volume dΩ with density ρ will, after time Δt, fill a new volume dΩ′ with density ρ′, although the mass will be unchanged: ρ′ dΩ′ = ρ dΩ.

If the velocity υ changes during this time to a new value υ′, i.e., by the amount Δυ = υ′ − υ, the corresponding change of momentum will be:

ρ dΩ Δυ

or in the unit of time:

ρ dΩ (dυ/dt)

Adding over all particles in the volume Ω, we find that the rate of change of momentum is equal to:

∫∫∫Ω ρ (dυ/dt) dΩ

or, in other words, componentwise:

∫∫∫Ω ρ (dυx/dt) dΩ,  ∫∫∫Ω ρ (dυy/dt) dΩ,  ∫∫∫Ω ρ (dυz/dt) dΩ

Here the derivatives dυx/dt, dυy/dt, and dυz/dt denote the rate of change of the components of υ not at a given point of the space but for a given particle. This is what is meant by the notation d/dt instead of ∂/∂t. As is well known, d/dt = ∂/∂t + υx(∂/∂x) + υy(∂/∂y) + υz(∂/∂z).

The forces acting on the volume may be of two kinds: volume forces acting on every particle of the body, and surface forces or stresses on the surface S bounding the volume. The former are long-range forces, while the latter are short-range.

To illustrate these remarks, let us assume that the medium under consideration is a fluid. The surface forces acting on an element of the surface ds will in this case have the value p ds, where p is the pressure on the fluid, and will be exerted in a direction opposite to that of the exterior normal.

If we denote the unit vector in the direction of the normal to the surface S by n, then the forces acting on the section ds will be equal to −pn ds. If we let F denote the vector of the external forces acting on a unit of volume, our equation takes the form:

∫∫∫Ω ρ (dυ/dt) dΩ = ∫∫∫Ω F dΩ − ∫∫S p n ds

This is the equation of motion in integral form. Like the equation of continuity, this equation may also be transformed into differential form. We obtain the system:

ρ dυx/dt = Fx − ∂p/∂x,  ρ dυy/dt = Fy − ∂p/∂y,  ρ dυz/dt = Fz − ∂p/∂z

This system is the differential form of Newton's second law.

2. Another characteristic example of the application of the laws of mechanics in differential form is the equation of a vibrating string. A string is a long, very slender body of elastic material that is flexible because of its extreme thinness, and is usually tightly stretched. If we imagine the string divided at any point x into two parts, then on each of the parts there is exerted a force equal to the tension in the direction of the tangent to the curve of the string.

Let us examine a short segment of the string. We will denote by u(x, t) the displacement of a point of the string from its position of equilibrium. We assume that the oscillation of the string occurs in one plane and consists of displacements perpendicular to the axis Ox, and we represent the displacement u(x, t) graphically at some instant of time (figure 2). We will investigate the behavior of the segment of the string between the points x1 and x2. At these points two forces act, each equal to the tension T, directed along the corresponding tangent to the curve u(x, t).

If the segment is curved, the resolvent of these two forces will not be equal to zero. This resolvent, from the laws of mechanics, must be equal to the rate of change of momentum of the segment.

Let the mass contained in each centimeter of length of the string be equal to ρ. Then the rate of change of momentum will be:

d/dt ∫[x1, x2] ρ (∂u/∂t) dx

If the angle between the tangent to the string and the axis Ox is denoted by ϕ, we will have:

T sin ϕ|x=x2 − T sin ϕ|x=x1 = d/dt ∫[x1, x2] ρ (∂u/∂t) dx

This is the usual equation expressing the second law of mechanics in integral form. It is easy to transform it into differential form. We have obviously:

T sin ϕ|x=x2 − T sin ϕ|x=x1 = ∫[x1, x2] ∂(T sin ϕ)/∂x dx

From well-known theorems of differential calculus, it is easy to relate T sin ϕ to the unknown function u. We get:

sin ϕ = tan ϕ / √(1 + tan²ϕ) = (∂u/∂x) / √(1 + (∂u/∂x)²)

and under the assumption that ∂u/∂x is small, we have sin ϕ ≈ ∂u/∂x. Then:

ρ ∂²u/∂t² = T ∂²u/∂x²

This last equation is the equation of the vibrating string in differential form. It is an essential equation of the Universe, despite or precisely because of its simplicity, as it ultimately represents an S=T symmetry in the realm of ∆-1 derivatives of space and time, parametrised on each side by the spatial force (tension) and the density of the active time magnitude.
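A hedged numerical sketch of that string equation, ρ ∂²u/∂t² = T ∂²u/∂x², stepped with the standard explicit finite-difference scheme (the grid, the ratio T/ρ and the time step are illustrative choices that respect the usual stability condition):

import numpy as np

n, T_over_rho, dx, dt = 101, 1.0, 0.01, 0.005   # CFL number = 0.5, stable
x = np.linspace(0.0, 1.0, n)
u = np.sin(np.pi * x)        # initial shape of the string
u_prev = u.copy()            # zero initial velocity (first-step approximation)
c2 = T_over_rho * dt**2 / dx**2
for _ in range(200):         # advance to t = 1, half a period for this mode
    u_next = np.empty_like(u)
    u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + c2*(u[2:] - 2*u[1:-1] + u[:-2])
    u_next[0] = u_next[-1] = 0.0    # fixed endpoints
    u_prev, u = u, u_next
print(round(u.min(), 3))     # ≈ -1: the initial shape has inverted, as the exact solution predicts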

Basic forms of the equations of motion in mathematical physics.

In that regard, whenever we find a fundamental equation of physics it will respond to a basic S≈T symmetry of the 5D² Universe… or to a breaking of symmetry that splits an ∑∏ system into two 'parting S vs. T' elements.

Specifically, as most physicists are only interested in $t, lineal time motion, the fundamental use of analysis has been in the study of equations of motion, which we shall review with the usual insights and Disomorphisms with GST.

The goal of most physical analysis is then to 'reduce' all the parameters to those which allow us to determine the motion of the physical system, which by dogma is reduced to that single time-dimension; and this is the nuts and bolts of most of mathematical physics.

Indeed, as mentioned previously, the various partial differential equations describing physical phenomena usually form a system of equations in several unknown variables. But in the great majority of cases it is possible to replace this system by one equation, as may easily be shown by very simple examples.

For instance, let us turn to the equations of motion considered in the preceding paragraph. It is required to solve these equations along with the equation of continuity. The actual methods of solution we will consider somewhat later.

We begin with the equation for steady flow of an idealized fluid.

All possible motions of a fluid can be divided into rotational and irrotational, the latter also being called potential. Although irrotational motions are only special cases of motion and, generally speaking, the motion of a liquid or a gas is always more or less rotational, nevertheless experience shows that in many cases the motion is irrotational to a high degree of exactness. Moreover, it may be shown from theoretical considerations that in a fluid with viscosity equal to zero a motion which is initially irrotational will remain so.

For a potential motion of a fluid, there exists a scalar function U(x, y, z, t), called the velocity potential, such that the velocity vector υ is expressed in terms of this function by the formulas:

Vx = ∂U/∂x, Vy= ∂U/∂y, Vz= ∂U/∂z

In all the cases we have studied up to now, we have had to deal with systems of four equations in four unknown functions or, in other words, with one scalar and one vector equation, containing one unknown scalar function and one unknown vector field. Usually these equations may be combined into one equation with one unknown function, but this equation will be of the second order. Let us do this, beginning with the simplest case.

For potential motion of an incompressible fluid, for which ∂ρ/∂t = 0, we have two systems of equations: the equation of continuity:image2531

and the equations of potential motion:  Vx = ∂U/∂x, Vy= ∂U/∂y, Vz= ∂U/∂z

Substituting in the first equation the values of the velocity as given in the second, we have:

∂²U/∂x² + ∂²U/∂y² + ∂²U/∂z² = 0

The vector field of “heat flow” can also be expressed, by means of differential equations, in terms of one scalar quantity, the temperature. It is well known that heat “flows” in the direction from a hot body to a cold one. Thus the vector of the flow of heat lies in the direction opposite to that of the so-called temperature-gradient vector. It is also natural to assume, as is justified by experience, that to a first approximation the length of this vector is directly proportional to the temperature gradient.

The components of the temperature gradient are: ∂T/∂x, ∂T/∂y, ∂T/∂z.

Taking the coefficient of proportionality to be k, we get three equations:

τx = −k ∂T/∂x, τy = −k ∂T/∂y, τz = −k ∂T/∂z.

These are to be solved, together with the equation for the conservation of heat energy:

cρ (∂T/∂t) = −(∂τx/∂x + ∂τy/∂y + ∂τz/∂z)

Replacing τx, τy, and τz by their values in terms of T, we get:

cρ (∂T/∂t) = k (∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²)

Finally, for small vibrations in a gaseous medium, for example the vibrations of sound, the equation of continuity and the equations of dynamics (5) give, assuming the absence of external forces (Fx = Fy = Fz = 0), the wave equation for the pressure:

∂²p/∂t² = a² (∂²p/∂x² + ∂²p/∂y² + ∂²p/∂z²)

(to obtain this equation it is sufficient to substitute the expression for the accelerations in the equation of continuity and to eliminate the density ρ by using the Boyle-Mariotte law: p = a²ρ).

THE SPATIAL VIEWS.

Now, as usual, we apply the different dualities and Rashomon effects to 'reclassify' the main equations of motion in physics under the single umbrella of T.œ. So, after the full temporal analysis of motions in the 3 ∆±1 scales (omitting so far the slightly different analysis of electromagnetic 'flows' of charge densities, as the whole world of electromagnetism drags a quite inconvenient complex formalism, which deeply obscures its meaning – as we can observe in our Unification equation of masses and charges, obtained by merely applying the Newtonian formalism of vortices to charges, and the metric of 5D accelerated time in smaller scales, to obtain a single formula for both Q and G), next comes a spatial view. This can be done with 2 degrees of spatialisation, the simplest and most efficient being the Lagrangian approach: the study of the spatial paths of particles through the Newtonian 2nd and most important law of all physics, F = ma, in differential form, and through the just explained concept of a potential force, source of its motion…

The Principle of Least Action and Lagrangian Mechanics
Let us take as our prototype of the Newtonian scheme a point particle of mass m moving along the x axis under a potential V(x). According to Newton’s Second Law,

m (d²x/dt²) = −dV/dx     (2.1.1)
If we are given the initial state variables, the position x(ti) and velocity ẋ(ti), we can calculate the classical trajectory xcl(t) as follows. Using the initial velocity and acceleration [obtained from Eq. (2.1.1)] we compute the position and velocity at a time ti + ∆t. For example,

x(ti + ∆t) ≅ x(ti) + ẋ(ti) ∆t,   ẋ(ti + ∆t) ≅ ẋ(ti) + ẍ(ti) ∆t
Having updated the state variables to the time ti + ∆t, we can repeat the process again to inch forward to ti + 2∆t and so on.
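A tiny sketch of this inching-forward scheme (Python; the harmonic potential V(x) = ½kx² and all the values are assumptions chosen for illustration):

def step(x, v, dt, m=1.0, k=1.0):
    # local Newtonian update: F = -dV/dx with V = 0.5*k*x**2
    a = -k * x / m
    return x + v * dt, v + a * dt

x, v, dt = 1.0, 0.0, 0.001
for _ in range(1000):
    x, v = step(x, v, dt)
print(x, v)   # the state variables, inched forward to t = 1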

Figure 2.1. The Lagrangian formalism asks what distinguishes the actual path xcl(t) taken by the particle from all possible paths connecting the end points (xi, ti) and (xf, tf).

The equation of motion being second order in time, two pieces of data, x(ti) and ẋ(ti), are needed to specify a unique xcl(t). An equivalent way to do the same, and one that we will have occasion to employ, is to specify two space-time points (xi, ti) and (xf, tf) on the trajectory.

The above scheme readily generalizes to more than one particle and more than one dimension. If we use n Cartesian coordinates (x1, x2,. . . , xn) to specify the positions of the particles, the spatial configuration of the system may be visualized as a point in an n-dimensional configuration space. (The term “configuration space” is used even if the n coordinates are not Cartesian.) The motion of the representative point is given by:

mj ẍj = −∂V/∂xj,   j = 1, 2, …, n     (2.1.2)
where mj stands for the mass of the particle whose coordinate is xj. These equations can be integrated step by step, just as before, to determine the trajectory.
In the Lagrangian formalism, the problem of a single particle in a potential V(x) is posed in a different way: given that the particle is at xi, and xf at times ti and tf, respectively, what is it that distinguishes the actual trajectory xcl(t) from all other trajectories or paths that connect these points? (See Fig. 2.1.)
The Lagrangian approach is thus global, in that it tries to determine at one stroke the entire trajectory xcl(t), in contrast to the local approach of the Newtonian scheme, which concerns itself with what the particle is going to do in the next infinitesimal time interval.

So we have an easy generator equation for both:

∑∆-1: Newtonian finitesimal steps = ∆º Lagrangian whole path

And as the ‘step’ or lineal, open quanta of a space-time motion IS the momentum, the Lagrangian must be a function of the whole world cycle, which is expressed as ‘energy’.

So we can according to the ternary method break down the calculation of a Lagrangian in three parts:
(1)  Define a function ℒ, called the Lagrangian, given by ℒ = T − V, T and V being the kinetic and potential energies of the particle. Thus ℒ = ℒ(x, ẋ, t). The explicit t dependence may arise if the particle is in an external time-dependent field. We will, however, assume the absence of this t dependence. Since energy is a conservative, closed, zero-sum world cycle, we might rather say the 'cyclical time component is factored in'.
(2)  But then comes the second 'vital element' of ∆st factored in: the 'least time path', that is, the attempt of the particle to conserve its energy ad maximal. So for each path x(t) connecting (xi, ti) and (xf, tf), calculate the action S[x(t)] of the particle, whose first-order variation will tend to zero, defined by:

S[x(t)] = ∫ti→tf ℒ(x, ẋ) dt
Figure 2.2. If xcl(t) minimizes S, then δS(1) = 0 if we go to any nearby path xcl(t) + η(t).

We use square brackets to enclose the argument of S to remind us that the function S depends on an entire path or function x(t), and not just the value of x at some time t. One calls S a functional to signify that it is a function of a function.
(3)  The search for zeroness and balance becomes then the classical path xcl(t), the one on which S is a minimum – as the sketch below illustrates numerically.
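A minimal numerical sketch of the three steps (Python; a free particle, V = 0, and the sinusoidal deformation η(t) are illustrative assumptions): the straight line, which is the classical path here, has the least action, and every nearby path xcl + εη costs more:

import numpy as np

m = 1.0
ti, tf, xi, xf = 0.0, 1.0, 0.0, 1.0
t = np.linspace(ti, tf, 2001)

def action(x):
    xdot = np.gradient(x, t)
    # S = integral of L = T - V, with V = 0, on the discrete grid
    return np.sum(0.5 * m * xdot ** 2) * (t[1] - t[0])

x_cl = xi + (xf - xi) * (t - ti) / (tf - ti)  # straight line: the classical path
eta = np.sin(np.pi * (t - ti) / (tf - ti))    # vanishes at both end points
for eps in (0.0, 0.1, 0.2):
    print(eps, action(x_cl + eps * eta))      # S grows as we leave the classical path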

So this is really all about the Lagrangian, and as it is INDEPENDENT, BY THE RASHOMON EFFECT, of Newton's ∆-1 parts approach, it is worth seeing that both are 'connected', organically. That is, we can deduce in reverse, from the Lagrangian, the Newtonian approach, which means we can go step by step forwards from ∆-1 to ∆ø and backwards from ∆ø to ∆-1, as local time is always reversible; past to future and future to past become then the product that gives us the present:

Past (finitesimal: part) x future (whole: integral) = present action.

So we will now verify that this principle reproduces Newton’s Second Law.
The first step is to realize that a functional S[x(t)] is just a function of n variables as n ➝ ∞. In other words, the function x(t) simply specifies an infinite number of values x(ti), …, x(t), …, x(tf), one for each instant t in the interval ti ≤ t ≤ tf, and S is a function of these variables. To find its minimum we simply generalize the procedure for the finite n case. Let us recall that if f = f(x1,…, xn) = f(x), the minimum x0 is characterized by the fact that if we move away from it by a small amount η in any direction, the first-order change δf(1) in f vanishes. That is, if we make a Taylor expansion:

f(x0 + η) = f(x0) + ∑i ηi (∂f/∂xi)|x0 + higher-order terms,   so that   δf(1) = ∑i ηi (∂f/∂xi)|x0 = 0
From this condition we can deduce an equivalent and perhaps more familiar expression of the minimum condition: every first-order partial derivative vanishes at x0. To prove this, for say, ∂f/∂xi, we simply choose η to be along the ith direction. Thus

(∂f/∂xi)|x0 = 0,   i = 1, 2, …, n     (2.1.6)
Let us now mimic this procedure for the action S. Let xcl(t) be the path of least action and xcl(t) + η(t) a “nearby” path (see Fig. 2.2). The requirement that all paths coincide at ti and tf means

η(ti) = η(tf) = 0     (2.1.7)
Now

S[xcl + η] = ∫ti→tf ℒ(xcl + η, ẋcl + η̇) dt = S[xcl] + ∫ti→tf [η (∂ℒ/∂x) + η̇ (∂ℒ/∂ẋ)]xcl dt + higher-order terms
We set δS(1) = 0 in analogy with the finite variable case:

δS(1) = ∫ti→tf [η (∂ℒ/∂x) + η̇ (∂ℒ/∂ẋ)]xcl dt = 0
If we integrate the second term by parts, it turns into:

∫ti→tf η̇ (∂ℒ/∂ẋ) dt = [η (∂ℒ/∂ẋ)]ti→tf − ∫ti→tf η (d/dt)(∂ℒ/∂ẋ) dt
The first of these terms vanishes due to Eq. (2.1.7). So that:

δS(1) = ∫ti→tf η(t) [(∂ℒ/∂x) − (d/dt)(∂ℒ/∂ẋ)]xcl dt = 0
Note that the condition δS(1) = 0 implies that S is extremized and not necessarily minimized. We shall, however, continue the tradition of referring to this extremum as the minimum. This equation is the analog of Eq. (2.1.5): the discrete variable ηi is replaced by η(t); the sum over i is replaced by an integral over t, and ∂f/∂xi is replaced by:

(∂ℒ/∂x) − (d/dt)(∂ℒ/∂ẋ)
There are two terms here playing the role of ∂f/∂xi since ℒ (or equivalently S) has both explicit and implicit (through the ẋ terms) dependence on x(t). Since η(t) is arbitrary, we may extract the analog of Eq. (2.1.6):

(d/dt)(∂ℒ/∂ẋ) − (∂ℒ/∂x) = 0     (2.1.9)
To deduce this result for some specific time t0, we simply choose an η(t) that vanishes everywhere except in an infinitesimal region around t0.
Equation (2.1.9) is the celebrated Euler–Lagrange equation. If we feed into it ℒ = T − V, T = ½mẋ², V = V(x), we get:

∂ℒ/∂ẋ = mẋ,  so that  (d/dt)(∂ℒ/∂ẋ) = mẍ
and:

∂ℒ/∂x = −dV/dx
so that the Euler–Lagrange equation becomes just:

mẍ = −dV/dx
which is just Newton’s Second Law, Eq. (2.1.1).
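The whole derivation can be checked symbolically. A minimal sketch (Python, assuming the sympy library is available; all names are illustrative):

import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
x = sp.Function('x')
V = sp.Function('V')

L = sp.Rational(1, 2) * m * sp.diff(x(t), t) ** 2 - V(x(t))   # L = T - V
xdot = sp.diff(x(t), t)
# Euler-Lagrange: d/dt(dL/dxdot) - dL/dx
euler_lagrange = sp.diff(L, xdot).diff(t) - sp.diff(L, x(t))
print(sp.simplify(euler_lagrange))   # m*x''(t) + V'(x): setting it to zero is Newton's Second Law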

So in this first huge compression of the whys of Newton's laws into a higher level of spatial, synchronous understanding (remarkable, since Newton's laws were themselves the synopsis of ALL the motions perceived in reality at the time), we come to a single principle, the least time action, to explain the whys of all motions of reality: the Universe – whether you accept sentient capacity in particles or merely an automaton selection process of its varieties – tries to survive the LONGEST possible TIME by minimising the expenditure of it in each action of reality.

And this fact APPLIES not only to the 2D motions studied by physicists but to all the actions (5Disomorphisms) of all the species of reality including man.

We are all time-space beings guided by the mandate of acting as much as we can, 'to exi=st'; the acronym verb that resumes those actions is guiding all beings, equally.

Later we shall bring the other expansion, or variation on the same theme, when studying the calculus of variations – the Hamiltonian. Notice though that those laws ARE UNIVERSAL, also applicable to quantum physics: the passage from the Lagrangian formulation to quantum mechanics was carried out by Feynman in his path integral formalism.

A more common route to quantum mechanics, which we will follow for the most part, has as its starting point the Hamiltonian formulation, and it was discovered mainly by Schrödinger, Heisenberg, Dirac, and Born.
It should be emphasized, and it will soon become apparent, that all three formulations of mechanics are essentially the same theory, in that their domains of validity and predictions are identical.

So they are just 3 different Rashomon effects, points of view, and as such they are 'integrated' as kaleidoscopic views of the same principle of LEAST TIME, the essential existential leitmotiv of all forms.

To notice also that we have introduced a configuration space with a relative (in)finite number of coordinates, ancestor of the Hilbert spaces of quantum physics, which – contrary to popular bull$hit on the queerness of quantum – is NOT a REAL model of the Universe (which has our 5D point), but merely the use of coordinates for every particle, as if they were – and they are – local, fractal worlds of their own, NOT parallel Universes; an essential discerning interpretation sorely needed in physics, messed up conceptually by the use of a single space-time continuum.

As the electron is NOT a probability but a population of fractal, dense photons in herd state, the infinite dimensions are just the sum of local finite space-time worlds, one for each particle; or, in the case of quantum 'states', the 'whole spatial all-encompassing view' of a wave spread simultaneously over a huge region of space-time, due to its non-local quantum potential of huge speed, resumed through the infinite basis of a Hilbert space into a single 'mapping-mirror-spatial view'.

Only a coarse, naive realism has made such errors of interpretation all-pervading, as in the ridiculous 'parallel Universes' formulation so liked by Hollywood gurus of sci-fi movies, with no place whatsoever in ∆st.

In that regard, if the reader wonders why one bothers to even deal with a Lagrangian when all it does is yield Newtonian force laws in the end, I present a few of its main ∆st attractions besides its closeness to quantum mechanics:

(1D) @-singularity view. In the Lagrangian scheme one has merely to construct a single scalar ℒ, and all the equations of motion follow by simple differentiation. We are thus in one of the TERNARY VIEWS of the same process.

(2D) $: This must be contrasted with the Newtonian scheme, which deals with SPATIAL vectors and is thus more complicated; as we deal then with the motion view of the communicative membrane (in lineal form), in this case, the action-reaction elements of the TOE.

(3D) ∑∏: So finally the Hamiltonian will be the third ‘present view’ of the motion system in a single plane of reality, as it is the formulation in terms of the vital energy of the system.

–  The Euler-Lagrange equations (2.1.10) have the same form if we use, instead of the n Cartesian coordinates x1,…, xn, any general set of n independent coordinates q1, q2,…, qn. So we are fractalizing space-time, NOT getting an infinite-dimensional single plane, as we just explained.

To remind us of this fact we will rewrite Eq. (2.1.10) as it is most usually used:

(d/dt)(∂ℒ/∂q̇i) − (∂ℒ/∂qi) = 0     (2.1.11)
– But the key point is that the Lagrangian makes us realise that ALL OF PHYSICS of motion actually reduces to the principle of least action, which is seen to generate ALL the correct dynamics of the Universe. So we get the WHY, and we can forget all about Newton's laws and use Eq. (2.1.11) as the equations of motion.

What is being emphasized is that these equations, which express the condition for least action, are form invariant under an arbitrary change of coordinates. This form invariance must be contrasted with the Newtonian Equation (2.1.2), which presumes that the xi are Cartesian. If one trades the xi for another non-Cartesian set of qi, Eq. (2.1.2) will have a different form (see Example 2.1.1 at the end of this section).
Equation (2.1.11) can be made to resemble Newton’s Second Law if one defines a quantity

pi = ∂ℒ/∂q̇i     (2.1.12)
called the canonical momentum conjugate to qi and the quantity

Fi = ∂ℒ/∂qi     (2.1.13)
called the generalized force conjugate to qi. Although the rate of change of the canonical momentum equals the generalized force, one must remember that neither is pi always a linear momentum (mass times velocity or “mυ” momentum), nor is Fi always a force (with dimensions of mass times acceleration). For example, if qi is an angle θ, pi will be an angular momentum and Fi a torque.

– FINALLY, another essential theme for ∆st is the fact that the 3 canonical Conservation laws that correspond to the 3 parts of the system (angular momentum/membrane, lineal momentum/singularity path and vital energy) are easily obtained in this formalism. Suppose the Lagrangian depends on a certain velocity q̇i but not on the corresponding coordinate qi. The latter is then called a cyclic coordinate. It follows that the corresponding pi is conserved:

dpi/dt = ∂ℒ/∂qi = 0,  so that  pi = constant     (2.1.14)
Although Newton’s Second Law, Eq. (2.1.2), also tells us that if a Cartesian coordinate xi is cyclic, the corresponding momentum miẋi is conserved, Eq. (2.1.14) is more general. Consider, for example, a potential V(x, y) in two dimensions that depends only upon ρ = (x² + y²)^½, and not on the polar angle ϕ, so that V(ρ, ϕ) = V(ρ). It follows that ϕ is a cyclic coordinate, as T depends only on ρ, ρ̇ and ϕ̇, not on ϕ (see Example 2.1.1 below).

Consequently pϕ = ∂ℒ/∂ϕ̇ is conserved. In contrast, no obvious conservation law arises from the Cartesian Eqs. (2.1.2), since neither x nor y is cyclic. If one rewrites Newton's laws in polar coordinates to exploit ∂V/∂ϕ = 0, the corresponding equations get complicated due to centrifugal and Coriolis terms. It is the Lagrangian formalism that allows us to choose coordinates that best reflect the symmetry of the potential, without altering the simple form of the equations.
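A sketch of this cyclic-coordinate conservation in the polar example (Python, assuming sympy; the names are illustrative):

import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
rho, phi = sp.Function('rho'), sp.Function('phi')
V = sp.Function('V')

# Kinetic energy in polar coordinates; V depends on rho only, so phi is cyclic
L = sp.Rational(1, 2) * m * (sp.diff(rho(t), t) ** 2
                             + rho(t) ** 2 * sp.diff(phi(t), t) ** 2) - V(rho(t))
p_phi = sp.diff(L, sp.diff(phi(t), t))
print(p_phi)               # m*rho**2*phi': the conserved angular momentum
print(sp.diff(L, phi(t)))  # 0: phi does not appear in L, so dp_phi/dt = 0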

Is the particle intelligent? 

Now, we have shown in some detail the path that brings us from the Lagrangian principle of least time – the key concept that connects motion laws with ∆st theory – back to Newton's laws, and we shall use this detailed analysis to illustrate 'in the future' the process of transformation of Newton's laws into Lagrangian laws, as a natural process of S=T steps, from the subjective, internal WILL of the particle to the objective, external potential view of the Newtonian outer world that guides the particle.

The question here is obvious: How Smart Is a Particle?
The Lagrangian formalism seems to ascribe to a particle a tremendous amount of foresight: a particle at (xi, ti) destined for (xf, tf) manages to calculate ahead of time the action for every possible path linking these points, and takes the one with the least action. And indeed, from its point of view, this is what the particle does, as any human going from A to B in a rugged terrain will have the foresight to go through the easiest path with less slope, calculating in fact a 'brachistochrone' – not the shortest path, which can be difficult, but the easiest path, which will be a bit longer in space but shorter in space-time.

But if we were egocentered 'huminds' observing this traveller, we would notice that his 'WILL' is an illusion. The man need not know his entire trajectory ahead of time; he needs only to obey at each step the 'potential equation' of his near neighbourhood and go step by step through the easiest path, which for most trajectories will guide him down the valley. So we would say the man is an automaton particle, because his guidance is the external world's point of view. The fact of course is that both points of view explain the man's path. But if the path is travelled by a man we will say it is 'his WILL and foresight', while, as anthropomorphic, self-centred T.œs, if the path is travelled by a particle we will say it is the external potential field which guides it automatically.

So a physicist will affirm in his jargon that the particle merely 'obeys the Euler–Lagrange equations at each instant in time to minimize the action'. This in turn means just following Newton's law, which is to say, the particle has to sample the potential in its immediate vicinity and accelerate in the direction of greatest change.

Of course both things are true for BOTH the human and the particle, but if we were a particle, I bet you we would say that the HUGE thing, like a star, is NOT intelligent – it is too big to think. And if we are a man we will say that the small thing, like an atom, is NOT intelligent – it is too small to think. It is part of absolute relativity that all scales are relative, as information grows with smallness and energy with bigness, and both have the same value; but it is part of the ego-paradox of self-centred informative @-minds to think they are the only thinking beings. The particle though is a mathematical/geometric mind, and it likely does perceive the 'end of the path', because the path becomes deterministic ONLY when we know its end-point; so the particle, probably through the quantum potential field (Bohm), which is non-local (read: faster than c-speed, probably information mediated by neutrinos/gravitons), does SEE the point where it goes, normally one with a higher energy potential for its 'feeding', as the puma knows how best to get down the mountain to hunt its prey.

The Hamiltonian.

We won't bother the reader though with the derivation of the Hamiltonian from the Lagrangian through the Legendre transformation… big names for a well-known undergraduate process; what we wanted to reveal in the previous graph was the least time action.

We also mentioned then the symmetry of the 3 'points of view' on motion:

  1. 2D: the POTENTIAL VIEW (∆-1) provided by Newton->Poisson>Einstein.
  2. 3D: the vital energy view (the Hamiltonian)
  3. 1D: The SINGULARITY VIEW (the Lagrangian).

So again 3 ternary views form a T.œ.

If we were to express in those terms the whole 3 laws of Newton, they are also obvious in their Disomorphism with ∆st dimensions and Non-Æ postulates of geometry.

First law of lineal inertia: 2D motion for a single active magnitude. 1st and 2nd postulate of a fractal point starting its wave-motion.

2nd law: F=ma, point of view of the Force, normally an ∆-1 potential caused by another particle, which expands the 2D lineal inertia to a 3D network of active magnitudes interacting with each other; hence related to the 3rd postulate that conforms a topological network system, a full T.œ.

3rd law, complementary to the previous one, which defines the action-reaction, parallel and perpendicular relationships between fractal points, in search of their 'network balance'.

In that regard, the Lagrangian is related closely to the 1st and 2nd Newtonian laws – to the 1st and 2nd non-Æ postulates of fractal points moving in wave-paths.

While the Hamiltonian is all about the vital space energy of the system, and its multiple interactions with the other elements of the T.œ and with the parts of itself – hence related to the 2nd and 3rd Newtonian laws, and to the 3rd and 4th non-Æ postulates.

What is more important then? OBVIOUSLY THE HAMILTONIAN, THE 3D ∑∏ VIEW OF THE VITAL ENERGY between the singularity and the membrane, as WE CAN obtain (1D+2D=3D) almost all the information of the membrane-enclosure and the singularity from the vital energy sandwiched between both, which on top is the 'visible part of the being', and so most often the only part humans perceive.

So what matters to us about the Hamiltonian is its nature as the best description of the vital energy, hence of the 3 parts of the being, in itself – not from an outer point of view (1st Newtonian ≈ potential) or an inner pov (singularity, Lagrangian, least-time will-action).

In the general model the FUNDAMENTAL gender duality of reality is between the past-potential > future-singularity (male particle state) vs. the wave-energy (female wave state), which indeed will be properly used to define gender; in physics this is the duality of the Lagrangian singularity-particle guided by the potential past-field vs. the wave-energy Hamiltonian (which will therefore become also the standard configuration for quantum physics and the Schrödinger wave).

So the Hamiltonian IS one of the king equations of reality. Let us then have a first look at it, as we shall return to it in the analysis of variations and, of course, in our posts on astrophysics and later ones.

What interests us right now IS its reflection of a FUNDAMENTAL FACT of analysis – its capacity to SHOW, by derivation, the minimal QUANTA OF TIME and/or the MINIMAL QUANTA OF SPACE of any system of Nature (whenever the mathematical mirror of analysis is meaningful; obviously in a human we just know biologically that the quanta of space is the cell and the quanta of time the second glimpse of the eye, synced to the step of the limbs and the beat of the heart – a sync between the S<st>t components that De Broglie found for physical systems in his pilot-wave theory – as all systems have a sync time based in their minimal quanta, between body, limbs and heads).

So the canonical equations of the Hamiltonian ARE EXPRESSING JUST THAT IN ITS standard definition:

ẋ = ∂H/∂p,   ṗ = −∂H/∂x
THE FIRST EQUATION derives the Hamiltonian energy with respect to its parameter of lineal $pace, its steps of momentum. Indeed, energy is the WHOLE world cycle, conserved when the path is closed, but momentum is a step of that whole, a lineal step (Galilean duality); and so we obtain the 'quanta' of energy in space, SPEED-distance: ẋ = ∂H/∂p = v.

While the second equation derives the Hamiltonian energy with respect to its parameter of cyclical time, its fixed formal position, anchored by its singularity. And so we obtain the 'quanta' of energy in time, ∂(mv)/∂v = m, the singularity mass.
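A one-line check of both 'quanta' (Python, assuming sympy; the free Hamiltonian H = p²/2m is the standard textbook case, used here as an illustrative assumption):

import sympy as sp

p, m, v = sp.symbols('p m v', positive=True)
H = p ** 2 / (2 * m)          # vital-energy view of a free system
print(sp.diff(H, p))          # p/m = v: the quanta of energy in space (speed)
print(sp.diff(m * v, v))      # m: the quanta obtained from the momentum step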

It has to be noticed though that ∂p changes when we reach the limit of speed of the light space-time plane, c: then the variable is no longer speed but mass, and so we obtain SPEED, not mass, as the quanta of time when deriving the Hamiltonian through its singularity position – which ultimately has a deep meaning for relativity physics, which at the LIMIT OF C-SPEED TRANSMUTES space into time.

Indeed, the counterpart happens in black holes, the other limit of our Universe, where time comes to a zero-halt, transmuted into space… themes that belong to the 4th line of 'relativity revis(it)ed'.

So we shall leave it here with the corollary: Analysis reveals by derivation the quanta of space and the quanta of time of all species susceptible of being ∂operated.

Functions of Several Variables. Geometrical view.

Up to now we have spoken only of functions of one variable, but in practice it is often necessary to deal also with functions depending on two, three, or in general many variables. For example, the area of a rectangle is a function S=xy of its base x and its height y. The volume of a rectangular parallelepiped is a function V=xyz of its three dimensions. The distance between two points A and B is a function:

d = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²)
of the six coordinates of these points. The well-known formula:  pv = nRT expresses the dependence of the volume v of a definite amount of gas on the pressure p and absolute temperature T.
Functions of several variables, like functions of one variable, are in many cases defined only on a certain region of values of the variables themselves. For example, the function

U = ln (1 − x² − y² − z²)  is defined only for values of x, y and z that satisfy the condition x² + y² + z² < 1.

(For other x, y, z its values are not real numbers.) The set of points of space whose coordinates satisfy the inequality (35) obviously fills up a sphere of unit radius with its center at the origin of coordinates. The points on the boundary are not included in this sphere; the surface of the sphere has been so to speak “peeled off.” Such a sphere is said to be open. The function (34) is defined only for such sets of three numbers (x, y, z) as are coordinates of points in the open sphere G. It is customary to state this fact concisely by saying that the function (34) is defined on the sphere G.
Let us give another example. The temperature of a nonuniformly heated body V is a function of the coordinates x, y, z of the points of the body. This function is not defined for all sets of three numbers x, y, z but only for such sets as are coordinates of points of the body V.
Finally, as a third example, let us consider a function u defined in terms of a function ϕ of one variable given on the interval [0, 1]. Obviously the function u is defined only for sets of three numbers (x, y, z) which are coordinates of points in the cube: 0≤x≤1, 0≤y≤1, 0≤z≤1.
We now give a formal definition of a function of three variables. Suppose that we are given a set E of triples of numbers (x, y, z) (points of space). If to each of these triples of numbers (points) of E there corresponds a definite number u in accordance with some law, then u is said to be a function of x, y, z (of the point), defined on the set of triples of numbers (on the points) E, a fact which is written thus: u= F(x,y,z)

In place of F we may also write other letters: f, ϕ, ψ.
In practice the set E will usually be a set of points, filling out some geometrical body or surface: sphere, cube, annulus, and so forth, and then we simply say that the function is defined on this body or surface. Functions of two, four, and so forth, variables are defined analogously.

Implicit definition of a function.

Let us note that functions of two variables are a useful means for the definition of functions of one variable. Given a function F(s, t) of two variables, let us set up the equation: F(s, t) = 0

In general, this equation will define a certain set of points (s,t) of the surface on which our function is equal to zero. Such sets of points usually represent curves that may be considered as the graphs of one or several one-valued functions y = ϕ(s) or s = ψ(t) of one variable. In such a case these one-valued functions are said to be defined implicitly by the equation (36). For example, the equation:

s² + t² − r² = 0   gives an implicit definition of two functions of one variable:

s = +√(r² − t²) and s = −√(r² − t²)

But it is necessary to keep in mind that an equation of the form (36) may fail to define any function at all. For example, the equation: t²+s²+1=0  obviously does not define any real function, since no pair of real numbers satisfies it.
Geometric representation. Functions of two variables may always be visualized as surfaces by means of a system of space coordinates. Thus the function z = ƒ(s, t) is represented in a three-dimensional rectangular coordinate system by a surface, which is the geometric locus of the points M whose coordinates s, t, z satisfy the equation (37): z = ƒ(s, t).
There is another, extremely useful method, of representing the function (37), which has found wide application in practice. Let us choose a sequence of numbers z1, z2, ···, and then draw on one and the same plane Ost the curves:   z1=ƒ(s,t); z2=ƒ(s,t)

which are the so-called level lines of the function f(s, t). From a set of level lines, if they correspond to values of z that are sufficiently close to one another, it is possible to form a very good image of the variation of the function f(s,t), just as from the level lines of a topographical map one may judge the variation in altitude of the locality.

The figure shows a map of the level lines of the function z = s² + t², the diagram at the right indicating how the function is built up from its level lines. In Chapter III, figure 50, a similar map is drawn for the level lines of the function z = st.
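A sketch that draws exactly this map of level lines (Python, assuming numpy and matplotlib are available; the grid and the chosen levels are illustrative):

import numpy as np
import matplotlib.pyplot as plt

s, t = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
z = s ** 2 + t ** 2
plt.contour(s, t, z, levels=[0.5, 1, 2, 3, 4])   # level lines z_k = f(s, t)
plt.gca().set_aspect('equal')                    # the lines come out as concentric circles
plt.show()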

Partial derivatives and differential.

Let us make some remarks about the differentiation of functions of several variables. As an example we take an arbitrary function of two variables: z = ƒ(x, y)

If we fix the value of y, that is, if we consider it as not varying, then our function of two variables becomes a function of the one variable x. The derivative of this function with respect to x, if it exists, is called the partial derivative with respect to x and is denoted thus: ∂z/∂x or ∂ƒ/∂x or ƒ′x(x, y)

The last of these three notations indicates clearly that the partial derivative with respect to x is in general a function of x and y. The partial derivative with respect to y is defined similarly.
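Numerically the definition is just 'freeze y, differentiate in x'. A minimal sketch (Python; the sample function is an arbitrary assumption):

def partial_x(f, x, y, h=1e-6):
    # central-difference estimate of df/dx with y held fixed
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

f = lambda x, y: x ** 2 * y + y ** 3
print(partial_x(f, 1.0, 2.0))                      # ~ 2*x*y = 4
print(partial_x(lambda y, x: f(x, y), 2.0, 1.0))   # ~ df/dy = x**2 + 3*y**2 = 13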

The general case for space change through any volume.

When we generalise the case to any combination of space or time dimensions, the same method can be used to obtain the ginormous quantity of possible changes in multiple Dimensional analysis.

Thus, in order to determine the function that represents a given physical process, we try first of all to set up an equation that connects this function in some definite way with its derivatives of change of various orders and dimensions.

The method of obtaining such an equation, which is called a differential equation, often amounts to replacing increments of the desired functions by their corresponding differentials.
As an example let us solve a classic problem of change in 3 pure dimensions of Euclidean space, which by convention we shall call Sxyz.

In a rectangular system of coordinates Oxyz, we consider the surface obtained by rotation of the parabola whose equation (in the Oyz plane) is z = y². This surface is called a paraboloid of revolution. Let v denote the volume of the body bounded by the paraboloid and the plane parallel to the Oxy plane at a distance z from it. It is evident that v is a function of z (z > 0).

To determine the function v, we attempt to find its differential dv. The increment Δv of the function v at the point z is equal to the volume bounded by the paraboloid and by two planes parallel to the Oxy plane at distances z and z + Δz from it.
It is easy to see that the magnitude of Δv is greater than the volume of the circular cylinder of radius √z and height Δz but less than that of the circular cylinder with radius √(z + ∆z) and height Δz. Thus:

πz ∆z < ∆v ≤ π (z + ∆z) ∆z.    And so:

∆v = π (z + θ∆z) ∆z = πz ∆z + πθ (∆z)²,

where θ is some number depending on Δz and satisfying the inequality 0 < θ < 1.
So we have succeeded in representing the increment Δv in the form of a sum, the first summand of which is proportional to Δz, while the second is an infinitesimal of higher order than Δz (as Δz → 0). It follows that the first summand is the differential of the function v:

dv=πz ∆z    or dv=πz dz

since Δz = dz for the independent variable z.  The equation so obtained relates the differentials dv and dz (of the variables v and z) to each other and thus is called a differential equation.  If we take into account that:

dv/dz =v’     where v′ is the derivative of v with respect to the variable z, our differential equation may also be written in the form: v’=π z

To solve this very simple differential equation we must find a function of z whose derivative is equal to πz.

A solution of our equation is given by v = πz²/2 + C, where for C we may choose an arbitrary number. In our case the volume of the body is obviously zero for z = 0 (see figure 22), so that C = 0. Thus our function is given by v = πz²/2.
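A quick numerical check of this solution (Python; the grid size is an illustrative choice): integrating v′ = πz from v(0) = 0 reproduces πz²/2:

import numpy as np

z = np.linspace(0.0, 2.0, 20001)
v = np.cumsum(np.pi * z) * (z[1] - z[0])   # crude Riemann sum with v(0) = 0, i.e. C = 0
print(v[-1], np.pi * z[-1] ** 2 / 2)       # both ~ 6.283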

Geometrically the function f(x, y) represents a surface in a rectangular three-dimensional system of coordinates. The corresponding function of x for fixed y represents a plane curve (figure) obtained from the intersection of the surface with a plane parallel to the plane Oxz and at a distance y from it. The partial derivative ∂z/∂x is obviously equal to the trigonometric tangent of the angle between the tangent to the curve at the point (x, y) and the positive direction of the x-axis.

More generally, if we consider a function z = f(x1, x2, …, xn) of the n variables x1, x2, …, xn, the partial derivative ∂z/∂xi is defined as the derivative of this function with respect to xi, calculated for fixed values of the other variables.

We may say that the partial derivative of a function with respect to the variable xi is the rate of change of this function in the direction of the change in xi. It would also be possible to define a derivative in an arbitrary assigned direction, not necessarily coinciding with any of the coordinate axes, but we will not take the time to do this.

It is sometimes necessary to form the partial derivatives of these partial derivatives, that is, the so-called partial derivatives of second order. For functions of two variables there are four of them: ∂²z/∂x², ∂²z/∂x∂y, ∂²z/∂y∂x, ∂²z/∂y². However, if these derivatives are continuous, then it is not hard to prove that the second and third of these four (the so-called mixed derivatives) coincide:

∂²z/∂x∂y = ∂²z/∂y∂x

For example, in the case of the first function considered above, the two mixed derivatives are seen to coincide.
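A one-line verification of the equality of mixed derivatives (Python, assuming sympy; the sample function is an arbitrary smooth assumption):

import sympy as sp

x, y = sp.symbols('x y')
f = x ** 3 * y ** 2 + sp.sin(x * y)
print(sp.diff(f, x, y) - sp.diff(f, y, x))   # 0: the mixed derivatives coincide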
For functions of several variables, just as was done for functions of one variable, we may introduce the concept of a differential.
For definiteness let us consider a function:

z = ƒ(x, y)  of two variables. If it has continuous partial derivatives, we can prove that its increment Δz, corresponding to the increments Δx and Δy of its arguments, may be put in the form: Δz = (∂f/∂x) Δx + (∂f/∂y) Δy + α √((Δx)² + (Δy)²), where ∂f/∂x and ∂f/∂y are the partial derivatives of the function at the point (x, y) and the magnitude α depends on Δx and Δy in such a way that α → 0 as Δx → 0 and Δy → 0.
The sum of the first two components, dz = (∂f/∂x) Δx + (∂f/∂y) Δy, is linearly dependent on Δx and Δy and is called the differential of the function. The third summand, because of the presence of the factor α, tending to zero with Δx and Δy, is an infinitesimal of higher order than the magnitude √((Δx)² + (Δy)²) describing the change in x and y.

Let us give an application of the concept of differential. The period of oscillation of a pendulum is calculated from the formula:

T = 2π √(l/g)
where l is its length and g is the acceleration of gravity. Let us suppose that l and g are known with errors respectively equal to Δl and Δg. Then the error in the calculation of T will be equal to the increment ΔT corresponding to the increments of the arguments Δl and Δg. Replacing ΔT approximately by dT, we will have:

dT = (∂T/∂l) Δl + (∂T/∂g) Δg = π Δl/√(lg) − π √l Δg/(g√g)
The signs of Δl and Δg are unknown, but we may obviously estimate ΔT by the inequality:

|ΔT| ≤ π |Δl|/√(lg) + π √l |Δg|/(g√g) = (T/2) (|Δl|/l + |Δg|/g)
Thus we may consider in practice that the relative error for T is equal to half the sum of the relative errors for l and g.
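A sketch of this error estimate (Python; the measured values and errors are invented for illustration):

import math

l, g = 1.0, 9.8          # assumed measurements
dl, dg = 0.01, 0.05      # assumed absolute errors
T = 2 * math.pi * math.sqrt(l / g)
# |dT| bounded by the differential with both error signs taken unfavourably
dT = math.pi / math.sqrt(l * g) * dl + math.pi * math.sqrt(l) / g ** 1.5 * dg
print(dT / T, 0.5 * (dl / l + dg / g))   # the two relative-error estimates coincide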
For symmetry of notation, the increments of the independent variables Δx and Δy are usually denoted by the symbols dx and dy and are also called differentials. With this notation the differential of the function u = f(x, y, z) may be written thus: du = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz.

Partial derivatives play a large role whenever we have to do with functions of several variables, as happens in many of the applications of analysis to technology and physics.

SOLVABILITY: ENCLOSURES

  Initial-Value and Boundary-Value Problems; Uniqueness of a Solution

With partial differential equations as with ordinary ones, it is the case, with rare exceptions, that every equation has infinitely many particular solutions. Thus to solve a concrete physical problem, i.e., to find an unknown function satisfying some equation, we must know how to choose the required solution from an infinite set of solutions. For this purpose it is usually necessary to know not only the equation itself but a certain number of supplementary conditions. As we saw previously, partial differential equations are the expression of elementary laws of mechanics or physics, referring to small particles situated in a medium. But it is not enough to know only the laws of mechanics, if we wish to predict the course of some process. For example, to predict the motion of the heavenly bodies, as is done in astronomy, we must know not only the general formulation of Newton’s laws but also, assuming that the masses of these bodies are known, we must know the initial state of the system, i.e., the position of the bodies and their velocities at some initial instant of time. Supplementary conditions of this kind are always encountered in solving the problems of mathematical physics.

Thus, the problems of mathematical physics consist of finding solutions of partial differential equations that satisfy certain supplementary conditions.

Laplace and Poisson equations as expressions of T.œ parts.

Harmonic functions have a unique solution when we have data on their boundary values. And so essentially we make them correspond to:

  •  Poisson equation, which is: ∆u = −4πρ, where ρ is usually the density; so it IMPLIES the T.œ domain is complete, having a singularity with density of form, a boundary and a vital energy within.
  • Yet ρ may vanish – the singularity might disappear – and then we have a system with NO singularity, hence DOMINATED BY THE membrane, which IS the only parameter needed, as it holds the 'maximal' value. For ρ ≡ 0 we get then the Laplace equation: ∆u = 0

Moreover, those are the dominant Universal functions. As the singularity is often born, as in life cells, once the boundary is established, empty bubbles of membranes (Laplace equations) are, contrary to common sense, the a priori condition in most cases to create the singularity – with the most notable exception of the birth of the magnetic membrane from the motion of a singularity charge.

 And here the praxis is easily explained in theory:

It is not difficult to see that the difference between any two particular solutions u1 and u2 of the Poisson equation is a function satisfying the Laplace equation, or in other words is a harmonic function – we might say the singularities cancel each other. The entire manifold of solutions of the Poisson equation is thus reduced to the manifold of harmonic functions.

If we have been able to construct even one particular solution u₀ of the Poisson equation, and if we define a new unknown function w by u = u₀ + w, we see that w must satisfy the Laplace equation; and in exactly the same way, we determine the corresponding boundary conditions for w. Thus it is particularly important to investigate boundary-value problems for the Laplace equation.

As is most often the case with mathematical problems, the proper statement of the problem for an equation of mathematical physics is immediately suggested by the practical situation. The supplementary conditions arising in the solution of the Laplace equation come from the physical statement of the problem.

The heat example.

We have identified already the 3 'Active magnitudes' as ð/$ DENSITY ratios of the 3 physical scales: mass (∆+1), heat (∆ø) and current (∆-1), which, being less familiar/more complex, we escape in our search for the simplest principles/forms. We have shown that they obey the same equations, and that their main parameters are ratios, density and flux. And we have seen that their main equations are those which establish the dualities of the Galilean paradox in terms of those parameters – the equations of continuity and motion (arguably the lineal vs. cyclical form is a mind paradox). Let us now see their differences when a singularity is (Poisson) or is not (Laplace) in the T.œ configuration, with the case of 'heat'.

Let us consider, for example, the establishment of a steady temperature in a medium, i.e., the propagation of heat in a medium where the sources of heat are constant and are situated either inside or outside the medium. Under these conditions, with the passage of time the temperature attained at any point of the medium will be independent of the time. Thus to find the temperature T at each point, we must find that solution of the equation:

∂T/∂t = ∆T + q, where q is the density of the sources of heat distribution, which is independent of t. Since the temperature is steady, ∂T/∂t = 0, and we get: ∆T + q = 0

Thus the temperature in our medium satisfies the Poisson equation. If the density of ð-singularities – heat sources – q is zero, then the Poisson equation becomes the Laplace equation.

In order to find the temperature inside the medium, it is necessary, from simple physical considerations, to know also what happens on the boundary of the medium.

Obviously the physical laws previously considered for interior points of a body call for quite another formulation at boundary points.

In the problem of establishing the steady-state temperature, we can FOCUS on any of the 3 parts of the T.œ:

  1. 2D: The distribution of temperature on the boundary membrane.
  2. 3D: or the rate of flow of heat through a unit area of the vital energy surface.
  3. 1D: finally, a law connecting the temperature with the flow of heat from the source.

So we do have again a ternary ‘Rashomon effect’ from 3 Γ points of view:

Considering the temperature in a volume Ω, bounded by the surface S, we can write these three conditions as:

T|S = ψ₁(Q)

or

∂T/∂n|S = ψ₂(Q)

or finally, in the most general case:

[∂T/∂n + h (T − T₀)]|S = 0     (10)

where Q denotes an arbitrary point of the surface S. Conditions of the form (10) are called boundary conditions. Investigation of the Laplace or Poisson equation under boundary conditions of one of these types will show that as a rule the solution is uniquely determined, which in ∆st terms means a T.Œ EXISTS.

YET, in our search for a solution of the Laplace equation it will usually be necessary and sufficient to be given one arbitrary function on the boundary of the domain.

Let us examine that Laplace equation a little more in detail. We will show that a harmonic function u, i.e., a function satisfying the Laplace equation, is completely determined if we know its values on the boundary of the domain.

First of all we establish the fact that a harmonic function cannot take on values inside the domain that are larger than the largest value on the boundary. More precisely, we show that the absolute maximum, as well as the absolute minimum of a harmonic function are attained on the boundary of the domain.

From this it will follow at once that if a harmonic function has a constant value on the boundary of a domain Ω, then in the interior of this domain it will also be equal to this constant. For if the maximum and minimum value of a function are both the same constant, then the function will be everywhere equal to this constant.

We now establish the fact that the absolute maximum and minimum of a harmonic function cannot occur inside the domain. First of all, we note that if the Laplacian Δu of the function u(x, y, z) is positive for the whole domain, then this function cannot have a maximum inside the domain, and if it is negative, then the function cannot have a minimum inside the domain.

For at a point where the function u attains its maximum it must have a maximum as a function of each variable separately for fixed values of the other variables. Thus it follows that every partial derivative of second order with respect to each variable must be nonpositive. This means that their sum will be nonpositive, whereas the Laplacian is positive, which is impossible. Similarly it may be shown that if the function has a minimum at some interior point, then its Laplacian cannot be negative at this point. This means that if the Laplacian is negative everywhere in the domain, then the function cannot have a minimum in this domain.

If a function is harmonic, it may always be changed by an arbitrarily small amount in such a way that it will have a positive or negative Laplacian; to this end it is sufficient to add to it the quantity:

±η r² = ±η (x² + y² + z²),

where η is an arbitrarily small constant.

The addition of a sufficiently small quantity cannot change the property that the function has an absolute maximum or absolute minimum within the domain. If a harmonic function were to have a maximum inside the domain, then by adding +ηr² to it, we would get a function with a positive Laplacian which, as was shown above, could not have a maximum inside the domain. This means that a harmonic function cannot have an absolute maximum inside the domain. Similarly, it can be shown that a harmonic function cannot have an absolute minimum inside the domain.

This theorem has an important corollary for ∆st general laws:

Two harmonic functions that agree on the boundary of a domain must agree everywhere inside the domain.

For then the difference of these functions (which itself will be a harmonic function) vanishes on the boundary of the domain and thus is everywhere equal to zero in the interior of the domain.

So we see that the values of a harmonic function on the boundary completely determine the function. It may be shown (although we cannot give the details here) that for arbitrarily preassigned values on the boundary one can always find a harmonic function that assumes these values.
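Both facts – the boundary determines the harmonic function, and its extrema sit on the boundary – can be watched numerically. A sketch (Python; grid size, boundary data and iteration count are illustrative assumptions) relaxing Δu = 0 on a square:

import numpy as np

n = 50
u = np.zeros((n, n))
u[0, :] = 1.0                        # boundary data: one edge held at 1, the rest at 0
for _ in range(5000):                # relaxation towards the harmonic solution
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2])
print(u.min(), u.max())              # extrema stay on the boundary: 0.0 and 1.0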

It is somewhat more complicated to prove that the steady-state temperature established in a body is completely determined, if we know the rate of flow of heat through each element of the surface of the body or a law connecting the flow of heat with the temperature.

We will return to some aspects of this question when we discuss methods of solving the problems of mathematical physics.

∆st generalisation: The boundary – the predator protein, the shepherd dog – herds, even without a singularity, the vital energy of the system into a thermodynamic equilibrium; its informative differentiation requires a singularity of information, with its 'force pull' across multiple ∆±1 scales, to break it into organic functions.

The boundary-value problem for the heat equation.

Now we can consider NOT only the boundaries in space but also the boundaries IN TIME. And the symmetry S=T predicts homologous cases:

It is the problem of the heat equation in the non-stationary case. It is physically clear that the values of the temperature on the boundary, or of the rate of the flow of heat through the boundary, are not sufficient in themselves to define a unique solution of the problem.

But if in addition we know the temperature distribution at some initial instant of time – which is the equivalent in time of the knowledge we have of a boundary in space (as we have seen, in this 'entropy related' case the boundary appears first) – then the problem is uniquely determined.

Thus to determine the solution of the equation of heat conduction (8) it is usually necessary and sufficient to assign one arbitrary function T0(x, y, z) describing the initial distribution of temperature and also one arbitrary function on the boundary of the domain. As before, this may be either the temperature on the surface of the body, or the rate of heat flow through each element of the surface, or a law connecting the flow of heat with the temperature.
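Before stating this formally, a sketch of such a uniquely determined problem – one initial function plus one boundary function (Python; all parameters are illustrative assumptions), stepping the one-dimensional heat equation explicitly:

import numpy as np

n, alpha = 100, 1.0
dx = 1.0 / n
dt = 0.25 * dx ** 2 / alpha                       # within the explicit-scheme stability limit
T = np.sin(np.pi * np.linspace(0.0, 1.0, n + 1))  # initial distribution T0(x)
for _ in range(2000):
    T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0] = T[-1] = 0.0                            # boundary condition held for all t
print(T.max())                                    # the maximum decays: heat smooths out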

In this manner, the problem may be stated as follows. We seek a solution of equation (8) under the condition:

T|t=0 = T₀(x, y, z)     (11)

and one of the three following conditions:

T|S = ψ₁(Q, t),   or   ∂T/∂n|S = ψ₂(Q, t),   or   [∂T/∂n + h (T − T₀)]|S = 0     (12)

where Q is any point of the surface S.

Condition (11) is called an initial condition, while conditions (12) are boundary conditions.

We will not prove in detail that every such problem has a unique solution but will establish this fact only for the first of these problems; moreover, we will consider only the case where there are no heat sources in the interior of the medium. We show that the equation:

ΔT − (1/a²)(∂T/∂t) = 0

under the conditions T|t=0 = T₀(x, y, z) and T|S = ψ(Q, t), can have only one solution.

The proof of this statement is very similar to the previous proof for the uniqueness of the solution of the Laplace equation. We show first of all that if ΔT − (1/a²)(∂T/∂t) < 0, then the function T, as a function of the four variables x, y, z and t (0 ≤ t ≤ t₀), assumes its minimum either on the boundary of the domain Ω or else inside Ω, but in the latter case necessarily at the initial instant of time, t = 0.

For if not, then the minimum would be attained at some interior point. At this point all the first derivatives, including ∂T/∂t, will then be equal to zero, and if this minimum were to occur for t = t0, then ∂T/∂t would be nonpositive.

Also, at this point all second derivatives with respect to the variables x, y, and z will be nonnegative.

Consequently ΔT − (1/a²)(∂T/∂t) will be nonnegative, which in our case is impossible.

In exactly the same way we can establish that if ΔT − (1/a²)(∂T/∂t) > 0, then inside Ω for 0 < t ≤ t₀ there cannot exist a maximum for the function T.

Finally, if ΔT − (1/a²)(∂T/∂t) = 0, then inside Ω for 0 < t ≤ t₀ the function T cannot attain its absolute maximum nor its absolute minimum, since if the function T were to have, for example, such an absolute minimum, then by adding to it the term η(t − t₀) and considering the function T₁ = T + η(t − t₀), we would not destroy the absolute minimum if η were sufficiently small, and then ΔT₁ − (1/a²)(∂T₁/∂t) would be negative, which is impossible.

In the same way we can also show the absence of an absolute maximum for T in the domain under consideration.

However, an absolute maximum, as well as an absolute minimum, of temperature may occur either at the initial instant t = 0 or on the boundary S of the medium. If T = 0 both at the initial instant and on the boundary, then we have the identity T = 0 throughout the interior of the domain for all t ≤ t₀.
If any two temperature distributions T₁ and T₂ have identical values for t = 0 and on the boundary, then their difference T = T₁ − T₂ will satisfy the heat equation and will vanish for t = 0 and on the boundary. This means that T₁ − T₂ will be everywhere equal to zero, so that the two temperature distributions T₁ and T₂ will be everywhere identical.

The energy of oscillations and the boundary-value problem for the equation of oscillation.

We now consider the conditions under which the third of the basic differential equations has a unique solution, namely equation (9).

For simplicity we will consider the equation for the vibrating string, ∂²u/∂x² = (1/a²)(∂²u/∂t²), which is very similar to equation (9), differing from it only in the number of space variables. On the right side of this equation there is the quantity ∂²u/∂t², expressing the acceleration of an arbitrary point of the string. The motion of any mechanical system for which the forces, and consequently the accelerations, are expressed by the coordinates of the moving bodies, is completely determined if we are given the initial positions and velocities of all the points of the system. Thus for the equation of the vibrating string, it is natural to assign the positions and velocities of all points at the initial instant:

u|t=0 = u₀(x),   ∂u/∂t|t=0 = u₁(x)

But as was pointed out earlier, at the ends of the string the formulas expressing the laws of mechanics for interior points cease to apply. Thus at both ends we must assign supplementary conditions. If, for example, the string is fixed in a position of equilibrium at both ends, then we will have:

u|x=0 = 0,   u|x=l = 0

These conditions can sometimes be replaced by more general ones, but a change of this sort is not of basic importance.

The problem of finding the necessary solutions of equation (9) is analogous. In order that such a solution be well defined, it is customary to assign the conditions:
p|t=0 = p₀(x, y, z),   ∂p/∂t|t=0 = p₁(x, y, z)     (13)

and also one of the “boundary conditions”

p|S = ψ₁(Q, t),   or   ∂p/∂n|S = ψ₂(Q, t),   or   [∂p/∂n + h p]|S = ψ₃(Q, t)     (14)

The difference from the preceding case is simply that instead of the one initial condition in equation (11) we have the two conditions (13).

Equations (14) obviously express the physical laws for the particles on the boundary of the volume in question.

The proof that in the general case the conditions (13) together with an arbitrary one of the conditions (14) uniquely define a solution of the problem will be omitted. We will show only that the solution can be unique for one of the conditions in (14).

Let it be known that a function u satisfies the equation:

Δu = (1/a²)(∂²u/∂t²)

with initial conditions:
u|t=0 = 0,   ∂u/∂t|t=0 = 0
and boundary condition: ∂u/∂n|S = 0

(It would be just as easy to discuss the case in which u|S = 0.)

We will show that under these conditions the function u must be identically zero.

To prove this property it will not be sufficient to use the arguments introduced earlier to establish the uniqueness of the solution of the first two problems. But here we may make use of the physical interpretation.

We will need just one physical law, the “law of conservation of energy.”

We restrict ourselves again for simplicity to the vibrating string, the displacement of whose points u(x, t) satisfies the equation: 

ρ (∂²u/∂t²) = T (∂²u/∂x²)     (6)

The kinetic energy of each particle of the string oscillating from x to x + dx is expressed in the form:

(ρ dx/2) (∂u/∂t)²

Along with its kinetic energy, the string in its displaced position also possesses potential energy created by its increase of length in comparison with the straight-line position. Let us compute this potential energy. We concern ourselves with an element of the string between the points x and x + dx. This element has an inclined position with respect to the axis Ox, such that its length is approximately equal to:

√(1 + (∂u/∂x)²) dx ≈ [1 + ½ (∂u/∂x)²] dx

So its elongation is: ½ (∂u/∂x)² dx

Multiplying this elongation by the tension T, we find the potential energy of the elongated element of the string: (T/2) (∂u/∂x)² dx

The total energy of the string of length l is obtained by summing the kinetic and potential energies over all of the points of the string. We get:

E = ∫0→l [(ρ/2)(∂u/∂t)² + (T/2)(∂u/∂x)²] dx

If the forces acting on the end of the string do no work, in particular if the ends of the string are fixed, then the total energy of the string must be constant:  E=Const.

Our expression for the law of conservation of energy is a mathematical corollary of the basic equations of mechanics and may be derived from them. Since we have already written the laws of motion in the form of the differential equation of the vibrating string with conditions on the ends, we can give the following mathematical proof of the law of conservation of energy in this case. If we differentiate E with respect to time, we have, from basic general rules:

dE/dt = ∫0→l [ρ (∂u/∂t)(∂²u/∂t²) + T (∂u/∂x)(∂²u/∂x∂t)] dx

Using the wave equation (6) and replacing ρ(∂²u/∂t²) by T(∂²u/∂x²), we get dE/dt in the form:

dE/dt = T ∫0→l (∂/∂x)[(∂u/∂t)(∂u/∂x)] dx = T [(∂u/∂t)(∂u/∂x)]x=0→x=l

If (∂u/∂x) or u vanishes at each end of the string, then dE/dt = 0, which shows that E is constant.
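This conservation is easy to watch in the discrete string of the earlier sketch. A minimal check (Python; all parameters are illustrative assumptions): the sum of the discrete kinetic and potential terms stays essentially constant while the string oscillates:

import numpy as np

T, rho, N = 1.0, 1.0, 200
dx = 1.0 / N
a = (T / rho) ** 0.5
dt = 0.5 * dx / a
x = np.linspace(0.0, 1.0, N + 1)
u = np.sin(np.pi * x)                 # fixed ends: u(0) = u(1) = 0
u_prev = u.copy()

def energy(u_now, u_before):
    ut = (u_now - u_before) / dt      # kinetic density ~ rho/2 * u_t**2
    ux = np.diff(u_now) / dx          # potential density ~ T/2 * u_x**2
    return 0.5 * rho * np.sum(ut ** 2) * dx + 0.5 * T * np.sum(ux ** 2) * dx

for stage in (1, 2):
    for _ in range(500):
        u_next = np.zeros_like(u)
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + (a * dt / dx) ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    print(energy(u, u_prev))          # ~ the same value after 500 and 1000 steps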

The wave equation (9) may be treated in exactly the same way to prove that the law of conservation of energy holds here also. If p satisfies the equation and the condition:

∂p/∂n|S = 0

then the quantity:

E = ∫∫∫Ω [(1/a²)(∂p/∂t)² + (∂p/∂x)² + (∂p/∂y)² + (∂p/∂z)²] dΩ

will not depend on t.

If, at the initial instant of time, the total energy of the oscillations is equal to zero, then it will always remain equal to zero, and this is possible only in the case that no motion occurs. If the problem of integrating the wave equation with initial and boundary conditions had two solutions p1 and p2, then υ = p1 − p2, would be a solution of the wave equation satisfying the conditions with zero on the right-hand side, i.e., homogeneous conditions.

In this case, when we calculated the “energy” of such an oscillation, described by the function υ, we would discover that the energy E(υ) is equal to zero at the initial instant of time. This means that it is always equal to zero and thus that the function υ is identically equal to zero, so that the two solutions p1 and p2 are identical. Thus the solution of the problem is unique.

In this way we have convinced ourselves that all three problems are correctly posed.

Incidentally, we have been able to discover some very simple properties of the solutions of these equations. For example, solutions of the Laplace equation have the following maximum property: Functions satisfying this equation have their largest and smallest values on the boundaries of their domains of definition.
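A hedged numeric illustration of this maximum property (grid and boundary data are arbitrary assumptions of mine): relaxing uxx + uyy = 0 by Jacobi iteration, every interior value of the resulting approximately harmonic function lies strictly between the smallest and largest boundary values.

import numpy as np

n = 40
u = np.zeros((n, n))
u[0, :], u[-1, :], u[:, 0], u[:, -1] = 1.0, -1.0, 0.5, 0.0   # arbitrary boundary data

for _ in range(5000):   # Jacobi iteration: each point becomes the mean of its neighbours
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])

interior = u[1:-1, 1:-1]
print(f"interior range: [{interior.min():.3f}, {interior.max():.3f}]")   # strictly inside
print("boundary range: [-1.000, 1.000]")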

Functions describing the distribution of heat in a medium have a maximum property of a different form. Every maximum or minimum of temperature occurring at any point gradually disperses and decreases with time. The temperature at any point can rise or fall only if it is lower or higher than at nearby points. The temperature is smoothed out with the passage of time. All unevennesses in it are leveled out by the passage of heat from hot places to cold ones.

But no smoothing-out process of this kind occurs in the propagation of the oscillations considered here. These oscillations do not decrease or level out, since the sum of their kinetic and potential energies must remain constant for all time.
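The contrast can be watched directly. A minimal sketch (explicit finite differences, with illustrative constants): for the heat equation ut = α uxx a sharp maximum of temperature decays monotonically toward the mean, exactly the levelling just described, while the string scheme shown earlier keeps its energy and its unevenness.

import numpy as np

alpha, n, dx = 1.0, 100, 0.01
dt = 0.4 * dx ** 2 / alpha             # explicit scheme is stable for dt <= dx^2/(2*alpha)
u = np.zeros(n + 1)
u[n // 2] = 1.0                        # a sharp hot spot in the middle

for step in range(501):
    if step % 100 == 0:
        print(f"step {step:3d}   max u = {u.max():.4f}   min u = {u.min():.4f}")
    u[1:-1] += alpha * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
# the maximum falls monotonically: heat disperses and is never reconcentrated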

RECAP. Differentiation of 3 parts and subspecies of physical T.œs: |-Potentials, Ø-waves and O-singularities.

How do we differentiate the differentiation of different planes of existence?

Easy: when we change plane of existence, the parameters of change change.

And yet we apply the same repetitive operandi to all of them, under the laws of Disomorphisms.

So when we study change between planes of existence, we apply double differentials, and then we can obtain an approximation to the complete motion between planes. And this is why, when we study 4D change, we use potential equations with double differentiation:

Many physically important partial differential equations are second-order and for the sake of simplicity we shall consider here only the case of the ternary Generator Group of linear ones. For example:

  • uxx + uyy = 0 (two-dimensional Laplace equation, used to describe the $t ∆-1 entropic potential, which a ∆+1 particle uses to move)
  • uxx = ut (one-dimensional heat equation, used to describe the ∆º heat wave of ∆-1 gaseous lineal kinetic motions)
  • uxx − uyy = 0 (one-dimensional wave equation, used to study the ‘information/form’ imprinted by an ∆º wave over a ‘liquid’ state of ∆º particles)

Some remarks we can make first on the ‘RASHOMON effect’ of those equations (the pov on them of ‘S=T: algebra’ and @nalytic geometry, the most relevant in this case):

S=T. The behaviour of such an equation depends heavily on the coefficients a, b, and c of auxx + buxy + cuyy. So by making the coefficients variable in an inversion of S=T symmetry (group permutations) and applying algebraic group techniques, they can be solved fairly well. We are not though in this web concerned with techniques, so only a few examples of them are shown.

RASHOMON EFFECT. GEOMETRIC VIEW ON THE 3 EQUATIONS FROM…

@nalytic geometry, on the other hand, explains their ternary Generator, as they respond to the 3 topologies of reality, this time perceived in terms of ∆-scales. Thus they are called:

T-elliptic, S-parabolic, or TS-hyperbolic equations, according to whether b² − 4ac < 0, b² − 4ac = 0, or b² − 4ac > 0, respectively.

Thus, the Laplace equation is T-elliptic, the heat equation is S-parabolic, and the wave equation is TS-hyperbolic.
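As a small helper of my own (an illustrative sketch, not from the source), the discriminant test can be coded in a few lines and reproduces the classification of the three model equations above.

def classify(a: float, b: float, c: float) -> str:
    # discriminant test for the second-order operator a*u_xx + b*u_xy + c*u_yy
    d = b * b - 4 * a * c
    if d < 0:
        return "T-elliptic (Laplace-like)"
    if d == 0:
        return "S-parabolic (heat-like)"
    return "TS-hyperbolic (wave-like)"

print(classify(1, 0, 1))    # uxx + uyy = 0  -> T-elliptic
print(classify(1, 0, 0))    # uxx = ut       -> S-parabolic
print(classify(1, 0, -1))   # uxx - uyy = 0  -> TS-hyperbolic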

And so once more we are able to classify them as belonging to the 3 ST elements of the generator equation:

S-parabolic, entropic limbs/fields < ST-hyperbolic body-waves > Tiƒ, elliptic functions.

Indeed, as we study the main solutions to partial differential equations we shall find that the function of the element described does correspond to its geometric solution, or rather its temporal solutions, as we are talking of functions with motion, of arrows of time. So S-parabolic equations define entropic motions; hyperbolic ones, steady states; and elliptic ones, cyclical, harmonic, informative motions. The 3 simplest kinds of equations, the most used in all stiences, will suffice to understand this at the introductory level of this post. Yet before we explain the 3 simplest cases (Laplace: T-gravitation; Fourier: S-heat; D’Alembert: ST-wave), a few precisions on the previous statements:

– As information has more dimensional form, the simplest elliptic, Tiƒ solution is already 2-dimensional.

– b² − 4ac < 0, b² − 4ac = 0, or b² − 4ac > 0 are 3 solutions, similar to many other solutions in systems, which have ternary values.

Thus for example, Einstein’s EFE has 3 solutions around the cosmological constant, whether it is >0, <0 or =0, which also correspond to a flat (0), hyperbolic or elliptic Universe in space, and to the 3 solutions in time of the Universe or galaxy, which goes through an entropic big bang, a steady state as the cosmological constant hits 0, and a big crunch.

So again this ‘mathematical equation’, properly interpreted, will show multiple homologies between systems and their 3 topological solutions in space and 3 ages in time.

EXPANSION TO MULTIPLE S=T DIMENSIONS.

T-Laplace equation: the arrow of information in space-time functions.

Laplace’s equation states that the sum of the second-order partial derivatives of R, the unknown function, with respect to the Cartesian coordinates, equals zero:

∂²R/∂x² + ∂²R/∂y² + ∂²R/∂z² = 0

The sum on the left often is represented by the expression ∇2R, in which the symbol ∇2 is called the Laplacian, or the Laplace operator.

Are they related to cyclical, accelerated time vortices, as we expect for an elliptic function? Yes, they are. In fact, the Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics, where the operator gives a constant multiple of the mass density when it is applied to a given gravitational potential.

Many physical systems are more conveniently described by the use of spherical or cylindrical coordinate systems. Laplace’s equation can be recast in these coordinates; for example, in cylindrical coordinates, Laplace’s equation is:

(1/r) ∂/∂r (r ∂R/∂r) + (1/r²) ∂²R/∂φ² + ∂²R/∂z² = 0
Now, the reader will observe that of the 3 canonical ‘coordinates’, T-spherical, S-cylindrical (lineal) and ST-Cartesian (body-wave present ‘hyperbolic’, excellent for such representations), there is always a simpler one and a more complex one, and so the meaning of a system will follow a simple rule:
– The simplest representation is the one that shows the clearest function of a system; its most complex representation is the least likely function. In this case the more complex representation is the entropic, cylindrical one, which therefore is NOT the function of Laplace equations.
While the simplest solutions of the equation Δf = 0, now called Laplace’s equation, are the so-called harmonic functions, which represent the possible gravitational fields in free space; and these, as we know from our posts on cosmology and the Unification equation, are cyclical accelerated time clocks of the gravitational ∆±3 scale.

And so the study of the ‘formal mode’ of a differential equation for an event of physics will often give the schooled scientist an immediate ‘image’ of what kind of process it is.

I.e., a process of death from form into entropy-field, ∆+1 (t) << ∆-1 (S), will be ONLY a ‘second differential equation’, which jumps and dissolves the Tiƒ parameter of the entity, mass or charge, down two scales into a field.

Hence all fields created by a singularity are expressed with variations of the Poisson equation: ∇²φ = ƒ

On the other hand a process of distribution of the same ‘Stif’ parameter from a singularity source outwards, into a wave motion, will be a single jump of scale: a process of balanced distribution into an equilibrium, present state departing from the source that becomes a ‘wave’: ∆º<∆-1 (S).

So again we find all over mathematical physics those processes of a singularity becoming a wave, the first one discovered being the heat equation, which is a single jump between states and scales, hence one which relates a first differential in time (the wave state) to a second differential in space: ∂u/∂t = α∇²u

Which ultimately is a case of the diffusion equation, which will appear as the fundamental law of many systems (so Ohm’s law is its ‘homolog’, NOT ‘analog’, in the electromagnetic scale):

∂ϕ(r, t)/∂t = ∇ · [D(ϕ, r) ∇ϕ(r, t)]

Where ϕ(r, t) is the density of the diffusing material at location r and time t and D(ϕ, r) is the collective diffusion coefficient for density ϕ at location r; and ∇ represents the vector differential operator del.

So we observe here some other ‘themes’ of GST: first, that we use as always ratios, NOT absolute parameters, to define any reality, in this case density, not mass. Second, that the process involves a Tiƒ ‘dense’ state vs. a ‘wave-state’, D(ϕ, r). And if the diffusion coefficient depends on the density (the cyclical Tiƒ element), the equation is nonlinear; otherwise, if it depends on the wave state, it is linear.

And so on and so on… Indeed, the ‘explanations’ of all the equations of mathematical physics must be made departing from those ‘fundamental sets of events’, which themselves depend on the space-time actions allowed to any system. And for that reason a few differential equations defining those basic events of GST (diffusion, harmonics, waves, speeds, etc.) suffice to explain mathematical physics.

In other terms, the whys and purposes of mathematical physical events in space-time are therefore encoded by the operandi of ∆nalysis.

In that sense the generator equation of analysis is simple, if we think of the term ∆≈D.

“The fifth dimension for simple systems is equivalent to the Differential equation, with two asymmetric arrows: the arrow of wholeness in time, ∂, and the arrow of disintegration in space, ∫.”

A derivative in time descends a scale in the fifth dimension. Two derivatives descend two. And vice versa. A non-derivative equation stays in the ∆º present of the observer’s frame of reference (pov).

The singularity, in its incapacity to be observed fully, is often described with a single ternary number for its three simplest parameters of ∆st. Such is the case of the black hole: as Wheeler put it, a black hole has no hair (meaning its mass, momentum and charge are enough to describe it).

Then the field is the opposite, also a 0 value, for the Poisson equation, where after two derivations from the singularity we achieve a 0 flatness, yet with a potential towards the singularity of the type:

Field (second derivative) = Singularity (attractive vortex described by gravitational or charge or ‘temperature’ gradients).

S-heat equation…

is parabolic, since heat is a scattering, entropic, diffusive wave, especially when it is not limited by boundaries.

 TS-hyperbolic wave equation

Those will be the present-wave equations that describe waves between ‘boundaries’ (or else the steady-state balance of the present, non-changing world form, the boundary region, would disappear, and the wave would scatter into an S-parabolic solution). I.e.:

D’Alembert’s wave equation

D’Alembert’s wave equation takes the form

ytt = c²yxx

Here c is a constant related to the stiffness of the string. The physical interpretation of (9) is that the acceleration (ytt) of a small piece of the string is proportional to the tension (yxx) within it.

Because the equation involves partial derivatives, it is known as a partial differential equation—in contrast to the previously described differential equations, which, involving derivatives with respect to only one variable, are called ordinary differential equations. Since partial differentiation is applied twice (for instance, to get ytt from y), the equation is said to be of second order.

Yet in order to specify physically realistic solutions, d’Alembert’s wave equation must be supplemented by boundary conditions, which express the fact that the ends of a violin string are fixed. Here the boundary conditions take the form

y(0, t) = 0 and  y(l, t) = 0 for all t.

D’Alembert showed that the general solution is: y(x, t) = f(x + ct) + g(x − ct)

where f and g are arbitrary functions (of one variable). The physical interpretation of this solution is that f represents the shape of a wave that travels with speed c along the x-axis in the negative direction, while g represents the shape of a wave that travels along the x-axis in the positive direction. The general solution is a superposition of two traveling waves, producing the complex waveform. In order to satisfy the boundary conditions given, the functions f and g must be related by the equations

f(−ct) + g(ct) = 0 and f(l − ct) + g(l + ct) = 0 for all t.

These equations imply that g = −f, that f is an odd function—one satisfying f(−u) = −f(u)—and that f is periodic with period 2l, meaning that f(u + 2l) = f(u) for all u.

Notice that the part of f lying between x = 0 and x = l is arbitrary, which corresponds to the physical fact that a violin string can be started vibrating from any shape whatsoever (subject to its ends being fixed).
In particular, its shape need not be sinusoidal, proving that solutions other than normal modes can occur.
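As a numeric sanity check (a sketch under conventions of my own choosing: the superposition is written as f(ct + x) + g(ct − x) with g = −f, so that the relations above are easy to test), an odd, 2l-periodic and distinctly non-sinusoidal triangular f keeps both ends fixed for all t:

import numpy as np

l, c = 1.0, 1.0

def f(u):
    # odd, 2l-periodic triangle shape: f(u) = u near 0, f(u + 2l) = f(u)
    u = (u + l) % (2.0 * l) - l        # reduce the argument to [-l, l)
    return np.where(np.abs(u) <= l / 2, u, np.sign(u) * (l - np.abs(u)))

def y(x, t):
    return f(c * t + x) - f(c * t - x)   # i.e. f(ct + x) + g(ct - x) with g = -f

for t in (0.0, 0.3, 0.7, 1.2, 2.5):
    print(f"t = {t:.1f}:   y(0,t) = {float(y(0.0, t)):+.2e}   y(l,t) = {float(y(l, t)):+.2e}")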
Let us then make some further precisions.

The 3 subtypes of waves according to topological equations.

In mathematics, a hyperbolic partial differential equation of order n is a partial differential equation (PDE) that, roughly speaking, has a well-posed initial value problem for the first n−1 derivatives. More precisely, the Cauchy problem can be locally solved for arbitrary initial data along any non-characteristic hypersurface. Many of the equations of mechanics are hyperbolic, and so the study of hyperbolic equations is of substantial contemporary interest. The model hyperbolic equation is the wave equation. In one spatial dimension, this is: utt − uxx = 0. The equation has the property that, if u and its first time derivative are arbitrarily specified initial data on the line t = 0 (with sufficient smoothness properties), then there exists a solution for all time t.

The solutions of hyperbolic equations are “wave-like.” If a disturbance is made in the initial data of a hyperbolic differential equation, then not every point of space feels the disturbance at once. Relative to a fixed time coordinate, disturbances have a finite propagation speed. They travel along the characteristics of the equation. This feature qualitatively distinguishes hyperbolic equations from elliptic partial differential equations and parabolic partial differential equations. A perturbation of the initial (or boundary) data of an elliptic or parabolic equation is felt at once by essentially all points in the domain.

What this means is that, unlike motions between elliptic futures and parabolic pasts, which are non-local and infinite, motions in present space have a finite length in their quanta.

The Propagation of Waves

Let us consider in more detail those equations of present vibrating strings, putting them not, as physicists do, in T−S=0 configurations for the sake of calculus, but in their proper terms, as an S=T symmetry, to extract some more meanings. Then we write:

∂²u/∂x²=1/a² ∂²u/∂t² (15)

This equation, as may be proved, has two particular solutions of the form u1 = ϕ1(x − at), u2 = ϕ2(x + at), where ϕ1 and ϕ2 are arbitrary twice-differentiable functions.

By direct differentiation it is easy to show that the functions u1 and u2 satisfy equation (15). It may be shown that u = u1+ u2 is a general solution of this equation.
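The ‘direct differentiation’ can be delegated to a computer algebra system. A short sympy sketch (function names are arbitrary) confirms that the superposition of the two travelling solutions satisfies equation (15):

import sympy as sp

x, t, a = sp.symbols('x t a', positive=True)
phi1, phi2 = sp.Function('phi1'), sp.Function('phi2')
u = phi1(x - a * t) + phi2(x + a * t)

lhs = sp.diff(u, x, 2)                 # second space derivative
rhs = sp.diff(u, t, 2) / a ** 2        # (1/a^2) times second time derivative
print(sp.simplify(lhs - rhs))          # prints 0: equation (15) is satisfied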

Its inversion is then of considerable interest, as it defines the constant balance achieved in present states between the S and T parameters, along a ‘symmetric’ ‘identity element’ (to use group theory jargon, so similar to the generator jargon).

Then, for the solution u1, an observer moving with velocity a will see the string as a stationary curve. Yet a stationary observer will see the string as a wave flowing along the axis Ox with velocity a. In exactly the same way the solution u2(x, t) may be considered as a wave travelling in the opposite direction with velocity a. With an infinite string both waves will be propagated infinitely far, but if they are ‘constrained’ by a membrain, their back-and-forth superposition will produce different balancing shapes, increasing at certain times and decreasing at others:

[Figure: successive shapes of the string as the two travelling waves superpose.]

If u1 and u2, as they arrive at a given point from opposite sides, have the same sign, then they augment each other, but if they have opposite signs, they counteract each other and there will be an instant of complete annihilation of the oscillations, after which the waves again separate.

So in terms of the generator, the 2 solutions are relative past and future inversions of the present ‘state’ of no motion for a wave; and in physical terms the wave will have its anti-wave, which extends to light space-time and the duality of magnetic and electric fields, whereas the photon is defined as its own antiparticle.

Now for a spherical wave, which is not constrained in height or at its origin and end, an entropic process of expansion of space, and hence diminution of a dimension of motion-energy, takes place. So the solutions become of the form:

u = (1/r) ϕ1(r − at) + (1/r) ϕ2(r + at)

where r denotes the distance of a given point from the origin of the coordinate system, r² = x² + y² + z², and ϕ1 and ϕ2 are arbitrary, twice-differentiable functions.

This wave is spherically symmetric; it is identical at all points that have the same value of r. The factor 1/r produces the result that the amplitude of the wave is inversely proportional to the distance from the origin. Such an oscillation is called a diverging spherical wave. A good picture of it is given by the circles that spread out over the surface of the water when a stone is thrown into it, except that in this case the waves are circular rather than spherical.
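The same symbolic check works for the spherical wave (again a sketch of mine, using the radial form (1/r)∂²(ru)/∂r² of the Laplacian, valid for spherically symmetric functions):

import sympy as sp

r, t, a = sp.symbols('r t a', positive=True)
phi = sp.Function('phi')
u = phi(r - a * t) / r                 # diverging spherical wave

laplacian = sp.diff(r * u, r, 2) / r   # radial Laplacian for spherical symmetry
print(sp.simplify(sp.diff(u, t, 2) - a ** 2 * laplacian))   # prints 0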

Yet the second solution is of great interest to ∆st; it is called a converging wave, travelling in the direction of the origin. Its amplitude grows with time to infinity as it approaches the origin. We see that such a concentration of the disturbance at one point may lead, even though the initial oscillations are small, to an immense upheaval…

And while most theorists in a single continuum space-time ignore it, it is NECESSARY to balance the two arrows of time, 4D and 5D, as it represents the collapsing wave that will become a particle/mind point, and ultimately, through the effect of resonance, explains the emergence of physical systems in the upper 5D plane.

A clear case of the fundamental S=T balances of reality, as the two solutions form an expanding spatial wave and an imploding temporal one that balance each other.

This concept is so essential that, carried into astrophysics, it becomes the fundamental principle of symmetry, which, when broken, causes ‘transitory’ space-time physical phenomena to happen.

The simplest equation of a vibrating string, ∂²u/∂x² = 1/a² ∂²u/∂t², is really giving us a ‘beat’ between the space-like state of the wave (first term) and the time-like state (second term), with a ‘ratio’ that translates dimensionally the space-state into a time-state, through the transformation of the ‘simultaneous tension of the two inverse forces’ that act pulling the string in space, into a term of ‘pure motion-time’, an acceleration, 1/a². So basically a wave-string-like process is a space-time transformation, which switches back and forth as the wave moves in accelerated fashion, stops in tension-space, moves in time-acceleration… S>t<S>t; but we express both processes together in a single ‘time-less’ expression of the beat.

In that regard, a huge advance in science would happen if people understood that static equality (=) is meaningless; = means ≈ or <≈>, transformations through constant feedback beats between space and time states.

Yet while a wave motion is ‘conservative’, hence happening in the same ∆-scales of reality, merely causing topological s-t transformations and reproductions of form in the same scale, there is a huge range of equations related to entropic processes, where we use the same equation but with a different ‘degree’ of derivatives on each side. So the entropic process of dissipation of a wave gives us the equation of heat:

∂u/∂t= α (∂²u/∂x²+∂²u/∂y²+∂²u/∂z²)

The heat equation is of fundamental importance in diverse scientific fields; all related to entropic expansions and relaxations of a present wave of energy. In mathematics, it is the prototypical parabolic partial differential equation; and hence immediately connects to the dispersion of a wave of information into an expansive flow.

In probability theory, the heat equation is connected with the study of Brownian motion via the Fokker–Planck equation; connecting it with memoryless stochastic processes, in which sequential time causality becomes meaningless, as the entropic process keeps dissolving the forms of the whole into an entropic future.

The diffusion equation, a more general version of the heat equation, arises in connection with the study of chemical diffusion and other related processes. And so on.

ALL IN ALL, an obvious conclusion of the explosion and reproduction of mathematical physics in the classic age of ∆nalysis is the fact that the ∆-scales of the Universe are its most important dynamic element for understanding the processes of reality.

And inasmuch as mathematical physics has reduced, in ‘human knowledge’, the properties of physical systems to their mathematical description only, we can reverse the concept and affirm that ‘human physics’ is basically the study of the mathematical, differential equations that can mirror physical processes; and so instead of studying physical experiments and then extracting differential equations, we shall study the connection GST -> ∆º mathematics -> ∆º±i physics: first the laws of GST, then their interpretation by differential ∆nalysis, to then group all the main mathematical phenomena according to the GST -> ∆º differential equation each uses.

Stability. In the examples considered the question of stability or instability of the equilibrium of a system was easily answered from physical considerations, without investigating the differential equations.

Thus if the pendulum, in its equilibrium position OA, is moved by some external force to a nearby position OA′, i.e., if a small change is made in the initial conditions, then the subsequent motion of the pendulum cannot carry it very far from the equilibrium position, and this deviation will be smaller for smaller original deviations OA′, i.e., in this case the equilibrium position will be stable.
For other more complicated cases, the question of stability of the equilibrium position is considerably more complicated and can be dealt with only by investigating the corresponding differential equations.

Let some physical process be described by the system of equations:

dx/dt = ƒ1(x, y, t),  dy/dt = ƒ2(x, y, t)  (57)

For simplicity, we consider only a system of two differential equations, although our conclusions remain valid for systems with more. Each particular solution of the system, consisting of two functions x(t) and y(t), will be called a motion. We will assume that f1(x, y, t) and f2(x, y, t) have continuous partial derivatives. It has been shown that, in this case, the solution of the system of differential equations (57) is uniquely defined if at any instant of time t = t0 the initial values x(t0) = x0 and y(t0) = y0 are given.
We will denote by x(t, x0, y0) and y(t, x0, y0) the solution of the system of equations (57) satisfying these initial conditions. A solution x(t, x0, y0), y(t, x0, y0) is called stable if, for all t > t0, the functions x(t, x0, y0) and y(t, x0, y0) have arbitrarily small changes for sufficiently small changes in the initial values x0 and y0.
More exactly, for a solution to be stable the differences:

|x(t, x0 + δ1, y0 + δ2) − x(t, x0, y0)| and |y(t, x0 + δ1, y0 + δ2) − y(t, x0, y0)|

may be made less than any previously given number for all t > t0, if the numbers δ1 and δ2 are taken sufficiently small in absolute value. Every motion that is not stable in this sense is called unstable.
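A hedged numeric sketch of this definition (the system and all step sizes are illustrative assumptions of mine): integrating a pendulum-like system of the form (57) from two nearby initial states, the two motions remain close, as stability demands.

import numpy as np

def f1(x, y, t):                       # x = angle, y = angular velocity
    return y

def f2(x, y, t):
    return -np.sin(x)                  # pendulum with a stable equilibrium at x = 0

def motion(x0, y0, dt=0.001, steps=20000):
    x, y, t = x0, y0, 0.0
    for _ in range(steps):             # simple Euler integration of the system
        x, y, t = x + dt * f1(x, y, t), y + dt * f2(x, y, t), t + dt
    return x, y

xa, ya = motion(0.10, 0.0)             # motion with initial values (x0, y0)
xb, yb = motion(0.11, 0.0)             # motion with slightly changed initial values
print(f"difference at t = 20:  dx = {abs(xa - xb):.4f}, dy = {abs(ya - yb):.4f}")
# the deviation stays of the order of the initial 0.01: the motion is stable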

Multiple derivatives – approximations: Fourier series.

We return now to the key expression of a sum of motions/actions of a herd along a lineal 2D path – the Fourier series.

Fourier series arose in connection with the study of certain physical phenomena, in particular, small oscillations of elastic media. A characteristic example is the oscillation of a musical string.

Indeed, as we explained the investigation of oscillating strings was the origin historically of Fourier series and determined the direction in which their theory developed. So we can consider the initial case in more detail.
Let us consider (figure below) a tautly stretched string, the ends of which are fixed at the points x = 0 and x = l of the axis Ox. If we displace the string from its position of equilibrium, it will oscillate.
We will follow the motion of a specific point of the string, with abscissa x0. Its deviation vertically from the position of equilibrium is a function ϕ(t) of time. It can be shown that one can always give the string an initial position and velocity at t = 0 such that as a result the point which we have agreed to follow will perform harmonic oscillations in the vertical direction, defined by the function:

ϕ(t) = A cos kαt + B sin kαt  (20)

Here α is a constant depending only on the physical properties of the string (on the density, tension, and length), k is an arbitrary number, and A and B are constants.
We note that our discussion relates only to small oscillations of the string. This gives us the right to assume approximately that every point x0 is oscillating only in the vertical direction, displacements in the horizontal direction being ignored. We also assume that the friction arising from the oscillation of the string is so small that we may ignore it. As a result of these approximate assumptions, the oscillations will not die out. Then equation (20) defines the possibilities of oscillation for the point x0 in periodic harmonic form. And these functions have the following remarkable property: experiments and their accompanying theory show that every possible oscillation of the point x0 is the result of combining certain harmonic oscillations of the form (20).

Relatively simple oscillations are obtained by combining a finite number of such oscillations; i.e., they are described by functions of the form:

ϕ(t) = Σ (k = 1 to n) (Ak cos kαt + Bk sin kαt),

where Ak and Bk are corresponding constants. These functions are called trigonometric polynomials. In more complicated cases, the oscillation will be the result of combining an infinite number of oscillations of the form (20), corresponding to k = 1, 2, 3, ··· and with suitably chosen constants Ak and Bk depending on the number k. Consequently, we arrive at the necessity of representing a given function ϕ(t) of period 2π/α, which describes an arbitrary oscillation of the point x0, in the form of a series:

ϕ(t) = Σ (k = 1 to ∞) (Ak cos kαt + Bk sin kαt)  (21)

The remarkable fact here, from the ∆st pov, is that if we consider (21) to be the spatial symmetry for the temporal case (20) of a limited number of actions (ternary, penta- or 11th factors being the commonest harmonies in ∆st and reality), then, contrary to a first deduction, a simultaneous superorganism reaches more efficiency, simultaneity and complexity in its network control of its micro-parts when they increase in number. So from big masses, which have better orbital circles than elliptic light comets, to human organisms, better constructed than simpler sponge colonies, as a reproductive mass of elements grows, the ‘epigenetic elements’ in growing scales of ∆-depth that organise the system (not shown in that equation) improve its control efficiency till the ’11¹¹’ completion of a plane of form, the ultimate meaning of an in(finite).

Going back in time, then, the growth of complexity means that the time frequency will grow as the space-intensity diminishes, and finally both vanish.

There are many other situations in physics where this is natural. So we can consider a given function, even though it does not necessarily describe an oscillation, as the sum of an infinite trigonometric series of the form (21). Such a case arises, for example, in connection with the vibrating string itself. The exact law for the subsequent oscillation of a string, to which at the beginning of the experiment we have given a specific initial displacement (for example, as illustrated in figure 12), is easy to calculate, provided we know the expansion in a trigonometric series (a particular case of the series (21)) of the function f(x) describing the initial position:

f(x) = Σ (k = 1 to ∞) Ak sin (kπx/l)

Expansion of functions in a trigonometric series

On the basis of what has been said there arises the fundamental question: Which functions of period 2π/α can be represented as the sum of a trigonometric series of the form (21)?

This question was raised in the 18th century by Euler and Bernoulli in connection with Bernoulli’s study of the vibrating string. Here Bernoulli took the point of view, suggested by physical considerations, that a very wide class of continuous functions, including in particular all graphs drawn by hand, can be expanded in a trigonometric series. This opinion received harsh treatment from many of Bernoulli’s contemporaries. They held tenaciously to the idea, prevalent at the time, that if a function is represented by an analytic expression (such as a trigonometric series), then it must have good differentiability properties. But the function illustrated in figure 12 does not even have a derivative at the point ξ; in such a case, how can it be defined by one and the same analytic expression on the whole interval [0, l]?
We know now that the physical point of view of Bernoulli was quite right. But to put an end to the controversy it was necessary to wait an entire century, since a full answer to these questions required first of all that the concepts of a limit and of the sum of a series be put on an exact basis.
The fundamental mathematical investigations confirming the physical point of view but based on the older ideas concerning the foundations of analysis were completed in 1807-1822 by the French mathematician Fourier.
Finally, in 1829, the German mathematician Dirichlet showed, with all the rigor with which it would be done in present-day mathematics, that every continuous function of period 2π/α which for any one period has a finite number of maxima and minima can be expanded in a unique trigonometric Fourier series, uniformly convergent to the function.
Figure 13 illustrates a function satisfying Dirichlet’s conditions. Its graph is continuous and periodic, with period 2π, and has one maximum and one minimum in the period 0 ≤ x ≤ 2π.

Fourier coefficients.

In what follows we will consider functions of period 2π, which will simplify the formulas. We consider any continuous function f(x) of period 2π satisfying Dirichlet’s condition. By Dirichlet’s theorem it may be expanded into a trigonometric series:

f(x) = a0/2 + Σ (k = 1 to ∞) (ak cos kx + bk sin kx),  (22)

which is uniformly convergent to it. The fact that the first term is written as a0/2 rather than a0 has no real significance but is purely a matter of convenience, as we shall see later.
We pose the problem: to compute the coefficients ak and bk of the series for a given function f(x).
To this end we note the following equations:

∫ (0 to 2π) cos kx cos mx dx = 0 (k ≠ m),  ∫ (0 to 2π) sin kx sin mx dx = 0 (k ≠ m),  ∫ (0 to 2π) sin kx cos mx dx = 0;
∫ (0 to 2π) cos² mx dx = ∫ (0 to 2π) sin² mx dx = π (m ≥ 1),  ∫ (0 to 2π) 1² dx = 2π,  (23)

which the reader may verify. These integrals are easy to compute by reducing the products of the various trigonometric functions to their sums and differences, and their squares to expressions containing the corresponding trigonometric functions of double the angle.

The first equation states that the integral, over a period of the function, of the product of two different functions from the sequence 1, cos x, sin x, cos 2x, sin 2x, ··· is equal to zero (the so-called orthogonality property of the trigonometric functions). On the other hand, the integral of the square of each of the functions of this sequence is equal to π. The first function, identically equal to one, forms an exception, since the integral of its square over the period is equal to 2π. It is this fact which makes it convenient to write the first term of the series (22) in the form a0/2.
Now we can easily solve our problem. To compute the coefficient am, we multiply the left side and each term on the right side of the series (22) by cos mx and integrate term by term over a period 2π, as is permissible since the series obtained after multiplication by cos mx is uniformly convergent. By (23) all integrals on the right side, with the exception of the integral corresponding to cos mx, will be zero, so that obviously:

am = (1/π) ∫ (0 to 2π) f(x) cos mx dx  (24)

Similarly, multiplying the left and right sides of (22) by sin mx and integrating over the period, we get an expression for the coefficients:

bm = (1/π) ∫ (0 to 2π) f(x) sin mx dx  (25)

and we have solved our problem. The numbers am and bm computed by formulas (24) and (25) are called the Fourier coefficients of the function f(x).
Let us take as an example the function f(x) of period 2π illustrated in figure 13. Obviously this function is continuous and satisfies Dirichlet’s condition, so that its Fourier series converges uniformly to it.
It is easy to see that this function also satisfies the condition f(−x) = −f(x). The same condition also clearly holds for the function F1(x) = f(x) cos mx, which means that the graph of F1(x) is symmetric with respect to the origin. From geometric arguments it is clear that:

∫ (over a period) F1(x) dx = 0,

so that am = 0 (m = 0, 1, 2, ···). Further, it is not difficult to see that the function F2(x) = f(x) sin mx has a graph which is symmetric with respect to the axis Oy, so that:

bm = (2/π) ∫ (0 to π) f(x) sin mx dx.

But for even m this graph is symmetric with respect to the center π/2 of the segment [0, π], so that bm = 0 for even m. For odd m = 2l + 1 (l = 0, 1, 2, ···) the graph of F2(x) is symmetric with respect to the straight line x = π/2, so that:

b(2l+1) = (4/π) ∫ (0 to π/2) f(x) sin (2l + 1)x dx.

But, as can be seen from the sketch, on the segment [0, π/2] we have simply f(x) = x, so that by integration by parts we get:

b(2l+1) = (4/π) ∫ (0 to π/2) x sin (2l + 1)x dx = 4(−1)ˡ / (π(2l + 1)²),

so that f(x) = (4/π)(sin x − sin 3x/3² + sin 5x/5² − ···).

Thus we have found the expansion of our function in a Fourier series.
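A numeric cross-check of formulas (24) and (25) against the closed form just derived (my own reconstruction of the figure-13 function as the standard odd triangle wave; the grid size is an arbitrary assumption):

import numpy as np

def f(x):
    # odd 2*pi-periodic triangle wave: f(x) = x on [0, pi/2], mirrored on [pi/2, pi]
    x = np.mod(x + np.pi, 2 * np.pi) - np.pi
    return np.where(np.abs(x) <= np.pi / 2, x, np.sign(x) * (np.pi - np.abs(x)))

x = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
dx = x[1] - x[0]
for m in range(1, 6):
    am = np.sum(f(x) * np.cos(m * x)) * dx / np.pi     # formula (24)
    bm = np.sum(f(x) * np.sin(m * x)) * dx / np.pi     # formula (25)
    closed = 4 / np.pi * (-1) ** ((m - 1) // 2) / m ** 2 if m % 2 else 0.0
    print(f"m = {m}:   a_m = {am:+.5f}   b_m = {bm:+.5f}   closed form = {closed:+.5f}")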
Convergence of the Fourier partial sums to the generating function. In applications it is customary to take as an approximation to the function f(x) of period 2π the sum:

Sn(x) = a0/2 + Σ (k = 1 to n) (ak cos kx + bk sin kx)

of the first n terms of its Fourier series, and then there arises the question of the error of the approximation. If the function f(x) of period 2π has a derivative f⁽ʳ⁾(x) of order r which for all x satisfies the inequality:

|f⁽ʳ⁾(x)| ≤ M,

then the error of the approximation may be estimated as follows:

|f(x) − Sn(x)| ≤ cr M ln n / nʳ,

where cr is a constant depending only on r. We see that the error converges to zero with increasing n, the convergence being more rapid the more derivatives the function has.
For a function which is analytic on the whole real axis there is an even better estimate, as follows:

|f(x) − Sn(x)| ≤ c qⁿ,  (26)

where c and q are positive constants depending on f, with q < 1. It is remarkable that the converse is also true, namely that if the inequality (26) holds for a given function, then the function is necessarily analytic. This fact, which was discovered at the beginning of the present century, in a certain sense reconciles the controversy between D. Bernoulli and his contemporaries. We can now state: If a function is expandable in a Fourier series which converges to it, this fact in itself is far from implying that the function is analytic; however, it will be analytic if its deviation from the sum of the first n terms of the Fourier series decreases more rapidly than the terms of some decreasing geometric progression.
A comparison of the estimates of the approximations provided by the Fourier sums with the corresponding estimates for the best approximations of the same functions by trigonometric polynomials shows that for smooth functions the Fourier sums give very good approximations, which are in fact, close to the best approximations. But for nonsmooth continuous functions the situation is worse: Among these, for example, occur some functions whose Fourier series diverges on the set of all rational points.
It remains to note that in the theory of Fourier series there is a question which was raised long ago and has not yet been answered: Does there exist a continuous periodic function f(x) whose Fourier series fails for all x to converge to the function as n → ∞? The best result in this direction is due to A. N. Kolmogorov, who proved in 1926 that there exists a periodic Lebesgue-integrable function whose Fourier series does not converge to it at any point. But a Lebesgue-integrable function may be discontinuous, as is the case with the function constructed by Kolmogorov. The problem still awaits its final solution.
To provide approximations by trigonometric polynomials to arbitrary continuous periodic functions, the methods of so-called summation of Fourier series are in use at the present time. In place of the Fourier sums as an approximation to a given function we consider certain modifications of them. A very simple method of this sort was proposed by the Hungarian mathematician Fejér. For a continuous periodic function we first, in a purely formal way, construct its Fourier series, which may be divergent, and then form the arithmetic means of the first n partial sums:

σn(x) = [S0(x) + S1(x) + ··· + Sn−1(x)] / n  (27)

This is the Fejér sum of order n corresponding to the given function f(x). Fejér proved that as n → ∞ this sum converges uniformly to f(x).
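A brief numeric illustration of Fejér’s construction (27), reusing the triangle wave whose coefficients were computed above (an assumption of mine; for this function the Fourier sums Sn already converge, so the point is only to watch the arithmetic means σn approximate f as well):

import numpy as np

def f(x):
    x = np.mod(x + np.pi, 2 * np.pi) - np.pi
    return np.where(np.abs(x) <= np.pi / 2, x, np.sign(x) * (np.pi - np.abs(x)))

def S(n, x):
    # Fourier partial sum of order n, from the closed-form coefficients found above
    total = np.zeros_like(x)
    for l in range((n + 1) // 2):
        k = 2 * l + 1
        total += 4 / np.pi * (-1) ** l * np.sin(k * x) / k ** 2
    return total

def sigma(n, x):
    return sum(S(k, x) for k in range(n)) / n          # Fejér sum (27)

x = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
for n in (4, 16, 64):
    print(f"n = {n:3d}   max|f - S_n| = {np.abs(f(x) - S(n, x)).max():.4f}"
          f"   max|f - sigma_n| = {np.abs(f(x) - sigma(n, x)).max():.4f}")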

Approximation in the Sense of the Mean Square

Let us return to the problem of the oscillating string. We assume that at a certain moment t0 the string has the form y = f(x). We can prove that its potential energy W, i.e., the work made available as it moves from the given position to its position of equilibrium, is equal (for small deviations of the string), at least up to a constant factor, to the integral:

W = ∫ (0 to l) [f′(x)]² dx.

Suppose now that we wish to approximate the function f(x) by another function ϕ(x). Together with the given string, we will consider a string whose shape is defined by ϕ(x), and still a third string, defined by the function f(x) − ϕ(x). It may be proved that if the energy:

∫ (0 to l) [f′(x) − ϕ′(x)]² dx  (28)

of the third string is small, then the difference between the energies of the first two strings will also be small. Thus, if it is important that the second string have an energy which differs little from that of the first, we must try to find a function ϕ′(x) for which the integral (28) will be as small as possible. We are thus led to the problem of approximation to a function (in this case f′(x)) in the sense of the mean square.
Here is how this problem is stated in the general case. On the interval [a, b] we are given the function F(x), and also the function:

ϕ(x, α0, α1, ···, αn),  (29)

depending not only on x but also on the parameters α0, α1, ···, αn. It is required to choose these parameters in such a way as to minimise:

∫ (a to b) [F(x) − ϕ(x, α0, α1, ···, αn)]² dx.  (30)

Here the idea is to find the best approximation of the function F(x) by functions of the family (29), but only in the sense of the mean square. It is now unimportant for us whether or not the difference F − Ψ is small for all values of x on the interval [a, b]; on a small part of the interval the difference F − Ψ may even be large, provided only that the integral (30) is small, as is the case, for example, for the two graphs illustrated in the figure:

The smallness of the quantity (30) shows that the functions F and Ψ are close to each other on by far the greater part of the interval.

As to the choice in practice of one method of approximation or another, everything depends on the purpose in view. In the earlier example of the string, it is natural to approximate the function f′(x) in the sense of the mean square.

We should state that from the computational point of view the method of the mean square is more convenient, since it can be reduced to the application of well-developed methods of general analysis.
As an example let us consider the following characteristic problem.
We wish to make the best approximation in the sense of the mean square to a given continuous function f(x) on the interval [a, b] by sums of the form:

Σ (k = 0 to n) αk ϕk(x),

where the αk are constants and the functions ϕk(x) are continuous and form an orthogonal and normal system.
This last means that we have the following equations:

∫ (a to b) ϕk(x) ϕl(x) dx = 0 (k ≠ l),  ∫ (a to b) ϕk²(x) dx = 1,

and we introduce the numbers:

ak = ∫ (a to b) f(x) ϕk(x) dx.

These numbers ak are called the Fourier coefficients of f with respect to the ϕk.
For arbitrary coefficients αk, on the basis of the properties of orthogonality and normality of the ϕk, we have the equation:

∫ (a to b) [f(x) − Σ (k = 0 to n) αk ϕk(x)]² dx = [∫ (a to b) f²(x) dx − Σ (k = 0 to n) ak²] + Σ (k = 0 to n) (αk − ak)².

The first term on the right side of the derived equation does not depend on the numbers αk. Thus the right side will be smallest for those αk which make the second term itself smallest, and obviously this happens only if the numbers αk are equal to the corresponding Fourier coefficients ak.
Thus we have reached the following important result. If the functions ϕk form an orthogonal and normal system on the interval [a, b], then the sum:

Σ (k = 0 to n) αk ϕk(x)

will be the best approximation, in the sense of the mean square, to the function f(x) on this interval if and only if the numbers αk are the Fourier coefficients of the function f with respect to the ϕk(x).
On the basis of the equations (23) it is easily established that the functions:

1/√(2π), cos x/√π, sin x/√π, cos 2x/√π, sin 2x/√π, ···

form an orthogonal and normal system on the interval [0, 2π]. Thus the stated proposition, as applied to the trigonometric functions, has the following form.
The Fourier sum Sn(x), computed for a given continuous function f(x) of period 2π, is the best approximation, in the sense of the mean square, to the function f(x) on the interval [0, 2π], among all trigonometric polynomials of order n:

α0/2 + Σ (k = 1 to n) (αk cos kx + βk sin kx).
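A minimal sketch of that minimal property (the test function and the size of the perturbation are illustrative assumptions of mine): moving any single coefficient away from its Fourier value strictly increases the mean-square error (30).

import numpy as np

x = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
dx = x[1] - x[0]
f = np.exp(np.sin(x))                  # any continuous function of period 2*pi

def mean_square_error(a, b):
    # error of the order-2 trigonometric polynomial with coefficients a, b
    approx = a[0] / 2 + sum(a[k] * np.cos(k * x) + b[k] * np.sin(k * x) for k in (1, 2))
    return np.sum((f - approx) ** 2) * dx

a = [np.sum(f * np.cos(k * x)) * dx / np.pi for k in range(3)]   # Fourier a_k
b = [np.sum(f * np.sin(k * x)) * dx / np.pi for k in range(3)]   # Fourier b_k
a_pert = a.copy()
a_pert[1] += 0.05                                                # perturb one coefficient
print(f"Fourier coefficients: {mean_square_error(a, b):.6f}"
      f"   perturbed: {mean_square_error(a_pert, b):.6f}")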
From this result and from Fejér’s theorem, formulated in §7, we are led to another remarkable fact.
Let f(x) be a continuous function of period 2π and σn(x) be its Fejér sum of order n, defined in §7 by equation (27).
We introduce the notation:

ηn = max |f(x) − σn(x)|.

Since the Fourier sums Sk(x) (k = 0, 1, ···, n) are trigonometric polynomials of order k ≤ n, it is obvious that σn(x) is a trigonometric polynomial of order n. Thus, from the minimal property of the sum Sn(x) shown previously, we have the inequality:

∫ (0 to 2π) [f(x) − Sn(x)]² dx ≤ ∫ (0 to 2π) [f(x) − σn(x)]² dx ≤ 2π ηn².

Since, by Fejér’s theorem, the quantity ηn converges to zero as n → ∞, we obtain the following important result.
For any continuous function of period 2π we have the equation:

lim (n → ∞) ∫ (0 to 2π) [f(x) − Sn(x)]² dx = 0.

In this case we say that the Fourier sum of order n of a continuous function f(x) converges to f(x) in the sense of the mean square, as n increases beyond all bounds.
In fact, this statement is true for a wider class of functions, namely those which are integrable, together with their square, in the sense of Lebesgue.
We will stop here and will not present other interesting facts from the theory of Fourier series and orthogonal functions, based on approximation in the sense of the mean square.

 

THE GEOMETRIC VIEW:

Multiple Integrals. 

Integrals in topology.

As the Universe is a kaleidoscopic mirror of symmetries between all its elements, this dominance of analysis in ∆-scaling must also add the use of analysis on a single plane, in fact the most common use, where the essential consideration is the ∆§ocial decametric and e-π ternary scaling, with minimal distortion (which happens in the Lorentzian limits between scales).

This key distinction of GST (∆§ well-behaved scaling versus ∆±i ‘distorted’ emergence and dissolution, which does change the form of the system) has special relevance in analysis, as for very long the ‘continuity’ of the function studied, without such distortions, was required; so analysis was restricted to ∆(±1 – 0) intervals and ‘broke’ when jumping two scales, as in processes of entropy (death-feeding). But with improved approximation techniques, functionals and operators (which assume a whole scale of ∞ parts as a function of functions in the operator of the larger scale), and renormalisation in double and triple integrals and derivatives, applied in praxis without understanding the scalar theory behind them, this hurdle has today largely been overcome.

And it has always amused me that humans can get so far in all disciplines by trial and error, when a ‘little bit of thought on first principles’ could make things much easier. It seems though-thought beings are scarce in our species and highly disregarded, as the site’§ight§how (allow me a bit of cacophony and repetition, the trademark of this blog and the Universe 🙂 As usual I shall also repeat, I welcome comments, and offers of serious help from specialists and Universities, since nothing would make me happier than unloading tons of now-confusing analysis, and not only of analysis, before I get another health crisis and all goes to waste in the eternal entropic arrow of two derivatives, aka death.

Geometric Interpretation of the Problem of Integrating Differential Equations; Generalization of the Problem

For simplicity we will consider initially only one differential equation of the first order with one unknown function: dy/dx = ƒ(x, y) (29), where the function f(x, y) is defined on some domain G in the (x, y) plane. This equation determines at each point of the domain the slope of the tangent to the graph of a solution of equation (29) at that point. If at each point (x, y) of the domain G we indicate by means of a line segment the direction of the tangent (either of the two directions may be used) as determined by the value of f(x, y) at this point, we obtain a field of directions. Then the problem of finding a solution of the differential equation (29) for the initial condition y(x0) = y0 may be formulated thus: In the domain G we have to find a curve y = ϕ(x), passing through the point M0(x0, y0), which at each of its points has a tangent whose slope is given by equation (29), or briefly, which has at each of its points a preassigned direction.

From the geometric point of view this statement of the problem has two unnatural features:
1.  By requiring that the slope of the tangent at any given point (x, y) of the domain G be equal to f(x, y), we automatically exclude tangents parallel to Oy, since we generally consider only finite magnitudes; in particular, it is assumed that the function f(x, y) on the right side of equation (29) assumes only finite values.
2.  By considering only curves which are graphs of functions of x, we also exclude those curves which are intersected more than once by a line perpendicular to the axis Ox, since we consider only single-valued functions; in particular, every solution of a differential equation is assumed to be a single-valued function of x.
So let us generalize to some extent the preceding statement of the problem of finding a solution to the differential equation (29). Namely, we will now allow the tangent at some points to be parallel to the axis Oy. At these points, where the slope of the tangent with respect to the axis Ox has no meaning, we will take the slope with respect to the axis Oy. In other words, we consider, together with the differential equation (29), the equation: dx/dy = ƒ1(x, y) (29′)
where f1(x, y) = 1/f(x, y), if f(x, y) ≠ 0, using the second equation when the first is meaningless. The problem of integrating the differential equations (29) and (29′) then becomes: In the domain G to find all curves having at each point the tangent defined by these equations.

These curves will be called integral curves (integral lines) of the equations (29) and (29′) or of the tangent field given by these equations. In place of the plural “equations (29), (29′)”, we will often use the singular “equation (29), (29′)”. It is clear that the graph of any solution of equation (29) will also be an integral curve of equation (29), (29′). But not every integral curve of equation (29), (29′) will be the graph of a solution of equation (29). This case will occur, for example, if some perpendicular to the axis Ox intersects this curve at more than one point.
In what follows, if it can be clearly shown that ƒ(x, y) = M(x, y)/N(x, y), then we will write only the equation dy/dx = M(x, y)/N(x, y) and omit dx/dy = N(x, y)/M(x, y). Sometimes in place of these equations we introduce a parameter t and write the system of equations: dy/dt = M(x, y), dx/dt = N(x, y), where x and y are considered as functions of t.

Example 1. The equation: dy/dx = y/x (30) defines a tangent field everywhere except at the origin. This tangent field is sketched in figure 7. All the tangents given by equation (30) pass through the origin.

It is clear that for every k the function y = kx (31) is a solution of equation (30). The collection of all integral curves of this equation is then defined by the relation ax + by = 0, (32) where a and b are arbitrary constants, not both zero. The axis Oy is an integral curve of equation (30), but it is not the graph of a solution of it.

Since equation (30) does not define a tangent field at the origin, the curves (31) and (32) are, strictly speaking, integral curves everywhere except at the origin. Thus it is more correct to say that the integral curves of equation (30) are not straight lines passing through the origin but half lines issuing from it.

Example 2. The equation: dy/dx = −x/y (33) defines a field of tangents everywhere except at the origin, as sketched in the figure. The tangents given by equations (30) and (33) at a given point (x, y) are perpendicular to each other. It is clear that all circles centered at the origin will be integral curves of equation (33). However, the solutions of this equation will be the functions: y = √(C² − x²) and y = −√(C² − x²). Now this duality is ESSENTIAL, as any undergraduate student knows, to the duality of potential fields vs. charge singularity forces; and if he has understood anything, he will see they are the 2 views of a T.œ control of its vital energy, from the perpendicular=predatory view of the singularity (4th non-E postulate) vs. the parallelism of the membrane that encircles it.

So it also ultimately reflects the DEEPEST meaning of the potential, parallel, stable vs. kinetic, perpendicular, unstable duality of the 2 existential states of a vital energy, which naturally will tend to a potential state of minimal kinetic disturbance, to the eternally wished for state of informative 1D curved eternal existence over the lineal, destructive entropic motion.

So deep is the duality of y = kx vs. dy/dx = −x/y (:

And so we shall bring it back in many posts.

Existence and Uniqueness of the Solution of a Differential Equation; Approximate Solution of Equations

The question of existence and uniqueness of the solution of a differential equation. We return to the differential equation (17) of arbitrary order n. Generally, it has infinitely many solutions and in order that we may pick from all the possible solutions some one specific one, it is necessary to attach to the equation some supplementary conditions, the number of which should be equal to the order n of the equation. Such conditions may be of extremely varied character, depending on the physical, mechanical, or other significance of the original problem.

For example, if we have to investigate the motion of a mechanical system beginning with some specific initial state, the supplementary conditions will refer to a specific (initial) value of the independent variable and will be called initial conditions of the problem. But if we want to define the curve of a cable in a suspension bridge, or of a loaded beam resting on supports at each end, we encounter conditions corresponding to different values of the independent variable, at the ends of the cable or at the points of support of the beam. We could give many other examples showing the variety of conditions to be fulfilled in connection with differential equations.
We will assume that the supplementary conditions have been defined and that we are required to find a solution of equation (17):

y⁽ⁿ⁾ = ƒ(x, y, y′, ···, y⁽ⁿ⁻¹⁾)

that satisfies them.

 

The first question we must consider is whether any such solution exists at all. It often happens that we cannot be sure of this in advance. Assume, say, that equation (17) is a description of the operation of some physical apparatus and suppose we want to determine whether periodic motion occurs in this apparatus. The supplementary conditions will then be conditions for the periodic repetition of the initial state in the apparatus, and we cannot say ahead of time whether or not there will exist a solution which satisfies them.
In any case the investigation of problems of existence and uniqueness of a solution makes clear just which conditions can be fulfilled for a given differential equation and which of these conditions will define the solution in a unique manner.

But the determination of such conditions and the proof of existence and uniqueness of the solution for a differential equation corresponding to some physical problem also has great value for the physical theory itself. It shows that the assumptions adopted in setting up the mathematical description of the physical event are on the one hand mutually consistent and on the other constitute a complete description of the event.
The methods of investigating the existence problem are manifold, but among them an especially important role is played by what are called direct methods. The proof of the existence of the required solution is provided by the construction of approximate solutions, which are proved to converge to the exact solution of the problem. These methods not only establish the existence of an exact solution, but also provide a way, in fact the principal one, of approximating it to any desired degree of accuracy.
For the rest of this section we will consider, for the sake of definiteness, a problem with initial data, for which we will illustrate the ideas of Euler’s method and the method of successive approximations.

Euler’s method of broken lines.

Consider in some domain G of the (x, y) plane the differential equation: dy/dx = ƒ(x, y) (34)

As we have already noted, equation (34) defines in G a field of tangents. We choose any point (x0, y0) of G. Through it there will pass a straight line L0 with slope f(x0, y0). On the straight line L0 we choose a point (x1, y1), sufficiently close to (x0, y0); in figure 9 this point is indicated by the number 1.

We draw the straight line L1 through the point (x1, y1) with slope f(x1, y1) and on it mark the point (x2, y2); in the figure this point is denoted by the number 2. Then on the straight line L2, corresponding to the point (x2, y2), we mark the point (x3, y3), and continue in the same manner with x0 < x1 < x2 < x3 < ···. It is assumed, of course, that all the points (x0, y0), (x1, y1), (x2, y2), ··· are in the domain G. The broken line joining these points is called an Euler broken line.

One may also construct an Euler broken line in the direction of decreasing x; the corresponding vertices on our figure are denoted by –1, –2, –3.

It is reasonable to expect that every Euler broken line through the point (x0, y0) with sufficiently short segments gives a representation of an integral curve l passing through the point (x0, y0), and that with decrease in the length of the links, i.e., when the length of the longest link tends to zero, the Euler broken line will approximate this integral curve.

Here, of course, it is assumed that the integral curve exists. In fact it is not hard to prove that if the function f(x, y) is continuous in the domain G, one may find an infinite sequence of Euler broken lines, with the length of the largest links tending to zero, which converges to an integral curve l. However, one usually cannot prove uniqueness: there may exist different sequences of Euler broken lines that converge to different integral curves passing through one and the same point (x0, y0). M. A. Lavrent’ev has constructed an example of a differential equation of the form (29), with a continuous function f(x, y), such that in any neighborhood of any point P of the domain G there pass not one but at least two integral curves.

In order that through every point of the domain G there pass only one integral curve, it is necessary to impose on the function f(x, y) certain conditions beyond that of continuity. It is sufficient, for example, to assume that the function f(x, y) is continuous and has a bounded derivative with respect to y on the whole domain G. In this case it may be proved that through each point of G there passes one and only one integral curve and that every sequence of Euler broken lines passing through the point (x0, y0) converges uniformly to this unique integral curve, as the length of the longest link of the broken lines tends to zero. Thus for sufficiently small links the Euler broken line may be taken as an approximation to the integral curve of equation (34).
From the preceding it can be seen that the Euler broken lines are so constituted that small pieces of the integral curves are replaced by line segments tangent to these integral curves. In practice, many approximations to integral curves of the differential equation (34) consist not of straight-line segments tangent to the integral curves, but of parabolic segments that have a higher order of tangency with the integral curve. In this way it is possible to find an approximate solution with the same degree of accuracy in a smaller number of steps (with a smaller number of links in the approximating curve).
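Euler’s broken-line construction translates almost verbatim into code. A compact sketch (the test equation dy/dx = y is an illustrative assumption): the final vertex approaches the exact integral curve y = eˣ as the links shrink.

import math

def euler_broken_line(f, x0, y0, x_end, h):
    # returns the vertices (x_k, y_k) of the Euler broken line with link length h
    n = round((x_end - x0) / h)
    xs, ys = [x0], [y0]
    for k in range(n):
        x, y = xs[-1], ys[-1]
        xs.append(x0 + (k + 1) * h)
        ys.append(y + h * f(x, y))     # follow the tangent direction at (x, y)
    return xs, ys

for h in (0.1, 0.01, 0.001):
    _, ys = euler_broken_line(lambda x, y: y, 0.0, 1.0, 1.0, h)
    print(f"h = {h:6.3f}   y(1) ~ {ys[-1]:.6f}   (exact e = {math.e:.6f})")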

The method of successive approximations.

We now describe another method of successive approximation, which is as widely used as the method of the Euler broken lines. We assume again that we are required to find a solution y(x) of the differential equation (34) satisfying the initial condition: y(x0) = y0.

For the initial approximation to the function y(x), we take an arbitrary function y0(x). For simplicity we will assume that it also satisfies the initial condition, although this is not necessary. We substitute it into the right side f(x, y) of the equation for the unknown function y and construct a first approximation y1 to the solution y from the following requirements: dy1/dx = f(x, y0(x)), y1(x0) = y0. Since there is a known function on the right side of the first of these equations, the function y1(x) may be found by integration: y1(x) = y0 + ∫ f(t, y0(t)) dt, the integral being taken from x0 to x.

It may be expected that y1(x) will differ from the solution y(x) by less than y0(x) does, since in the construction of y1(x) we made use of the differential equation itself, which should introduce a correction into the original approximation. One would also think that if we improve the first approximation y1(x) in the same way, then the second approximation: y2(x) = y0 + ∫ f(t, y1(t)) dt, again taken from x0 to x,

will be still closer to the desired solution.
Let us assume that this process of improvement has been continued indefinitely and that we have constructed the sequence of approximations: y0(x), y1(x), …, yn(x), ….
Will this sequence converge to the solution y(x)?
More detailed investigations show that if f(x, y) is continuous and ∂f/∂y is bounded in the domain G, the functions yn(x) will in fact converge to the exact solution y(x), at least for all x sufficiently close to x0, and that if we break off the computation after a sufficient number of steps, we will be able to find the solution y(x) to any desired degree of accuracy.
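As a hedged sketch of this iteration scheme (Picard's successive approximations) carried out on a grid, again with the assumed example f(x, y) = x + y:

```python
# A rough sketch of the method of successive approximations for
# dy/dx = f(x, y), y(x0) = y0, on a grid; f = x + y is an assumed example.

def picard(f, x0, y0, x1, n_iter=8, n_grid=2000):
    h = (x1 - x0) / n_grid
    xs = [x0 + i * h for i in range(n_grid + 1)]
    ys = [y0] * len(xs)               # initial approximation y_0(x) = y0
    for _ in range(n_iter):
        new = [y0]
        acc = 0.0
        for i in range(1, len(xs)):   # y_{n+1}(x) = y0 + ∫ f(t, y_n(t)) dt
            acc += 0.5 * h * (f(xs[i-1], ys[i-1]) + f(xs[i], ys[i]))
            new.append(y0 + acc)
        ys = new
    return xs, ys

xs, ys = picard(lambda x, y: x + y, 0.0, 1.0, 1.0)
print(ys[-1])   # approaches 2e - 2 ≈ 3.43656 as n_iter grows
```

Each pass uses only the previous approximation under the integral sign, which is exactly why a known function always stands on the right side.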
Exactly in the same way as for the integral curves of equation (34), we may also find approximations to integral curves of a system of two or more differential equations of the first order. Essentially, the necessary condition here is to be able to solve these equations for the derivatives of the unknown functions. For example, suppose we are given the system (37): dy/dx = f1(x, y, z), dz/dx = f2(x, y, z). Assuming that the right sides of these equations are continuous and have bounded derivatives with respect to y and z in some domain G in space, it may be shown under these conditions that through each point (x0, y0, z0) of the domain G, in which the right sides of the equations in (37) are defined, there passes one and only one integral curve:

y = ϕ(x), z = ψ(x) of the system (37). The functions f1(x, y, z) and f2(x, y, z) give the direction numbers at the point (x, y, z) of the tangent to the integral curve passing through this point. To find the functions ϕ(x) and ψ(x) approximately, we may apply the Euler broken-line method or other methods similar to the ones applied to the equation (34).
The process of approximate computation of the solution of ordinary differential equations with initial conditions may be carried out on computing machines. There are electronic machines that work so rapidly that if, for example, the machine is programmed to compute the trajectory of a projectile, this trajectory can be found in a shorter space of time than it takes for the projectile to hit its target (cf. Chapter XIV).
The connection between differential equations of various orders and systems of first-order equations. A system of ordinary differential equations, when solved for the derivative of highest order of each of the unknown functions, may in general be reduced, by the introduction of new unknown functions, to a system of equations of the first order, which is solved for all the derivatives. For example, consider the differential equation (38): d²y/dx² = f(x, y, dy/dx). We set dy/dx = z (39). Then equation (38) may be written in the form: dz/dx = f(x, y, z)   (40)

Hence, to every solution of equation (38) there corresponds a solution of the system consisting of equations (39) and (40). It is easy to show that to every solution of the system of equations (39) and (40) there corresponds a solution of equation (38).
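A small sketch of this reduction in code may make the bookkeeping clearer; the test equation y″ = −y (so f = −y) is an assumed illustration:

```python
# Sketch of the reduction d²y/dx² = f(x, y, dy/dx) to the first-order
# system dy/dx = z, dz/dx = f(x, y, z), integrated with Euler steps.

def solve_second_order(f, x0, y0, z0, h, n):
    y, z, x = y0, z0, x0
    for _ in range(n):
        # simultaneous update: both right sides use the old (y, z, x)
        y, z, x = y + h * z, z + h * f(x, y, z), x + h
    return y

# y'' = -y, y(0) = 0, y'(0) = 1 has the solution y = sin x.
approx = solve_second_order(lambda x, y, z: -y, 0.0, 0.0, 1.0, 1e-4, 31416)
print(approx)   # close to sin(3.1416) ≈ 0
```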
Equations not explicitly containing the independent variable. The problems of the pendulum, of the Helmholtz acoustic resonator, of a simple electric circuit, or of an electron-tube generator considered in §1 lead to differential equations in which the independent variable (time) does not explicitly appear. We mention equations of this type here, because the corresponding differential equations of the second order may be reduced in each case to a single differential equation of the first order rather than to a system of first-order equations as in the paragraph above for the general equation of the second order. This reduction greatly simplifies their study.
Let us then consider a differential equation of the second order, not containing the argument t in explicit form: F(x, dx/dt, d²x/dt²) = 0   (41). We set dx/dt = y and consider y as a function of x, so that: d²x/dt² = dy/dt = (dy/dx)(dx/dt) = y dy/dx   (42).

Then equation (41) may be rewritten in the form: F(x, y, y dy/dx) = 0   (43)

In this manner, to every solution of equation (41) there corresponds a unique solution of equation (43). Also to each of the solutions y = ϕ(x) of equation (43) there correspond infinitely many solutions of equation (41). These solutions may be found by integrating the equation: dx/dt = ϕ(x)   (44), where x is considered as a function of t.
It is clear that if this equation is satisfied by a function x = x(t), then it will also be satisfied by any function of the form x(t + t0), where t0 is an arbitrary constant.
It may happen that not every integral curve of equation (43) is the graph of a single function of x. This will happen, for example, if the curve is closed. In this case the integral curve of equation (43) must be split up into a number of pieces, each of which is the graph of a function of x. For every one of these pieces, we have to find an integral of equation (44).
The values of x and dx/dt which at each instant characterize the state of the physical system corresponding to equation (41) are called the phases of the system, and the (x, y) plane is correspondingly called the phase plane for equation (41). To every solution x = x(t) of this equation there corresponds the curve: x = x(t), y = x′(t)
in the (x, y) plane; t here is considered as a parameter. Conversely, to every integral curve y = ϕ(x) of equation (43) in the (x, y) plane there corresponds an infinite set of solutions of the form x = x(t + t0) for equation (41); here t0 is an arbitrary constant. Information about the behavior of the integral curves of equation (43) in the plane is easily transformed into information about the character of the possible solutions of equation (41). Every closed integral curve of equation (43) corresponds, for example, to a periodic solution of equation (41).
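A quick numeric illustration of the last statement, for the assumed example x″ = −x, where the phase curves x² + y² = const are closed and every solution is therefore periodic:

```python
# Sketch: phase-plane view of x'' = -x (an assumed example). A closed
# phase curve in the (x, y) plane signals a periodic solution x(t).

import math

x, y, h = 1.0, 0.0, 1e-4          # start at the phase point (1, 0)
for _ in range(62832):            # integrate through t ≈ 2π
    x, y = x + h * y, y - h * x   # dx/dt = y, dy/dt = -x
print(x, y)                       # returns near (1, 0): a closed phase curve
print(math.hypot(x, y))           # the radius x² + y² stays near 1
```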
If we subject equation (6) to the transformation (42), we obtain: dy/dx = (−ay − bx)/my.
Setting v = x and dv/dt = y in equation (16), in like manner we get:

Just as the state at every instant of the physical system corresponding to the second-order equation (41) is characterized by the two magnitudes (phases) x and y = dx/dt, the state of a physical system described by equations of higher order or by a system of differential equations is characterized by a larger number of magnitudes (phases). Instead of a phase plane, we then speak of a phase space.

DUALITIES: The behavior of integral curves in the large DOMAIN self-centred in the small singularity.

We now consider the behavior of the integral curves “in the large,” that is, in the entire domain of definition of the given system of differential equations, without attempting to preserve the scale. We will consider a space in which this system defines a field of directions as the phase space of some physical process. Then the general scheme of the integral curves corresponding to the system of differential equations will give us an idea of the character of all processes (motions) that can possibly occur in this system:


In the figures we have constructed approximate, schematized representations of the behavior of the integral curves in the neighborhood of an isolated singular point.

Why do those matter? Obviously because singularities, •@, matter. We can divide those curves into canonical cases of extensive families that exhaust the 3 possibilities:

∑=∏: 3D communication. What first calls attention is the symmetry of the upper fig. 12, where the singularity merely acts, as in a tetraktys configuration, as the identity neutral element that communicates all the flows that touch the T.œ system, entering and leaving the o-point symmetrically (having hence a line of 0-symmetry diagonal to the point).

It is also noticeable that the paths are ‘fast’, as the points of those paths know they will not be changed by the identity element.

ð•: 1D predation. But in the case where the 0-point acts as a predator that won’t let the point-prey go, the form is a spiralled, slow motion.

$: 2D flows. Finally, as usual, we have a ternary case in which the curves do NOT touch the singularity; curiously, the points start by going straight, perpendicular to it, so this case tends to apply to spatial points of vital energy with a certain ‘discerning’ view, which feed first on the field established by the singularity and then escape it once aware of what lies ahead. The last 2 cases can be compared in vital terms – not mere trajectories – to the behaviour of smallish ‘blind comets’ spiralling into stars that will feed on them, as opposed to symbiotic planets that herd gravitational quanta together with the star but will NOT fall into the gravitational trap.

Mathematically, the drawing of those curves is one of the most fundamental problems in the theory of differential equations: finding as simple a method as possible for constructing such a scheme for the behavior of the family of integral curves of a given system of differential equations in its entire domain of definition, in order to study the behavior of the integral curves of this system “in the large.”

And since we exist in a bidimensional Universe, this problem remains almost untouched for spaces of dimension higher than 2 (a recurrent fact of all mathematical mirrors, from Fermat’s last theorem to the proof of almost all geometrical theorems in a plane).

But the problem is still very far from being solved for the single equation of the form dy/dx = M(x, y)/N(x, y), even when M(x, y) and N(x, y) are polynomials, which shows how so many times the whys of ∆st are truly synoptic and simple, even if the detailed paths of 1D motions, the obsession of one-dimensional humans, are ignored.

In fact the only case quite resolved is… yes, you guessed it: that in which the particle has no ‘freedom of thought,’ so to speak, and falls down the spiral path of entropic death and informative capture by the singularity.

THIS WILL again be a rule of ∆st: the simplest solutions are those related with death, dissolution, entropy and the one-dimensional ‘fall’.
In what follows, we will assume that the functions M(x, y) and N(x, y) have continuous partial derivatives of the first order.
If all the points of a simply connected domain G, in which the right side of the differential equation is defined, are ordinary points, then the family of integral curves may be represented schematically as a family of segments of parallel straight lines; since in this case one integral curve will pass through each point, and no two integral curves can intersect. For an equation of more general form, which may have singular points, the structure of the integral curves may be much more complicated. The case in which the previous equation has an infinite set of singular points (i.e., points where the numerator and the denominator both vanish) may be excluded, at least when M(x, y) and N(x, y) are polynomials.

Thus we restrict our consideration to those cases in which the previous equation has a finite number of isolated singular points. The behavior of the integral curves that are near to one of these singular points forms the essential element  in setting up a schematized representation of the behavior of all the integral curves of the equation.

A very typical element in such a scheme for the behavior of all the integral curves of the previous equation is formed by the so-called limit cycles. Let us consider the equation (64): dρ/dϕ = ρ − 1, where ρ and ϕ are polar coordinates in the (x, y) plane.
The collection of all integral curves of this equation is given by the formula (65): ρ = 1 + Ceᵠ, where C is an arbitrary constant, different for different integral curves. In order that ρ be nonnegative, it is necessary, for C < 0, that ϕ have values no larger than −ln |C|. The family of integral curves will consist of
1. the circle ρ = 1 (C = 0);
2. the spirals issuing from the origin, which approach this circle from the inside as ϕ → −∞ (C < 0);
3. the spirals, which approach the circle ρ = 1 from the outside as ϕ → −∞ (C > 0).

The circle ρ = 1 is called a limit cycle for this equation. In general a closed integral curve l is called a limit cycle if it can be enclosed in an annular region, all points of which are ordinary for equation (64), and which is entirely filled by nonclosed integral curves.
From equation (65) it can be seen that all points of the circle are ordinary. This means that a small piece of a limit cycle is not different from a small piece of any other integral curve.
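A short numeric check of this picture, integrating dρ/dϕ = ρ − 1 with decreasing ϕ from either side of the cycle:

```python
# Numerical sketch of the limit cycle of dρ/dϕ = ρ - 1: starting inside
# (ρ < 1) or outside (ρ > 1) and letting ϕ decrease, the radius tends
# to the cycle ρ = 1, in agreement with ρ = 1 + C·e^ϕ.

def rho_backward(rho0, phi_span=10.0, h=1e-3):
    rho = rho0
    for _ in range(int(phi_span / h)):
        rho -= h * (rho - 1.0)    # Euler step taken with decreasing ϕ
    return rho

print(rho_backward(0.2))   # from inside:  -> 1.0
print(rho_backward(3.0))   # from outside: -> 1.0
```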
Every closed integral curve in the (x, y) plane gives a periodic solution [x(t), y(t)] of the system (67):

dx/dt = N(x, y), dy/dt = M(x, y),

describing the law of change of some physical system. Those integral curves in the phase plane that approximate a limit cycle as t → +∞ correspond to motions that approximate periodic motions as t → ∞.
Let us suppose that for every point (x0, y0) sufficiently close to a limit cycle l, we have the following situation: If (x0, y0) is taken as initial point (i.e., for t = t0) for the solution of the system (67), then the corresponding integral curve traced out by the point [x(t), y(t)], as t → +∞, approximates the limit cycle l in the (x, y) plane. (This means that the motion in question is approximately periodic.) In this case the corresponding limit cycle is called stable. Oscillations that act in this way with respect to a limit cycle correspond physically to self-oscillations. In some self-oscillatory systems, there may exist several stable oscillatory processes with different amplitudes, one or another of which will be established by the initial conditions. In the phase plane for such “self-oscillatory systems,” there will exist corresponding limit cycles if the processes occurring in these systems are described by an equation of the form (67).

The problem of finding, even if only approximately, the limit cycles of a given differential equation has not yet been satisfactorily solved. The most widely used method for solving this problem is the one suggested by Poincaré of constructing “cycles without contact.” It is based on the following theorem. We assume that on the (x, y) plane we can find two closed curves L1 and L2 (cycles) which have the following properties:

1. The curve L2 lies in the region enclosed by L1.

2. In the annulus Ω, between L1 and L2, there are no singular points of equation (64).
3. L1 and L2 have tangents everywhere, and the directions of these tangents are nowhere identical with the direction of the field of directions for the given equation (64).
4. For all points of L1 and L2 the cosine of the angle between the interior normals to the boundary of the domain Ω and the vector with components [N(x, y), M(x, y)] never changes sign.
Then between L1 and L2 there is at least one limit cycle of equation (64).
Poincaré called the curves L1 and L2 cycles without contact.
The proof of this theorem is based on the following rather obvious fact.

We assume that for decreasing t (or for increasing t) all the integral curves: x = x(t), y = y (t),  of equation (64) (or, what amounts to the same thing, of equations (67), where t is a parameter), which intersect L1 or L2 enter the annulus Ω between L1 and L2. Then they must necessarily tend to some closed curve l lying between L1 and L2, since none of the integral curves lying in the annulus can leave it, and there are no singular points there. 

Singular Points.

Now when considering the singular points in relationship to the vital energy mapped out in its cyclical trajectories by those curves, we observe there are 3 cases, the absorption, the crossing and the isolated point, which in abstract math are studied as follows.

Let the point P(x, y) be in the interior of the domain G in which we consider the differential equation: dy/dx = M(x, y)/N(x, y)   (47).

If there exists a neighborhood R of the point P through each point of which passes one and only one integral curve of equation (47), then the point P is called an ordinary point of equation (47). But if such a neighborhood does not exist, then the point P is called a singular point of this equation. The study of singular points is very important in the qualitative theory of differential equations, which we will consider in the next section.
Particularly important are the so-called isolated singular points, i.e., singular points in some neighborhood of each of which there are no other singular points. In applications one often encounters them in investigating equations of the form (47), where M(x, y) and N(x, y) are functions with continuous derivatives of high orders with respect to x and y. For such equations, all the interior points of the domain at which M(x, y) ≠ 0 or N(x, y) ≠ 0 are ordinary points.

Let us now consider any interior point (x0, y0) where M(x, y) = N(x, y) = 0. To simplify the notation we will assume that x0 = 0 and y0 = 0. This can always be arranged by translating the origin of coordinates to the point (x0, y0). Expanding M(x, y) and N(x, y) by Taylor’s formula into powers of x and y and restricting ourselves to terms of the first order, we have, in a neighborhood of the point (0, 0):

dy/dx = (cx + dy + ϕ1(x, y))/(ax + by + ϕ2(x, y)),

where ϕ1 and ϕ2 are of higher than first order. Equations (45) and (46) are of this form. Equation (45) does not define either dy/dx or dx/dy for x = 0 and y = 0. If the determinant ad − bc is different from zero, then, whatever value we assign to dy/dx at the origin, the origin will be a point of discontinuity for the values dy/dx and dx/dy, since they tend to different limits depending on the manner of approach to the origin. The origin is a singular point for our differential equation.
It has been shown that the character of the behavior of the integral curves near an isolated singular point (here the origin) is not influenced by the terms ϕ1(x, y) and ϕ2(x, y) in the numerator and denominator, provided only that the real part of both roots of the equation (55): λ² − (a + d)λ + (ad − bc) = 0 is different from zero. Thus, in order to form some idea of this behavior, we study the behavior near the origin of the integral curves of the equation (50): dy/dx = (cx + dy)/(ax + by). We note that the arrangement of the integral curves in the neighborhood of a singular point of a differential equation has great interest for many problems of mechanics, for example in the investigation of the trajectories of motions near the equilibrium position.
It has been shown that everywhere in the plane it is possible to choose coordinates ξ, η, connected with x, y by the equations (51):

ξ = k11·x + k12·y, η = k21·x + k22·y,

where the kij are real numbers such that equation (50) is transformed into one of the following three types:

dη/dξ = k·η/ξ   (52), dη/dξ = (ξ + η)/ξ   (53), dη/dξ = (ξ + kη)/(kξ − η)   (54).

If the roots of equation (55) are real and different, then equation (50) is transformed into the form (52). If these roots are equal, then equation (50) is transformed either into the form (52) or into the form (53), depending on whether b² + c² = 0 or b² + c² ≠ 0. If the roots of equation (55) are complex, λ = α ± βi, then equation (50) is transformed into the form (54).
We will consider each of the equations (52), (53), (54). To begin with, we note the following.
Even though the axes Ox and Oy were mutually perpendicular, the axes Oξ and Oη need not, in general, be so. But to simplify the diagrams, we will assume they are perpendicular. Further, in the transformation (51) the scales on the Oξ and Oη axes may be changed; they may not be the same as the ones originally chosen on the axes Ox and Oy. But again, for the sake of simplicity, we assume that the scales are not changed. Thus, for example, in place of the concentric circles, as in figure 8, there could in general occur a family of similar and similarly placed ellipses with common center at the origin.
All integral curves of equation (52) are given by: aη = b|ξ|ᵏ, where a and b are arbitrary constants, not both zero.
The integral curves of equation (52) are graphed in figure 10; here we have assumed that k > 1. In this case all integral curves except one, the axis Oη, are tangent at the origin to the axis Oξ. The case 0 < k < 1 is the same as the case k > 1 with interchange of ξ and η, i.e., we have only to interchange the roles of the axes ξ and η. For k = 1, equation (52) becomes equation (30), whose integral curves were illustrated in figure 7.
An illustration of the integral curves of equation (52) for k < 0 is given in figure 11. In this case we have only two integral curves that pass through the point O: these are the axis Oξ and the axis Oη. All other integral  curves, after approaching the origin no closer than to some minimal distance, recede again from the origin. In this case we say that the point O is a saddle point because the integral curves are similar to the contours on a map representing the summit of a mountain pass (saddle).
All integral curves of equation (53) are given by the equation: η = aξ + ξ ln |bξ|, where a and b are arbitrary constants. These are illustrated schematically in figure 12; all of them are tangent to the axis Oη at the origin.
If every integral curve entering some neighborhood of the singular point O passes through this point and has a definite direction there, i.e., has a definite tangent at the origin, as is illustrated in figures 10 and 12, then we say that the point O is a node.
Equation (54) is most easily integrated if we change to polar coordinates ρ and ϕ, putting: ξ = ρ cos ϕ, η = ρ sin ϕ. Equation (54) then takes the form (56): dρ/dϕ = kρ.

If k > 0 then all the integral curves approach the point O, winding infinitely often around this point as ϕ → −∞ (figure 13). If k < 0,
then this happens for ϕ → +∞. In these cases, the point O is called a focus. If, however, k = 0, then the collection of integral curves of (56) consists of circles with center at the point O. Generally, if some neighborhood of the point O is completely filled by closed integral curves, surrounding the point O itself, then such a point is called a center.
A center may easily be transformed into a focus, if in the numerator and the denominator of the right side of equation (54) we add a term of arbitrarily high order; consequently, in this case the behavior of integral curves near a singular point is not given by terms of the first order.
Equation (55), corresponding to equation (45), is identical with the characteristic equation (19). Thus figures 10 and 12 schematically represent the behavior in the phase plane (x, y) of the curves:

x = x(t), y = x′(t)

corresponding to the solutions of equation (6) for real λ1 and λ2 of the same sign; figure 11 corresponds to real λ1 and λ2 of opposite signs, and figures 13 and 8 (the case of a center) correspond to complex λ1 and λ2. If the real parts of λ1 and λ2 are negative, then the point (x(t), y(t)) approaches O for t → +∞; in this case the point x = 0, y = 0 corresponds to stable equilibrium. If, however, the real part of either of the numbers λ1 and λ2 is positive, then at the point x = 0, y = 0, there is no stable equilibrium.
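The classification above can be packaged as a small sketch that reads off the type of the singular point from the roots of λ² − (a + d)λ + (ad − bc) = 0, taking, as above, the system dx/dt = ax + by, dy/dt = cx + dy behind dy/dx = (cx + dy)/(ax + by); the test matrices are assumed examples:

```python
# Hedged sketch: classify the singular point at the origin from the
# roots of the characteristic equation λ² - (a+d)λ + (ad - bc) = 0.

import cmath

def classify(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if abs(l1.imag) > 1e-12:          # complex pair α ± βi
        # for the linear part only; higher-order terms may turn
        # a center into a focus, as noted in the text
        return "center" if abs(l1.real) < 1e-12 else "focus"
    l1, l2 = l1.real, l2.real
    if l1 * l2 < 0:
        return "saddle"
    return "node"

print(classify(0, 1, -1, 0))    # x' = y, y' = -x         -> center
print(classify(1, 0, 0, 2))     # distinct roots, same sign -> node
print(classify(1, 0, 0, -1))    # roots of opposite signs  -> saddle
print(classify(1, -1, 1, 1))    # α ± βi with α ≠ 0        -> focus
```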

There are not many differential equations with the property that all their solutions can be expressed explicitly in terms of simple functions, as is the case for linear equations with constant coefficients. It is possible to give simple examples of differential equations whose general solution cannot be expressed by a finite number of integrals of known functions, or as one says, in quadratures.

An equation of the form dy/dx + ay² = x², for a > 0, has a general solution that cannot be expressed as a finite combination of integrals of elementary functions.

So it becomes important to develop methods of approximation to the solutions of differential equations, which will be applicable to wide classes of equations.
The fact that in such cases we find not exact solutions but only approximations should not bother us. First of all, these approximate solutions may be calculated, at least in principle, to any desired degree of accuracy. Second, it must be emphasized that in most cases the differential equations describing a physical process are themselves not altogether exact, as can be seen in all the examples discussed in §1.
An especially good example is provided by the equation (12) for the acoustic resonator. In deriving this equation, we ignored the compressibility of the air in the neck of the container and the motion of the air in the container itself. As a matter of fact, the motion of the air in the neck sets into motion the mass of the air in the vessel, but these two motions have different velocities and displacements. In the neck the displacement of the particles of air is considerably greater than in the container. Thus we ignored the motion of the air in the container, and took account only of its compression. For the air in the neck, however, we ignored the energy of its compression and took account only of the kinetic energy of its motion.
To derive the differential equation for a physical pendulum, we ignored the mass of the string on which it hangs. To derive equation (14) for electric oscillations in a circuit, we ignored the self-inductance of the wiring and the resistance of the coils. In general, to obtain a differential equation for any physical process, we must always ignore certain factors and idealize others.

For physical investigations we are especially interested in those differential equations whose solutions do not change much for arbitrarily small changes, in some sense or another, in the equations themselves. Such differential equations are called “insensitive.” These equations deserve particularly complete study.
It should be stated that in physical investigations not only  are the differential equations that describe the laws of change of the physical quantities themselves inexactly defined but even the number of these quantities is defined only approximately. Strictly speaking, there are no such things as rigid bodies. So to study the oscillations of a pendulum, we ought to take into account the deformation of the string from which it hangs and the deformation of the rigid body itself, which we approximated by taking it as a material point. In exactly the same way, to study the oscillations of a load attached to springs, we ought to consider the masses of the separate coils of the springs.

But in these examples it is easy to show that the character of the motion of the different particles, which make up the pendulum and its load together with the springs, has little influence on the character of the oscillation. If we wished to take this influence into account, the problem would become so complicated that we would be unable to solve it to any suitable approximation. Our solution would then bear no closer relation to physical reality than the solution given in §1 without consideration of these influences. Intelligent idealization of a problem is always unavoidable.

To describe a process, it is necessary to take into account the essential features of the process but by no means to consider every feature without exception. This would not only complicate the problem a great deal but in most cases would result in the impossibility of calculating a solution.

The fundamental problem of physics or mechanics, in the investigation of any phenomenon, is to find the smallest number of quantities, which with sufficient exactness describe the state of the phenomenon at any given moment, and then to set up the simplest differential equations that are good descriptions of the laws governing the changes in these quantities. This problem is often very difficult. Which features are the essential ones and which are non-essential is a question that in the final analysis can be decided only by long experience. Only by comparing the answers provided by an idealized argument with the results of experiment can we judge whether the idealization was a valid one.

The mathematical problem of the possibility of decreasing the number of quantities may be formulated in one of the simplest and most characteristic cases, as follows.
Suppose that to begin with we characterize the state of a physical system at time t by the two magnitudes x1(t) and x2(t). Let the differential equations expressing their rates of change have the form (28):

dx1/dt = f1(t, x1, x2), ε·dx2/dt = f2(t, x1, x2).

In the second equation the coefficient of the derivative is a small constant parameter ε. If we put ε = 0, the second equation ceases to be a differential equation. It then takes the form: f2(t, x1, x2) = 0.

From this equation, we define x2 as a function of t and x1, and we substitute it into the first equation. We then have the differential equation: dx1/dt = F(t, x1) for the single variable x1. In this way the number of parameters entering into the situation is reduced to one. We now ask, under what conditions will the error introduced by taking ε = 0 be small. Of course, it may happen that as ε → 0 the value dx2/dt grows beyond all bounds, so that the right side of the second of equations (28) does not tend to zero as ε → 0.
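A hedged numeric sketch of this reduction, with right sides f1 = −x2 and f2 = x1 − x2 chosen only for illustration (not from the text):

```python
# Sketch of dropping a small parameter: in the assumed system
#   dx1/dt = -x2,   ε·dx2/dt = x1 - x2,
# setting ε = 0 gives x2 = x1 and the single equation dx1/dt = -x1.

import math

def full_system(eps, h=1e-5, t_end=1.0, x1=1.0, x2=0.0):
    for _ in range(int(t_end / h)):
        # Euler steps; the step h must stay well below ε for stability
        x1, x2 = x1 - h * x2, x2 + h * (x1 - x2) / eps
    return x1

print(full_system(1e-3))     # ≈ 0.368, up to a correction of order ε
print(math.exp(-1.0))        # reduced equation: x1 = e^(-t), e^(-1) ≈ 0.368
```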

Generalized Solutions

The range of problems in which a physical process is described by continuous, differentiable functions satisfying differential equations may be extended in an essential way by introducing into the discussion discontinuous solutions of these equations.

In a number of cases it is clear from the beginning that the problem under consideration cannot have solutions that are twice continuously differentiable; in other words, from the point of view of the classical statement of the problem given in the preceding section, such a problem has no solution. Nevertheless the corresponding physical process does occur, although we cannot find functions describing it in the preassigned class of twice-differentiable functions. Let us consider some simple examples.

  1. If a string consists of two pieces of different density, then in the equation (24):

∂²u/∂t² = a²·∂²u/∂x²

the coefficient a will be equal to a different constant on each of the corresponding pieces, and so equation (24) will not, in general, have classical (twice continuously differentiable) solutions.

  2. Let the coefficient a be a constant, but in the initial position let the string have the form of a broken line given by the equation u|t=0 = ϕ(x). At the vertex of the broken line, the function ϕ(x) obviously cannot have a first derivative. It may be shown that there exists no classical solution of equation (24) satisfying the initial conditions:

u|t=0 = ϕ(x), ut|t=0 = 0

(here and in what follows ut denotes ∂u/∂t).

  3. If a sharp blow is given to any small piece of the string, the resulting oscillations are described by the equation:

∂²u/∂t² = a²·∂²u/∂x² + f(x, t),

where f(x, t) corresponds to the effect produced and is a discontinuous function, differing from zero only on the small piece of the string and during a short interval of time. Such an equation also, as can be easily established, cannot have classical solutions.

These examples show that requiring continuous derivatives for the desired solution strongly restricts the range of the problems we can solve. The search for a wider range of solvable problems proceeded first of all in the direction of allowing discontinuities of the first kind in the derivatives of highest order, for the functions serving as solutions to the problems, where these functions must satisfy the equations except at the points of discontinuity. It turns out that the solutions of an equation of the type Δu = 0 or ∂u/∂t − Δu = 0 cannot have such (so-called weak) discontinuities inside the domain of definition.

Solutions of the wave equation can have weak discontinuities in the space variables x, y, z, and in t only on surfaces of a special form, which are called characteristic surfaces. If a solution u(x, y, z, t) of the wave equation is considered as a function defining, for t = t1, a scalar field in the x, y, z space at the instant t1, then the surfaces of discontinuity for the second derivatives of u(x, y, z, t) will travel through the (x, y, z) space with a velocity equal to the square root of the coefficient of the Laplacian in the wave equation.

The second example for the string shows that it is also necessary to consider solutions in which there may be discontinuous first derivatives; and in the case of sound and light waves, we must even consider solutions that themselves have discontinuities.

The first question that comes up in investigating the introduction of discontinuous solutions consists in making clear exactly which discontinuous functions can be considered as physically admissible solutions of an equation or of the corresponding physical problem. We might, for example, assume that an arbitrary piecewise constant function is a “solution” of the Laplace equation or the wave equation, since it satisfies the equation outside of the lines of discontinuity.

In order to clarify this question, the first thing that must be guaranteed is that in the wider class of functions, to which the admissible solutions must belong, we must have a uniqueness theorem. It is perfectly clear that if, for example, we allow arbitrary piecewise smooth functions, then this requirement will not be satisfied.

Historically, the first principle for selection of admissible functions was that they should be the limits (in some sense or other) of classical solutions of the same equation. Thus, in example 2, a solution of equation (24) corresponding to the function ϕ(x), which does not have a derivative at an angular point, may be found as the uniform limit of classical solutions un(x, t) of the same equation corresponding to the initial conditions un|t=0 = ϕn(x), unt|t=0 = 0, where the ϕn(x) are twice continuously differentiable functions converging uniformly to ϕ(x) for n → ∞.

In what follows, instead of this principle we will adopt the following: An admissible solution u must satisfy, instead of the equation Lu = f, an integral identity containing an arbitrary function Φ.

This identity is found as follows: We multiply both sides of the equation Lu = f by an arbitrary function Φ, which has continuous derivatives with respect to all its arguments of orders up through the order of the equation and vanishes outside of the finite domain D in which the equation is defined. The equation thus found is integrated over D and then transformed by integration by parts so that it does not contain any derivatives of u. As a result we get the identity desired. For equation (24), for example, where the right side is zero, it has the form:

∬D u·(∂²Φ/∂t² − a²·∂²Φ/∂x²) dx dt = 0.

For equations with constant coefficients these two principles for the selection of admissible (or as they are now usually called, generalized) solutions, are equivalent to each other. But for equations with variable coefficients, the first principle may turn out to be inapplicable, since these equations may in general have no classical solutions (cf. example 1). The second of these principles provides the possibility of selecting generalized solutions with very broad assumptions on the differentiability properties of the coefficients of the equations. It is true that this principle seems at first sight to be overly formal and to have a purely mathematical character, which does not directly indicate how the problems ought to be formulated in a manner similar to the classical problems.

In order that a larger number of problems may be solvable, we must seek the solutions among functions belonging to the widest possible class of functions for which uniqueness theorems still hold. Frequently such a class is dictated by the physical nature of the problem. Thus, in quantum mechanics it is not the state function ψ(x), defined as a solution of the Schrödinger equation, that has physical meaning but rather the integral av = ∫E ψ(x) ψv(x) dx, where the ψv are certain functions for which ∫E |ψv(x)|² dx < ∞. Thus the solution ψ is to be sought not among the twice continuously differentiable functions but among the ones with integrable square. In the problems of quantum electrodynamics, it is still an open question which classes of functions are the ones in which we ought to seek solutions for the equations considered in that theory.

Progress in mathematical physics during the last thirty years has been closely connected with this new formulation of the problems and with the creation of the mathematical apparatus necessary for their solution.

Particularly convenient methods of finding generalized solutions in one or another of these classes of functions are: the method of finite differences, the direct methods in the calculus of variations and functional-operator methods. These latter methods basically depend on a study of transformations generated by these problems. Here we will explain the basic ideas of the direct methods of the calculus of variations.

Let us consider the problem of defining the position of a uniformly stretched membrane with fixed boundary. From the principle of minimum potential energy, in a state of stable equilibrium the function u(x, y) must give the least value of the integral:

J(υ) = ∬Ω [(∂υ/∂x)² + (∂υ/∂y)²] dx dy

in comparison with all other continuously differentiable functions υ(x, y) satisfying the same condition on the boundary, υ|S = ϕ, as the function u does. With some restrictions on ϕ and on the boundary S it can be shown that such a minimum exists and is attained by a harmonic function, so that the desired function u is a solution of the Dirichlet problem Δu = 0, u|S = ϕ. The converse is also true: The solution of the Dirichlet problem gives a minimum to the integral J with respect to all υ satisfying the boundary condition.

The proof of the existence of the function u, for which J attains its minimum, and its computation to any desired degree of accuracy may be carried out, for example, in the following manner (Ritz method). We choose an infinite family of twice continuously differentiable functions {υn(x, y)}, n = 0, 1, 2, …, equal to zero on the boundary for n > 0 and equal to ϕ for n = 0. We consider J for functions of the form:

υ = υ0 + ∑ Ck υk (the sum taken for k = 1, …, n),

where n is fixed and the Ck are arbitrary numbers. Then J(υ) will be a polynomial of second degree in the n independent variables C1, C2, …, Cn. We determine the Ck from the condition that this polynomial should assume its minimum. This leads to a system of n linear algebraic equations in n unknowns, the determinant of which is different from zero. Thus the numbers Ck are uniquely defined. We denote the corresponding υ by υn(x, y). It can be shown that if the system {υn} satisfies a certain condition of “completeness,” the functions υn will converge, as n → ∞, to a function which will be the desired solution of the problem.
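As a hedged one-dimensional analogue of the Ritz idea (the membrane replaced by −u″ = f on an interval, with an assumed sine basis, so that the quadratic minimization can be written out exactly):

```python
# A minimal 1-D analogue of the Ritz method (assumed example): minimize
# J(v) = ∫ (v'²/2 - f·v) dx over v = Σ C_k sin(kπx), which solves
# -u'' = f with u(0) = u(1) = 0.

import math

def ritz_coeffs(f, n_terms=10, n_quad=2000):
    h = 1.0 / n_quad
    coeffs = []
    for k in range(1, n_terms + 1):
        # b_k = ∫ f(x) sin(kπx) dx, by the trapezoidal rule
        b = sum(h * f(i * h) * math.sin(k * math.pi * i * h)
                for i in range(n_quad + 1))
        # minimizing the quadratic J over C_k gives C_k = 2·b_k/(kπ)²
        coeffs.append(2.0 * b / (k * math.pi) ** 2)
    return coeffs

f = lambda x: 1.0                       # -u'' = 1, exact u = x(1 - x)/2
C = ritz_coeffs(f)
u_half = sum(Ck * math.sin((k + 1) * math.pi * 0.5) for k, Ck in enumerate(C))
print(u_half, 0.5 * 0.5 * 0.5)          # both ≈ 0.125
```

With a sine basis the n linear equations decouple, which is why each Ck appears here in closed form; for a general basis one would solve the full linear system.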

In conclusion, we note that in this chapter we have given a description of only the simplest linear problem of mechanics and have ignored many further questions, still far from completely worked out, which are connected with more general partial differential equations.

Methods of Constructing Solutions

On the possibility of decomposing any solution into simpler solutions. Solutions of the problems of mathematical physics formulated previously may be derived by various devices, which differ for different specific problems. But at the basis of these methods there is one general idea. As we have seen, all the equations of mathematical physics are, for small values of the unknown functions, linear with respect to the functions and their derivatives. The boundary conditions and initial conditions are also linear.

If we form the difference between any two solutions of the same equation, this difference will also be a solution of the equation with the right-hand terms equal to zero. Such an equation is called the corresponding homogeneous equation. For example, for the Poisson equation Δu = − 4πρ, the corresponding homogeneous equation is the Laplace equation Δu = 0.

If two solutions of the same equation also satisfy the same boundary conditions, then their difference will satisfy the corresponding homogeneous condition: The values of the corresponding expression on the boundary will be equal to zero.

Hence the entire manifold of the solutions of such an equation, for given boundary conditions, may be found by taking any particular solution that satisfies the given nonhomogeneous condition together with all possible solutions of the homogeneous equation satisfying homogeneous boundary conditions (but not, in general, satisfying the initial conditions).

Solutions of homogeneous equations, satisfying homogeneous boundary conditions may be added, or multiplied by constants, without ceasing to be solutions.

If a solution of a homogeneous equation with homogeneous conditions is a function of some parameter, then integrating with respect to this parameter will also give us such a solution. These facts form the basis of the most important method of solving linear problems of all kinds for the equations of mathematical physics, the method of superposition.

The solution of the problem is sought in the form:

u = u0 + ∑ uk,

where u0 is a particular solution of the equation satisfying the boundary conditions but not satisfying the initial conditions, and the uk are solutions of the corresponding homogeneous equation satisfying the corresponding homogeneous boundary conditions. If the equation and the boundary conditions were originally homogeneous, then the solution of the problem may be sought in the form: u = ∑ uk.

In order to be able to satisfy arbitrary initial conditions by the choice of particular solutions uk of the homogeneous equation, we must have available a sufficiently large arsenal of such solutions.

The method of separation of variables.

For the construction of the necessary arsenal of solutions there exists a method called separation of variables or Fourier’s method.

Let us examine this method, for example, for solving the problem:

Δu = ∂²u/∂t², u|S = 0.

In looking for any particular solution of the equation, we first of all assume that the desired function u satisfies the boundary condition u|S = 0 and can be expressed as the product of two functions, one of which depends only on the time t and the other only on the space variables: u(x, y, z, t) = U(x, y, z)·T(t). Substituting this assumed solution into our equation, we have: T(t)·ΔU = T″(t)·U.

Dividing both sides by TU gives: T”/T = ∆U/U.

The right side of this equation is a function of the space variables only and the left is independent of the space coordinates. Hence it follows that the given equation can be true only if the left and right sides have the same constant value. We are led to a system of two equations:

T″/T = −λk², ΔU/U = −λk².

The constant quantity on the right is denoted here by −λk² in order to emphasize that it is negative (as may be rigorously proved). The subscript k is used here to note that there exist infinitely many possible values λk², where the solutions corresponding to them form a system of functions complete in a well-known sense.

Cross-multiplying in both equations, we get:

T″ + λk²·T = 0, ΔU + λk²·U = 0.

The first of these equations has, as we know, the simple solution:

T = Ak cos λkt + Bk sin λkt,

where Ak and Bk are arbitrary constants. This solution may be further simplified by introducing the auxiliary angle ϕ. We have:

Ak = Ck sin ϕk, Bk = Ck cos ϕk, where Ck = √(Ak² + Bk²).

Then:

T = Ck sin(λkt + ϕk).

The function T represents a harmonic oscillation with frequency λk, shifted in phase by the angle ϕk.

More difficult and more interesting is the problem of finding a solution of the equation (19):

ΔU + λ²U = 0

for given homogeneous boundary conditions; for example, for the condition: U|S = 0

(where S is the boundary of the volume Ω under consideration), or for any other homogeneous condition. The solution of this problem is not always easy to construct as a finite combination of known functions, although it always exists and can be found to any desired degree of accuracy.

The equation (19): ΔU + λk²U = 0, for the condition U|S = 0, has first of all the obvious solution U ≡ 0. This solution is trivial and completely useless for our purposes. If the λk are any randomly chosen numbers, then in general there will not be any other solution to our problem. However, there usually exist values of λk for which the equation does have a nontrivial solution.

All possible values of the constant λk² are determined by the requirement that equation (19) have a nontrivial solution, i.e., distinct from the identically vanishing function, which satisfies the condition U|S = 0. From this it also follows that the numbers denoted by −λk² must be negative.

For each of the possible values of λk in equation (19), we can find at least one function Uk. This allows us to construct a particular solution of the wave equation (18) in the form:

uk = Uk(x, y, z)·(Ak cos λkt + Bk sin λkt).

Such a solution is called a characteristic oscillation (or eigenvibration) of the volume under consideration. The constant λk is the frequency of the characteristic oscillation, and the function Uk(x, y, z) gives us its form. This function is usually called an eigenfunction (characteristic function). For all instants of time, the function uk, considered as a function of the variables x, y, and z, will differ from the function Uk(x, y, z) only in scale.

We do not have space here for a detailed proof of the many remarkable properties of characteristic oscillations and of eigenfunctions; therefore we will restrict ourselves merely to listing some of them.

The first property of the characteristic oscillations consists of the fact that for any given volume there exists a countable set of characteristic frequencies. These frequencies tend to infinity with increasing k.

Another property of the characteristic oscillations is called orthogonality. It consists of the fact that the integral over the domain Ω of the product of eigenfunctions corresponding to different values of λk is equal to zero: ∫Ω Uj Uk dΩ = 0 (j ≠ k). For j = k we will assume: ∫Ω Uk² dΩ = 1.

This can always be arranged by multiplying the functions Uk(x, y, z) by an appropriate constant, the choice of which does not change the fact that the function satisfies equation (19) and the condition U|S = 0.

Finally, a third property of the characteristic oscillations consists of the fact that, if we do not omit any value of λk, then by means of the eigenfunctions Uk(x, y, z) we can represent with any desired degree of exactness a completely arbitrary function f(x, y, z), provided only that it satisfies the boundary condition f|S = 0 and has continuous first and second derivatives. Any such function f(x, y, z) may be represented by the convergent series (20): f(x, y, z) = ∑ Ck Uk(x, y, z).

The third property of the eigenfunctions provides us in principle with the possibility of representing any function f(x, y, z) in a series of eigenfunctions of our problem, and from the second property we can find all the coefficients of this series. In fact, if we multiply both sides of equation (20) by Uj(x, y, z) and integrate over the domain Ω, we get:

∫Ω f Uj dΩ = ∑ Ck ∫Ω Uk Uj dΩ.

In the sum on the right, all the terms in which k ≠ j disappear because of the orthogonality, and the coefficient of Cj is equal to one. Consequently we have: Cj = ∫Ω f Uj dΩ.
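A small numeric sketch of this recipe in one dimension, with an assumed orthonormal basis Uk(x) = √(2/l)·sin(kπx/l) standing in for the eigenfunctions:

```python
# Sketch of finding the coefficients C_j by orthogonality, in the 1-D
# case with the assumed orthonormal basis U_k(x) = sqrt(2/l)·sin(kπx/l).

import math

l, n = 1.0, 400
h = l / n
U = lambda k, x: math.sqrt(2.0 / l) * math.sin(k * math.pi * x / l)
f = lambda x: U(1, x) + 0.25 * U(3, x)      # a known combination to recover

for j in (1, 2, 3):
    Cj = sum(h * f(i * h) * U(j, i * h) for i in range(n + 1))
    print(j, round(Cj, 4))    # ≈ 1.0, 0.0, 0.25: each C_j = ∫ f·U_j dx
```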

These properties of the characteristic oscillations now allow us to solve the general problem of oscillation for any initial conditions.

For this we assume that we have a solution of the problem in the form (21):

u = ∑ Uk(x, y, z)·(Ak cos λkt + Bk sin λkt),

and try to choose the constants Ak and Bk so that we have: u|t=0 = f0(x, y, z), ut|t=0 = f1(x, y, z).

Putting t = 0 in the right side of (21), we see that the sine terms disappear and cos λkt becomes equal to one, so that we will have: f0(x, y, z) = ∑ Ak Uk(x, y, z).

From the third property, the characteristic oscillations can be used for such a representation, and from the second property, we have: Ak = ∫Ω f0 Uk dΩ.

In the same way, differentiating formula (21) with respect to t and putting t = 0, we will have: f1(x, y, z) = ∑ λk Bk Uk(x, y, z).
Hence, as before, we obtain the values of Bk as: Bk = (1/λk) ∫Ω f1 Uk dΩ.

Knowing Ak and Bk, we in fact know both the phases and the amplitudes of all the characteristic oscillations.

In this way we have shown that by addition of characteristic oscillations it is possible to obtain the most general solution of the problem with homogeneous boundary conditions.

Every solution thus consists of characteristic oscillations, whose amplitude and phase we can calculate if we know the initial conditions.

In exactly the same way, we may study oscillations with a smaller number of independent variables. As an example let us consider the vibrating string, fixed at both ends. The equation of the vibrating string has the form:

∂²u/∂t² = a²·∂²u/∂x².

Let us suppose that we are looking for a solution of the problem for a string of length l, fixed at the ends: u|x=0 = 0, u|x=l = 0.

We will look for a collection of particular solutions: uk = Uk(x)·Tk(t).

We obviously obtain, just as before: Tk″/Tk = a²·Uk″/Uk = −λk²,

or: Uk″ + (λk/a)²·Uk = 0.
Hence: Uk = Mk cos (λk/a)x + Nk sin (λk/a)x.

We use the boundary conditions in order to find the values of λk. For general λk it is not possible to satisfy both the boundary conditions. From the condition Uk|x=0 = 0 we get Mk = 0, and this means that Uk = Nk sin (λk/a)x. Putting x = l, we get sin(λkl/a) = 0. This can only happen if λkl/a = kπ, where k is an integer. This means that: λk = kπa/l.

The condition ∫ Uk² dx = 1 (the integral taken from 0 to l) shows that: Nk = √(2/l). Finally: Uk(x) = √(2/l)·sin(kπx/l).

In this manner the characteristic oscillations of the string, as we see, have sinusoidal form with an integral number of half waves on the entire string. Every oscillation has its own frequency, and the frequencies may be arranged in increasing order: λ1 = πa/l < λ2 = 2πa/l < λ3 = 3πa/l < ···.

It is well known that these frequencies are exactly those that we hear in the vibrations of a sounding string. The frequency λ1 is called the fundamental frequency, and the remaining frequencies are overtones. The eigenfunction Uk(x) = √(2/l)·sin(kπx/l) changes sign k − 1 times on the interval 0 ≤ x ≤ l, since kπx/l runs through values from 0 to kπ, which means that its sine changes sign k − 1 times. The points where the eigenfunctions Uk vanish are called nodes of the oscillations.

If we arrange in some way that the string does not move at a point corresponding to a node, for example of the first overtone, then the fundamental tone will be suppressed, and we will hear only the sound of the first overtone, which is an octave higher. Such a device, called stopping, is made use of on instruments played with a bow: the violin, viola, and violoncello.
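Putting the pieces together, here is a hedged sketch of the separated string solution for an assumed triangular initial pluck released at rest (so all Bk = 0); the data are illustrative, not from the text:

```python
# Sketch of the separated string solution (assumed data):
# u(x, t) = Σ A_k·sin(kπx/l)·cos(λ_k t), λ_k = kπa/l, with A_k taken
# from the initial shape and B_k = 0 for a string released at rest.

import math

l, a, n_modes, n = 1.0, 1.0, 40, 800
h = l / n
shape = lambda x: min(x, l - x)          # triangular initial pluck

A = []
for k in range(1, n_modes + 1):
    # A_k = (2/l) ∫ shape(x)·sin(kπx/l) dx (Fourier sine coefficient)
    A.append((2.0 / l) * sum(h * shape(i * h) * math.sin(k * math.pi * i * h / l)
                             for i in range(n + 1)))

def u(x, t):
    return sum(Ak * math.sin((k + 1) * math.pi * x / l) *
               math.cos((k + 1) * math.pi * a * t / l)
               for k, Ak in enumerate(A))

print(u(0.5, 0.0))          # ≈ shape(0.5) = 0.5
print(u(0.5, 2.0 * l / a))  # one fundamental period later: ≈ 0.5 again
```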

We have analyzed the method of separating variables as applied to the problem of finding characteristic oscillations. But the method can be applied much more widely, to problems of heat flow and to a whole series of other problems.

For the equation of heat flow: ∂u/∂t = a²·Δu, with the condition: u|S = 0,

we will have, as before: ΔUk + (λk/a)²·Uk = 0.

Here: Tk = Ck·e^(−λk²t).

The solution is obtained in the form: u = ∑ Ck Uk(x, y, z)·e^(−λk²t).
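A tiny sketch of the resulting decay factors, assuming the one-dimensional eigenfunctions Uk = sin(kπx/l) with a = 1 (an illustration, not the text's data):

```python
# Sketch: in the heat-flow solution each mode decays as e^(-λ_k²·t),
# so the high frequencies die out fastest.

import math

l, t = 1.0, 0.05
for k in (1, 2, 5):
    lam = k * math.pi / l                 # λ_k for U_k = sin(kπx/l), a = 1
    print(k, math.exp(-lam * lam * t))    # mode amplitude factor at time t
```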

This method has also been used with great success to solve some other equations. Consider, for example, the Laplace equation: ∆u=0

in the circle x² + y² ≤ 1, and assume that we have to construct a solution satisfying the condition: u|r=1 = f(ϑ),

where r and ϑ denote the polar coordinates of a point in the plane.

The Laplace equation may be easily transformed into polar coordinates. It then has the form:

∂²u/∂r² + (1/r)·∂u/∂r + (1/r²)·∂²u/∂ϑ² = 0.

We want to find a solution of this equation in the form:

u = ∑ Rk(r)·Θk(ϑ).

If we require that every term of the series individually satisfy the equation, we have:

Rk″Θk + (1/r)·Rk′Θk + (1/r²)·RkΘk″ = 0.

Dividing the equation by Rk(r)Θk(ϑ)/r², we get:

r²·Rk″/Rk + r·Rk′/Rk = −Θk″/Θk.

Again setting: